Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
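A generic sketch of the kind of formula the paper concerns, in illustrative notation rather than the paper's actual weights: the node-centered edge-based residual at node i balances the edge fluxes against a source quadrature built from the nodal source value, neighbor values, and neighbor gradients.

```latex
% Illustrative notation: V_i is the dual volume of node i, \phi_{ij} and
% A_{ij} the numerical flux and directed area of edge \{i,j\}, s the source
% term, and \{k_i\} the edge-connected neighbors of node i.
\begin{equation*}
  \frac{1}{V_i}\sum_{j\in\{k_i\}}\phi_{ij}\,A_{ij} \;=\; \tilde{s}_i,
  \qquad
  \tilde{s}_i \;=\; \alpha\, s_i
  \;+\; \sum_{j\in\{k_i\}}\beta_{ij}\, s_j
  \;+\; \sum_{j\in\{k_i\}}\gamma_{ij}\,\nabla s_j\cdot\mathbf{e}_{ij}.
\end{equation*}
```

On this reading, the three-parameter family corresponds to the freedom left in the weights after imposing third-order truncation-error conditions; the economical subset is the choice that eliminates second-derivative terms, and the unique compact formula further drops the neighbor-gradient contributions.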
Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources
NASA Astrophysics Data System (ADS)
Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa
2017-07-01
The third-generation wave model, WAVEWATCH III, was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to waves were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The results of the numerical simulations were compared to altimeter-derived significant wave heights and to measured wave parameters at two stations in the northern part of the Persian Gulf, using statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height for all wind sources; however, the performance of the model was best when GFS-Analysis wind data were used. In general, when the wind veered from southeast to northwest and wind speed was high during the rotation, the model's underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the formulations were practically similar. In terms of statistics, however, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with the TEST405 parameterization were the most successful formulation for the Persian Gulf when compared to in situ and altimeter-derived observations.
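The statistical indicators named above are standard; a minimal sketch of how modeled and observed significant wave heights would be compared (array names are hypothetical):

```python
# Bias, RMSE, scatter index, and the correlation / normalized standard
# deviation pair that positions a model point on a Taylor diagram.
import numpy as np

def wave_height_stats(hs_model, hs_obs):
    hs_model, hs_obs = np.asarray(hs_model), np.asarray(hs_obs)
    bias = np.mean(hs_model - hs_obs)                    # mean error
    rmse = np.sqrt(np.mean((hs_model - hs_obs) ** 2))    # root-mean-square error
    si = rmse / np.mean(hs_obs)                          # scatter index
    r = np.corrcoef(hs_model, hs_obs)[0, 1]              # correlation
    sigma_ratio = np.std(hs_model) / np.std(hs_obs)      # Taylor-diagram radius
    return {"bias": bias, "rmse": rmse, "si": si, "r": r,
            "sigma_ratio": sigma_ratio}

# e.g., stats = wave_height_stats(hs_ww3_gfs, hs_buoy)
```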
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory-based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model serving as a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory-based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
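A minimal sketch of the splitting approach studied above, applied to a model problem of the type used in this line of work, u_t + u_x = -mu*u*(u-1)*(u-1/2) with stiffness parameter mu (grid sizes and parameter values are illustrative):

```python
import numpy as np

def step_split(u, dt, dx, mu, a=1.0):
    # Step 1: advection by first-order upwind (wave speed a > 0).
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # Step 2: stiff source term, backward Euler solved by Newton iteration.
    v = u.copy()
    for _ in range(20):
        f = v - u + dt * mu * v * (v - 1.0) * (v - 0.5)
        df = 1.0 + dt * mu * (3.0 * v**2 - 3.0 * v + 0.5)
        v -= f / df
    return v

# Discontinuous initial data: for large mu the discrete front can lock onto
# the grid and travel at the wrong speed, the spurious behavior studied above.
nx = 200
dx, dt, mu = 1.0 / nx, 0.5 / nx, 1000.0
u = np.where(np.linspace(0.0, 1.0, nx) < 0.3, 1.0, 0.0)
for _ in range(100):
    u = step_split(u, dt, dx, mu)
```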
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g., a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g., from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference of the model is intractable, we follow the variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
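A simplified, non-Bayesian stand-in for the estimation problem described above (not the authors' variational Bayes algorithm): a positivity-constrained weighted least-squares solution of y = Mx whose prior scales are built from the approximately known nuclide ratios. All names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import lsq_linear

def estimate_source(M, y, ratios, ratio_uncertainty):
    # Prior standard deviation of each nuclide's release, taken proportional
    # to its expected share of the mixture; larger ratio uncertainty inflates
    # the corresponding prior variance (the "unknown diagonal" of the text).
    prior_std = np.asarray(ratios) * (1.0 + np.asarray(ratio_uncertainty))
    # Augmented system: minimize ||Mx - y||^2 + ||x / prior_std||^2 with x >= 0
    A = np.vstack([M, np.diag(1.0 / prior_std)])
    b = np.concatenate([y, np.zeros(M.shape[1])])
    res = lsq_linear(A, b, bounds=(0.0, np.inf))   # truncation at zero
    return res.x
```

In the paper itself the diagonal prior variances are estimated from the data rather than fixed in advance, and positivity enters through a truncated Gaussian posterior rather than a hard bound.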
Source term model evaluations for the low-level waste facility performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yim, M.S.; Su, S.I.
1995-12-31
The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.
Enhanced Elliptic Grid Generation
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2007-01-01
An enhanced method of elliptic grid generation has been invented. Whereas prior methods require user input of certain grid parameters, this method provides for these parameters to be determined automatically. "Elliptic grid generation" signifies generation of generalized curvilinear coordinate grids through solution of elliptic partial differential equations (PDEs). Usually, such grids are fitted to bounding bodies and used in numerical solution of other PDEs like those of fluid flow, heat flow, and electromagnetics. Such a grid is smooth and has continuous first and second derivatives (and possibly also continuous higher-order derivatives), grid lines are appropriately stretched or clustered, and grid lines are orthogonal or nearly so over most of the grid domain. The source terms in the grid-generating PDEs (hereafter called "defining" PDEs) make it possible for the grid to satisfy requirements for clustering and orthogonality properties in the vicinity of specific surfaces in three dimensions or in the vicinity of specific lines in two dimensions. The grid parameters in question are decay parameters that appear in the source terms of the inhomogeneous defining PDEs. The decay parameters are characteristic lengths in exponential-decay factors that express how the influences of the boundaries decrease with distance from the boundaries. These terms govern the rates at which the distance between adjacent grid lines changes with distance from nearby boundaries. Heretofore, users have arbitrarily specified decay parameters. However, the characteristic lengths are coupled with the strengths of the source terms, such that arbitrary specification could lead to conflicts among parameter values. Moreover, the manual insertion of decay parameters is cumbersome for static grids and infeasible for dynamically changing grids. In the present method, manual insertion and user specification of decay parameters are neither required nor allowed. Instead, the decay parameters are determined automatically as part of the solution of the defining PDEs. Depending on the shape of the boundary segments and the physical nature of the problem to be solved on the grid, the solution of the defining PDEs may provide for rates of decay to vary along and among the boundary segments and may lend itself to interpretation in terms of one or more physical quantities associated with the problem.
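The classical two-dimensional form of such defining PDEs, written with exponential-decay control functions (after the Thompson-Thames-Mastin family of methods; the notation is illustrative rather than the paper's): the amplitudes a_i, b_j attract grid lines toward the lines ξ = ξ_i and η = η_j, and the decay parameters c_i, d_j are the quantities this method determines automatically.

```latex
\begin{align*}
  \xi_{xx} + \xi_{yy} &= P(\xi,\eta), &
  \eta_{xx} + \eta_{yy} &= Q(\xi,\eta), \\
  P(\xi,\eta) &= -\sum_i a_i\,\operatorname{sgn}(\xi-\xi_i)\,
      e^{-c_i\lvert\xi-\xi_i\rvert}, &
  Q(\xi,\eta) &= -\sum_j b_j\,\operatorname{sgn}(\eta-\eta_j)\,
      e^{-d_j\lvert\eta-\eta_j\rvert}.
\end{align*}
```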
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^(-1) (y - Mx) + x^T B^(-1) x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
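A crude fixed-point sketch of the idea (not the authors' variational Bayes update equations): alternate between the MAP estimate of x for fixed R and B and moment-style re-estimates of the diagonal covariances. Initializations and update rules are illustrative.

```python
import numpy as np

def alternate_estimate(M, y, n_iter=50, eps=1e-8):
    n_obs, n_src = M.shape
    r = np.ones(n_obs)       # diag(R): measurement-error variances
    b = np.ones(n_src)       # diag(B): prior source-term variances
    x = np.zeros(n_src)
    for _ in range(n_iter):
        # MAP estimate for fixed R, B:  (M^T R^-1 M + B^-1) x = M^T R^-1 y
        A = M.T @ (M / r[:, None]) + np.diag(1.0 / b)
        x = np.linalg.solve(A, M.T @ (y / r))
        # Re-estimate the diagonal variances from residuals and from x.
        r = (y - M @ x) ** 2 + eps
        b = x ** 2 + eps
    return x, r, b
```

The full variational Bayes treatment additionally propagates posterior uncertainty in x into the covariance updates instead of using the point estimate alone.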
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering effort. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during earthquake rupture is modeled using elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely the nonuniform distribution of slip along the fault plane, have a significant effect on local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
NASA Astrophysics Data System (ADS)
Perez, Pedro B.; Hamawi, John N.
2017-09-01
Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory-based assumption, such as in the U.S. Appearance rate coefficients are not specified in regulatory requirements but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design-basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS 18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.
Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier
2016-12-01
The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium-contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated, and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter, indicating that they present a long-term potential source of radiocesium-contaminated material in fallout-impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of contaminated particulate matter that will require diligent management for the foreseeable future. Copyright © 2016 Elsevier Ltd. All rights reserved.
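A minimal version of the un-mixing step described above: non-negative source proportions, constrained to sum to one, that best reproduce the tracer properties of one deposited-sediment sample. The tracer values below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: sources (forest, cultivated, subsoil); rows: tracers
# (TOC %, TN %, delta13C, delta15N). Values are invented for illustration.
S = np.array([[4.2,   2.1,   0.6],
              [0.31,  0.18,  0.05],
              [-28.0, -26.0, -25.0],
              [2.0,   5.0,   4.0]])
mix = np.array([2.0, 0.15, -26.3, 4.1])   # one deposited sample

w = 100.0                                  # weight enforcing sum-to-one
A = np.vstack([S, w * np.ones((1, 3))])
b = np.concatenate([mix, [w]])
props, _ = nnls(A, b)   # non-negative proportions, approximately summing to 1
```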
1989-05-22
[Recoverable fragments only] Contents: Stress-Strain Relation; 5.3 Equivalent Transversely Isotropic Elastic Constants for Periodically... Notation: α, vertical wavenumber parameter for compressional waves; β, vertical wavenumber parameter for shear waves; δ, dip angle (see Fig. 3.2); ε, strain. Text fragment: "...been pursued along two different lines [1]: first, in terms of body forces; second, in terms of discontinuities in displacement or strain across a..."
NASA Technical Reports Server (NTRS)
Greenwood, Eric, II; Schmitz, Fredric H.
2010-01-01
A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.
X-Ray Emission from Compact Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cominsky, L
2004-03-23
This paper presents a review of the physical parameters of neutron stars and black holes that have been derived from X-ray observations. I then explain how these physical parameters can be used to learn about the extreme conditions occurring in regions of strong gravity, and present some recent evidence for relativistic effects seen in these systems. A glossary of commonly used terms and a short tutorial on the names of X-ray sources are also included.
Observation-based source terms in the third-generation wave model WAVEWATCH
NASA Astrophysics Data System (ADS)
Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.
2015-12-01
Measurements collected during the AUSWEX field campaign, at Lake George (Australia), resulted in new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently in new parameterizations of the input and dissipation source terms. The new nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to the production of turbulence by waves, which is particularly relevant for decaying seas and for swell; the latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms as implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in growth curves as well as in integral and spectral parameters in the simulations and hindcasts.
NASA Astrophysics Data System (ADS)
Kim, R.-S.; Cho, K.-S.; Moon, Y.-J.; Dryer, M.; Lee, J.; Yi, Y.; Kim, K.-H.; Wang, H.; Park, Y.-D.; Kim, Yong Ha
2010-12-01
In this study, we discuss the general behavior of geomagnetic storm strength in association with observed parameters of coronal mass ejections (CMEs), such as the speed (V) and earthward direction (D) of CMEs as well as the longitude (L) and magnetic field orientation (M) of the overlying potential fields of the CME source region, and we develop an empirical model to predict geomagnetic storm occurrence and strength (gauged by the Dst index) in terms of these CME parameters. For this we select 66 halo or partial-halo CMEs associated with M-class and X-class solar flares, with clearly identifiable source regions, from 1997 to 2003. After examining how each of these CME parameters correlates with the geoeffectiveness of the CMEs, we find the following: (1) parameter D correlates best with storm strength Dst; (2) the majority of geoeffective CMEs originated from near solar longitude 15°W, and CMEs originating away from this longitude tend to produce weaker storms; (3) correlations between Dst and the CME parameters improve if CMEs are separated into two groups depending on whether their magnetic fields are oriented southward or northward in their source regions. Based on these observations, we present two empirical expressions for Dst in terms of L, V, and D, one for each group of CMEs. This is a new attempt to predict not only the occurrence of geomagnetic storms but also the storm strength (Dst) solely from CME parameters.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
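Illustrative only (a scikit-learn stand-in, not the authors' pattern-search-optimized network): a small feedforward ANN trained to approximate a source-term-like function of two parameters, replacing a dense lookup table.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, 2))             # two independent parameters
y = np.exp(-(X[:, 0] - 0.5) ** 2 / 0.02) * X[:, 1]    # stand-in source term

# Four hidden layers as in the optimal ANN of the abstract; the widths and
# transfer function here are arbitrary choices, not the optimized ones.
ann = MLPRegressor(hidden_layer_sizes=(16, 16, 16, 16),
                   activation="tanh", max_iter=2000, random_state=0)
ann.fit(X, y)

# A 500 x 500 table stores 250,000 values; the network stores only its
# weights and interpolates smoothly between training points.
```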
NASA Astrophysics Data System (ADS)
Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben
2005-09-01
An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
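In symbols, the expansion described above reads (notation illustrative):

```latex
% Boundary source strength q expanded in powers of the absorption
% parameter \alpha; each power yields its own boundary-integral problem.
\begin{equation*}
  q \;=\; \frac{1}{\alpha}\,q_{-1} \;+\; q_{0} \;+\; \alpha\,q_{1} \;+\;\cdots,
\end{equation*}
```

where the O(1/α) term produces the spatially uniform reverberant field and q_0 carries the leading spatial variation driven by the input sources and the distribution of absorption.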
Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions
NASA Astrophysics Data System (ADS)
Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.
2017-12-01
Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
NASA Astrophysics Data System (ADS)
Kwiatek, Grzegorz; Martínez-Garzón, Patricia; Dresen, Georg; Bohnhoff, Marco; Sone, Hiroki; Hartline, Craig
2015-10-01
The long-term temporal and spatial changes in statistical, source, and stress characteristics of one cluster of induced seismicity recorded at The Geysers geothermal field (U.S.) are analyzed in relation to the field operations, fluid migration, and constraints on the maximum likely magnitude. Two injection wells, Prati-9 and Prati-29, located in the northwestern part of the field and their associated seismicity composed of 1776 events recorded throughout a 7 year period were analyzed. The seismicity catalog was relocated, and the source characteristics including focal mechanisms and static source parameters were refined using first-motion polarity, spectral fitting, and mesh spectral ratio analysis techniques. The source characteristics together with statistical parameters (b value) and cluster dynamics were used to investigate and understand the details of fluid migration scheme in the vicinity of injection wells. The observed temporal, spatial, and source characteristics were clearly attributed to fluid injection and fluid migration toward greater depths, involving increasing pore pressure in the reservoir. The seasonal changes of injection rates were found to directly impact the shape and spatial extent of the seismic cloud. A tendency of larger seismic events to occur closer to injection wells and a correlation between the spatial extent of the seismic cloud and source sizes of the largest events was observed suggesting geometrical constraints on the maximum likely magnitude and its correlation to the average injection rate and volume of fluids present in the reservoir.
NASA Technical Reports Server (NTRS)
Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter
2015-01-01
Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, such as injection altitude, eruption time, and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects such as MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases show good agreement with observations, e.g., Okmok (2008), while for other eruptions the observed initial SO2 mass is half of that in the simulations, e.g., Sierra Negra (2005). In other cases, the initial SO2 amount agrees with the observations but shows very different dispersal rates, e.g., Soufriere Hills (2006). In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back-trajectory methods have been developed that use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and can estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, which are compared to their corresponding entries in the AeroCOM volcanic emission inventory. The nature of these mixed results is discussed with respect to the source term estimates.
Mesas-Carrascosa, Francisco Javier; Verdú Santano, Daniel; Meroño de Larriva, Jose Emilio; Ortíz Cordero, Rafael; Hidalgo Fernández, Rafael Enrique; García-Ferrer, Alfonso
2016-01-01
A number of physical factors can adversely affect cultural heritage. Therefore, monitoring parameters involved in the deterioration process, principally temperature and relative humidity, is useful for preventive conservation. In this study, a total of 15 microclimate stations using open-source hardware were developed and stationed at the Mosque-Cathedral of Córdoba, which is registered with UNESCO for its outstanding universal value, to assess the behavior of interior temperature and relative humidity in relation to exterior weather conditions, public hours, and interior design. Long-term monitoring of these parameters is of interest in terms of preservation and reducing the costs of future conservation strategies. Results from monitoring are presented to demonstrate the usefulness of this system. PMID:27690056
Gravitational wave source counts at high redshift and in models with extra dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es
2016-07-01
Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with distance from the source. Here, we extend this formalism to cosmological scales, i.e., the high-redshift regime, and we discuss the complications of applying this methodology to high-redshift sources. We also allow for models with compactified extra dimensions as in the Kaluza-Klein model. Furthermore, we consider the case of intermediate redshifts, i.e., 0 < z ≲ 1, where we show that it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_m,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher-order corrections of the source counts in terms of the signal-to-noise ratio S/N. We then forecast the sensitivity of future observations in constraining GW physics as well as the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise ratios, with the number of sources ranging from 10 to 500 as expected from future detectors. We find that with 500 events it will be possible to constrain the present matter density parameter Ω_m,0 to within a few percent, with the precision growing quickly with the number of events. In the case of extra dimensions, we find that, depending on the degeneracies of the model, with 500 events it may be possible to place stringent limits on the existence of the extra dimensions if those degeneracies can be broken.
NASA Astrophysics Data System (ADS)
Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.
1992-09-01
A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity, which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions recorded in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
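The generic shape of such a semi-empirical scaling law, for orientation only (the paper's exact functional form and coefficients differ):

```latex
\begin{equation*}
  \log_{10} \mathit{PV} \;=\; b_0 \;+\; b_1 M \;-\; b_2 \log_{10} r
  \;-\; b_3 r \;+\; \sum_{k} A_k\,\delta_k ,
\end{equation*}
```

where r is hypocentral distance, the δ_k are the dummy (indicator) variables selecting the recording site, and the A_k are the corresponding site amplification factors.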
Variability Search in GALFACTS
NASA Astrophysics Data System (ADS)
Kania, Joseph; Wenger, Trey; Ghosh, Tapasi; Salter, Christopher J.
2015-01-01
The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo-sky survey using the seven-beam Arecibo L-band Feed Array (ALFA). The survey is centered at 1.375 GHz with 300-MHz bandwidth and measures all four Stokes parameters. We are looking for compact sources that vary in intensity or polarization on timescales of about a month, via intra-survey comparisons, and for long-term variations, through comparisons with the NRAO VLA Sky Survey. Data processing includes locating and rejecting radio frequency interference, recognizing sources, two-dimensional Gaussian fitting to multiple cuts through the same source, and gain corrections. Our Python code is being applied to the calibration sources observed in conjunction with the survey measurements to determine the calibration parameters that will then be applied to data for the main field.
Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant
NASA Astrophysics Data System (ADS)
Kuznetsova, Svetlana A.
2015-01-01
Results are presented from mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with conducting walls of finite thickness, with a local heat source at the bottom of the domain and convective heat exchange with the environment. A mathematical model is formulated in terms of the dimensionless variables "stream function - vorticity - temperature" in the Cartesian coordinate system. The results show the distributions of the hydrodynamic parameters and temperature obtained using different boundary conditions at the local heat source.
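For reference, the standard dimensionless "stream function - vorticity - temperature" system referred to above has the form (a common nondimensionalization; Ra and Pr notation assumed):

```latex
\begin{align*}
  \frac{\partial \Omega}{\partial \tau}
    + U\frac{\partial \Omega}{\partial X}
    + V\frac{\partial \Omega}{\partial Y}
    &= \sqrt{\frac{Pr}{Ra}}\;\nabla^{2}\Omega
      + \frac{\partial \Theta}{\partial X}, \\
  \nabla^{2}\Psi &= -\Omega, \qquad
  U = \frac{\partial \Psi}{\partial Y}, \quad
  V = -\frac{\partial \Psi}{\partial X}, \\
  \frac{\partial \Theta}{\partial \tau}
    + U\frac{\partial \Theta}{\partial X}
    + V\frac{\partial \Theta}{\partial Y}
    &= \frac{1}{\sqrt{Ra\,Pr}}\;\nabla^{2}\Theta,
\end{align*}
```

with pure conduction solved inside the finite-thickness walls and a convective (Robin) condition applied on the exterior boundary.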
Finite Moment Tensors of Southern California Earthquakes
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Chen, P.; Zhao, L.
2003-12-01
We have developed procedures for inverting broadband waveforms for the finite moment tensors (FMTs) of regional earthquakes. The FMT is defined in terms of second-order polynomial moments of the source space-time function and provides the lowest-order representation of a finite fault rupture; it removes the fault-plane ambiguity of the centroid moment tensor (CMT) and yields several additional parameters of seismological interest: the characteristic length L_c, width W_c, and duration T_c of the faulting, as well as the directivity vector v_d of the fault slip. To formulate the inverse problem, we follow and extend the methods of McGuire et al. [2001, 2002], who have successfully recovered the second-order moments of large earthquakes using low-frequency teleseismic data. We express the Fourier spectra of a synthetic point-source waveform in exponential (Rytov) form and represent the observed waveform relative to the synthetic in terms of two frequency-dependent differential times, a phase delay δτ_p(ω) and an amplitude-reduction time δτ_q(ω), which we measure using Gee and Jordan's [1992] isolation-filter technique. We numerically calculate the FMT partial derivatives in terms of second-order spatiotemporal gradients, which allows us to use 3D finite-difference seismograms as our isolation filters. We have applied our methodology to a set of small to medium-sized earthquakes in Southern California. The errors in anelastic structure introduced perturbations larger than the signal level caused by finite-source effects. We have therefore employed a joint inversion technique that recovers the CMT parameters of the aftershocks, as well as the CMT and FMT parameters of the mainshock, under the assumption that the source finiteness of the aftershocks can be ignored. The joint system of equations relating the δτ_p and δτ_q data to the source parameters of the mainshock-aftershock cluster is denuisanced for path anomalies in both observables; this projection operation effectively corrects the mainshock data for path-related amplitude anomalies in a way similar to, but more flexible than, empirical Green function (EGF) techniques.
2011-09-01
...an NSS that lies in this negative-explosion, positive-CLVD quadrant due to the large degree of tectonic release in this event that reversed the phase... Mellman (1986), in their analysis of fundamental-mode Love and Rayleigh wave amplitude and phase for nuclear and tectonic-release source terms, and... (1986). Estimating explosion and tectonic release source parameters of underground nuclear explosions from Rayleigh and Love wave observations, Air...
Modeling and observations of an elevated, moving infrasonic source: Eigenray methods.
Blom, Philip; Waxler, Roger
2017-04-01
The acoustic ray tracing relations are extended by the inclusion of auxiliary parameters describing variations in the spatial ray coordinates and eikonal vector due to changes in the initial conditions. Computation of these parameters allows one to define the geometric spreading factor along individual ray paths and assists in identification of caustic surfaces so that phase shifts can be easily identified. A method is developed leveraging the auxiliary parameters to identify propagation paths connecting specific source-receiver geometries, termed eigenrays. The newly introduced method is found to be highly efficient in cases where propagation is non-planar due to horizontal variations in the propagation medium or the presence of cross winds. The eigenray method is utilized in analysis of infrasonic signals produced by a multi-stage sounding rocket launch with promising results for applications of tracking aeroacoustic sources in the atmosphere and specifically to analysis of motor performance during dynamic tests.
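A toy two-dimensional version of the shooting idea behind the eigenray search: vary the launch angle until a ray traced through a stratified sound-speed profile lands at the receiver range. The profile, geometry, and use of bisection (rather than Newton steps driven by the auxiliary parameters) are all illustrative simplifications.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

C0, DCDZ = 340.0, 0.004   # hypothetical linear profile c(z) = C0 + DCDZ*z

def c(z):
    return C0 + DCDZ * z

def landing_range(theta0_deg, z0=1000.0):
    # Ray equations in (x, z, theta), arc-length parameterization:
    # dx/ds = cos(theta), dz/ds = sin(theta), dtheta/ds = -cos(theta) c'(z)/c(z)
    def rhs(s, y):
        x, z, th = y
        return [np.cos(th), np.sin(th), -np.cos(th) * DCDZ / c(z)]
    hit_ground = lambda s, y: y[1]        # stop when the ray returns to z = 0
    hit_ground.terminal, hit_ground.direction = True, -1
    sol = solve_ivp(rhs, [0.0, 1e6], [0.0, z0, np.deg2rad(theta0_deg)],
                    events=hit_ground, max_step=50.0)
    return sol.y[0, -1]                   # ground range at landing

receiver_range = 3.0e3                    # [m]
eigen_angle = brentq(lambda a: landing_range(a) - receiver_range, -80.0, -1.0)
```

A Newton-type search driven by the auxiliary (variational) parameters converges in far fewer traced rays, which is the kind of efficiency gain the method above targets.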
Orejas, Jaime; Pfeuffer, Kevin P; Ray, Steven J; Pisonero, Jorge; Sanz-Medel, Alfredo; Hieftje, Gary M
2014-11-01
Ambient desorption/ionization (ADI) sources coupled to mass spectrometry (MS) offer outstanding analytical features: direct analysis of real samples without sample pretreatment, combined with the selectivity and sensitivity of MS. Since ADI sources typically work in the open atmosphere, ambient conditions can affect the desorption and ionization processes. Here, the effects of internal source parameters and ambient humidity on the ionization processes of the flowing atmospheric pressure afterglow (FAPA) source are investigated. The interaction of reagent ions with a range of analytes is studied in terms of sensitivity and based upon the processes that occur in the ionization reactions. The results show that internal parameters which lead to higher gas temperatures afforded higher sensitivities, although fragmentation is also affected. In the case of humidity, only extremely dry conditions led to higher sensitivities, while fragmentation remained unaffected.
NASA Astrophysics Data System (ADS)
Davoine, X.; Bocquet, M.
2007-03-01
The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
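A generic sketch of the L-curve selection mentioned above: sweep the regularization scale, record residual norm against solution norm on log axes, and take the corner (here located by maximum curvature). M and y stand for the transport operator and the measurements; the simple Tikhonov form is a stand-in for the paper's maximum-entropy functional.

```python
import numpy as np

def l_curve_corner(M, y, lambdas):
    log_rho, log_eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ y)
        log_rho.append(np.log(np.linalg.norm(M @ x - y)))  # residual norm
        log_eta.append(np.log(np.linalg.norm(x)))          # solution norm
    log_rho, log_eta = np.array(log_rho), np.array(log_eta)
    # Curvature of the parametric curve (log_rho, log_eta)(lambda)
    d1r, d1e = np.gradient(log_rho), np.gradient(log_eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[np.argmax(kappa)]

# e.g., lam = l_curve_corner(M, y, np.logspace(-6, 2, 50))
```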
Standardization of terminology in field of ionizing radiations and their measurements
NASA Astrophysics Data System (ADS)
Yudin, M. F.; Karaveyev, F. M.
1984-03-01
A new standard terminology was introduced on 1 January 1982 by the Scientific-Technical Commission on All-Union State Standards to cover ionizing radiations and their measurements. It is based on earlier standards such as GOST 15484-74/81, 18445-70/73, 19849-74, and 22490-77, as well as the latest recommendations of international committees. It contains 186 terms and definitions in 14 paragraphs, covering fundamental concepts, sources and forms of ionizing radiations, characteristics and parameters of ionizing radiations, and methods of measuring these characteristics and parameters. New terms have been added to existing ones. The equivalent English, French, and German terms are also given. The terms "measurement of ionizing radiation" and "transfer of ionizing particles" (equivalent to particle fluence or energy fluence) are still under discussion.
What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.
2012-12-01
A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources, and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.
Uncertainty, variability, and earthquake physics in ground‐motion prediction equations
Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.
2017-01-01
Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
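A schematic of the residual partitioning described above, using successive group means (a simplified stand-in for the mixed-effects regressions typically used in this literature; column names are hypothetical):

```python
import pandas as pd

def partition_residuals(df):
    # df columns: 'resid' (total ln-unit residual), 'event', 'station', 'path'
    df = df.copy()
    df["event_term"] = df.groupby("event")["resid"].transform("mean")
    r1 = df["resid"] - df["event_term"]
    df["site_term"] = r1.groupby(df["station"]).transform("mean")
    r2 = r1 - df["site_term"]
    df["path_term"] = r2.groupby(df["path"]).transform("mean")
    df["aleatory"] = r2 - df["path_term"]   # what remains is treated as random
    return df
```

Removing the repeatable terms shrinks the standard deviation of the remaining aleatory component relative to the total residual, which is the 0.97 to 0.44 reduction quoted above.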
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the charge simulation method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied in the image reconstruction of a Laminar Optical Tomography system.
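The superposition underlying the VS idea can be written with the standard infinite-medium diffusion Green's function (the weights w_k and locations r_k are the VS intensities and positions fixed by the fitting step; boundary terms are omitted here for brevity):

```latex
\begin{equation*}
  \Phi(\mathbf{r}) \;\approx\; \sum_{k=1}^{N}
  \frac{w_k}{4\pi D\,\lvert\mathbf{r}-\mathbf{r}_k\rvert}\,
  \exp\!\bigl(-\mu_{\mathrm{eff}}\,\lvert\mathbf{r}-\mathbf{r}_k\rvert\bigr),
  \qquad
  \mu_{\mathrm{eff}} = \sqrt{3\mu_a(\mu_a+\mu_s')},
  \quad
  D = \frac{1}{3(\mu_a+\mu_s')}.
\end{equation*}
```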
Electron Energy Deposition in Atomic Nitrogen
1987-10-06
...known theoretical results, and their relative accuracy in comparison to existing measurements and calculations is given elsewhere [20]. 2.1 The Source Term... with the proper choice of parameters, reduces to well-known theoretical results [20]. Table 2 gives the parameters for collisional excitation of the... calculations of McGuire [36] and experimental measurements of Brook et al. [37]. Additional theoretical and experimental results are discussed in detail elsewhere.
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2011-01-01
A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by employing a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
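A minimal sketch of such a second-order-in-time dissipative flow, x'' + η x' = -∇J(x), discretized with a damped symplectic-Euler-type step and applied to a quadratic stand-in objective (all parameter values illustrative):

```python
import numpy as np

def damped_flow(grad, x0, eta=1.0, dt=0.1, n_steps=2000):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(n_steps):
        v = (v - dt * grad(x)) / (1.0 + dt * eta)  # implicit damping term
        x = x + dt * v                             # symplectic position update
    return x

# Stand-in objective J(x) = 0.5 * ||Ax - b||^2
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad = lambda x: A.T @ (A @ x - b)
x_min = damped_flow(grad, np.zeros(2))   # converges to the least-squares solution
```

In the paper the objective is a regularized data-misfit functional and the damping and regularization are tied to a dynamically selected parameter, but the mechanics of the damped second-order flow are as above.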
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor-sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
The MIT/OSO 7 catalog of X-ray sources - Intensities, spectra, and long-term variability
NASA Technical Reports Server (NTRS)
Markert, T. H.; Laird, F. N.; Clark, G. W.; Hearn, D. R.; Sprott, G. F.; Li, F. K.; Bradt, H. V.; Lewin, W. H. G.; Schnopper, H. W.; Winkler, P. F.
1979-01-01
This paper is a summary of the observations of the cosmic X-ray sky performed by the MIT 1-40-keV X-ray detectors on OSO 7 between October 1971 and May 1973. Specifically, mean intensities or upper limits of all third Uhuru or OSO 7 cataloged sources (185 sources) in the 3-10-keV range are computed. For those sources for which a statistically significant (greater than 2σ) intensity was found in the 3-10-keV band (138 sources), further intensity determinations were made in the 1-15-keV, 1-6-keV, and 15-40-keV energy bands. Graphs and other simple techniques are provided to aid the user in converting the observed counting rates to convenient units and in determining spectral parameters. Long-term light curves (counting rates in one or more energy bands as a function of time) are plotted for 86 of the brighter sources.
NASA Astrophysics Data System (ADS)
Azhar, Waqas Ali; Vieru, Dumitru; Fetecau, Constantin
2017-08-01
Free convection flow of some water-based fractional nanofluids over a moving infinite vertical plate with uniform heat flux and a heat source is studied analytically and graphically. Exact solutions for the dimensionless temperature and velocity fields, Nusselt numbers, and skin friction coefficients are established in integral form in terms of modified Bessel functions of the first kind. These solutions satisfy all imposed initial and boundary conditions and reduce to the corresponding solutions for ordinary nanofluids when the fractional parameters tend to one. Furthermore, they reduce to the known solutions from the literature when the plate is fixed and the heat source is absent. The influence of the fractional parameters on heat transfer and fluid motion is graphically illustrated and discussed. The enhancement of heat transfer in such flows is higher for fractional nanofluids than for ordinary nanofluids. Moreover, the use of fractional models allows us to choose the fractional parameters so as to obtain very good agreement between experimental and theoretical results.
Karlinger, M.R.; Troutman, B.M.
1985-01-01
An instantaneous unit hydrograph (iuh) based on the theory of topologically random networks (topological iuh) is evaluated in terms of sets of basin characteristics and hydraulic parameters. Hydrographs were computed using two linear routing methods for each of two drainage basins in the southeastern United States and are the basis of comparison for the topological iuh's. Elements in the sets of basin characteristics for the topological iuh's are either the number of first-order streams (sources) alone (N), or the number of sources together with the number of channel links in the topological diameter (N, D); the hydraulic parameters are values of the celerity and the diffusivity constant. Sensitivity analyses indicate that the mean celerity of the internal links in the network is the critical hydraulic parameter for determining the shape of the topological iuh, while the diffusivity constant has minimal effect on the topological iuh. Asymptotic results (source-only) indicate that the number of sources need not be large for the topological iuh to be well approximated by the Weibull probability density function.
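A minimal sketch of the source-only asymptotic approximation is shown below; the Weibull shape and scale values are hypothetical stand-ins for the quantities that the paper ties to the number of sources and the celerity.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_iuh(t, shape=2.0, scale=1.0):
    """Weibull pdf used as the asymptotic (source-only) approximation
    of the topological instantaneous unit hydrograph."""
    return weibull_min.pdf(t, c=shape, scale=scale)

t = np.linspace(0.0, 5.0, 200)
h = weibull_iuh(t, shape=2.0, scale=1.5)  # hypothetical shape/scale values
print(h.sum() * (t[1] - t[0]))            # pdf integrates to ~1 over the support
```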
NASA Astrophysics Data System (ADS)
Vasu, B.; Gorla, Rama Subba Reddy; Murthy, P. V. S. N.
2017-05-01
The Walters-B liquid model is employed to simulate medical creams and other rheological liquids encountered in biotechnology and chemical engineering. This rheological model introduces supplementary terms into the momentum conservation equation. The combined effects of thermal radiation and a heat sink/source on transient free convective, laminar flow and mass transfer in a viscoelastic fluid past a vertical plate are presented, taking the thermophoresis effect into account. The transformed conservation equations are solved using a stable, robust finite difference method. A parametric study is conducted illustrating the influence of the viscoelasticity parameter (Γ), thermophoretic parameter (τ), thermal radiation parameter (F), heat sink/source parameter (ϕ), Prandtl number (Pr), Schmidt number (Sc), thermal Grashof number (Gr), and solutal Grashof number (Gm) on the temperature and concentration profiles as well as on the local skin friction, Nusselt number, and Sherwood number. The results of this parametric study are presented graphically and in tabular form. The study has applications in polymer materials processing.
NASA Astrophysics Data System (ADS)
D'Amico, Sebastiano; Akinci, Aybige; Pischiutta, Marta
2018-03-01
In this paper we characterize the high-frequency (1.0-10 Hz) seismic wave crustal attenuation and the source excitation in the Sicily Channel and surrounding regions using background seismicity from a weak-motion database. The data set includes 15995 waveforms from earthquakes with local magnitudes ranging from 2.0 to 4.5 recorded between 2006 and 2012. The observed and predicted ground motions from the weak-motion data are evaluated in several narrow frequency bands from 0.25 to 20.0 Hz. The filtered observed peaks are regressed to specify a proper functional form for the regional attenuation, excitation and site-specific terms separately. The results are then used to calibrate effective theoretical attenuation and source excitation models using the Random Vibration Theory (RVT). In the log-log domain, the regional seismic wave attenuation and the geometrical spreading coefficient are modeled together. The geometrical spreading coefficient, g(r), is modeled with a bilinear piecewise functional form, g(r) ∝ r^-1.0 at short distances (r < 50 km) and g(r) ∝ r^-0.8 at larger distances (r > 50 km). A frequency-dependent quality factor, the inverse of the seismic attenuation parameter, Q(f) = 160(f/f_ref)^0.35 (where f_ref = 1.0 Hz), is combined with the geometrical spreading. The source excitation terms are defined at a selected reference distance with a magnitude-independent roll-off spectral parameter, κ = 0.04 s, and with a Brune stress drop parameter increasing with moment magnitude, from Δσ = 2 MPa for Mw = 2.0 to Δσ = 13 MPa for Mw = 4.5. For events Mw ≤ 4.5 (Mwmax = 4.5 being the largest available in the dataset) the stress parameters are obtained by correlating the empirical excitation source spectra with the Brune spectral model as a function of magnitude. For larger magnitudes (Mw > 4.5), outside the range available in the calibration dataset, we extrapolate our results by calibrating the stress parameters of the Brune source spectrum against the Bindi et al. (2011) ground motion prediction equation (GMPE) selected as a reference model (hereafter also ITA10).
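The attenuation model described above can be written compactly as a function of distance and frequency; the sketch below assumes a shear-wave speed of 3.5 km/s for the anelastic term and joins the two spreading branches continuously at 50 km, both of which are our assumptions rather than details stated in the abstract.

```python
import numpy as np

def spectral_attenuation(r_km, f_hz, q0=160.0, eta=0.35, f_ref=1.0,
                         beta_kms=3.5, r_cross=50.0):
    """Amplitude decay combining the bilinear geometrical spreading g(r)
    (r^-1.0 within 50 km, r^-0.8 beyond) with the frequency-dependent
    quality factor Q(f) = q0 * (f/f_ref)**eta."""
    g = np.where(r_km <= r_cross,
                 r_km ** -1.0,
                 (r_cross ** -1.0) * (r_km / r_cross) ** -0.8)  # continuous at r_cross
    q = q0 * (f_hz / f_ref) ** eta
    anelastic = np.exp(-np.pi * f_hz * r_km / (q * beta_kms))   # whole-path Q term
    return g * anelastic

print(spectral_attenuation(np.array([30.0, 100.0]), 2.0))
```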
Simulations of cold electroweak baryogenesis: dependence on the source of CP-violation
NASA Astrophysics Data System (ADS)
Mou, Zong-Gang; Saffin, Paul M.; Tranberg, Anders
2018-05-01
We compute the baryon asymmetry created in a tachyonic electroweak symmetry breaking transition, focusing on the dependence on the source of effective CP-violation. Earlier simulations of Cold Electroweak Baryogenesis have almost exclusively considered a very specific CP-violating term explicitly biasing Chern-Simons number. We compare four different dimension six, scalar-gauge CP-violating terms, involving both the Higgs field and another dynamical scalar coupled to SU(2) or U(1) gauge fields. We find that for sensible values of parameters, all implementations can generate a baryon asymmetry consistent with observations, showing that baryogenesis is a generic outcome of a fast tachyonic electroweak transition.
Influence of heat conducting substrates on explosive crystallization in thin layers
NASA Astrophysics Data System (ADS)
Schneider, Wilhelm
2017-09-01
Crystallization in a thin, initially amorphous layer is considered. The layer is in thermal contact with a substrate of very large dimensions. The energy equation of the layer contains source and sink terms. The source term is due to liberation of latent heat in the crystallization process, while the sink term is due to conduction of heat into the substrate. To determine the latter, the heat diffusion equation for the substrate is solved by applying Duhamel's integral. Thus, the energy equation of the layer becomes a heat diffusion equation with a time integral as an additional term. The latter term indicates that the heat loss due to the substrate depends on the history of the process. To complete the set of equations, the crystallization process is described by a rate equation for the degree of crystallization. The governing equations are then transformed to a moving co-ordinate system in order to analyze crystallization waves that propagate with invariant properties. Dual solutions are found by an asymptotic expansion for large activation energies of molecular diffusion. By introducing suitable variables, the results can be presented in a universal form that comprises the influence of all non-dimensional parameters that govern the process. Of particular interest for applications is the prediction of a critical heat loss parameter for the existence of crystallization waves with invariant properties.
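A schematic of the layer energy balance described above is sketched below in LaTeX; the symbols and coefficients are placeholders (θ temperature, X degree of crystallization, q a latent-heat coefficient, λ a heat-loss coefficient), not the paper's exact nondimensionalization:

```latex
\frac{\partial \theta}{\partial t}
  = \frac{\partial^2 \theta}{\partial x^2}
  + q\,\frac{\partial X}{\partial t}
  - \lambda \int_0^t \frac{\partial \theta}{\partial \tau}\,
      \frac{\mathrm{d}\tau}{\sqrt{t-\tau}},
\qquad
\frac{\partial X}{\partial t} = K(\theta)\,(1 - X)
```

The integral term is the substrate heat loss obtained from Duhamel's integral; it carries the memory of the entire thermal history of the process.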
Free-electron laser emission architecture impact on extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Hosler, Erik R.; Wood, Obert R.; Barletta, William A.
2017-10-01
Laser-produced plasma (LPP) EUV sources have demonstrated ˜125 W at customer sites, establishing confidence in EUV lithography (EUVL) as a viable manufacturing technology. However, for extension to the 3-nm technology node and beyond, existing scanner/source technology must enable higher-NA imaging systems (requiring increased resist dose and providing half-field exposures) and/or EUV multipatterning (requiring increased wafer throughput proportional to the number of exposure passes). Both development paths will require a substantial increase in EUV source power to maintain the economic viability of the technology, creating an opportunity for free-electron laser (FEL) EUV sources. FEL-based EUV sources offer an economic, high-power/single-source alternative to LPP EUV sources. Should FELs become the preferred next-generation EUV source, the choice of FEL emission architecture will greatly affect its operational stability and overall capability. A near-term industrialized FEL is expected to utilize one of the following three existing emission architectures: (1) self-amplified spontaneous emission, (2) regenerative amplifier, or (3) self-seeding. Model accelerator parameters are put forward to evaluate the impact of emission architecture on FEL output. Then, variations in the parameter space are applied to assess the potential impact to lithography operations, thereby establishing component sensitivity. The operating range of various accelerator components is discussed based on current accelerator performance demonstrated at various scientific user facilities. Finally, comparison of the performance between the model accelerator parameters and the variation in parameter space provides a means to evaluate the potential emission architectures. A scorecard is presented to facilitate this evaluation and provides a framework for future FEL design and enablement for EUVL applications.
Free Electron coherent sources: From microwave to X-rays
NASA Astrophysics Data System (ADS)
Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio
2018-04-01
The term Free Electron Laser (FEL) will be used, in this paper, to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, unbound from atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum from microwaves to X-rays, and compares the analogies with conventional laser sources. We develop a point of view that allows a unified analytical treatment of these devices through the introduction of appropriate global variables (e.g. gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding a very effective way of determining the relevant design parameters. The paper also looks at more speculative aspects of FEL physics, including the relevance of quantum effects in the lasing process.
Towards the theory of pollinator-mediated gene flow.
Cresswell, James E
2003-01-01
I present a new exposition of a model of gene flow by animal-mediated pollination between a source population and a sink population. The model's parameters describe two elements: (i) the expected portion of the source's paternity that extends to the sink population; and (ii) the dilution of this portion by within-sink pollinations. The model is termed the portion-dilution model (PDM). The PDM is a parametric restatement of the conventional view of animal-mediated pollination. In principle, it can be applied to plant species in general. I formulate a theoretical value of the portion parameter that maximizes gene flow and prescribe this as a benchmark against which to judge the performance of real systems. Existing foraging theory can be used in solving part of the PDM, but a theory for source-to-sink transitions by pollinators is currently elusive. PMID:12831465
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.
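The following toy sketch illustrates the MSE criterion for an instantaneous two-channel mixture; the linear "associator" and all signals are synthetic stand-ins for the neural associator and real audio-visual data.

```python
import numpy as np

rng = np.random.default_rng(1)

def associator(signal, A):
    """Stand-in for the trained audio-visual associator: maps acoustic
    samples to visual lip parameters (a linear map A, for illustration)."""
    return signal[:, None] @ A

def objective(w, X, A, v_target):
    """MSE between lip parameters predicted from s = w @ X and the
    observed lip parameters of the relevant speaker."""
    v_est = associator(w @ X, A)
    return np.mean((v_est - v_target) ** 2)

# toy instantaneous mixture of a "speech" source and an interferer
S = rng.normal(size=(2, 400))
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S
A = np.array([[0.5, -0.3, 0.8]])
v_target = associator(S[0], A)        # visual params of the relevant source

w = np.array([0.5, 0.5])
for _ in range(300):                  # numerical-gradient descent (sketch only)
    eps = 1e-5
    grad = np.array([(objective(w + eps * np.eye(2)[i], X, A, v_target)
                      - objective(w, X, A, v_target)) / eps
                     for i in range(2)])
    w -= 0.2 * grad
print(objective(w, X, A, v_target))   # should approach zero
```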
Volcanic eruption source parameters from active and passive microwave sensors
NASA Astrophysics Data System (ADS)
Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi
2016-04-01
It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). Source parameters of main interest are the top altitude of the volcanic plume, the flux of the mass ejected at the emission source, which is strictly related to the cloud top altitude, the distribution of volcanic mass concentration along the vertical column, as well as the duration of the eruption and the erupted volume. Usually, the combination of a-posteriori field and numerical studies allows constraining the eruption source parameters for a given volcanic event, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect, track and provide estimates of the concentration content and the prevailing size of the particles propagating within ash clouds up to several thousands of kilometres from the source, as well as to check back, a posteriori, the accuracy of the VTDM outputs, thus testing the initial choice made for the source parameters. Acoustic waves (infrasound) and fixed-scan microwave radar (Voldorad) have also been used to infer source parameters. In this work we focus on the role of sensors operating at microwave wavelengths as complementary tools for real-time estimation of source parameters. Microwaves benefit from day-and-night operability and a relatively negligible sensitivity to the presence of non-precipitating weather clouds, at the cost of limited coverage and coarser spatial resolution compared with infrared sensors. Thanks to these advantages, the products from microwave sensors are expected to be sensitive to the whole path traversed along the tephra cloud, making microwaves particularly appealing for estimates close to the volcanic emission source, where the cloud optical thickness is expected to be large enough to saturate the infrared sensor receiver, defeating brightness temperature difference methods for ash cloud identification. In this light, case studies at Eyjafjallajökull (Iceland), Etna (Italy) and Calbuco (Chile), on 5-10 May 2010, 23 November 2013 and 23 April 2015, respectively, are analysed in terms of source parameter estimates (mainly cloud top height and mass flux rate) from ground-based microwave weather radar (9.6 GHz) and satellite low-Earth-orbit microwave radiometers (50-183 GHz). Special attention is given to the advantages and limitations of microwave-derived products with respect to more conventional tools.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. In view of these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data fidelity term plus a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and regularized solution norms. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated the ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that the algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of the algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, the authors demonstrated that bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
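A much-simplified stand-in for the adaptive parameter choice is sketched below: the regularization parameter is updated from the residual and solution norms alone, with no knowledge of the noise level. The balancing-type update shown is not the paper's model-function construction, only a sketch of the same ingredients.

```python
import numpy as np

def tikhonov_solve(K, y, alpha):
    """Regularized least squares: min ||K x - y||^2 + alpha ||x||^2."""
    p = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(p), K.T @ y)

def adaptive_alpha(K, y, alpha0=1.0, n_iter=30):
    """Iteratively update the regularization parameter from the residual
    and solution norms, in the spirit of (but much simpler than) the
    model-function approach: no noise level is required."""
    alpha = alpha0
    for _ in range(n_iter):
        x = tikhonov_solve(K, y, alpha)
        r2 = np.sum((K @ x - y) ** 2)   # residual norm squared
        x2 = np.sum(x ** 2)             # solution norm squared
        alpha = r2 / (x2 + 1e-12)       # balance the two terms
    return alpha, x
```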
On the scale dependence of earthquake stress drop
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Tinti, Elisa; Cirella, Antonella
2016-10-01
We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension and analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
NASA Astrophysics Data System (ADS)
D'Amico, Sebastiano; Akinci, Aybige; Pischiutta, Marta
2018-07-01
In this paper we characterize the high-frequency (1.0-10 Hz) seismic wave crustal attenuation and the source excitation in the Sicily Channel and surrounding regions using background seismicity from a weak-motion database. The data set includes 15 995 waveforms from earthquakes with local magnitudes ranging from 2.0 to 4.5 recorded between 2006 and 2012. The observed and predicted ground motions from the weak-motion data are evaluated in several narrow frequency bands from 0.25 to 20.0 Hz. The filtered observed peaks are regressed to specify a proper functional form for the regional attenuation, excitation and site-specific terms separately. The results are then used to calibrate effective theoretical attenuation and source excitation models using random vibration theory. In the log-log domain, the regional seismic wave attenuation and the geometrical spreading coefficient are modelled together. The geometrical spreading coefficient, g(r), is modelled with a bilinear piecewise functional form, g(r) ∝ r^-1.0 at short distances (r < 50 km) and g(r) ∝ r^-0.8 at larger distances (r > 50 km). A frequency-dependent quality factor, the inverse of the seismic attenuation parameter, Q(f) = 160(f/f_ref)^0.35 (where f_ref = 1.0 Hz), is combined with the geometrical spreading. The source excitation terms are defined at a selected reference distance with a magnitude-independent roll-off spectral parameter, κ = 0.04 s, and with a Brune stress drop parameter increasing with moment magnitude, from Δσ = 2 MPa for Mw = 2.0 to Δσ = 13 MPa for Mw = 4.5. For events Mw ≤ 4.5 (Mwmax = 4.5 being the largest available in the data set) the stress parameters are obtained by correlating the empirical excitation source spectra with the Brune spectral model as a function of magnitude. For larger magnitudes (Mw > 4.5), outside the range available in the calibration data set, we extrapolate our results by calibrating the stress parameters of the Brune source spectrum against the Bindi et al. (2011) ground-motion prediction equation selected as a reference model (hereafter also ITA10). Finally, the weak-motion-based model parameters are used through a stochastic approach to predict a set of region-specific spectral ground-motion parameters (peak ground acceleration, peak ground velocity, and 0.3 and 1.0 Hz spectral acceleration) for a generic rock site as a function of distance between 10 and 250 km and magnitude between M 2.0 and M 7.0.
Restoration of the covariant gauge α in the initial field of gravity in de Sitter spacetime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, Lee Yen; Yan, Chew Xiao
2014-03-05
The gravitational field generated by a mass term and the initial surface through the covariant retarded Green's function for linearized gravity in de Sitter spacetime was studied recently [4, 5] with the covariant gauges set to β = 2/3 and α = 5/3. In this paper we extend that work to restore the gauge parameter α in the field coming from the initial data, using the method of shifting the parameter. The α terms in the initial field cancel exactly with those coming from the source term. Consequently, the correct field configuration, with two equal mass points moving on their geodesics, one located at the North pole and the other at the South pole, is reproduced on the whole manifold of de Sitter spacetime.
Laser magnetic resonance in supersonic plasmas - The rotational spectrum of SH(+)
NASA Technical Reports Server (NTRS)
Hovde, David C.; Saykally, Richard J.
1987-01-01
The rotational spectrum of X³Σ⁻ SH⁺ in the v = 0 and v = 1 states was measured by laser magnetic resonance. Rotationally cold (Tr = 30 K), vibrationally excited (Tv = 3000 K) ions were generated in a corona-excited supersonic expansion. The use of this source to identify ion signals is described. Improved molecular parameters were obtained; term values are presented from which astrophysically important transitions may be calculated. Accurate hyperfine parameters for both vibrational levels were determined and the vibrational dependence of the Fermi contact interaction was resolved. The hyperfine parameters agree well with recent many-body perturbation theory calculations.
Natural convection in symmetrically heated vertical parallel plates with discrete heat sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manca, O.; Nardini, S.; Naso, V.
Laminar air natural convection in a symmetrically heated vertical channel with uniform flush-mounted discrete heat sources has been experimentally investigated. The effects of the heated strips' location and number are pointed out in terms of the maximum wall temperatures. A flow visualization in the entrance region of the channel was carried out, and air temperatures and velocities in two cross sections were measured. Dimensionless local heat transfer coefficients have been evaluated, and monomial correlations among the relevant parameters have been derived in the local Rayleigh number range 10-10^6. The channel Nusselt number has been correlated in a polynomial form in terms of the channel Rayleigh number.
Correlation between Ti source/drain contact and performance of InGaZnO-based thin film transistors
NASA Astrophysics Data System (ADS)
Choi, Kwang-Hyuk; Kim, Han-Ki
2013-02-01
Ti contact properties and their electrical contribution to an amorphous InGaZnO (a-IGZO) semiconductor-based thin film transistor (TFT) were investigated in terms of chemical, structural, and electrical considerations. TFT device parameters were quantitatively studied by the transmission line method. By comparing various a-IGZO TFT parameters for different Ag and Ti source/drain (S/D) electrodes, Ti S/D contact with the a-IGZO channel was found to lead to a negative shift in VT (ΔVT = -0.52 V). This resulted in a higher saturation mobility (8.48 cm^2/V·s) of the a-IGZO TFTs due to an effective interfacial reaction between Ti and the a-IGZO semiconducting layer. Based on transmission electron microscopy, X-ray photoelectron depth-profile analyses, and numerical calculation of TFT parameters, we suggest a possible Ti contact mechanism on semiconducting a-IGZO channel layers for TFTs.
Excitation of Love waves in a thin film layer by a line source.
NASA Technical Reports Server (NTRS)
Tuan, H.-S.; Ponamgi, S. R.
1972-01-01
The excitation of a Love surface wave guided by a thin film layer deposited on a semi-infinite substrate is studied in this paper. Both the thin film and the substrate are considered to be elastically isotropic. Amplitudes of the surface wave in the thin film region and the substrate are found in terms of the strength of a line source vibrating in a direction transverse to the propagating wave. In addition to the surface wave, the bulk shear wave excited by the source is also studied. Analytical expressions are obtained for the bulk wave amplitude as a function of the direction of propagation, the acoustic powers transported by the surface and bulk waves, and the efficiency of surface wave excitation. A numerical example shows how the bulk wave radiation pattern depends upon the source frequency, the film thickness and other important parameters of the problem. The efficiency of surface wave excitation is also calculated for various parameter values.
NASA Astrophysics Data System (ADS)
Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.
2001-06-01
We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets and compared them to one another in terms of efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
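A generic simulated-annealing kernel of the kind described is sketched below; the cooling schedule, step size, and toy misfit function are illustrative choices, not those of the study.

```python
import numpy as np

def simulated_annealing(misfit, x0, bounds, n_iter=5000, t0=1.0, seed=0):
    """Generic simulated-annealing search over source-geometry parameters
    (e.g. dike/fault position, dip, opening). Candidate steps that raise
    the misfit are accepted with probability exp(-d_misfit / T), which
    lets the walk escape local minima of the misfit surface."""
    rng = np.random.default_rng(seed)
    x, fx = np.array(x0, float), misfit(x0)
    best_x, best_f = x.copy(), fx
    lo, hi = np.array(bounds).T
    for k in range(n_iter):
        T = t0 * (1.0 - k / n_iter) + 1e-6                    # linear cooling
        cand = np.clip(x + rng.normal(scale=0.05 * (hi - lo)), lo, hi)
        fc = misfit(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):  # Metropolis rule
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# toy misfit with local minima; the global minimum is near (1, -1)
f = lambda p: (p[0] - 1) ** 2 + (p[1] + 1) ** 2 + 0.5 * np.sin(4 * p[0]) ** 2
print(simulated_annealing(f, [0.0, 0.0], [(-3, 3), (-3, 3)]))
```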
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameter and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically the parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of probabilistically modeled parameters to be reduced from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst; hence, they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests demonstrated the small effect that the selection of a particular metaheuristic and the variations in their operational parameters have on this optimization problem.
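As a pointer to how such multicompare tests are run in practice, the sketch below applies the nonparametric Friedman test to matched performance samples; the error values are invented for illustration, and the abstract does not state which specific test battery the authors used.

```python
from scipy.stats import friedmanchisquare

# localization errors (mm) of four metaheuristics on the same set of
# simulated EEG trials -- values are illustrative only
sa  = [7.1, 6.8, 7.4, 6.9, 7.2, 7.0]
ga  = [7.3, 7.0, 7.5, 7.1, 7.4, 7.2]
pso = [8.9, 8.5, 9.2, 8.8, 9.0, 8.7]
de  = [9.1, 8.8, 9.4, 8.9, 9.3, 9.0]

# Friedman omnibus test: are the matched samples drawn from one distribution?
stat, p = friedmanchisquare(sa, ga, pso, de)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
# if p < 0.05, follow up with pairwise post hoc comparisons (e.g. Nemenyi)
```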
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, S R; Dreger, D S; Phillips, W S
2008-07-16
Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimated from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods. For example, in the San Francisco Bay Area, which has high attenuation relative to the rest of its region, Q is over-estimated by ~30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where the output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with estimated true source magnitude. The coda-source better represents the source spectra compared to the estimated magnitude, which could be the cause of the scatter. The similarity in the source terms between the coda-source and MDAC-linked methods shows that the latter method may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent. The path Q^-1 values are very similar between the coda-source and amplitude ratio methods, except for small differences in the Daxing'anling Mountains in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to site effects as the cause of the difference.
Pecha, Petr; Šmídl, Václav
2016-11-01
A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast, and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method performs a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. The accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation are studied. First, a twin experiment generating noiseless simulated "artificial" observations is used to verify the minimisation algorithm. Second, the impact of measurement noise on the re-estimated source release rate is examined. In addition, the presented method can serve as a starting point for more advanced statistical techniques using, e.g., importance sampling.
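A twin-experiment sketch of the nonlinear least-squares re-estimation step is given below, using a bare-bones ground-level Gaussian plume in place of the SGPM; the wind speed, dispersion power laws, sensor layout, and noise level are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def plume(params, xy):
    """Ground-level Gaussian plume (ground release, full reflection).
    params = (log release rate, wind direction in degrees); the wind speed
    and the dispersion power laws are fixed, purely for illustration."""
    log_q, theta = params
    q, u = np.exp(log_q), 5.0
    c, s = np.cos(np.radians(theta)), np.sin(np.radians(theta))
    xd = np.maximum(xy[:, 0] * c + xy[:, 1] * s, 1.0)   # downwind distance
    yd = -xy[:, 0] * s + xy[:, 1] * c                   # crosswind distance
    sig_y, sig_z = 0.08 * xd ** 0.9, 0.06 * xd ** 0.9
    return q / (np.pi * u * sig_y * sig_z) * np.exp(-0.5 * (yd / sig_y) ** 2)

# twin experiment: true release rate 2.0 kg/s, wind direction 30 degrees
rng = np.random.default_rng(3)
xy = rng.uniform(200, 2000, size=(25, 2))               # sensor positions (m)
obs = plume((np.log(2.0), 30.0), xy) * (1 + 0.05 * rng.normal(size=25))

fit = least_squares(lambda p: plume(p, xy) - obs, x0=(0.0, 10.0))
print(np.exp(fit.x[0]), fit.x[1])   # re-estimated rate and wind direction
```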
The pyroelectric properties of TGS for application in infrared detection
NASA Technical Reports Server (NTRS)
Kroes, R. L.; Reiss, D.
1981-01-01
The pyroelectric property of triglycine sulfate and its application in the detection of infrared radiation are described. The detectivities of pyroelectric detectors and other types of infrared detectors are compared. The thermal response of a pyroelectric detector element and the resulting electrical response are derived in terms of the material parameters. The noise sources which limit the sensitivity of pyroelectric detectors are described, and the noise equivalent power for each noise source is given as a function of frequency and detector area.
1983-11-01
successfully. ... in terms of initial signal power. An active sensor must be excited externally. Such a sensor receives its power from an external source and merely modulates... electrons in the material to gain enough energy to be emitted. The voltage source causes a positive potential to be felt on the collector, thus causing the...
Relativistic effects in local inertial frames including parametrized-post-Newtonian effects
NASA Astrophysics Data System (ADS)
Shahid-Saless, Bahman; Ashby, Neil
1988-09-01
We use the concept of a generalized Fermi frame to describe relativistic effects, due to local and distant sources of gravitation, on a body placed in a local inertial frame of reference. In particular, we consider a model of two spherically symmetric gravitating point sources moving in circular orbits around a common barycenter, where one of the bodies is chosen to be the local body and the other the distant one. This is done using the slow-motion, weak-field approximation and including four of the parametrized-post-Newtonian (PPN) parameters. The position of the classical center of mass must be modified when the PPN parameter ζ2 is included. We show that the main relativistic effect on a local satellite is described by the Schwarzschild field of the local body and the nonlinear term corresponding to the self-interaction of the local source with itself. There are also much smaller terms proportional, respectively, to the product of the potentials of the local and distant bodies and to the distant body's self-interaction. The spatial axes of the local frame undergo geodetic precession. In addition, there is an acceleration of the order of 10^-11 cm s^-2 that vanishes in the case of general relativity, which is discussed in detail.
Modeling the contribution of point sources and non-point sources to Thachin River water pollution.
Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth
2009-08-15
Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of the simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
NASA Astrophysics Data System (ADS)
Tape, C.; Alvizuri, C. R.; Silwal, V.; Tape, W.
2017-12-01
When considered as a point source, a seismic source can be characterized in terms of its origin time, hypocenter, moment tensor, and source time function. The seismologist's task is to estimate these parameters--and their uncertainties--from three-component ground motion recorded at irregularly spaced stations. We will focus on one portion of this problem: the estimation of the moment tensor and its uncertainties. With magnitude estimated separately, we are left with five parameters describing the normalized moment tensor. A lune of normalized eigenvalue triples can be used to visualize the two parameters (lune longitude and lune latitude) describing the source type, while the conventional strike, dip, and rake angles can be used to characterize the orientation. Slight modifications of these five parameters lead to a uniform parameterization of moment tensors--uniform in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. For a moment tensor m that we have inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in our inference of m. The calculation of P(V) requires knowing both the probability P(w) and the fractional volume V(w) of the set of moment tensors within a given angular radius w of m. We apply this approach to several different data sets, including nuclear explosions from the Nevada Test Site, volcanic events from Uturuncu (Bolivia), and earthquakes. Several challenges remain: choosing an appropriate misfit function, handling time shifts between data and synthetic waveforms, and extending the uncertainty estimation to include more source parameters (e.g., hypocenter and source time function).
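A sketch of drawing candidate solutions from the uniform parameterization mentioned above is given below; the coordinate ranges follow the published uniform moment-tensor parameterization (lune-based source-type coordinates plus orientation angles), and the mapping from these coordinates to eigenvalue triples and full tensors is omitted.

```python
import numpy as np

def sample_uniform_mt_params(n, seed=0):
    """Draw normalized moment-tensor parameters uniformly in the sense
    described above: equal coordinate volumes correspond to equal volumes
    of moment tensors. Source type is given by lune-based coordinates
    (v, w); orientation by strike kappa, slip angle sigma, and h = cos(dip)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-1 / 3, 1 / 3, n)                 # lune-longitude coordinate
    w = rng.uniform(-3 * np.pi / 8, 3 * np.pi / 8, n) # lune-latitude coordinate
    kappa = rng.uniform(0.0, 360.0, n)                # strike (deg)
    sigma = rng.uniform(-90.0, 90.0, n)               # slip/rake angle (deg)
    h = rng.uniform(0.0, 1.0, n)                      # cos(dip)
    return v, w, kappa, sigma, h

v, w, kappa, sigma, h = sample_uniform_mt_params(1000)
print(v.min(), v.max(), kappa.mean())   # quick sanity check of the ranges
```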
40 CFR 63.11395 - What are the standards and compliance requirements for existing sources?
Code of Federal Regulations, 2013 CFR
2013-07-01
... routine and long-term maintenance) and continuous monitoring system. (4) A list of operating parameters... polymerization process equipment and monomer recovery process equipment and convey the collected gas stream.... (2) 0.05 lb/hr of AN from the control device for monomer recovery process equipment. (3) If you do...
40 CFR 63.11395 - What are the standards and compliance requirements for existing sources?
Code of Federal Regulations, 2012 CFR
2012-07-01
... routine and long-term maintenance) and continuous monitoring system. (4) A list of operating parameters... polymerization process equipment and monomer recovery process equipment and convey the collected gas stream.... (2) 0.05 lb/hr of AN from the control device for monomer recovery process equipment. (3) If you do...
40 CFR 63.11395 - What are the standards and compliance requirements for existing sources?
Code of Federal Regulations, 2014 CFR
2014-07-01
... routine and long-term maintenance) and continuous monitoring system. (4) A list of operating parameters... polymerization process equipment and monomer recovery process equipment and convey the collected gas stream.... (2) 0.05 lb/hr of AN from the control device for monomer recovery process equipment. (3) If you do...
Reducing DoD Fossil-Fuel Dependence
2006-09-01
Gigawatt-hour (GWh): the amount of energy available from one gigawatt in one hour. HFCS: high-fructose corn syrup. HHV: high-heat value. HICE: hydrogen internal combustion engine. ... In particular, alternate fuels and energy sources are to be assessed in terms of multiple parameters, to include (but not limited to) stability, high & low...
Coast of California Storm and Tidal Waves Study. Southern California Coastal Processes Data Summary,
1986-02-01
... distribution of tracers injected on the beach. The suspended load was obtained from in situ measurements of the water column in the surf zone (Zampol and ...) ... wind waves. 3.2.2 Wave Climate. There are relatively few in situ long-term measurements of the deep ocean (i.e. unaffected by the Channel Islands and ...) ... climate parameters and were not intended for that purpose. In the literature reviewed, the principal source of long-term in situ measurements is the ...
Regularized Semiparametric Estimation for Ordinary Differential Equations
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-01-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
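The sketch below illustrates the flavor of the approach on the simplest possible case, a scalar decay model x' = -θ(t)x, with a quadratic difference penalty standing in for the paper's regularizer; the grid, noise level, and penalty weight are invented.

```python
import numpy as np

# estimate a time-varying decay rate theta(t) in x' = -theta(t) x from
# noisy observations, penalizing changes in theta so that it stays close
# to piecewise constant between transitions
rng = np.random.default_rng(2)
t = np.linspace(0, 4, 81)
theta_true = np.where(t < 2, 0.5, 1.5)              # parameter shifts at t = 2
x = np.exp(-np.cumsum(np.r_[0, theta_true[1:] * np.diff(t)]))
x_obs = x * (1 + 0.005 * rng.normal(size=t.size))

dxdt = np.gradient(x_obs, t)                        # crude derivative estimate
# least squares in theta_k: sum_k (dxdt_k + theta_k x_k)^2 + lam ||D theta||^2
lam = 0.1
n = t.size
D = np.diff(np.eye(n), axis=0)                      # first-difference operator
A = np.diag(x_obs ** 2) + lam * D.T @ D
b = -x_obs * dxdt
theta_hat = np.linalg.solve(A, b)
print(theta_hat[10], theta_hat[70])                 # approximately 0.5 and 1.5
```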
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
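The BPT ingredient of the long-term model can be computed from the inverse Gaussian distribution; the sketch below evaluates a conditional rupture probability for a hypothetical fault, with the mean recurrence interval, aperiodicity, and elapsed time invented for illustration.

```python
from scipy.stats import invgauss

def bpt_conditional_prob(mean_ri, alpha, t_elapsed, dt):
    """Conditional rupture probability in the next dt years, given
    t_elapsed years since the last rupture, under a Brownian Passage
    Time (inverse Gaussian) renewal model with mean recurrence interval
    mean_ri and aperiodicity alpha."""
    # BPT(mean, alpha) equals scipy's inverse Gaussian with
    # scale = mean/alpha^2 and shape parameter mu = alpha^2
    dist = invgauss(mu=alpha ** 2, scale=mean_ri / alpha ** 2)
    f_t, f_tdt = dist.cdf(t_elapsed), dist.cdf(t_elapsed + dt)
    return (f_tdt - f_t) / (1.0 - f_t)

# hypothetical fault: 150-yr mean recurrence, alpha = 0.5, 120 yr elapsed
print(bpt_conditional_prob(150.0, 0.5, 120.0, 50.0))
```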
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about ±20 percent.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped-parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described, and the simulation of a compressor/fan stage is then discussed to show the approach in detail.
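A minimal sketch of the distributed-component idea is given below: a stage's map-derived total pressure and temperature rise are spread as source terms over the grid points it occupies. The uniform axial weighting and all numbers are illustrative, not LAPIN's actual implementation.

```python
import numpy as np

def stage_source_terms(x, x0, x1, dp_total, dT_total):
    """Distribute a compressor stage's total pressure and temperature
    rise (e.g. from a performance-map lookup) over the grid points
    spanning the stage, returning per-cell momentum- and energy-equation
    source terms for the quasi-one-dimensional equations."""
    in_stage = (x >= x0) & (x <= x1)
    w = np.zeros_like(x)
    w[in_stage] = 1.0
    w /= w.sum()                          # unit-sum axial weighting
    return dp_total * w, dT_total * w

x = np.linspace(0.0, 2.0, 101)            # axial grid (m)
s_mom, s_energy = stage_source_terms(x, 0.8, 1.2, dp_total=40e3, dT_total=35.0)
print(s_mom.sum(), s_energy.sum())        # recovers the stage totals
```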
Gingival Retraction Methods: A Systematic Review.
Tabassum, Sadia; Adnan, Samira; Khan, Farhan Raza
2017-12-01
The aim of this systematic review was to assess gingival retraction methods in terms of the amount of gingival retraction achieved and the changes observed in various clinical parameters: gingival index (GI), plaque index (PI), probing depth (PD), and attachment loss (AL). Data sources included three major databases, PubMed, CINAHL Plus (EBSCO), and Cochrane, along with a hand search. The search was made using the key terms in different permutations of gingival retraction* AND displacement method* OR technique* OR agents OR material* OR medicament*. The initial search yielded 145 articles, which were narrowed down to 10 articles using strict eligibility criteria: clinical trials or experimental studies on gingival retraction methods, with the amount of tooth structure gained and assessment of clinical parameters as the outcomes, conducted on human permanent teeth only. Gingival retraction was measured in 6/10 studies, whereas the clinical parameters were assessed in 5/10 studies. The total number of teeth assessed in the 10 included studies was 400. The most common method used for gingival retraction was chemomechanical. The results were heterogeneous with regard to the outcome variables. No method appeared to be significantly superior to the others in terms of gingival retraction achieved. Clinical parameters were not significantly affected by the gingival retraction method.
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted, considering three DEM resolutions, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes in tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst the 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution; hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality numbers are more sensitive to the complexity of the earthquake source characterization than to the grid resolution. Thus, uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties in tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From a tsunami risk management perspective, this indeed creates big data, which are useful for making effective and robust decisions.
Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake
NASA Astrophysics Data System (ADS)
Muller, S. J.; Gerber, S.
2013-12-01
The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake, we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though the results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections for the land carbon sink.
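This correlation screening is straightforward to emulate. The sketch below (synthetic sensitivity arrays, invented names, not the LM3V analysis itself) ranks observables by how well their parameter-sensitivity patterns track the sensitivity of the long-term target:

```python
import numpy as np

# Hypothetical sensitivity arrays: S_obs[i, j] is the sensitivity of
# observable j (1970-2010 window) to parameter i; s_target[i] is the
# sensitivity of cumulative C-uptake by 2100 to parameter i.
rng = np.random.default_rng(0)
n_params, n_obs = 50, 100
S_obs = rng.normal(size=(n_params, n_obs))
s_target = 0.8 * S_obs[:, 3] + rng.normal(scale=0.3, size=n_params)

# Correlate each observable's sensitivity pattern with the target's;
# observables with |r| near 1 are candidate constraints on C-uptake.
r = np.array([np.corrcoef(S_obs[:, j], s_target)[0, 1] for j in range(n_obs)])
best = np.argsort(-np.abs(r))[:5]
print("most constraining observables:", best, r[best])
```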
Developing population models with data from marked individuals
Ryu, Hae Yeong; Shoemaker, Kevin T.; Kneip, Eva; Pidgeon, Anna; Heglund, Patricia; Bateman, Brooke; Thogmartin, Wayne E.; Akçakaya, Reşit
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
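The kind of stochastic, matrix-based model the method parameterizes can be sketched compactly. The vital rates, variabilities, and ceiling below are illustrative placeholders, not the MAPS-derived estimates:

```python
import numpy as np

# Minimal two-stage (juvenile/adult) stochastic matrix model sketch.
# Illustrative values only, not MAPS-derived estimates.
rng = np.random.default_rng(1)
s_j, s_a, f = 0.3, 0.55, 1.8      # mean survival rates and fecundity
sd_s, sd_f = 0.05, 0.3            # natural temporal variability
K = 500.0                         # density dependence via a ceiling

n = np.array([50.0, 100.0])       # initial juveniles, adults
for t in range(50):
    # draw this year's vital rates (environmental stochasticity)
    sj = np.clip(rng.normal(s_j, sd_s), 0, 1)
    sa = np.clip(rng.normal(s_a, sd_s), 0, 1)
    fec = max(rng.normal(f, sd_f), 0.0)
    A = np.array([[0.0, fec],
                  [sj,  sa]])     # projection matrix for this year
    n = A @ n
    n *= min(1.0, K / n.sum())    # simple ceiling density dependence
print("population after 50 yr:", n.sum())
```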
Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter
2009-01-01
A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions), as well as uncertainty in the number of major pollution sources and identifiability conditions, has been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models, while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in the assessment of source-specific health effects, is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because our estimation of the health effects parameters incorporated the parameter uncertainty in estimated source contributions that previous studies had ignored. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
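The decomposition step (multivariate receptor modeling) can be illustrated with a plain nonnegative matrix factorization. The sketch below uses Lee and Seung multiplicative updates on synthetic data and fixes the number of sources k, whereas the paper treats k and the identifiability conditions as uncertain and reports posterior model probabilities:

```python
import numpy as np

# Toy receptor model: X (days x species) ~ G (days x k) @ F (k x species),
# with G the source contributions and F the source profiles, all nonnegative.
rng = np.random.default_rng(2)
days, species, k = 200, 12, 3
F_true = rng.uniform(size=(k, species))
G_true = rng.gamma(2.0, 1.0, size=(days, k))
X = G_true @ F_true + rng.normal(scale=0.01, size=(days, species)).clip(0)

# Lee-Seung multiplicative updates for least-squares NMF
G = rng.uniform(size=(days, k))
F = rng.uniform(size=(k, species))
for _ in range(500):
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
print("relative reconstruction error:",
      np.linalg.norm(X - G @ F) / np.linalg.norm(X))
```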
Physics design point for a 1 MW fusion neutron source
NASA Astrophysics Data System (ADS)
Woodruff, Simon; Melnik, Paul; Sieck, Paul; Stuber, James; Romero-Talamas, Carlos; O'Bryan, John; Miller, Ronald
2016-10-01
We are developing a design point for a spheromak experiment heated by adiabatic compression for use as a compact neutron source. We utilize the CORSICA and NIMROD MHD codes as well as analytic modeling to assess a concept with target parameters R_0 = 0.5 m, R_f = 0.17 m, T_0 = 1 keV, T_f = 8 keV, n_0 = 2×10^20 m^-3 and n_f = 5×10^21 m^-3, with radial convergence C = R_0/R_f = 3. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. We present results of simulations of magnetic compression using the NIMROD code to examine the role of rotation in the stability and confinement of the spheromak as it is compressed. Supported by DARPA Grant N66001-14-1-4044 and IAEA CRP on Compact Fusion Neutron Sources.
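As a rough plausibility check (ours, not the abstract's), the quoted targets are close to ideal adiabatic scaling for three-dimensional compression at convergence C:

```python
# Consistency check of the quoted targets against ideal adiabatic scaling:
# for 3-D radial compression by convergence C, n_f = C**3 * n_0, and
# T_f = T_0 * (n_f/n_0)**(gamma - 1) with gamma = 5/3.
C, n0, T0 = 3.0, 2e20, 1.0          # convergence, m^-3, keV
nf = C**3 * n0                      # 5.4e21 m^-3, close to the quoted 5e21
Tf = T0 * (nf / n0) ** (2.0 / 3.0)  # 9 keV, close to the quoted 8 keV
print(nf, Tf)
```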
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
Evaluation of actuator energy storage and power sources for spacecraft applications
NASA Technical Reports Server (NTRS)
Simon, William E.; Young, Fred M.
1993-01-01
The objective of this evaluation is to determine an optimum energy storage/power source combination for electrical actuation systems for existing (Solid Rocket Booster (SRB), Shuttle) and future (Advanced Launch System (ALS), Shuttle Derivative) vehicles. Characteristic of these applications is the requirement for high power pulses (50-200 kW) for short times (milliseconds to seconds), coupled with longer-term base or 'housekeeping' requirements (5-16 kW). Specific study parameters (e.g., weight, volume, etc.) as stated in the proposal and specified in the Statement of Work (SOW) are included.
A New Characterization of the Compton Process in the ULX Spectra
NASA Astrophysics Data System (ADS)
Kobayashi, S.; Nakazawa, K.; Makishima, K.
2015-07-01
Ultra Luminous X-ray sources (ULXs) are unusually luminous point sources located in the arms of spiral galaxies, and are candidates for intermediate mass black holes (Makishima+2000). Their spectra make transitions between power-law shapes (PL state) and convex shapes (disk-like state). The latter state can be explained with either the multi-color disk (MCD)+thermal Comptonization (THC) model or a slim disk model (Watari+2000). We adopt the former modeling, because it generally gives physically more reasonable parameters (Miyawaki+2009). To characterize the ULX spectra in a unified way, we applied the MCD+THC model to several datasets of ULXs obtained by Suzaku, XMM-Newton, and NuSTAR. The model explains all the spectra well, in terms of a cool disk (T_{in}˜0.2 keV) and a cool, thick (T_{e}˜2 keV, τ˜10) corona. The derived parameters can be characterized by two new parameters. One is Q≡ T_{e}/T_{in}, which describes the balance between Compton cooling and gravitational heating of the corona, while the other is f≡ L_{raw}/L_{tot}, namely the directly-visible (without Comptonization) fraction of the MCD luminosity. The PL-state spectra are found to show Q˜10 and f˜0.7, while those of the disk-like state show Q˜3 and f≤0.01. Thus, the two states are clearly separated in terms of Q and f.
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method based on the LES approach is proposed using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach, based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
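A stripped-down version of the inference can be sketched as follows. The exponential source-receptor coefficients stand in for the adjoint-derived relationship, the grid search stands in for a proper sampler, and every name and number below is an assumption of this sketch:

```python
import numpy as np

# Bayesian source estimation with a linear source-receptor relation:
# c_pred = A(x) * q, where A(x) would come from adjoint solutions (here a
# made-up exponential-decay surrogate) and q is the release strength.
rng = np.random.default_rng(3)
sensors = rng.uniform(0, 100, size=(8, 2))        # sensor coordinates (m)
true_src, true_q = np.array([30.0, 60.0]), 5.0

def srr(x, s):  # surrogate source-receptor coefficient (assumption)
    return np.exp(-np.linalg.norm(s - x, axis=-1) / 20.0)

obs = true_q * srr(true_src, sensors) + rng.normal(scale=0.05, size=8)

# Grid posterior over location, profiling out q by least squares
xs = ys = np.linspace(0, 100, 101)
logpost = np.full((101, 101), -np.inf)
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        a = srr(np.array([x, y]), sensors)
        q = max(obs @ a / (a @ a), 0.0)           # best strength at this site
        logpost[i, j] = -0.5 * np.sum((obs - q * a) ** 2) / 0.05**2
i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print("MAP source location:", xs[i], ys[j])
```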
Numerical modeling of materials processing applications of a pulsed cold cathode electron gun
NASA Astrophysics Data System (ADS)
Etcheverry, J. I.; Martínez, O. E.; Mingolo, N.
1998-04-01
A numerical study of the application of a pulsed cold cathode electron gun to materials processing is performed. A simple semiempirical model of the discharge is used, together with backscattering and energy deposition profiles obtained by a Monte Carlo technique, in order to evaluate the energy source term inside the material. The numerical computation of the heat equation with the calculated source term is performed in order to obtain useful information on melting and vaporization thresholds, melted radius and depth, and on the dependence of these variables on processing parameters such as operating pressure, initial voltage of the discharge and cathode-sample distance. Numerical results for stainless steel are presented, which demonstrate the need for several modifications of the experimental design in order to achieve a better efficiency.
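The second stage, solving the heat equation with the calculated source term, can be illustrated in one dimension with an explicit finite-difference scheme; the deposition profile and pulse parameters below are placeholders rather than the Monte Carlo results used in the paper:

```python
import numpy as np

# 1-D explicit finite-difference heat equation with a volumetric source S(z).
nz, dz, dt, steps = 200, 1e-6, 2e-9, 5000      # grid spacing (m), step (s)
k, rho, cp = 15.0, 7900.0, 500.0               # stainless-steel-like values
alpha = k / (rho * cp)
assert alpha * dt / dz**2 < 0.5                # explicit stability criterion

z = np.arange(nz) * dz
S = 1e15 * np.exp(-z / 5e-6)                   # W/m^3, placeholder profile
T = np.full(nz, 300.0)
for n in range(steps):
    pulse_on = n * dt < 1e-6                   # source active for 1 us pulse
    lap = np.zeros(nz)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    T += dt * (alpha * lap + (S / (rho * cp)) * pulse_on)
    T[-1] = 300.0                              # far boundary held at ambient
print("peak surface temperature (K):", T[0])
```

Comparing the computed peak temperature against melting and vaporization points is the kind of threshold estimate the paper extracts as a function of processing parameters.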
NASA Astrophysics Data System (ADS)
Ross, Z. E.; Ben-Zion, Y.; Zhu, L.
2015-02-01
We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ˜0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.
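Bootstrap resampling over stations follows a standard pattern; in the sketch below a trivial stand-in replaces the gCAP inversion and the per-station values are synthetic:

```python
import numpy as np

# Station bootstrap for an inverted source parameter: re-run the inversion
# on resampled station sets and read off percentile intervals. `invert_iso`
# is a stand-in for the gCAP inversion returning the isotropic strength.
rng = np.random.default_rng(4)
station_zeta = rng.normal(0.04, 0.02, size=70)   # per-station values (toy)

def invert_iso(samples):                         # placeholder "inversion"
    return samples.mean()

boot = np.array([invert_iso(rng.choice(station_zeta, size=70, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"zeta = {invert_iso(station_zeta):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# If the interval excludes zero, the explosive isotropic term is significant.
```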
QCD sum rules study of meson-baryon sigma terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erkol, Gueray; Oka, Makoto; Turan, Guersevil
2008-11-01
The pion-baryon sigma terms and the strange-quark condensates of the octet and the decuplet baryons are calculated by employing the method of QCD sum rules. We evaluate the vacuum-to-vacuum transition matrix elements of two baryon interpolating fields in an external isoscalar-scalar field and use a Monte Carlo-based approach to systematically analyze the sum rules and the uncertainties in the results. We extract the ratios of the sigma terms, which have rather high accuracy and minimal dependence on QCD parameters. We discuss the sources of uncertainties and comment on possible strangeness content of the nucleon and the Delta.
Characterization and Remediation of Contaminated Sites: Modeling, Measurement and Assessment
NASA Astrophysics Data System (ADS)
Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.
2008-05-01
The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids (DNAPLs) in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation reduced proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). Similar analyses were performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
Uras, Yusuf; Uysal, Yagmur; Arikan, Tugba Atilan; Kop, Alican; Caliskan, Mustafa
2015-06-01
The aim of this study was to investigate the sources of drinking water for Derebogazi Village, Kahramanmaras Province, Turkey, in terms of hydrogeochemistry, isotope geochemistry, and medical geology. Water samples were obtained from seven different water sources in the area, all of which are located within quartzite units of Paleozoic age, and isotopic analyses of (18)O and (2)H (deuterium) were conducted on the samples. Samples were collected from the region for 1 year. Water quality of the samples was assessed in terms of various water quality parameters, such as temperature, pH, conductivity, alkalinity, trace element concentrations, anion-cation measurements, and metal concentrations, using ion chromatography, inductively coupled plasma (ICP) mass spectrometry, and ICP-optical emission spectrometry techniques. Regional health surveys had revealed that the heights of local people are significantly below the average for the country. In terms of medical geology, the sampled drinking water from the seven sources was deficient in calcium and magnesium ions, which promote bone development. Bone mineral density screening tests were conducted on ten females using dual energy X-ray absorptiometry to investigate possible developmental disorder(s) and potential for mineral loss in the region. Of these ten women, three had T-scores close to the osteoporosis range (T-score < -2.5).
NASA Astrophysics Data System (ADS)
Soulsby, Chris; Birkel, Christian; Geris, Josie; Tetzlaff, Doerthe
2016-04-01
Advances in the use of hydrological tracers and their integration into rainfall-runoff models are facilitating improved quantification of stream water age distributions. This is of fundamental importance to understanding water quality dynamics over both short and long time scales, particularly as water quality parameters are often associated with water sources of markedly different ages. For example, legacy nitrate pollution may reflect deeper waters that have resided in catchments for decades, whilst more dynamic parameters from anthropogenic sources (e.g. P, pathogens, etc.) are mobilised by very young (<1 day) near-surface water sources. It is increasingly recognised that the age distribution of stream water is non-stationary over both the short term (i.e. event dynamics) and the longer term (i.e. in relation to hydroclimatic variability). This provides a crucial context for interpreting water quality time series. Here, we will use longer-term (>5 year), high-resolution (daily) isotope time series in modelling studies for different catchments to show how variable stream water age distributions can result from hydroclimatic variability, and the implications for understanding water quality. We will also use examples from catchments undergoing rapid urbanisation to show how the resulting age distributions of stream water change in a predictable way as a result of modified flow paths. The implications for the management of water quality in urban catchments will be discussed.
Textural Maturity Analysis and Sedimentary Environment Discrimination Based on Grain Shape Data
NASA Astrophysics Data System (ADS)
Tunwal, M.; Mulchrone, K. F.; Meere, P. A.
2017-12-01
Morphological analysis of clastic sedimentary grains is an important source of information regarding the processes involved in their formation, transportation and deposition. However, a standardised approach for quantitative grain shape analysis is generally lacking. In this contribution we report on a study where fully automated image analysis techniques were applied to loose sediment samples collected from glacial, aeolian, beach and fluvial environments. A range of shape parameters are evaluated for their usefulness in the textural characterisation of populations of grains. The utility of grain shape data in ranking the textural maturity of samples within a given sedimentary environment is evaluated. Furthermore, discrimination of sedimentary environment on the basis of grain shape information is explored. The data gathered demonstrate a clear progression in textural maturity in terms of roundness, angularity, irregularity, fractal dimension, convexity, solidity and rectangularity. Textural maturity can be readily categorised using automated grain shape parameter analysis. However, absolute discrimination between different depositional environments on the basis of shape parameters alone is less certain. For example, the aeolian environment is quite distinct, whereas fluvial, glacial and beach samples are inherently variable and tend to overlap each other in terms of textural maturity. This is most likely due to a collection of similar processes and sources operating within these environments. This study strongly demonstrates the merit of quantitative population-based shape parameter analysis of texture and indicates that it can play a key role in characterising both loose and consolidated sediments. This project is funded by the Irish Petroleum Infrastructure Programme (www.pip.ie).
A Laboratory Study of River Discharges into Shallow Seas
NASA Astrophysics Data System (ADS)
Crawford, T. J.; Linden, P. F.
2016-02-01
We present an experimental study that aims to simulate the buoyancy driven coastal currents produced by estuarine freshwater discharges into the ocean. The currents are generated inside a rotating tank filled with saltwater by the continuous release of buoyant freshwater from a source structure located at the fluid surface. The freshwater is discharged horizontally from a finite-depth source, giving rise to significant momentum-flux effects and a non-zero potential vorticity. We perform a parametric study in which we vary the rotation rate, freshwater discharge magnitude, the density difference and the source cross-sectional area. The parameter values are chosen to match the regimes appropriate to the River Rhine and River Elbe when entering the North Sea. Persistent features of an anticyclonic outflow vortex and a propagating boundary current were identified and their properties quantified. We also present a finite potential vorticity, geostrophic model that provides theoretical predictions for the current height, width and velocity as functions of the experimental parameters. The experiments and model are compared with each other in terms of a set of non-dimensional parameters identified in the theoretical analysis of the problem. Good agreement between the model and the experimental data is found. The effect of mixing in the turbulent ocean is also addressed with the addition of an oscillating grid to the experimental setup. The grid generates turbulence in the saltwater ambient that is designed to represent the mixing effects of the wind, tides and bathymetry in a shallow shelf sea. The impact of the addition of turbulence is discussed in terms of the experimental data and through modifications to the theoretical model to include mixing. Once again, good agreement is seen between the experiments and the model.
Modeling Source Water Threshold Exceedances with Extreme Value Theory
NASA Astrophysics Data System (ADS)
Rajagopalan, B.; Samson, C.; Summers, R. S.
2016-12-01
Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular levels of organic matter in surface waters, which are of concern. All of these will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant, added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. Here, the threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as functions of climate and land surface variables, enabling the model to capture temporal nonstationarity. We apply this approach to model threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
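For a stationary special case, fitting a GPD to threshold exceedances takes a few lines with scipy. The data below are synthetic; the paper's nonstationary extension would additionally let the scale and shape depend on climate and land surface covariates:

```python
import numpy as np
from scipy import stats

# Stationary GPD fit to TOC threshold exceedances (synthetic record).
rng = np.random.default_rng(5)
toc = rng.lognormal(mean=1.0, sigma=0.5, size=3000)   # mg/L, synthetic
u = np.quantile(toc, 0.95)                            # exceedance threshold
exc = toc[toc > u] - u

shape, loc, scale = stats.genpareto.fit(exc, floc=0.0)
# Level exceeded on average once per 100 exceedances, above the threshold:
rl = u + stats.genpareto.ppf(1 - 1/100, shape, loc=0.0, scale=scale)
print(f"u={u:.2f}, xi={shape:.3f}, sigma={scale:.3f}, "
      f"return level={rl:.2f} mg/L")
```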
Impact of relativistic effects on cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.
2018-01-01
Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.
Bouza, Marcos; Orejas, Jaime; López-Vidal, Silvia; Pisonero, Jorge; Bordel, Nerea; Pereiro, Rosario; Sanz-Medel, Alfredo
2016-05-23
Atmospheric pressure glow discharges have been widely used in the last decade as ion sources in ambient mass spectrometry analyses. Here, an in-house flowing atmospheric pressure afterglow (FAPA) has been developed as an alternative ion source for differential mobility analysis (DMA). The discharge source parameters (inter-electrode distance, current and helium flow rate) determining the atmospheric plasma characteristics were optimized for DMA spectral simplicity with the highest achievable sensitivity while keeping adequate plasma stability; the FAPA working conditions finally selected were 35 mA, 1 L min(-1) of He and an inter-electrode distance of 8 mm. Room temperature in the DMA proved to be adequate for the coupling and chemical analysis with the FAPA source. Positive and negative ions for different volatile organic compounds were tested and analysed by FAPA-DMA using a Faraday cup as a detector, and proper operation in both modes was possible (without changes in FAPA operational parameters). The FAPA ionization source showed simpler ion mobility spectra with narrower peaks and better, or similar, sensitivity than conventional UV-photoionization for DMA analysis in positive mode. In particular, the negative mode proved to be a promising field of further research for the FAPA ion source coupled to ion mobility, clearly competitive with other more conventional plasmas such as corona discharge.
The Backscattering Enigma in Natural Waters
2006-09-30
... down because the effects of changing particle composition are not adequately understood. Our long-term goal is to better understand the source of ... natural waters. APPROACH: A key focus over the last year has been determining the scattering properties of phytoplankton populations and ... spaces, etc.), and forms the basis of the terrestrial biomass parameter NDVI (normalized difference vegetation index). However, plant cell structures ...
Potential for a Near Term Very Low Energy Antiproton Source at Brookhaven National Laboratory.
1989-04-01
Front matter fragment: Table III-1, Cost Summary; Section IV, Lattice and Stretcher Properties, with Fig. IV-1 (cell lattice functions), Fig. IV-2 (insertion region lattice), Fig. IV-3 (superperiod lattice functions), Table IV-1b (parameters after lattice matching), Table IV-1c (components specification), and Table IV-2 (random multipoles).
Review of clinical brachytherapy uncertainties: Analysis guidelines of GEC-ESTRO and the AAPM
Kirisits, Christian; Rivard, Mark J.; Baltas, Dimos; Ballester, Facundo; De Brabandere, Marisol; van der Laarse, Rob; Niatsetski, Yury; Papagiannis, Panagiotis; Hellebust, Taran Paulsen; Perez-Calatayud, Jose; Tanderup, Kari; Venselaar, Jack L.M.; Siebert, Frank-André
2014-01-01
Background and purpose: A substantial reduction of uncertainties in clinical brachytherapy should result in improved outcomes in terms of increased local control and reduced side effects. Types of uncertainties have to be identified, grouped, and quantified. Methods: A detailed literature review was performed to identify uncertainty components and their relative importance to the combined overall uncertainty. Results: Very few components (e.g., source strength and afterloader timer) are independent of clinical disease site and location of administered dose. While the influence of the medium on dose calculation can be substantial for low-energy sources or non-deeply seated implants, it is of minor importance for high-energy sources in the pelvic region. The level of uncertainties due to target, organ, applicator, and/or source movement in relation to the geometry assumed for treatment planning is highly dependent on fractionation and the level of image-guided adaptive treatment. Most studies to date report their results in a manner that allows no direct reproduction and further comparison with other studies. Often, no distinction is made between variations, uncertainties, and errors or mistakes. The literature review facilitated the drafting of recommendations for uniform uncertainty reporting in clinical BT, which are also provided. The recommended comprehensive uncertainty investigations are key to obtaining a general impression of uncertainties, and may help to identify elements of the brachytherapy treatment process that need improvement in terms of diminishing their dosimetric uncertainties. It is recommended to present data on the analyzed parameters (distance shifts, volume changes, source or applicator position, etc.), and also their influence on absorbed dose for clinically relevant dose parameters (e.g., target parameters such as D90 or OAR doses). Publications on brachytherapy should include a statement of total dose uncertainty for the entire treatment course, taking into account the fractionation schedule and level of image guidance for adaptation. Conclusions: This report on brachytherapy clinical uncertainties represents a working project developed by the Brachytherapy Physics Quality Assurances System (BRAPHYQS) subcommittee to the Physics Committee within GEC-ESTRO. Further, this report has been reviewed and approved by the American Association of Physicists in Medicine. PMID:24299968
Inner Radiation Belt Dynamics and Climatology
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, P. P.; Looper, M. D.
2012-12-01
We present preliminary results of inner belt proton data assimilation using an augmented version of the Selesnick et al. Inner Zone Model (SIZM). By varying modeled physics parameters and solar particle injection parameters to generate many ensembles of the inner belt, then optimizing the ensemble weights according to inner belt observations from SAMPEX/PET at LEO and HEO/DOS at high altitude, we obtain the best-fit state of the inner belt. We need to fully sample the range of solar proton injection sources among the ensemble members to ensure reasonable agreement between the model ensembles and observations. Once this is accomplished, we find the method is fairly robust. We will demonstrate the data assimilation by presenting an extended interval of solar proton injections and losses, illustrating how these short-term dynamics dominate long-term inner belt climatology.
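The ensemble-weight optimization can be viewed, in its simplest form, as a nonnegative least-squares problem; the member-prediction matrix below is synthetic and the real SIZM-based system is far richer:

```python
import numpy as np
from scipy.optimize import nnls

# Ensemble weighting sketch: find nonnegative weights w so the weighted
# ensemble best matches observations, H @ w ~ y. Columns of H are modeled
# observables for each member (varied physics/injection parameters).
rng = np.random.default_rng(6)
n_obs, n_members = 120, 40
H = rng.lognormal(size=(n_obs, n_members))      # member predictions (toy)
w_true = np.zeros(n_members)
w_true[[3, 17]] = [0.7, 0.3]
y = H @ w_true + rng.normal(scale=0.01, size=n_obs)

w, resid = nnls(H, y)
print("dominant members:", np.argsort(-w)[:3], "residual:", resid)
```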
NASA Astrophysics Data System (ADS)
Thrysøe, A. S.; Løiten, M.; Madsen, J.; Naulin, V.; Nielsen, A. H.; Rasmussen, J. Juul
2018-03-01
The conditions in the edge and scrape-off layer (SOL) of magnetically confined plasmas determine the overall performance of the device, and it is of great importance to study and understand the mechanics that drive transport in those regions. If a significant amount of neutral molecules and atoms is present in the edge and SOL regions, those will influence the plasma parameters and thus the plasma confinement. In this paper, we show how neutrals, described by a fluid model, introduce source terms in a plasma drift-fluid model due to inelastic collisions. The resulting source terms are included in a four-field drift-fluid model, and it is shown how an increasing neutral particle density in the edge and SOL regions influences the plasma particle transport across the last-closed-flux-surface. It is found that an appropriate gas puffing rate allows the edge density in the simulation to be self-consistently maintained due to ionization of neutrals in the confined region.
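For concreteness, a generic form of such ionization source terms (the standard form assumed here, not the paper's exact four-field equations) is:

```latex
% Electron-impact ionization of neutrals of density n_n adds particles at
% rate S_n and drains electron energy by the ionization cost per event:
\begin{align}
  S_n &= n_e\, n_n\, \langle \sigma v \rangle_{\mathrm{ion}}, \\
  S_E &= -\,\phi_{\mathrm{ion}}\, S_n,
\end{align}
% where <sigma v>_ion is the ionization rate coefficient and phi_ion the
% effective energy cost per ionization (assumed notation).
```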
Dynamic power balance analysis in JET
NASA Astrophysics Data System (ADS)
Matthews, G. F.; Silburn, S. A.; Challis, C. D.; Eich, T.; Iglesias, D.; King, D.; Sieglin, B.; Contributors, JET
2017-12-01
The full-scale realisation of nuclear fusion as an energy source requires a detailed understanding of power and energy balance in current experimental devices. In this paper, we explore whether a global power balance model, in which some of the calibration factors applied to the source or sink terms are fitted to the data, can provide insight into possible causes of any discrepancies in power and energy balance seen in the JET tokamak. We show that the dynamics in the power balance can only be properly reproduced by including the changes in the thermal stored energy, which therefore provides an additional opportunity to cross-calibrate other terms in the power balance equation. Although the results are inconclusive with respect to the original goal of identifying the source of the discrepancies in the energy balance, we do find that with optimised parameters an extremely good prediction of the total power measured at the outer divertor target can be obtained over a wide range of pulses with time resolution up to ∼25 ms.
Long-Term Variations of the EOP and ICRF2
NASA Technical Reports Server (NTRS)
Zharov, Vladimir; Sazhin, Mikhail; Sementsov, Valerian; Sazhina, Olga
2010-01-01
We analyzed the time series of the coordinates of the ICRF radio sources. We show that part of the radio sources, including the defining sources, shows a significant apparent motion. The stability of the celestial reference frame is provided by a no-net-rotation condition applied to the defining sources. In our case this condition leads to a rotation of the frame axes with time. We calculated the effect of this rotation on the Earth orientation parameters (EOP). In order to improve the stability of the celestial reference frame we suggest a new method for the selection of the defining sources. The method consists of two criteria: the first one we call cosmological and the second one kinematical. It is shown that a subset of the ICRF sources selected according to cosmological criteria provides the most stable reference frame for the next decade.
French, N P; Clancy, D; Davison, H C; Trees, A J
1999-10-01
The transmission and control of Neospora caninum infection in dairy cattle was examined using deterministic and stochastic models. Parameter estimates were derived from recent studies conducted in the UK and from the published literature. Three routes of transmission were considered: maternal vertical transmission with a high probability (0.95), horizontal transmission from infected cattle within the herd, and horizontal transmission from an independent external source. Putative infection via pooled colostrum was used as an example of within-herd horizontal transmission, and the recent finding that the dog is a definitive host of N. caninum supported the inclusion of an external independent source of infection. The predicted amount of horizontal transmission required to maintain infection at levels commonly observed in field studies in the UK and elsewhere was consistent with that observed in studies of post-natal seroconversion (0.85-9.0 per 100 cow-years). A stochastic version of the model was used to simulate the spread of infection in herds of 100 cattle, with a mean infection prevalence similar to that observed in UK studies (around 20%). The distributions of infected and uninfected cattle corresponded closely to Normal distributions, with S.D.s of 6.3 and 7.0, respectively. Control measures were considered by altering birth, death and horizontal transmission parameters. A policy of annual culling of infected cattle very rapidly reduced the prevalence of infection, and was shown to be the most effective method of control in the short term. Not breeding replacements from infected cattle was also effective in the short term, particularly in herds with a higher turnover of cattle. However, the long-term effectiveness of these measures depended on the amount and source of horizontal infection. If the level of within-herd transmission was above a critical threshold, then a combination of reducing within-herd, and blocking external sources of, transmission was required to permanently eliminate infection.
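The deterministic herd model can be condensed to a single prevalence equation combining the three routes; in the sketch below only the vertical transmission probability of 0.95 comes from the abstract, and the other rates are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Prevalence x(t) under vertical transmission (prob v at replacement),
# within-herd horizontal transmission (rate beta), and an external source
# (rate eps). mu is the annual cow replacement rate. Values are illustrative.
v = 0.95                          # vertical transmission probability (paper)
beta, eps, mu = 0.02, 0.01, 0.25  # per-year rates (assumed for the sketch)

def dx(t, x):
    return mu * x * (v - 1.0) + (beta * x + eps) * (1.0 - x)

sol = solve_ivp(dx, (0, 40), [0.2])
print("prevalence after 40 yr:", sol.y[0, -1])
# Culling infected cattle maps to an extra -c*x removal term; setting
# eps = 0 tests whether blocking the external source eliminates infection.
```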
Evaluation of gamma dose effect on PIN photodiode using analytical model
NASA Astrophysics Data System (ADS)
Jafari, H.; Feghhi, S. A. H.; Boorboor, S.
2018-03-01
PIN silicon photodiodes are widely used in applications that may be found in radiation environments, such as space missions, medical imaging and non-destructive testing. Radiation-induced damage in these devices degrades the photodiode parameters. In this work, we have used a new approach to evaluate gamma dose effects on a commercial PIN photodiode (BPX65) based on an analytical model. In this approach, the NIEL parameter has been calculated for gamma rays from a 60Co source by GEANT4. The radiation damage mechanisms have been considered by numerically solving the Poisson and continuity equations with the appropriate boundary conditions, parameters and physical models. Defects caused by radiation in silicon have been formulated in terms of the damage coefficient for the minority carriers' lifetime. The gamma-induced degradation parameters of the silicon PIN photodiode have been analyzed in detail, and the results were compared with experimental measurements, as well as with the results of the ATLAS semiconductor simulator, to verify and parameterize the analytical model calculations. The results showed reasonable agreement for the BPX65 silicon photodiode irradiated by a 60Co gamma source at total doses up to 5 kGy under different reverse voltages.
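The usual parameterization behind a lifetime damage coefficient (assumed here as the standard form, not quoted from the paper) is:

```latex
% Displacement-damage degradation of the minority-carrier lifetime:
\begin{equation}
  \frac{1}{\tau(D)} = \frac{1}{\tau_0} + K_\tau\, D,
\end{equation}
% with tau_0 the pre-irradiation lifetime, D the (NIEL-scaled) damage dose,
% and K_tau the lifetime damage coefficient. The diffusion length then
% shrinks as L = sqrt(D_n * tau(D)), reducing the collected photocurrent.
```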
Time Variations in Forecasts and Occurrences of Large Solar Energetic Particle Events
NASA Astrophysics Data System (ADS)
Kahler, S. W.
2015-12-01
The onsets and development of large solar energetic (E > 10 MeV) particle (SEP) events have been characterized in many studies. The statistics of SEP event onset delay times from associated solar flares and coronal mass ejections (CMEs), which depend on solar source longitudes, can be used to provide better predictions of whether a SEP event will occur following a large flare or fast CME. In addition, size distributions of peak SEP event intensities provide a means for a probabilistic forecast of peak intensities attained in observed SEP increases. SEP event peak intensities have been compared with their rise and decay times for insight into the acceleration and transport processes. These two time scales are generally treated as independent parameters describing the development of a SEP event, but we can invoke an alternative two-parameter description based on the assumption that decay times exceed rise times for all events. These two parameters, from the well known Weibull distribution, provide an event description in terms of its basic shape and duration. We apply this distribution to several large SEP events and ask what the characteristic parameters and their dependence on source longitudes can tell us about the origins of these important events.
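A Weibull-shaped event time profile of the kind invoked here can be fitted directly; the intensity data below are synthetic and the names are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull-shaped time profile for SEP event intensity, parameterized by a
# shape k and a duration lam, following the two-parameter description.
def weibull_profile(t, A, k, lam):
    return A * (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

rng = np.random.default_rng(7)
t = np.linspace(0.1, 120, 200)                     # hours after onset
I = weibull_profile(t, 500.0, 1.6, 30.0) * rng.lognormal(0, 0.1, t.size)

(A, k, lam), _ = curve_fit(weibull_profile, t, I, p0=(100.0, 1.0, 20.0))
print(f"shape k={k:.2f}, duration lam={lam:.1f} h")
# k > 1 gives a rise-then-decay profile whose decay is slower than its rise,
# consistent with decay times exceeding rise times for all events.
```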
NASA Technical Reports Server (NTRS)
Traversi, M.; Barbarek, L. A. C.
1979-01-01
A handy reference for JPL minimum requirements and guidelines is presented, as well as information on the use of the fundamental information source represented by the Nationwide Personal Transportation Survey. Data on U.S. demographic statistics and highway speeds are included, along with methodology for normal parameters evaluation, synthesis of daily distance distributions, and projection of car ownership distributions. The synthesis of tentative mission quantification results, of intermediate mission quantification results, and of mission quantification parameters is considered, and 1985 in-place fleet fuel economy data are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open source program available for free on GitHub (https://github.com/fzahari/ParFit).
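A toy version of the core idea, a genetic algorithm fitting torsion parameters to reference energies, is sketched below. It is far simpler than ParFit (no deterministic hybrid, no multi-molecule or symmetry constraints), and every name in it is ours:

```python
import numpy as np

# Real-coded genetic algorithm minimizing the misfit between a 2-term
# molecular-mechanics torsion expansion and reference energies.
rng = np.random.default_rng(8)
angles = np.deg2rad(np.arange(0, 360, 15))
ref = 2.0 * (1 + np.cos(angles)) + 0.5 * (1 - np.cos(2 * angles))  # target

def mm_energy(p, phi):           # V1*(1+cos phi) + V2*(1-cos 2phi)
    return p[0] * (1 + np.cos(phi)) + p[1] * (1 - np.cos(2 * phi))

def fitness(p):                  # negative sum of squared errors
    return -np.sum((mm_energy(p, angles) - ref) ** 2)

pop = rng.uniform(0, 5, size=(40, 2))
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(-f)[:20]]                 # truncation selection
    kids = (parents[rng.integers(0, 20, 20)] +
            parents[rng.integers(0, 20, 20)]) / 2      # arithmetic crossover
    kids += rng.normal(scale=0.1, size=kids.shape)     # mutation
    pop = np.vstack([parents, kids])
print("best parameters:", pop[np.argmax([fitness(p) for p in pop])])
```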
NASA Astrophysics Data System (ADS)
Casalbuoni, S.; Cecilia, A.; Gerstl, S.; Glamann, N.; Grau, A. W.; Holubek, T.; Meuter, C.; de Jauregui, D. Saez; Voutta, R.; Boffo, C.; Gerhard, Th.; Turenne, M.; Walter, W.
2016-11-01
A new cryogen-free, full-scale (1.5 m long) superconducting undulator with a period length of 15 mm (SCU15) has been successfully tested in the ANKA storage ring. This represents a very important milestone in the development of superconducting undulators for third and fourth generation light sources, carried out by the collaboration between the Karlsruhe Institute of Technology and the industrial partner Babcock Noell GmbH. SCU15 is the first full-length device worldwide that, with beam, reaches a higher peak field than expected for the same geometry (vacuum gap and period length) from an ideal cryogenic permanent magnet undulator built with the best available material, PrFeB. After a summary of the design and main parameters of the device, we present here its characterization in terms of spectral properties and the long-term operation of the SCU15 in the ANKA storage ring.
Simplified contaminant source depletion models as analogs of multiphase simulators
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-04-01
Four simplified dense non-aqueous phase liquid (DNAPL) source depletion models recently introduced in the literature are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. The spill and subsequent dissolution of DNAPLs was simulated in domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1 and 3) using the multiphase flow and transport simulator UTCHEM. The dissolution profiles were fitted using four analytical models: the equilibrium streamtube model (ESM), the advection dispersion model (ADM), the power law model (PLM) and the Damkohler number model (DaM). All four models, though very different in their conceptualization, include two basic parameters that describe the mean DNAPL mass and the joint variability in the velocity and DNAPL distributions. The variability parameter was observed to be strongly correlated with the variance of the log conductivity field in the ESM and ADM but weakly correlated in the PLM and DaM. The DaM also includes a third parameter that describes the effect of rate-limited dissolution, but here this parameter was held constant as the numerical simulations were found to be insensitive to local-scale mass transfer. All four models were able to emulate the characteristics of the dissolution profiles generated from the complex numerical simulator, but the one-parameter PLM fits were the poorest, especially for the low heterogeneity case.
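The one-parameter power law model (PLM) mentioned here is easy to state and fit; in this sketch the "simulator output" is synthetic and the exponent is recovered by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power law model of source depletion: relative flux-averaged concentration
# falls as a power Gamma of the remaining DNAPL mass fraction.
def plm(m_frac, gamma):
    return m_frac ** gamma

m = np.linspace(1.0, 0.05, 40)                    # mass fraction remaining
rng = np.random.default_rng(9)
c = m ** 1.4 * rng.lognormal(0, 0.05, m.size)     # toy dissolution profile

(gamma,), _ = curve_fit(plm, m, c, p0=(1.0,))
print(f"fitted Gamma = {gamma:.2f}")
# Gamma > 1: flux declines faster than mass (favorable for remediation);
# Gamma < 1: persistent flux tails even as mass is removed.
```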
Lee, Chi-Yuan; Li, Shih-Chun; Chen, Chia-Hung; Huang, Yen-Ting; Wang, Yu-Syuan
2018-03-15
Looking for alternative energy sources has been an inevitable trend since the oil crisis, and close attention has been paid to hydrogen energy. The proton exchange membrane (PEM) water electrolyzer is characterized by high energy efficiency, high yield, a simple system and low operating temperature. The electrolyzer generates hydrogen from water free of any carbon sources (provided the electrons come from renewable sources such as solar and wind), so it is very clean and completely satisfies the environmental requirement. However, in long-term operation of the PEM water electrolyzer, the membrane material durability, catalyst corrosion and nonuniformity of local flow, voltage and current in the electrolyzer can influence the overall performance. It is difficult to measure the internal physical parameters of the PEM water electrolyzer, and the physical parameters are interrelated. Therefore, this study uses micro-electro-mechanical systems (MEMS) technology to develop a flexible integrated microsensor; internal multiple physical information is extracted to determine the optimal working parameters for the PEM water electrolyzer. The real operational data of local flow, voltage and current in the PEM water electrolyzer are measured simultaneously by the flexible integrated microsensor, so as to enhance the performance of the PEM water electrolyzer and to prolong its service life.
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
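A minimal sketch of the nudging idea follows, using a toy chaotic system in place of PlaSim; the Lorenz-63 stand-in, the relaxation coefficient kappa, and the observation values are all assumptions for illustration.

```python
# Minimal sketch of nudging (Newtonian relaxation): a term kappa*(obs - state)
# pulls each state variable toward observations, damping unstable directions
# and so extending the feasible assimilation window. Toy model and values only.
import numpy as np

def lorenz63(x, sigma, rho, beta):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step_nudged(x, x_obs, dt, kappa, params):
    # model tendency plus the nudging term kappa * (obs - state)
    return x + dt * (lorenz63(x, *params) + kappa * (x_obs - x))

x = np.array([1.0, 1.0, 1.0])
x_obs = np.array([1.5, 1.2, 1.1])     # observation at this time (toy)
x = step_nudged(x, x_obs, dt=0.01, kappa=5.0, params=(10.0, 28.0, 8.0 / 3.0))
print(x)
```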
NASA Astrophysics Data System (ADS)
Bordage, M. C.; Hagelaar, G. J. M.; Pitchford, L. C.; Biagi, S. F.; Puech, V.
2011-10-01
Xenon is used in a number of application areas ranging from light sources to x-ray detectors for imaging in medicine, border security and high-energy particle physics. There is a correspondingly large body of data available for electron scattering cross sections and swarm parameters in Xe, whereas data for Kr are more limited. In this communication we show intercomparisons of the cross section sets in Xe and Kr presently available on the LXCat site. Swarm parameters calculated using these cross section sets are compared with experimental data, also available on the LXCat site. As was found for Ar, diffusion coefficients calculated using these cross section data in a 2-term Boltzmann solver are higher than Monte Carlo results by about 30% over a range of E/N from 1 to 100 Td. We find otherwise good agreement in Xe between 2-term and Monte Carlo results and between measured and calculated values of electron mobility, ionization rates and light emission (dimer) at atmospheric pressure. The available cross section data in Kr yield swarm parameters in agreement with the limited experimental data. The cross section compilations and measured swarm parameters used in this work are available online at www.lxcat.laplace.univ-tlse.fr.
Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales
NASA Astrophysics Data System (ADS)
Madsen, P. T.; Wahlberg, M.
2007-08-01
Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is to a large part governed by the properties of the emitted clicks. Therefore, derivation of click source parameters from free-ranging animals is of increasing importance for understanding both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales using calibrated multi-hydrophone arrays and digital recorders. We outline the properties required of hydrophones, amplifiers and analog-to-digital converters, and discuss the problems of recording echolocation clicks on the axis of a directional sound beam. For accurate localization the hydrophone array apertures must be adapted and scaled to the behavior of, and the range to, the clicking animal, and precise information on hydrophone locations is critical. We provide examples of localization routines and outline sources of error that lead to uncertainties in localizing clicking animals in time and space. Furthermore, we explore approaches to time series analysis of discrete versions of toothed whale clicks that are meaningful in a biosonar context.
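To make the localization step concrete, here is a minimal sketch that recovers a source position from arrival times at a small hydrophone array by nonlinear least squares; the array geometry, sound speed, and timing values are illustrative assumptions, not measurement data.

```python
# Minimal sketch of acoustic localization with a hydrophone array: solve for
# the source position (and unknown emission time) that best explains measured
# arrival times. All geometry and values here are synthetic placeholders.
import numpy as np
from scipy.optimize import least_squares

C = 1500.0                                   # sound speed in water, m/s
hydrophones = np.array([[0, 0, 0], [10, 0, 0],
                        [0, 10, 0], [0, 0, 10]], dtype=float)
true_src = np.array([40.0, 25.0, -5.0])      # "true" clicking animal position
t0_true = 0.2                                # unknown emission time, s
t_meas = t0_true + np.linalg.norm(hydrophones - true_src, axis=1) / C

def residuals(p):
    src, t0 = p[:3], p[3]
    return t0 + np.linalg.norm(hydrophones - src, axis=1) / C - t_meas

sol = least_squares(residuals, x0=[10.0, 10.0, 0.0, 0.0])
print("estimated source position:", sol.x[:3])
```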
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
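As a hedged illustration of the weighting concept, the sketch below performs a single generalized least-squares solve in which the per-sample weights combine observation error with variance propagated from an uncertain input; in IUWLS the weights are additionally re-evaluated during the iterative calibration. The toy model and variance terms are stand-ins, not the paper's formulation.

```python
# Schematic of input-uncertainty weighting in a least-squares fit: the weight
# per sample is the inverse of (observation variance + variance contributed
# by the uncertain source/sink input). Toy linear model for illustration only.
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])  # toy design matrix
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(0, 0.1, 20)

sigma_obs2 = 0.1 ** 2     # observation-error variance
sigma_in2 = 0.05 ** 2     # variance propagated from the uncertain input
# toy propagation: input uncertainty inflates the error variance per sample
w = 1.0 / (sigma_obs2 + sigma_in2 * X[:, 1] ** 2)
W = np.diag(w)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("input-uncertainty-weighted estimate:", beta_gls)
```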
On the long term evolution of white dwarfs in cataclysmic variables and their recurrence times
NASA Technical Reports Server (NTRS)
Sion, E. M.; Starrfield, S. G.
1985-01-01
The relevance of the long-term quasi-static evolution of accreting white dwarfs to the outbursts of Z Andromeda-like symbiotics; the masses and accretion rates of classical nova white dwarfs; and the observed properties of white dwarfs detected optically and with IUE in low M dot cataclysmic variables is discussed. A surface luminosity versus time plot for a massive, hot white dwarf bears a remarkable similarity to the outburst behavior of the hot blue source in Z Andromeda. The long-term quasi-static models of hot accreting white dwarfs provide convenient constraints on the theoretically permissible parameters required to give a dynamical (nova-like) outburst of classical nova white dwarfs.
Methods for the behavioral, educational, and social sciences: an R package.
Kelley, Ken
2007-11-01
Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
Geist, Eric L.
2014-01-01
Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
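For readers unfamiliar with the ETAS form, the following sketch evaluates the conditional intensity that enters the maximum-likelihood fit; the parameter values and the three-event catalog are illustrative assumptions, not the fitted tsunami-catalog values.

```python
# Minimal sketch of the ETAS conditional intensity at time t:
# lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p
# over past events. This is the quantity whose log-likelihood is maximized.
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, m0):
    past = times < t
    trig = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
    return mu + trig.sum()

times = np.array([0.0, 10.0, 11.5])      # event times, days (toy catalog)
mags = np.array([7.5, 6.8, 6.5])         # causative-event magnitudes
lam = etas_intensity(12.0, times, mags,
                     mu=0.05, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=6.0)
print(f"conditional intensity at t=12: {lam:.4f} events/day")
```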
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.
2007-01-01
In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
Coral proxy record of decadal-scale reduction in base flow from Moloka'i, Hawaii
Prouty, Nancy G.; Jupiter, Stacy D.; Field, Michael E.; McCulloch, Malcolm T.
2009-01-01
Groundwater is a major resource in Hawaii and is the principal source of water for municipal, agricultural, and industrial use. With a growing population, a long-term downward trend in rainfall, and the need for proper groundwater management, a better understanding of the hydroclimatological system is essential. Proxy records from corals can supplement long-term observational networks, offering an accessible source of hydrologic and climate information. To develop a qualitative proxy for historic groundwater discharge to coastal waters, a suite of rare earth elements and yttrium (REYs) were analyzed from coral cores collected along the south shore of Moloka'i, Hawaii. The coral REY to calcium (Ca) ratios were evaluated against hydrological parameters, yielding the strongest relationship to base flow. Dissolution of REYs from labradorite and olivine in the basaltic rock aquifers is likely the primary source of coastal ocean REYs. There was a statistically significant downward trend (−40%) in subannually resolved REY/Ca ratios over the last century. This is consistent with long-term records of stream discharge from Moloka'i, which imply a downward trend in base flow since 1913. A decrease in base flow is observed statewide, consistent with the long-term downward trend in annual rainfall over much of the state. With greater demands on freshwater resources, it is appropriate for withdrawal scenarios to consider long-term trends and short-term climate variability. It is possible that coral paleohydrological records can be used to conduct model-data comparisons in groundwater flow models used to simulate changes in groundwater level and coastal discharge.
Design of HIFU transducers for generating specified nonlinear ultrasound fields
Rosnitskiy, Pavel B.; Yuldashev, Petr V.; Sapozhnikov, Oleg A.; Maxwell, Adam; Kreider, Wayne; Bailey, Michael R.; Khokhlova, Vera A.
2016-01-01
Various clinical applications of high intensity focused ultrasound (HIFU) have different requirements for the pressure levels and degree of nonlinear waveform distortion at the focus. The goal of this work was to determine transducer design parameters that produce either a specified shock amplitude in the focal waveform or specified peak pressures while still maintaining quasilinear conditions at the focus. Multi-parametric nonlinear modeling based on the KZK equation with an equivalent source boundary condition was employed. Peak pressures, shock amplitudes at the focus, and corresponding source outputs were determined for different transducer geometries and levels of nonlinear distortion. Results are presented in terms of the parameters of an equivalent single-element, spherically shaped transducer. The accuracy of the method and its applicability to cases of strongly focused transducers were validated by comparing the KZK modeling data with measurements and nonlinear full-diffraction simulations for a single-element source and arrays with 7 and 256 elements. The results provide look-up data for evaluating nonlinear distortions at the focus of existing therapeutic systems as well as for guiding the design of new transducers that generate specified nonlinear fields. PMID:27775904
Experimental study of the thermal-acoustic efficiency in a long turbulent diffusion-flame burner
NASA Technical Reports Server (NTRS)
Mahan, J. R.
1983-01-01
A two-year study of noise production in a long tubular burner is described. The research was motivated by an interest in understanding and eventually reducing core noise in gas turbine engines. The general approach is to employ an acoustic source/propagation model to interpret the sound pressure spectrum in the acoustic far field of the burner in terms of the source spectrum that must have produced it. In the model the sources are assumed to be due uniquely to the unsteady component of combustion heat release; thus only direct combustion-noise is considered. The source spectrum is then the variation with frequency of the thermal-acoustic efficiency, defined as the fraction of combustion heat release which is converted into acoustic energy at a given frequency. The thrust of the research was to study the variation of the source spectrum with the design and operating parameters of the burner.
Amplitude loss of sonic waveform due to source coupling to the medium
NASA Astrophysics Data System (ADS)
Lee, Myung W.; Waite, William F.
2007-03-01
In contrast to hydrate-free sediments, sonic waveforms acquired in gas hydrate-bearing sediments indicate strong amplitude attenuation associated with a sonic velocity increase. The amplitude attenuation increase has been used to quantify pore-space hydrate content by attributing observed attenuation to the hydrate-bearing sediment's intrinsic attenuation. A second attenuation mechanism must be considered, however. Theoretically, energy radiation from sources inside fluid-filled boreholes strongly depends on the elastic parameters of materials surrounding the borehole. It is therefore plausible to interpret amplitude loss in terms of source coupling to the surrounding medium as well as to intrinsic attenuation. Analyses of sonic waveforms from the Mallik 5L-38 well, Northwest Territories, Canada, indicate a significant component of sonic waveform amplitude loss is due to source coupling. Accordingly, all sonic waveform amplitude analyses should include the effect of source coupling to accurately characterize a formation's intrinsic attenuation.
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
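A minimal sketch of one common identifiability diagnostic follows: compute the output sensitivity (Jacobian) matrix and inspect its singular values, where a near-zero value flags a parameter combination the data cannot constrain. The two-parameter toy model (only the product a*b is identifiable) is an assumption for illustration.

```python
# Minimal sketch of a local identifiability check via the sensitivity matrix.
# A tiny singular value signals a direction in parameter space that the data
# cannot constrain; the condition number summarizes the ill-conditioning.
import numpy as np

def model(params, t):
    a, b = params
    return a * b * t               # a and b are not separately identifiable

t = np.linspace(0.0, 1.0, 50)
p0, eps = np.array([1.0, 2.0]), 1e-6
J = np.column_stack([
    (model(p0 + eps * np.eye(2)[i], t) - model(p0, t)) / eps for i in range(2)
])
s = np.linalg.svd(J, compute_uv=False)
print("singular values:", s)       # one value ~0 -> non-identifiable direction
print("condition number:", s[0] / s[-1])
```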
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
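The optimization half of such a framework can be sketched as below: a misfit between observed and simulated breakthrough concentrations is minimized over source location and strength. The one-dimensional analytical plume, parameter values, and optimizer choice are illustrative assumptions; in the cited work an ANN additionally supplies the unknown lag time.

```python
# Minimal sketch of source identification as an inverse problem: minimize the
# squared misfit between "observed" and simulated concentrations over the
# source location and strength. 1-D instantaneous point-source solution used.
import numpy as np
from scipy.optimize import minimize

def simulated_conc(x_src, strength, x_obs, t, v=1.0, D=0.5):
    """1-D point-source solution of the advection-dispersion equation."""
    return (strength / np.sqrt(4 * np.pi * D * t)
            * np.exp(-((x_obs - x_src) - v * t) ** 2 / (4 * D * t)))

x_obs, t = 10.0, np.linspace(1, 20, 40)
c_obs = simulated_conc(2.0, 5.0, x_obs, t)          # synthetic "observations"

def objective(p):
    return np.sum((simulated_conc(p[0], p[1], x_obs, t) - c_obs) ** 2)

res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
print("recovered (location, strength):", res.x)
```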
Analysis of airframe/engine interactions - An integrated control perspective
NASA Technical Reports Server (NTRS)
Schmidt, David K.; Schierman, John D.; Garg, Sanjay
1990-01-01
Techniques for the analysis of the dynamic interactions between airframe/engine dynamical systems are presented. Critical coupling terms are developed that determine the significance of these interactions with regard to the closed-loop stability and performance of the feedback systems. A conceptual model is first used to indicate the potential sources of the coupling, how the coupling manifests itself, and how the magnitudes of these critical coupling terms are used to quantify the effects of the airframe/engine interactions. A case study is also presented involving an unstable airframe with thrust vectoring for attitude control. It is shown for this system with classical, decentralized control laws that there is little airframe/engine interaction, and the stability and performance with those control laws is not affected. Implications of parameter uncertainty in the coupling dynamics are also discussed, and the effects of these parameter variations are also demonstrated to be small for this vehicle configuration.
Magnetostrophic balance in planetary dynamos - Predictions for Neptune's magnetosphere
NASA Technical Reports Server (NTRS)
Curtis, S. A.; Ness, N. F.
1986-01-01
With the purpose of estimating Neptune's magnetic field and its implications for nonthermal Neptune radio emissions, a new scaling law for planetary magnetic fields was developed in terms of externally observable parameters (the planet's mean density, radius, mass, rotation rate, and internal heat source luminosity). From a comparison of theory and observations by Voyager it was concluded that planetary dynamos are two-state systems with either zero intrinsic magnetic field (for planets with a low internal heat source) or (for planets with an internal heat source sufficiently strong to drive convection) a magnetic field near the upper bound determined from magnetostrophic balance. It is noted that mass loading of the Neptune magnetosphere by Triton may play an important role in the generation of nonthermal radio emissions.
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.
2013-12-01
Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice-versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius including both near-field and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, a conclusion based on the comparison of Green's functions computed for flat-earth models at various source depths ranging from 100 m to 1 km. Frequency domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously it can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many relative to those of the announced explosions, and to develop their relationship with the Mw and Mo for the NTS explosions.
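The equalization identity can be sketched numerically: convolving each event's seismogram with the other event's source time function yields identical traces when the Green's functions are shared, so their difference vanishes. Gaussian pulses stand in for the Mueller-Murphy source time functions, and a spike train for the Green's function, purely for illustration.

```python
# Minimal sketch of source equalization: seis1 * stf2 == seis2 * stf1 when
# both events share the same Green's function, since convolution commutes.
import numpy as np

def gaussian_stf(t, width):
    return np.exp(-(t - t.mean()) ** 2 / (2 * width ** 2))

t = np.linspace(0, 10, 1000)
green = np.zeros_like(t)
green[100], green[300] = 1.0, -0.5                 # toy Green's function
stf1, stf2 = gaussian_stf(t, 0.2), gaussian_stf(t, 0.35)
seis1 = np.convolve(green, stf1, mode="same")      # event 1 seismogram
seis2 = np.convolve(green, stf2, mode="same")      # event 2 seismogram

diff = (np.convolve(seis1, stf2, mode="same")
        - np.convolve(seis2, stf1, mode="same"))
print("max |difference seismogram|:", np.abs(diff).max())  # ~0 up to edge effects
```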
Nnane, Daniel Ekane
2011-11-15
Contamination of surface waters is a pervasive threat to human health; hence the need to better understand the sources and spatio-temporal variations of contaminants within river catchments. River catchment managers are required to sustainably monitor and manage the quality of surface waters. Catchment managers therefore need cost-effective, long-term sustainable water quality monitoring and management designs to proactively protect public health and aquatic ecosystems. Multivariate and phage-lysis techniques were used to investigate spatio-temporal variations of water quality, the main polluting chemophysical and microbial parameters, and faecal micro-organism sources, and to establish 'sentry' sampling sites in the Ouse River catchment, southeast England, UK. 350 river water samples were analysed for fourteen chemophysical and microbial water quality parameters in conjunction with the novel human-specific phages of Bacteroides GB-124 (Bacteroides GB-124). Annual, autumn, spring, summer, and winter principal components (PCs) explained approximately 54%, 75%, 62%, 48%, and 60%, respectively, of the total variance present in the datasets. Significant loadings of Escherichia coli, intestinal enterococci, turbidity, and human-specific Bacteroides GB-124 were observed in all datasets. Cluster analysis successfully grouped sampling sites into five clusters. Importantly, multivariate and phage-lysis techniques were useful in determining the sources and spatial extent of water contamination in the catchment. Though human faecal contamination was significant during dry periods, the main source of contamination was non-human. Bacteroides GB-124 could potentially be used for routine catchment microbial water quality monitoring. For a cost-effective, long-term sustainable water quality monitoring design, E. coli or intestinal enterococci, turbidity, and Bacteroides GB-124 should be monitored all year round in this river catchment. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
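As a back-of-envelope check of the stated flow-rate dependence, the snippet below evaluates the thermal power balance P = m_dot * c_p * (T_prod - T_inj); the flow rate, temperatures, and conversion efficiency are illustrative assumptions, not FALCON results.

```python
# Back-of-envelope check: thermal power scales linearly with mass flow rate.
m_dot = 40.0                    # water mass flow rate, kg/s (assumed)
c_p = 4186.0                    # specific heat of water, J/(kg K)
T_prod, T_inj = 180.0, 60.0     # production / injection temperature, deg C

P_thermal = m_dot * c_p * (T_prod - T_inj)     # W
P_electric = 0.12 * P_thermal                  # assumed 12% conversion efficiency
print(f"thermal: {P_thermal/1e6:.1f} MWth, electric: {P_electric/1e6:.1f} MWe")
```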
Engineering description of the ascent/descent bet product
NASA Technical Reports Server (NTRS)
Seacord, A. W., II
1986-01-01
The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and the Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.
IGR J16318-4848: 7 Years of INTEGRAL Observations
NASA Technical Reports Server (NTRS)
Barragan, Laura; Wilms, Joern; Kreykenbohm, Ingo; Hanke, Manfred; Fuerst, Felix; Pottschmidt, Katja; Rothschild, Richard
2011-01-01
Since the discovery of IGR J16318-4848 in 2003 January, INTEGRAL has accumulated more than 5.8 Ms of IBIS/ISGRI data. We present the first extensive analysis of the archival INTEGRAL data (IBIS/ISGRI, and JEM-X when available) for this source, together with the observations carried out by XMM-Newton (twice in 2003, and twice in 2004) and Suzaku (2006). The source is very variable in the long term, with periods of low activity, where the source is almost not detected, and flares with a luminosity approximately 10 times greater than its average value (5.4 cts/s). IGR J16318-4848 is an HMXB containing a sgB[e] star and a compact object (most probably a neutron star) deeply embedded in the stellar wind of the mass donor. The variability of the source (also in the short term) can be ascribed to the wind of the optical star being very clumpy. We study the variation of the spectral parameters on time scales of INTEGRAL revolutions. The photoelectric absorption is, with NH around 10(exp 24)/square cm, unusually high. During brighter phases the strong K-alpha iron line known from XMM-Newton and Suzaku observations is also detectable with the JEM-X instrument.
Porous elastic system with nonlinear damping and source terms
NASA Astrophysics Data System (ADS)
Freitas, Mirelson M.; Santos, M. L.; Langa, José A.
2018-02-01
We study the long-time behavior of a porous-elastic system, focusing on the interplay between nonlinear damping and source terms. The sources may represent restoring forces, but may also be focusing, thus potentially amplifying the total energy, which is the primary scenario of interest. By employing nonlinear semigroups and the theory of monotone operators, we obtain several results on the existence of local and global weak solutions, and the uniqueness of weak solutions. Moreover, we prove that such unique solutions depend continuously on the initial data. Under some restrictions on the parameters, we also prove that every weak solution to our system blows up in finite time, provided the initial energy is negative and the sources are more dominant than the damping in the system. Additional results are obtained via careful analysis involving the Nehari manifold. Specifically, we prove the existence of a unique global weak solution with initial data coming from the "good" part of the potential well. For such a global solution, we prove that the total energy of the system decays exponentially or algebraically, depending on the behavior of the dissipation in the system near the origin. We also prove the existence of a global attractor.
Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y
2014-09-15
Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
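A minimal sketch of the ensemble Kalman filter analysis step on an augmented state (release rate plus wind speed) is given below; the linear observation operator, error statistics, and ensemble values are synthetic assumptions standing in for the Lagrangian puff model.

```python
# Minimal sketch of a stochastic EnKF analysis step updating an augmented
# state (source release rate, wind speed) from a monitoring observation.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                     # ensemble size
X = np.vstack([rng.normal(1.0, 0.3, N),    # release rate (arbitrary units)
               rng.normal(5.0, 1.0, N)])   # wind speed (m/s)

H = np.array([[2.0, 0.5]])                 # toy linear observation operator
R = np.array([[0.1 ** 2]])                 # observation error covariance
y = np.array([4.0])                        # measured concentration (synthetic)

A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
S = H @ A                                  # observation-space anomalies
K = A @ S.T @ np.linalg.inv(S @ S.T + (N - 1) * R)   # Kalman gain
y_pert = y[:, None] + rng.normal(0, 0.1, (1, N))     # perturbed observations
X = X + K @ (y_pert - H @ X)               # analysis ensemble
print("posterior mean state:", X.mean(axis=1))
```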
Ma, Xiao-xue; Wang, La-chun; Liao, Ling-ling
2015-01-01
Identifying the spatio-temporal distribution and sources of water pollutants is of great significance for efficient water quality management and pollution control in the Wenruitang River watershed, China. A total of twelve water quality parameters, including temperature, pH, dissolved oxygen (DO), total nitrogen (TN), ammonia nitrogen (NH4+-N), electrical conductivity (EC), turbidity (Turb), nitrite-N (NO2-), nitrate-N (NO3-), phosphate-P (PO4(3-)), total organic carbon (TOC) and silicate (SiO3(2-)), were analyzed from September 2008 to October 2009. Geographic information system (GIS) and principal component analysis (PCA) were used to determine the spatial distribution and to apportion the sources of pollutants. The results demonstrated that TN, NH4+-N and PO4(3-) were the main pollutants during the flow, wet and dry periods, respectively, which was mainly caused by urban point sources and agricultural and rural non-point sources. In spatial terms, the order of pollution was tertiary river > secondary river > primary river, while the water quality was worse in city zones than in the suburb and wetland zones regardless of the river classification. In temporal terms, the order of pollution was dry period > wet period > flow period. Population density, land use type and water transfer affected the water quality in the Wenruitang River.
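A minimal sketch of the PCA step follows: standardize the samples-by-parameters matrix and report the variance explained by each component; the random data matrix is a placeholder for the monitoring dataset.

```python
# Minimal sketch of source apportionment by PCA: standardize the
# (samples x parameters) matrix and inspect explained variance per component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(350, 12))            # synthetic stand-in: 350 samples, 12 parameters

Z = StandardScaler().fit_transform(X)     # PCA on the correlation matrix
pca = PCA(n_components=5).fit(Z)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("cumulative:", round(float(pca.explained_variance_ratio_.sum()), 3))
```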
X-ray variability of Seyfert 1.8/1.9 galaxies
NASA Astrophysics Data System (ADS)
Hernández-García, L.; Masegosa, J.; González-Martín, O.; Márquez, I.; Guainazzi, M.; Panessa, F.
2017-06-01
Context. Seyfert 1.8/1.9 are sources showing weak broad Hα components in their optical spectra. According to unification schemes, they are seen with an edge-on inclination, similar to type 2 Seyfert galaxies, but with slightly lower inclination angles. Aims: We aim to test whether Seyfert 1.8/1.9 have similar properties at UV and X-ray wavelengths. Methods: We used the 15 Seyfert 1.8/1.9 in the Véron Cetty and Véron catalog with public data available from the Chandra and/or XMM-Newton archives at different dates, with timescales between observations ranging from days to years. All the spectra of the same source were simultaneously fit with the same model and different parameters were left free to vary in order to select the variable parameter(s). Whenever possible, short-term variations from the analysis of the X-ray light curves and long-term UV variations from the optical monitor onboard XMM-Newton were studied. Our results are homogeneously compared with a previous work using the same methodology applied to a sample of Seyfert 2. Results: X-ray variability is found in all 15 nuclei over the aforementioned ranges of timescales. The main variability pattern is related to intrinsic changes in the sources, which are observed in ten nuclei. Changes in the column density are also frequent, as they are observed in six nuclei, and variations at soft energies, possibly related to scattered nuclear emission, are detected in six sources. X-ray intra-day variations are detected in six out of the eight studied sources. Variations at UV frequencies are detected in seven out of nine sources. Conclusions: A comparison between the samples of Seyfert 1.8/1.9 and 2 shows that, even if the main variability pattern is due to intrinsic changes of the sources in the two families, these nuclei exhibit different variability properties in the UV and X-ray domains. In particular, variations in the broad X-ray band on short timescales (days to weeks), and variations in the soft X-rays and UV on long timescales (months to years) are detected in Seyfert 1.8/1.9 but not in Seyfert 2. Overall, we suggest that optically classified Seyfert 1.8/1.9 should be kept separated from Seyfert 2 galaxies in UV/X-ray studies of the obscured AGN population because their intrinsic properties might be different.
Nomenclature in laboratory robotics and automation (IUPAC Recommendation 1994)
(Skip) Kingston, H. M.; Kingston, M. L.
1994-01-01
These recommended terms have been prepared to help provide a uniform approach to terminology and notation in laboratory automation and robotics. Since the terminology used in laboratory automation and robotics has been derived from diverse backgrounds, it is often vague, imprecise, and in some cases, in conflict with classical automation and robotic nomenclature. These definitions have been assembled from standards, monographs, dictionaries, journal articles, and documents of international organizations emphasizing laboratory and industrial automation and robotics. When appropriate, definitions have been taken directly from the original source and identified with that source. However, in some cases no acceptable definition could be found and a new definition was prepared to define the object, term, or action. Attention has been given to defining specific robot types, coordinate systems, parameters, attributes, communication protocols and associated workstations and hardware. Diagrams are included to illustrate specific concepts that can best be understood by visualization. PMID:18924684
Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media
Gabitto, Jorge; Tsouris, Costas
2015-05-05
Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising large pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. Finally, the source terms that appear in the averaged equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.
NASA Astrophysics Data System (ADS)
Valença, J. V. B.; Silveira, I. S.; Silva, A. C. A.; Dantas, N. O.; Antonio, P. L.; Caldas, L. V. E.; d'Errico, F.; Souza, S. O.
2017-11-01
The OSL characteristics of three different borate glass matrices containing magnesia (LMB), quicklime (LCB) or potassium carbonate (LKB) were examined. Five different formulations for each composition were produced using a melt-quenching method and analyzed in terms of both dose-response curves and OSL decay shape. The samples were irradiated using a 90Sr/90Y beta source with doses up to 30 Gy. Dose-response curves were plotted using the initial OSL intensity as the chosen parameter. The OSL analysis showed that LKB glasses are the most sensitive to beta irradiation. For the most sensitive LKB composition, the irradiation process was also performed using a 60Co gamma source in a dose range from 200 to 800 Gy. In all cases, no saturation was observed. A fitting process using a three-term exponential function was performed for the most sensitive formulations of each composition, which suggested a similar behavior in the OSL decay.
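A minimal sketch of the three-term exponential fit is given below; the decay constants, amplitudes, and noise level are synthetic placeholders rather than the fitted values for the glass formulations.

```python
# Minimal sketch: fitting a three-term exponential to an OSL decay curve.
import numpy as np
from scipy.optimize import curve_fit

def osl_decay(t, a1, tau1, a2, tau2, a3, tau3):
    return (a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
            + a3 * np.exp(-t / tau3))

rng = np.random.default_rng(4)
t = np.linspace(0, 60, 300)                       # stimulation time, s
signal = osl_decay(t, 50, 1.5, 20, 8, 5, 40)      # synthetic decay
signal += rng.normal(0, 0.5, t.size)              # synthetic noise

p0 = [40, 1, 15, 10, 5, 50]                       # initial guesses matter here
popt, _ = curve_fit(osl_decay, t, signal, p0=p0, maxfev=10000)
print("fitted decay constants (s):", popt[1::2])
```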
Interrogation of electrical connector faults using miniaturized UWB sources
NASA Astrophysics Data System (ADS)
Tokgöz, Çağatay; Dardona, Sameh
2017-01-01
A diagnostic method for the detection, identification, and characterization of precursors of faults due to partial insertion of pin-socket contacts within electrical connectors commonly used in avionics systems is presented. It is demonstrated that a miniaturized ultrawideband (UWB) source and a minispectrum analyzer can be employed to measure resonant frequency shifts in connector S parameters as a small and low-cost alternative to a large and expensive network analyzer. The transfer function of an electrical connector is represented as a ratio of the spectra measured using the spectrum analyzer with and without the connector. Alternatively, the transfer function is derived in terms of the connector S parameters and the reflection coefficients at both ports of the connector. The transfer function data obtained using this derivation agreed well with its representation as a measured spectral ratio. The derivation enabled the extraction of the connector S parameters from the measured transfer function data as a function of the insertion depth of a pin-socket contact within the connector. In comparison with the S parameters measured directly using a network analyzer at multiple insertion depths, the S parameters extracted from the measured transfer function showed consistent and reliable representation of the electrical connector fault. The results demonstrate the potential of integrating a low-cost miniaturized UWB device into a connector harness for real-time detection of precursors to partially inserted connector faults.
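The spectral-ratio construction of the transfer function can be sketched as follows; the frequency grid, reference spectrum, and notch shape are synthetic assumptions standing in for spectrum-analyzer traces.

```python
# Minimal sketch: connector transfer function as the ratio of the spectrum
# measured through the connector to the reference spectrum without it.
import numpy as np

f = np.linspace(1e9, 10e9, 901)                      # frequency grid, Hz
ref = np.ones_like(f)                                # source spectrum, no connector
notch = 1 - 0.8 * np.exp(-((f - 6e9) / 0.2e9) ** 2)  # resonance from a fault (toy)
meas = ref * notch                                   # spectrum through connector

H = meas / ref                                       # connector transfer function
f_res = f[np.argmin(np.abs(H))]
print(f"resonant dip at {f_res/1e9:.2f} GHz")        # shift tracks insertion depth
```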
The pure rotational spectrum of CaNC
NASA Astrophysics Data System (ADS)
Scurlock, C. T.; Steimle, T. C.; Suenram, R. D.; Lovas, F. J.
1994-03-01
The pure rotational spectrum of calcium isocyanide, CaNC, in its (0,0,0) X 2Σ+ vibronic state was measured using a combination of Fourier transform microwave (FTMW) and pump/probe microwave-optical double resonance (PPMODR) spectroscopy. Gaseous CaNC was generated using a laser ablation/supersonic expansion source. The determined spectroscopic parameters are (in MHz), B=4048.754 332 (29); γ=18.055 06 (23); bF=12.481 49 (93); c=2.0735 (14); and eQq0=-2.6974 (11). The hyperfine parameters are qualitatively interpreted in terms of a plausible molecular orbital description, and a comparison with the alkaline earth monohalides and the alkali monocyanides is given.
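As a quick consistency check on the reported rotational constant, the snippet below evaluates the rigid-rotor transition frequencies 2B(N+1), ignoring the spin-rotation and hyperfine splittings quantified above.

```python
# Rigid-rotor check: rotational transitions N -> N+1 fall near 2*B*(N+1),
# before the gamma (spin-rotation) and hyperfine splittings are applied.
B = 4048.754332  # rotational constant in MHz, from the fit reported above
for n in range(3):
    print(f"N = {n} -> {n + 1}: ~{2 * B * (n + 1):.1f} MHz")
# N = 0 -> 1 lands near 8097.5 MHz, within the FTMW region
```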
NASA Technical Reports Server (NTRS)
Symons, E. P.
1974-01-01
An investigation was conducted to determine the magnitude of the wicking rates of liquids in various screens. Evaluation of the parameters characterizing the wicking process resulted in the development of an expression which defined the wicking velocity in terms of screen and system geometry, liquid properties, and gravitational effects. Experimental data obtained both in normal gravity and in weightlessness demonstrated that the model successfully predicted the functional relation of the liquid properties and the distance from the liquid source to the wicking velocity. Because the pore geometry in the screens was complex, several screen geometric parameters were lumped into a single constant which was determined experimentally for each screen.
Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars
NASA Astrophysics Data System (ADS)
Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan
2013-04-01
The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which have their own (albeit overlapping) source regions. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.
Remote detection of chem/bio hazards via coherent anti-Stokes Raman spectroscopy
2017-09-12
[Report documentation page; only fragments of the abstract are recoverable.] The project addressed real-time remote detection of hazardous microparticles in the atmosphere, evaluation of the range of distances for typical species and the parameters of the laser, and estimation of detectable photons from a prototype molecule at a distance. SUBJECT TERMS: Stimulated Raman scattering, remote detection, biochemical agents, explosives.
Aström, Johan; Pettersson, Thomas J R; Reischer, Georg H; Hermansson, Malte
2013-09-01
The protection of drinking water from pathogens such as Cryptosporidium and Giardia requires an understanding of the short-term microbial release from faecal contamination sources in the catchment. Flow-weighted samples were collected during two rainfall events in a stream draining an area with on-site sewers and during two rainfall events in surface runoff from a bovine cattle pasture. Samples were analysed for human (BacH) and ruminant (BacR) Bacteroidales genetic markers through quantitative polymerase chain reaction (qPCR) and for sorbitol-fermenting bifidobacteria through culturing as a complement to traditional faecal indicator bacteria, somatic coliphages and the parasitic protozoa Cryptosporidium spp. and Giardia spp. analysed by standard methods. Significant positive correlations were observed between BacH, Escherichia coli, intestinal enterococci, sulphite-reducing Clostridia, turbidity, conductivity and UV254 in the stream contaminated by on-site sewers. For the cattle pasture, no correlation was found between any of the genetic markers and the other parameters. Although parasitic protozoa were not detected, the analysis for genetic markers provided baseline data on the short-term faecal contamination due to these potential sources of parasites. Background levels of BacH and BacR makers in soil emphasise the need to including soil reference samples in qPCR-based analyses for Bacteroidales genetic markers.
Dynamics of two competing species in the presence of Lévy noise sources.
La Cognata, A; Valenti, D; Dubkov, A A; Spagnolo, B
2010-07-01
We consider a Lotka-Volterra system of two competing species subject to multiplicative α-stable Lévy noise. The interaction parameter between the species is a random process which obeys a stochastic differential equation with a generalized bistable potential in the presence both of a periodic driving term and an additive α-stable Lévy noise. We study the species dynamics, which is characterized by two different regimes, exclusion of one species and coexistence of both. We find quasiperiodic oscillations and stochastic resonance phenomenon in the dynamics of the competing species, analyzing the role of the Lévy noise sources.
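A minimal sketch of such dynamics is given below: an Euler discretization of a competing-species pair driven by multiplicative α-stable noise via scipy.stats.levy_stable. The parameter values are illustrative, and the interaction coefficient is held fixed here, whereas the cited work evolves it through its own stochastic bistable equation.

```python
# Minimal sketch: Lotka-Volterra competition with multiplicative alpha-stable
# Levy noise, integrated with a simple Euler scheme. Toy parameters only.
import numpy as np
from scipy.stats import levy_stable

alpha_stab, eps = 1.5, 0.01          # Levy stability index, noise intensity
mu, beta_int = 1.0, 1.2              # growth rate, fixed interaction (toy)
dt, n_steps = 1e-3, 5000
x, y = 0.5, 0.6
rng = np.random.default_rng(2)
for _ in range(n_steps):
    # Levy increments scale as dt ** (1/alpha)
    Lx, Ly = (levy_stable.rvs(alpha_stab, 0, size=2, random_state=rng)
              * dt ** (1 / alpha_stab))
    x += x * (mu - x - beta_int * y) * dt + eps * x * Lx
    y += y * (mu - y - beta_int * x) * dt + eps * y * Ly
    x, y = max(x, 0.0), max(y, 0.0)  # populations stay non-negative
print(f"final densities: x={x:.3f}, y={y:.3f}")
```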
The "Overdrive" Mode in the "Complete Vocal Technique": A Preliminary Study.
Sundberg, Johan; Bitelli, Maddalena; Holmberg, Annika; Laaksonen, Ville
2017-09-01
"Complete Vocal Technique," or CVT, is an internationally widespread method for teaching voice. It classifies voicing into four types, referred to as "vocal modes," one of which is called "Overdrive." The physiological correlates of these types are unclear. This study presents an attempt to analyze its voice source and formant frequency characteristics. A male and a female expert of CVT sang a set of "Overdrive" and falsetto tones on the syllable /pᴂ/. The voice source could be analyzed by inverse filtering in the case of the male subject. Results showed that subglottal pressure, measured as the oral pressure during /p/ occlusion, was low in falsetto and high in "Overdrive", and it was strongly correlated with each of the voice source parameters. These correlations could be described in terms of equations. The deviations from these equations of the different voice source parameters for the various voice samples suggested that "Overdrive" phonation was produced with stronger vocal fold adduction than the falsetto tones. Further, the subject was also found to tune the first formant to the second partial in "Overdrive" tones. The results support the conclusion that the method used, to compensate for the influence of subglottal pressure on the voice source, seems promising to use for analyses of other CVT vocal modes and also for other types of phonation. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Tracking slow modulations in synaptic gain using dynamic causal modelling: validation in epilepsy.
Papadopoulou, Margarita; Leite, Marco; van Mierlo, Pieter; Vonck, Kristl; Lemieux, Louis; Friston, Karl; Marinazzo, Daniele
2015-02-15
In this work we propose a proof of principle that dynamic causal modelling can identify plausible mechanisms at the synaptic level underlying brain state changes over a timescale of seconds. As a benchmark example for validation we used intracranial electroencephalographic signals in a human subject. These data were used to infer the (effective connectivity) architecture of synaptic connections among neural populations assumed to generate seizure activity. Dynamic causal modelling allowed us to quantify empirical changes in spectral activity in terms of a trajectory in parameter space - identifying key synaptic parameters or connections that cause observed signals. Using recordings from three seizures in one patient, we considered a network of two sources (within and just outside the putative ictal zone). Bayesian model selection was used to identify the intrinsic (within-source) and extrinsic (between-source) connectivity. Having established the underlying architecture, we were able to track the evolution of key connectivity parameters (e.g., inhibitory connections to superficial pyramidal cells) and test specific hypotheses about the synaptic mechanisms involved in ictogenesis. Our key finding was that intrinsic synaptic changes were sufficient to explain seizure onset, where these changes showed dissociable time courses over several seconds. Crucially, these changes spoke to an increase in the sensitivity of principal cells to intrinsic inhibitory afferents and a transient loss of excitatory-inhibitory balance. Copyright © 2014. Published by Elsevier Inc.
Lord, Dominique; Park, Peter Young-Jin
2008-07-01
Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high-risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Recent work has shown that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary as a function of the time period; there is therefore a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
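To make the weighting concrete, here is a minimal sketch of the EB combination for a single site, assuming the common NB variance form Var = μ + αμ² so that the weight on the model prediction is w = 1/(1 + αμ); the numbers in the example call are invented.

```python
def eb_estimate(mu, y, alpha):
    """Empirical Bayes long-term mean for one site.

    mu    : expected crashes from the NB crash prediction model
    y     : observed crash count at the site
    alpha : NB dispersion parameter (Var = mu + alpha * mu**2); in a
            GNB model alpha would itself vary with the covariates and
            the time period rather than being a fixed value.
    """
    w = 1.0 / (1.0 + alpha * mu)      # weight on the model prediction
    return w * mu + (1.0 - w) * y     # shrinks the observation toward mu

# e.g. model predicts 2.3 crashes/yr, 5 observed, alpha = 0.8
print(eb_estimate(2.3, 5, 0.8))
```

The study's point follows directly from the formula: a different functional form changes μ, and a time-varying α changes w, so both feed straight into the EB estimate and hence into the ranking of hazardous sites.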
Kayikcioglu, Huseyin Husnu
2012-07-15
Approximately 70% of the world's water use, including all the water diverted from rivers and pumped from underground, goes to agricultural irrigation, so the reuse of treated domestic wastewater (TWW) for purposes such as agricultural and landscape irrigation reduces the amount of water that must be extracted from natural water sources as well as the discharge of wastewater to the environment. TWW is therefore a valuable water source for recycling and reuse in arid and semi-arid regions, which frequently confront water shortages. In this regard, this study was planned to reveal the short-term effects of advanced-TWW irrigation on microbial parameters of a Vertic xerofluvent soil. For this purpose, certain parameters were measured in the study, including soil total organic carbon (C(org)), N-mineralization (N(min)), microbial biomass carbon (C(mic)), the soil microbial quotient (C(mic)/C(org)) and the activities of the enzymes dehydrogenase (DHG), urease (UA), alkaline phosphatase (ALKPA), β-glucosidase (GLU) and aryl sulphatase (ArSA) in soils irrigated with TWW and fresh water (FW). All of the microbial parameters were negatively affected by TWW irrigation, decreasing by 10.1%-54.1% in comparison with the FW plots. This decrease, especially in the enzymatic activities of the soil irrigated with TWW, was presumably due to heavy metals inhibiting enzyme activity, depending on the soil type and the concentrations of heavy metals in the wastewater. In contrast, C(mic)/C(org) was found to be higher in the plots irrigated with TWW at the end of the experiment; the addition of organic matter to the soil by irrigation with TWW accounts for the increase in this ratio. The dose of irrigation should be modified to reduce the quantity and increase the frequency of application, to avoid the loss of aggregation and salt accumulation. TWW irrigation is a strategy with many benefits for agricultural land management; however, long-term studies should be implemented to investigate the microbiological characteristics of the soil and to assess the feasibility of wastewater reuse for irrigation.
Development and Performance of a Filter Radiometer Monitor System for Integrating Sphere Sources
NASA Technical Reports Server (NTRS)
Ding, Leibo; Kowalewski, Matthew G.; Cooper, John W.; Smith, GIlbert R.; Barnes, Robert A.; Waluschka, Eugene; Butler, James J.
2011-01-01
The NASA Goddard Space Flight Center (GSFC) Radiometric Calibration Laboratory (RCL) maintains several large integrating sphere sources covering the visible to shortwave infrared wavelength range. Two critical functional requirements of an integrating sphere source are short- and long-term operational stability and repeatability. Monitoring the source is essential in determining the origin of systematic errors, thus increasing confidence in source performance and quantifying repeatability. If monitor data fall outside the established parameters, this could be an indication that the source requires maintenance or re-calibration against the National Institute of Standards and Technology (NIST) irradiance standard. The GSFC RCL has developed a Filter Radiometer Monitoring System (FRMS) to continuously monitor the performance of its integrating sphere calibration sources in the 400 to 2400 nm region. Sphere output change mechanisms include lamp aging, coating (e.g. BaSO4) deterioration, and ambient water vapor level. The FRMS wavelength bands are selected to quantify changes caused by these mechanisms. The FRMS design and operation are presented, as well as data from monitoring four of the RCL's integrating sphere sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in the results was observed when plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reducing the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
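A minimal sketch of the sampling step described above, using SciPy's quasi-Monte Carlo module; the three parameter names, the ranges, and the MELCOR input-deck hook are placeholders rather than the study's actual ten uncertain parameters.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube design over an uncertain-parameter space; names and
# ranges below are illustrative placeholders, not MELCOR's inputs.
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=40)                  # 40 realizations, as in the study
lo = np.array([0.5, 1000.0, 0.1])            # e.g. kinetics scale factor,
hi = np.array([2.0, 2500.0, 0.9])            # melt temperature (K), porosity
designs = qmc.scale(unit, lo, hi)            # one row per code realization

for run_id, params in enumerate(designs):
    pass  # write params into a MELCOR input deck and launch the run here
```

The hypercube stratifies each marginal distribution, which is why 40 runs can cover a 10-dimensional parameter space far more evenly than 40 purely random draws.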
An update of Leighton's solar dynamo model
NASA Astrophysics Data System (ADS)
Cameron, R. H.; Schüssler, M.
2017-03-01
In 1969, Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock published in 1961. Here we present a modification and extension of Leighton's model. Using the axisymmetric component (longitudinal average) of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of (i) turbulent diffusion at the surface and in the convection zone; (ii) poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux; (iii) latitudinal differential rotation and the near-surface layer of radial rotational shear; (iv) downward convective pumping of magnetic flux in the shear layer; and (v) flux emergence in the form of tilted bipolar magnetic regions treated as a source term for the radial surface field. While the parameters relevant for the transport of the surface field are taken from observations, the model condenses the unknown properties of magnetic field and flow in the convection zone into a few free parameters (turbulent diffusivity, effective return flow, amplitude of the source term, and a parameter describing the effective radial shear). Comparison with the results of 2D flux transport dynamo codes shows that the model captures the essential features of these simulations. We make use of the computational efficiency of the model to carry out an extended parameter study. We cover an extended domain of the 4D parameter space and identify the parameter ranges that provide solar-like solutions. Dipole parity is always preferred and solutions with periods around 22 yr and a correct phase difference between flux emergence in low latitudes and the strength of the polar fields are found for a return flow speed around 2 m s⁻¹, turbulent diffusivity below about 80 km² s⁻¹, and dynamo excitation not too far above the threshold (linear growth rate less than 0.1 yr⁻¹).
Innovative ceramic slab lasers for high power laser applications
NASA Astrophysics Data System (ADS)
Lapucci, Antonio; Ciofini, Marco
2005-09-01
Diode Pumped Solid State Lasers (DPSSL) are gaining increasing interest for high-power industrial applications, given the continuous improvement in the reliability and affordability of high-power diode laser technology. These sources open new windows in the parameter space for traditional applications such as cutting, welding, marking and engraving of high-reflectance metallic materials. Other interesting applications for this kind of source include high-speed thermal printing, precision drilling, selective soldering and thin-film etching. In this paper we examine the most important DPSS laser source types for industrial applications and we describe in detail the performance of some slab laser configurations investigated at our facilities. The advantages and drawbacks of the different architectures are briefly compared in terms of performance, system complexity and ease of scalability to the multi-kW level.
Radiative Transfer in a Translucent Cloud Illuminated by an Extended Background Source
NASA Astrophysics Data System (ADS)
Biganzoli, Davide; Potenza, Marco A. C.; Robberto, Massimo
2017-05-01
We discuss the radiative transfer theory for translucent clouds illuminated by an extended background source. First, we derive a rigorous solution based on the assumption that multiple scatterings produce an isotropic flux. Then we derive a more manageable analytic approximation and show that it nicely matches the results of the rigorous approach. To validate our model, we compare our predictions with accurate laboratory measurements for various types of well-characterized grains, including purely dielectric and strongly absorbing materials representative of astronomical icy and metallic grains, respectively, finding excellent agreement without the need to add free parameters. We use our model to explore the behavior of an astrophysical cloud illuminated by a diffuse source with dust grains having parameters typical of the classic ISM grains of Draine & Lee and of protoplanetary disks, with an application to the dark silhouette disk 114-426 in the Orion Nebula. We find that the scattering term modifies the transmitted radiation, both in terms of intensity (extinction) and shape (reddening) of the spectral distribution. In particular, for small optical thickness, our results show that scattering makes reddening almost negligible at visible wavelengths. Once the optical thickness increases enough and the mean number of scattering events becomes close to or larger than 1, reddening becomes present but is appreciably modified with respect to the standard expression for line-of-sight absorption. Moreover, variations of the grain refractive index, in particular the amount of absorption, also play an important role in changing the shape of the spectral transmission curve, with dielectric grains showing the minimum amount of reddening.
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Fréchet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Fréchet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
On a new coordinate system with astrophysical application: Spiral coordinates
NASA Astrophysics Data System (ADS)
Campos, L. M. B. C.; Gil, P. J. S.
In this presentation, spiral coordinates are introduced; they are a particular case of conformal coordinates, i.e. orthogonal curvilinear coordinates with equal scale factors along all coordinate axes. The spiral coordinates in the plane have as coordinate curves two families of logarithmic spirals, making constant angles, respectively φ and π/2 − φ, with all radial lines, where φ is a parameter. They can be obtained from a complex function representing a spiral potential flow, due to the superposition of a source/sink with a vortex; the parameter φ in this case specifies the ratio of the mass flux of the source/sink to the circulation of the vortex. Regardless of hydrodynamical or other interpretations, spiral coordinates are particularly convenient in situations where physical quantities vary only along a logarithmic spiral. The example chosen is the propagation of Alfvén waves along a logarithmic spiral, as an approximation to Parker's spiral. The equations of dissipative MHD are written in spiral coordinates, and eliminated to specify the Alfvén wave equation in spiral coordinates; the latter is solved exactly in terms of Bessel functions, and the results are analyzed for values of the parameters corresponding to the solar wind.
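For concreteness, a hedged reconstruction of the complex potential behind these coordinates (the symbols m, Γ, φ, ψ are assumed here; the authors' notation may differ):

```latex
% Source/sink of strength m superposed with a vortex of circulation
% \Gamma; F = \phi + i\psi is the complex potential, z = r e^{i\theta}.
F(z) \;=\; \frac{m - i\Gamma}{2\pi}\,\ln z
\quad\Longrightarrow\quad
\phi = \frac{m\ln r + \Gamma\theta}{2\pi},\qquad
\psi = \frac{m\theta - \Gamma\ln r}{2\pi}.
% Streamlines \psi = \text{const} are the logarithmic spirals
r \;=\; \exp\!\left(\frac{m\theta - 2\pi\psi}{\Gamma}\right),
% which cross every radial line at the fixed angle given by
\tan\varphi \;=\; \Gamma/m .
```

The constant crossing angle is exactly the parameter the abstract calls φ: it is set by the ratio of circulation to mass flux, and the equipotentials form the complementary spiral family at π/2 − φ.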
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies all the residual effects of the source (e.g. stress drops) that are not accounted for by the magnitude term, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and the associated variabilities. We will first show the correlation between classical stress drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress drops, but also the variability of source properties that cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
NASA Astrophysics Data System (ADS)
Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott
2013-10-01
The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in the primary productivity model parameters -- chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and the photosynthesis-irradiance response parameters (PmChl, αChl) -- was characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments was analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving the spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of the models used in productivity modeling.
Macromolecular refinement by model morphing using non-atomic parameterizations.
Cowtan, Kevin; Agirre, Jon
2018-02-01
Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.
How Unique is Any Given Seismogram? - Exploring Correlation Methods to Identify Explosions
NASA Astrophysics Data System (ADS)
Walter, W. R.; Dodge, D. A.; Ford, S. R.; Pyle, M. L.; Hauk, T. F.
2015-12-01
As with conventional wisdom about snowflakes, we would consider it unlikely that any two broadband seismograms would ever be exactly identical. However, depending upon the resolution of our comparison metric, we do expect, and often find, bandpassed seismograms that correlate to very high levels (>0.99). In fact, regional (e.g. Schaff and Richards, 2011) and global investigations (e.g. Dodge and Walter, 2015) find large numbers of highly correlated seismograms. Decreasing computational costs are increasing the tremendous potential of correlation for lowering detection, location and identification thresholds in explosion monitoring (e.g. Schaff et al., 2012; Gibbons and Ringdal, 2012; Zhang and Wen, 2015). We have shown that, in the case of Source Physics Experiment (SPE) chemical explosions, templates at local and near-regional stations can detect, locate and identify very small explosions, which might be applied to monitoring active test sites (Ford and Walter, 2015). In terms of elastic theory, seismograms are the convolution of source and Green's function terms. Thus high correlation implies similar sources, closely located. How do we quantify this physically? For example, it is well known that as the template and target events are increasingly separated spatially, their correlation diminishes, as the difference in the Green's functions between the two events grows larger. This is related to the event separation in terms of wavelength, the heterogeneity of the Earth structure, and the time-bandwidth of the correlation parameters used, but it has not been well quantified. We are using the historic dataset of nuclear explosions in southern Nevada to explore empirically where and how well these events correlate as a function of location, depth, size, time-bandwidth and other parameters. A goal is to develop more meaningful and physical metrics that go beyond the correlation coefficient and can be applied to explosion monitoring problems, particularly event identification.
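A minimal sketch of the comparison metric in question: bandpass both traces, then take the peak of the normalized cross-correlation. The band edges, filter order, and the >0.99 detection level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def max_norm_corr(a, b, fs, f1, f2, order=4):
    """Peak normalized cross-correlation of two bandpassed seismograms.

    a, b   : equal-length traces sampled at fs (Hz)
    f1, f2 : bandpass edges (Hz)
    Returns a value in [-1, 1]; template matching declares a detection
    when this exceeds a chosen threshold (e.g. the >0.99 level above).
    """
    bb, aa = butter(order, [f1, f2], btype="bandpass", fs=fs)
    x = filtfilt(bb, aa, a)
    y = filtfilt(bb, aa, b)
    x, y = x - x.mean(), y - y.mean()
    cc = np.correlate(x, y, mode="full")         # scan over all lags
    return cc.max() / np.sqrt((x**2).sum() * (y**2).sum())
```

Narrowing the band or shortening the window raises the correlation between nearby events, which is why the time-bandwidth of the comparison enters the separation-versus-correlation question the abstract raises.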
Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation
De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan
2017-01-01
Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular-resolution plant tissue simulators have been developed, yet they typically describe physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development, we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator, the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study the parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website at https://vptissue.bitbucket.io. PMID:28523006
Equivalent source modeling of the core magnetic field using magsat data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.
1983-01-01
Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged with equal-area spacing at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field, in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable in accuracy to the standard spherical harmonic approach. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds to approximately a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex problem of nonpoint source pollution control, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of best management practices (BMPs).
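The defining feature of an auto-adaptive genetic algorithm is that strategy parameters evolve alongside the candidate solutions, removing the need for hand calibration. A toy sketch under that assumption follows; the paper's actual operators, encoding, and objective (which couple a watershed model with an economic module) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def auto_adaptive_ga(fitness, dim, pop=60, gens=200):
    """Toy auto-adaptive GA: each individual carries its own mutation
    step size, which is itself mutated and inherited, so the mutation
    parameter adapts automatically during the run (minimization)."""
    x = rng.random((pop, dim))               # candidate configurations in [0,1]
    sigma = np.full(pop, 0.1)                # per-individual mutation scales
    for _ in range(gens):
        f = np.array([fitness(xi) for xi in x])
        keep = np.argsort(f)[: pop // 2]     # truncation selection
        x, sigma = x[keep], sigma[keep]
        # self-adaptation: perturb the step size, then the solution
        sig_child = sigma * np.exp(0.2 * rng.standard_normal(sigma.size))
        children = np.clip(
            x + sig_child[:, None] * rng.standard_normal(x.shape), 0.0, 1.0)
        x = np.vstack([x, children])
        sigma = np.concatenate([sigma, sig_child])
    return x[np.argmin([fitness(xi) for xi in x])]

best = auto_adaptive_ga(lambda v: ((v - 0.3) ** 2).sum(), dim=4)
```

Because poorly chosen step sizes produce poor offspring and are selected away, the algorithm tunes its own exploration rate, which is the property the abstract credits for the improved evolutionary process.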
Fahnline, John B
2016-12-01
An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
UV fatigue investigations with non-destructive tools in silica
NASA Astrophysics Data System (ADS)
Natoli, Jean-Yves; Beaudier, Alexandre; Wagner, Frank R.
2017-08-01
A fatigue effect is often observed under multiple laser irradiations, especially in the UV. This decrease of the laser-induced damage threshold (LIDT) is a critical parameter for laser sources with high repetition rates and a need for long-term lifetime, as in space applications at 355 nm. A further challenge is to replace excimer lasers by solid-state laser sources, which requires drastically improving the lifetime of optical materials at 266 nm. The main applications of these sources are material surface nanostructuration, spectroscopy and medical surgery. In this work we focus on understanding the laser-matter interaction at 266 nm in silica in order to predict the lifetime of components, and we study the parameters linked to these lifetimes to give material suppliers keys for improvement. In order to study the mechanism involved in the case of multiple irradiations, an interesting approach is to follow the evolution of fluorescence, so as to observe the first stages of material change just before breakdown. We will show that it is sometimes possible to estimate the lifetime of a component from the fluorescence measurement alone, saving time and materials. Moreover, the data from these diagnostics give relevant information to highlight "defects" induced by multiple laser irradiations.
Design of HIFU Transducers for Generating Specified Nonlinear Ultrasound Fields.
Rosnitskiy, Pavel B; Yuldashev, Petr V; Sapozhnikov, Oleg A; Maxwell, Adam D; Kreider, Wayne; Bailey, Michael R; Khokhlova, Vera A
2017-02-01
Various clinical applications of high-intensity focused ultrasound have different requirements for the pressure levels and degree of nonlinear waveform distortion at the focus. The goal of this paper is to determine transducer design parameters that produce either a specified shock amplitude in the focal waveform or specified peak pressures while still maintaining quasi-linear conditions at the focus. Multiparametric nonlinear modeling based on the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation with an equivalent source boundary condition was employed. Peak pressures, shock amplitudes at the focus, and corresponding source outputs were determined for different transducer geometries and levels of nonlinear distortion. The results are presented in terms of the parameters of an equivalent single-element spherically shaped transducer. The accuracy of the method and its applicability to cases of strongly focused transducers were validated by comparing the KZK modeling data with measurements and nonlinear full diffraction simulations for a single-element source and arrays with 7 and 256 elements. The results provide look-up data for evaluating nonlinear distortions at the focus of existing therapeutic systems as well as for guiding the design of new transducers that generate specified nonlinear fields.
Operation of large RF sources for H-: Lessons learned at ELISE
NASA Astrophysics Data System (ADS)
Fantz, U.; Wünderlich, D.; Heinemann, B.; Kraus, W.; Riedl, R.
2017-08-01
The goal of the ELISE test facility is to demonstrate that large RF-driven negative ion sources (1 × 1 m² source area with 360 kW installed RF power) can achieve the parameters required for the ITER beam sources in terms of current densities and beam homogeneity at a filling pressure of 0.3 Pa for pulse lengths of up to one hour. From the experience gained in operating the test facility, in beam source inspection and maintenance, and from the source performance achieved so far, conclusions are drawn for the commissioning and operation of the ITER beam sources. Addressed are critical technical RF issues, extrapolations to the required RF power, Cs consumption and Cs ovens, the need to adjust the magnetic filter field strength, and the temporal dynamics and spatial asymmetry of the co-extracted electron current. It is proposed to relax the low-pressure limit to 0.4 Pa and to replace the fixed electron-to-ion ratio by a power density limit for the extraction grid. This would be highly beneficial for controlling the co-extracted electrons.
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-08-05
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
NASA Astrophysics Data System (ADS)
Nikolaeva, E. A.; Bikmaev, I. F.; Shimansky, V. V.; Sakhibullin, N. A.
2017-06-01
We investigate the parameters of two high-mass X-ray binary systems, IGR J17544-2619 and IGR J21343+4738, discovered by the INTEGRAL space observatory, using optical data from the Russian-Turkish Telescope (RTT-150). Long-term optical observations of the X-ray binary systems IGR J17544-2619 and IGR J21343+4738 were carried out in 2007-2015. Based on the RTT-150 data we estimated the orbital periods of these systems. We have modeled the profiles of the He I 6678 Å line in the spectra of the optical stars studied and obtained the parameters of the stellar atmospheres.
Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions
NASA Astrophysics Data System (ADS)
Buddala, Santhoshi Snigdha
Since the industrial revolution, fossil fuels like petroleum, coal, oil and natural gas and other non-renewable energy sources have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts which are hazardous in nature; they tend to deplete the protective layers and affect the overall environmental balance. Fossil fuels are also finite resources, and their rapid depletion has prompted the need to investigate alternate sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency by comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
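A minimal sketch of the parallel-architecture argument using the standard implicit single-diode cell model: at a common voltage the cell currents simply add, so partial shading scales one cell's photocurrent without the mismatch losses of a series string. All device parameter values are illustrative assumptions, not those of the thesis.

```python
import numpy as np
from scipy.optimize import brentq

def iv_point(V, Iph, I0=1e-9, Rs=0.02, Rsh=50.0, n=1.3, Vt=0.02585):
    """Cell current at voltage V from the implicit single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh."""
    def f(I):
        return (Iph - I0 * np.expm1((V + I * Rs) / (n * Vt))
                - (V + I * Rs) / Rsh - I)
    return brentq(f, -2.0 * Iph - 1.0, 2.0 * Iph + 1.0)  # bracketed root

# Parallel connection: cells share one voltage, currents add.
V = np.linspace(0.0, 0.6, 50)
full_sun = np.array([iv_point(v, Iph=3.0) for v in V])
shaded = np.array([iv_point(v, Iph=0.9) for v in V])   # 70% shaded cell
module_current = full_sun + shaded                      # two-cell module
```

The price of this simplicity is the low module voltage, which is why the thesis pairs the architecture with a dc-dc boost converter.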
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements.
Non-Poissonian Distribution of Tsunami Waiting Times
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2007-12-01
Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo than in the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution: the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however; for example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as the water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
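Fitting Model (1) is straightforward with standard tools; a sketch on synthetic waiting times (the catalog data themselves are not reproduced here):

```python
import numpy as np
from scipy import stats

# Synthetic waiting times (days) standing in for a tsunami catalog;
# a shape parameter below 1 encodes the excess of short waits.
waits = stats.gamma.rvs(a=0.7, scale=30.0, size=500, random_state=1)

# Fit a gamma distribution with the location fixed at zero and compare
# the shape with the exponential (Poisson-process) case gamma = 1.
shape, loc, scale = stats.gamma.fit(waits, floc=0)
print(f"estimated shape gamma = {shape:.2f}")   # < 1 -> clustering

# Goodness-of-fit check against the fitted gamma
D, p = stats.kstest(waits, "gamma", args=(shape, 0, scale))
```

A recovered shape near the 0.63-0.67 global value (or 0.75-0.82 for Hilo) would reproduce the abstract's diagnostic that waiting times are clustered rather than Poissonian.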
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min
2018-03-01
Sensing the ionosphere with the Global Positioning System involves two sequential tasks, namely ionospheric observable retrieval and ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in the receiver differential code bias (rDCB). We modify carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, by assuming the rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among other factors, the rDCB at one reference epoch, the Modified CCL (MCCL) also provides the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of the ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by the rDCB of a single receiver.
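For orientation, a sketch of conventional CCL, the method being modified; the sign conventions and the per-arc constant-bias assumption (which is exactly what MCCL relaxes by estimating epoch-wise rDCB offsets) are noted in the comments.

```python
import numpy as np

def ccl(L4, P4, arcs):
    """Conventional carrier-to-code leveling (not MCCL).

    L4, P4 : geometry-free carrier and code combinations, same units;
             both contain the slant ionospheric term, the carrier with
             an unknown per-arc constant, the code with the DCBs.
    arcs   : list of index slices, one per cycle-slip-free carrier arc.
    The smooth carrier is shifted by the mean code-minus-carrier offset
    of its arc, transferring the code's bias level onto the carrier.
    CCL assumes one constant receiver DCB over the whole arc; MCCL
    instead estimates an rDCB offset at every epoch.
    """
    out = np.full_like(L4, np.nan, dtype=float)
    for s in arcs:
        out[s] = L4[s] + np.mean(P4[s] - L4[s])
    return out   # leveled ionospheric observable (plus satellite/receiver DCBs)

# e.g. arcs = [slice(0, 1200), slice(1200, 2400)]
```

If the receiver DCB drifts within an arc, the arc-mean offset is wrong at every epoch, which is the error source the abstract identifies and MCCL removes.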
Deflection of light to second order in conformal Weyl gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sultana, Joseph, E-mail: joseph.sultana@um.edu.mt
2013-04-01
We reexamine the deflection of light in conformal Weyl gravity obtained in Sultana and Kazanas (2010) by extending the calculation, based on the procedure of Rindler and Ishak for the bending angle by a centrally concentrated spherically symmetric matter distribution, to second order in M/R, where M is the mass of the source and R is the impact parameter. It has recently been reported in Bhattacharya et al. (JCAP 09 (2010) 004; JCAP 02 (2011) 028) that when this calculation is done to second order, the term γr in the Mannheim-Kazanas metric again yields the paradoxical contribution γR (where the bending angle is proportional to the impact parameter) obtained by standard formalisms appropriate to asymptotically flat spacetimes. We show that no such contribution is obtained in a second-order calculation and that the effects of the term γr in the metric are again insignificant, as reported in our earlier work.
NASA Astrophysics Data System (ADS)
Lucchi, M.; Lorenzini, M.; Valdiserri, P.
2017-01-01
This work presents a numerical simulation of the annual performance of two different systems: a traditional one composed of a gas boiler-chiller pair, and one consisting of a ground source heat pump (GSHP), both coupled to two thermal storage tanks. The systems serve a block of flats located in northern Italy and are assessed over a typical weather year, covering both the heating and cooling seasons. The air handling unit (AHU) coupled with the GSHP exhibits excellent characteristics in terms of temperature control and has high performance parameters (EER and COP), which make running costs about 30% lower than those estimated for the traditional plant.
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
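Since the entry above extends Gifford's fluctuating plume model, a minimal sketch of the underlying time-average Gaussian plume formula for an elevated point source may help fix ideas; the power-law dispersion coefficients are crude assumptions, not Gifford's fluctuating-plume parameters, which additionally model the instantaneous plume's random meander about this mean.

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Time-average concentration downwind of an elevated point source
    (standard Gaussian plume with ground reflection).

    Q : emission rate (kg/s), u : wind speed (m/s), H : stack height (m)
    x, y, z : downwind, crosswind, vertical receptor coordinates (m)
    a, b : assumed power-law coefficients for sigma_y, sigma_z
    """
    sy, sz = a * x**0.9, b * x**0.85            # dispersion lengths (m)
    return (Q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2.0 * sy**2))
            * (np.exp(-(z - H)**2 / (2.0 * sz**2))
               + np.exp(-(z + H)**2 / (2.0 * sz**2))))   # image source

c = gaussian_plume(Q=1.0, u=4.0, x=500.0, y=0.0, z=2.0, H=50.0)  # kg/m^3
```

In the estimation-theory framework of the abstract, quantities such as the plume centroid and dispersion lengths become components of the state vector updated from the air quality measurements.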
Comparison of different wavelength pump sources for Tm subnanosecond amplifier
NASA Astrophysics Data System (ADS)
Cserteg, Andras; Guillemet, Sébastien; Hernandez, Yves; Giannone, Domenico
2012-06-01
We report here a comparison of different pumping wavelengths for short-pulse Thulium fibre amplifiers. We compare the results in terms of efficiency and required fibre length. As we operate the laser in the sub-nanosecond regime, the fibre length is a critical parameter regarding nonlinear effects. With 793 nm clad-pumping, a 4 m long active fibre was necessary, leading to strong spectral deformation through Self-Phase Modulation (SPM). The core-pumping scheme was then investigated in more depth, with several wavelengths tested. Good results were obtained with Erbium and Raman-shifted pumping sources, with very short fibre lengths, aiming to reach a few microjoules per pulse without (or with limited) SPM.
SPECTRAL SURVEY OF X-RAY BRIGHT ACTIVE GALACTIC NUCLEI FROM THE ROSSI X-RAY TIMING EXPLORER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivers, Elizabeth; Markowitz, Alex; Rothschild, Richard, E-mail: erivers@ucsd.edu
2011-03-15
Using long-term monitoring data from the Rossi X-ray Timing Explorer (RXTE), we have selected 23 active galactic nuclei (AGNs) with sufficient brightness and overall observation time to derive broadband X-ray spectra from 3 to ≳100 keV. Our sample includes mainly radio-quiet Seyferts, as well as seven radio-loud sources. Given the longevity of the RXTE mission, the greater part of our data is spread out over more than a decade, providing truly long-term average spectra and eliminating inconsistencies arising from variability. We present long-term average values of absorption, Fe line parameters, Compton reflection strengths, and photon indices, as well as fluxes and luminosities for the hard and very hard energy bands, 2-10 keV and 20-100 keV, respectively. We find tentative evidence for high-energy rollovers in three of our objects. We improve upon previous surveys of the very hard X-ray energy band in terms of accuracy and sensitivity, particularly with respect to confirming and quantifying the Compton reflection component. This survey is meant to provide a baseline for future analysis with respect to the long-term averages for these sources and to cement the legacy of RXTE, and especially its High Energy X-ray Timing Experiment, as a contributor to AGN spectral science.
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem that rigid structures appear to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for a single auto-term and for cross-terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time-delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
Hynds, Paul D; Misstear, Bruce D; Gill, Laurence W
2013-09-30
While the safety of public drinking water supplies in the Republic of Ireland is governed and monitored at both local and national levels, there are currently no legislative tools in place relating to private supplies. It is therefore paramount that private well owners (and users) be aware of source specifications and potential contamination risks, to ensure adequate water quality. The objective of this study was to investigate the level of awareness among private well owners in the Republic of Ireland relating to source characterisation and groundwater contamination issues. This was undertaken through interviews with 245 private well owners. Statistical analysis indicates that respondents' source type significantly influences owner awareness, particularly regarding well construction and design parameters. Water treatment, source maintenance and regular water quality testing are considered the three primary "protective actions" (or "stewardship activities") against consumption of contaminated groundwater, and these were reported as being absent in 64%, 72% and 40% of cases, respectively. Results indicate that the level of awareness exhibited by well users did not significantly affect the likelihood of their source being contaminated (source susceptibility); increased awareness on behalf of well users was associated with increased levels of protective action, particularly among borehole owners. Hence, lower levels of awareness may result in increased contraction of waterborne illnesses where contaminants have entered the well. Accordingly, focused educational strategies to increase awareness among private groundwater users are advocated in the short term; the development and introduction of formal legislation is recommended in the long term, including an integrated programme of well inspections and risk assessments.
NASA Astrophysics Data System (ADS)
Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi
2017-07-01
While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate the source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and the DC levels of the spectra, which are important parameters for constraining the source parameters. We further use these spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes, and that the large strain rate from tides makes the brittle-ductile transition temperature higher. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions, and thereby improve our understanding of the deep moonquake source mechanism.
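A minimal sketch of corner-frequency and DC-level estimation by fitting an ω-square (Brune-type) source spectrum, one standard route from spectra to seismic moments and stress drops; the spectrum below is synthetic, and the paper's exact fitting procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    """Omega-square source spectrum: flat DC level omega0, corner fc."""
    return omega0 / (1.0 + (f / fc) ** 2)

# f: frequency axis of a combined long+short-period displacement
# spectrum; spec: observed amplitudes (synthetic here for illustration).
rng = np.random.default_rng(0)
f = np.logspace(-1, 1, 200)
spec = brune(f, 1e-8, 1.5) * np.exp(0.05 * rng.standard_normal(f.size))

(omega0, fc), _ = curve_fit(brune, f, spec, p0=(spec[0], 1.0))
# The seismic moment scales with the DC level omega0, and the stress
# drop then follows from M0 and fc (delta_sigma proportional to M0*fc**3),
# which is how smooth (low stress drop) faults show up in the fits.
```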
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks, due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
Evaluation of the communications impact of a low power arcjet thruster
NASA Technical Reports Server (NTRS)
Carney, Lynnette M.
1988-01-01
The interaction of a 1 kW arcjet thruster plume with a communications signal is evaluated. A two-parameter, source flow equation has been used to represent the far flow field distribution of the arcjet plume in a realistic spacecraft configuration. Modelling the plume as a plasma slab, the interaction of the plume with a 4 GHz communications signal is then evaluated in terms of signal attenuation and phase shift between transmitting and receiving antennas. Except for propagation paths which pass very near the arcjet source, the impacts to transmission appear to be negligible. The dominant signal loss mechanism is refraction of the beam rather than absorption losses due to collisions. However, significant reflection of the signal at the sharp vacuum-plasma boundary may also occur for propagation paths which pass near the source.
A comparative study of radiofrequency antennas for Helicon plasma sources
NASA Astrophysics Data System (ADS)
Melazzi, D.; Lancellotti, V.
2015-04-01
Since Helicon plasma sources can efficiently couple power and generate high-density plasma, they have received interest also as spacecraft propulsive devices, among other applications. In order to maximize the power deposited into the plasma, it is necessary to assess the performance of the radiofrequency (RF) antenna that drives the discharge, as typical plasma parameters (e.g. the density) are varied. For this reason, we have conducted a comparative analysis of three Helicon sources which feature different RF antennas, namely, the single-loop, the Nagoya type-III and the fractional helix. These antennas are compared in terms of input impedance and induced current density; in particular, the real part of the impedance constitutes a measure of the antenna ability to couple power into the plasma. The results presented in this work have been obtained through a full-wave approach which (being hinged on the numerical solution of a system of integral equations) allows computing the antenna current and impedance self-consistently. Our findings indicate that certain combinations of plasma parameters can indeed maximize the real part of the input impedance and, thus, the deposited power, and that one of the three antennas analyzed performs best for a given plasma. Furthermore, unlike other strategies which rely on approximate antenna models, our approach enables us to reveal that the antenna current density is not spatially uniform, and that a correlation exists between the plasma parameters and the spatial distribution of the current density.
Cassette, Philippe
2016-03-01
In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system and its detection efficiency varies with the scintillator used, the vial, and the volume and chemistry of the sample. The detection efficiency is generally determined using a quenching curve, describing, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. Then a simple formula is fitted to the experimental points to define the quenching curve function. The paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times and eventually the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between the parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms.
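The described procedure lends itself to a compact sketch; the polynomial form, the weighting, and the random-number handling below are illustrative assumptions rather than the package's actual implementation.

```python
import numpy as np

def quench_curve_mc(q, eff, sq, seff, deg=2, n_mc=10_000):
    """Monte Carlo quenching-curve fit as described above.

    q, eff   : measured quenching indices and detection efficiencies
    sq, seff : their standard uncertainties (arrays of the same length)
    Samples Gaussian fluctuations of both quantities, fits a polynomial
    to each draw (the q fluctuation is how the quenching-index
    uncertainty is propagated here), and returns the mean parameters
    and their covariance matrix.
    """
    rng = np.random.default_rng(1)
    coefs = np.empty((n_mc, deg + 1))
    for i in range(n_mc):
        qi = q + sq * rng.standard_normal(q.size)
        ei = eff + seff * rng.standard_normal(eff.size)
        coefs[i] = np.polyfit(qi, ei, deg, w=1.0 / seff)  # weighted fit
    return coefs.mean(axis=0), np.cov(coefs.T)

# For an arbitrary quenching index q0, the efficiency is the polynomial
# evaluated at q0 and its variance follows from the law of propagation
# of variances: var = J @ cov @ J.T with J = [q0**deg, ..., q0, 1].
```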
Locating and Modeling Regional Earthquakes with Broadband Waveform Data
NASA Astrophysics Data System (ADS)
Tan, Y.; Zhu, L.; Helmberger, D.
2003-12-01
Retrieving the source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location, and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory level in places with a dense seismic network, such as TriNet in Southern California, revisiting historical events in these places, or investigating small events effectively and in real time in the many other places where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time, and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh, and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh, and Love waves at relatively long periods, i.e., 4-100 s for Pnl and 8-100 s for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh, and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which could come from as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green's function library constructed from a regionalized 1-D model together with the necessary calibration information, and adopts a grid search strategy for both hypocenter and focal mechanism. We show that the whole process can be easily automated to routinely provide reliable source parameter estimates with a couple of broadband stations. Two applications in the Tibet Plateau and Southern California will be presented along with comparisons of results against other methods.
Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan
2016-11-15
Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations, a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
JAMSS: proteomics mass spectrometry simulation in Java.
Smith, Rob; Prince, John T
2015-03-01
Countless proteomics data processing algorithms have been proposed, yet few have been critically evaluated due to a lack of labeled data (data with known identities and quantities). Although labeling techniques exist, they are limited in terms of confidence and accuracy. In silico simulators have recently been used to create complex data with known identities and quantities. We propose the Java Mass Spectrometry Simulator (JAMSS): a fast, self-contained in silico simulator capable of generating simulated MS and LC-MS runs while providing meta information on the provenance of each generated signal. JAMSS improves upon previous in silico simulators in terms of its ease of installation, minimal parameters, graphical user interface, multithreading capability, retention time shift model, and reproducibility. The simulator outputs mzML 1.1.0 files. It is open source software licensed under the GPLv3. The software and source are available at https://github.com/optimusmoose/JAMSS. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran
2011-01-01
A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi
2014-01-01
We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is viable and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters are forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
2014-01-01
Background The chemical composition of aerosols and particle size distributions are the most significant factors affecting air quality. In particular, exposure to finer particles can cause short- and long-term effects on human health. In the present paper, the trends of PM10 (particulate matter with aerodynamic diameter lower than 10 μm), CO, NOx (NO and NO2), benzene, and toluene monitored in six monitoring stations of Bari province are shown. The data set was composed of bi-hourly means for all parameters (12 bi-hourly means per day for each parameter) and refers to the period from January 2005 to May 2007. The main aim of the paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of pollutant sources, and mainly to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques such as Principal Component Analysis (PCA) and Absolute Principal Component Scores (APCS). Results Comparing the night and day mean concentrations for each parameter shows that CO, benzene, and toluene behave differently between night and day, whereas PM10 does not. This suggests that CO, benzene, and toluene concentrations are mainly connected with transport systems, whereas PM10 is mostly influenced by other factors. The statistical techniques identified three recurrent sources, associated with vehicular traffic and particulate transport, covering over 90% of the variance. The simultaneous analysis of the gases and PM10 allowed the differences between the sources of these pollutants to be underlined. Conclusions The analysis of pollutant trends from a large data set and the application of multivariate statistical techniques such as PCA and APCS can give useful information about air quality and pollutant sources. This knowledge can provide useful advice for environmental policies in order to reach the WHO recommended levels. PMID:24555534
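As an illustration of the PCA/APCS workflow described above, the following sketch standardizes a pollutant matrix, extracts components, rescales the scores to absolute values via an artificial true-zero sample, and regresses concentrations on the APCS to apportion source contributions. The data are synthetic stand-ins for the Bari monitoring records, and the varimax rotation commonly applied before APCS is omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix of bi-hourly observations x pollutants
# (columns: PM10, CO, NOx, benzene, toluene).
rng = np.random.default_rng(1)
X = np.abs(rng.normal(loc=[40, 0.8, 55, 1.5, 4.0],
                      scale=[15, 0.3, 20, 0.6, 1.5],
                      size=(500, 5)))

Z = StandardScaler().fit_transform(X)      # standardize each pollutant
pca = PCA(n_components=3).fit(Z)           # retain 3 source-like components
scores = pca.transform(Z)

# APCS: score an artificial "true zero" sample and subtract it, so the
# component scores become absolute rather than mean-centred.
z0 = (np.zeros(X.shape[1]) - X.mean(axis=0)) / X.std(axis=0)
apcs = scores - pca.transform(z0.reshape(1, -1))

# Regress each pollutant on the APCS to apportion source contributions.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(apcs)), apcs]),
                           X, rcond=None)
print("explained variance ratios:", pca.explained_variance_ratio_)
```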
Chen, Dingjiang; Guo, Yi; Hu, Minpeng; Dahlgren, Randy A
2015-08-01
Legacy nitrogen (N) sources originating from anthropogenic N inputs (NANI) may be a major cause of increasing riverine N exports in many regions, despite a significant decline in NANI. However, little quantitative knowledge exists concerning the lag effect of NANI on riverine N export, and as a result the N leaching lag effect is not well represented in most current watershed models. This study developed a lagged variable model (LVM) to address the temporally dynamic export of watershed NANI to rivers. Employing a Koyck transformation approach used in economic analyses, the LVM expresses the indefinite number of lag terms from previous years' NANI with a lag term that incorporates the previous year's riverine N flux, enabling model parameters to be calibrated inversely from measurable variables using Bayesian statistics. Applying the LVM to the upper Jiaojiang watershed in eastern China for 1980-2010 indicated that ~97% of the riverine export of annual NANI occurred in the current year and the succeeding 10 years (a lag time of ~11 years) and that ~72% of the annual riverine N flux was derived from previous years' NANI. NANI over the 1993-2010 period would have required a 22% reduction to attain the target TN level (1.0 mg N L⁻¹), guiding watershed N source controls that account for the lag effect. The LVM was developed with parsimony of model structure and parameters (only four parameters in this study); thus, it is easy to develop and apply in other watersheds. The LVM provides a simple and effective tool for quantifying the lag effect of anthropogenic N input on riverine export in support of efficient development and evaluation of watershed N control strategies.
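A minimal numerical sketch of the Koyck-transformed lagged variable model may make the lag mechanics concrete. All variable names and numbers below are hypothetical, and ordinary least squares stands in for the Bayesian calibration used in the study:

```python
import numpy as np

# Koyck-transformed lagged variable model (a sketch, not the authors' code):
#   N_t = a * NANI_t + lam * N_{t-1} + b * Q_t + e_t
# The geometric-lag weight lam implies that a fraction lam**k of year-t
# inputs is still being exported k years later.
rng = np.random.default_rng(2)
years = 31
nani = 40 + 0.5 * np.arange(years) + rng.normal(0, 2, years)  # kg N/ha/yr
flow = rng.normal(1.0, 0.15, years)                           # runoff index

# Synthesize a riverine N flux from "true" parameters, then recover them.
a_true, lam_true, b_true = 0.12, 0.65, 3.0
flux = np.zeros(years)
for t in range(1, years):
    flux[t] = (a_true * nani[t] + lam_true * flux[t - 1]
               + b_true * flow[t] + rng.normal(0, 0.2))

# Least-squares fit of the Koyck form on observable quantities only.
A = np.column_stack([nani[1:], flux[:-1], flow[1:]])
(a_hat, lam_hat, b_hat), *_ = np.linalg.lstsq(A, flux[1:], rcond=None)

# Effective lag: years until ~97% of an input pulse has been exported.
lag_years = int(np.ceil(np.log(0.03) / np.log(lam_hat)))
print(a_hat, lam_hat, b_hat, lag_years)
```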
Liao, Renkuan; Yang, Peiling; Wu, Wenyong; Ren, Shumei
2016-01-01
The widespread use of superabsorbent polymers (SAPs) in arid regions improves the efficiency of local land and water use. However, SAPs' repeated absorption and release of water has periodic and unstable effects on both the soil's physical and chemical properties and the growth of plant roots, which complicates modeling of water movement in SAP-treated soils. In this paper, we propose a model of soil water movement for SAP-treated soils. The residence time of SAP in the soil and the duration of the experiment are treated as the same parameter t. This simplifies previously proposed models, in which the residence time of SAP in the soil and the experiment's duration were considered as two independent parameters. Numerical testing was carried out on the inverse method of estimating the source/sink term of root water uptake in the model of soil water movement under the effect of SAP. The test results show that time interval, hydraulic parameters, test error, and instrument precision had a significant influence on the stability of the inverse method, while time step, layering of soil, and boundary conditions had relatively smaller effects. A comprehensive analysis of the method's stability, calculation, and accuracy suggests that the proposed inverse method applies if the following conditions are satisfied: the time interval is between 5 d and 17 d; the time step is between 1000 and 10000; the test error is ≥ 0.9; the instrument precision is ≤ 0.03; and the rate of soil surface evaporation is ≤ 0.6 mm/d. PMID:27505000
Inflationary cosmology with Chaplygin gas in Palatini formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowiec, Andrzej; Wojnar, Aneta; Stachowski, Aleksander
2016-01-01
We present a simple generalisation of the ΛCDM model which, on the one hand, reaches very good agreement with present-day experimental data and, on the other, provides an internal inflationary mechanism. It is based on Palatini modified gravity with a quadratic Starobinsky term and generalized Chaplygin gas as a matter source, providing, besides the current accelerated expansion, an epoch of endogenous inflation driven by a type III freeze singularity. It follows from our statistical analysis that astronomical data favor a negative value of the parameter coupling the quadratic term into the Einstein-Hilbert Lagrangian and that, as a consequence, a bounce instead of an initial Big-Bang singularity is preferred.
Global Source Parameters from Regional Spectral Ratios for Yield Transportability Studies
NASA Astrophysics Data System (ADS)
Phillips, W. S.; Fisk, M. D.; Stead, R. J.; Begnaud, M. L.; Rowe, C. A.
2016-12-01
We use source parameters such as moment, corner frequency and high frequency rolloff as constraints in amplitude tomography, ensuring that spectra of well-studied earthquakes are recovered using the ensuing attenuation and site term model. We correct explosion data for path and site effects using such models, which allows us to test transportability of yield estimation techniques based on our best source spectral estimates. To develop a background set of source parameters, we applied spectral ratio techniques to envelopes of a global set of regional distance recordings from over 180,000 crustal events. Corner frequencies and moment ratios were determined via inversion using all event pairs within predetermined clusters, shifting to absolute levels using independently determined regional and teleseismic moments. The moment and corner frequency results can be expressed as stress drop, which has considerable scatter, yet shows dramatic regional patterns. We observe high stress in subduction zones along S. America, S. Mexico, the Banda Sea, and associated with the Yakutat Block in Alaska. We also observe high stress at the Himalayan syntaxes, the Pamirs, eastern Iran, the Caspian, the Altai-Sayan, and the central African rift. Low stress is observed along mid ocean spreading centers, the Afar rift, patches of convergence zones such as Nicaragua, the Zagros, Tibet, and the Tien Shan, among others. Mine blasts appear as low stress events due to their low corners and steep rolloffs. Many of these anomalies have been noted by previous studies, and we plan to compare results directly. As mentioned, these results will be used to constrain tomographic imaging, but can also be used in model validation procedures similar to the use of ground truth in location problems, and, perhaps most importantly, figure heavily in quality control of local and regional distance amplitude measurements.
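The spectral-ratio step can be illustrated with the standard omega-square (Brune) source model, for which path and site terms cancel between co-located event pairs. The sketch below fits the moment ratio and the two corner frequencies to synthetic ratio data; it is not the authors' inversion code, which operates on all event pairs within clusters rather than single pairs:

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_ratio(f, m_ratio, fc1, fc2):
    """Ratio of two omega-square (Brune) source spectra.

    Path and site terms cancel for co-located events recorded at the
    same station, leaving only the moment ratio and the two corner
    frequencies.
    """
    return m_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

# Hypothetical smoothed spectral ratio of a large/small event pair.
f = np.logspace(-0.5, 1.3, 60)                      # ~0.3 - 20 Hz
true = brune_ratio(f, 80.0, 0.8, 6.0)
obs = true * np.exp(np.random.default_rng(3).normal(0, 0.05, f.size))

popt, pcov = curve_fit(brune_ratio, f, obs,
                       p0=(obs[0], 1.0, 5.0),
                       bounds=([1, 0.05, 0.05], [1e4, 50, 50]))
m_ratio, fc1, fc2 = popt
print(f"moment ratio {m_ratio:.1f}, corners {fc1:.2f} / {fc2:.2f} Hz")
```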
New opportunities in quasi elastic neutron scattering spectroscopy
NASA Astrophysics Data System (ADS)
Mezei, F.; Russina, M.
2001-07-01
The high energy resolution usually required in quasi elastic neutron scattering (QENS) spectroscopy is commonly achieved by the use of cold neutrons. This is one of the important research areas where the majority of current work is done on instruments at continuous reactor sources. One particular reason for this is the capability of continuous-source time-of-flight spectrometers to use instrumental parameters optimally adapted for the best data collection efficiency in each experiment. These parameters include the pulse repetition rate and the length of the pulses, which achieve an optimal balance between resolution and intensity. In addition, the disc chopper systems used provide perfectly symmetrical line shapes with no tails and low background. The recent development of a set of novel techniques enhances the efficiency of cold neutron spectroscopy on existing and future spallation sources in a dramatic fashion. These techniques involve the use of extended-pulse-length, high-intensity coupled moderators, disc chopper systems, and advanced neutron optical beam delivery, and they will enable the Lujan Center at Los Alamos to surpass the best existing reactor instruments in time-of-flight QENS work by more than one order of magnitude in terms of beam flux on the sample. Other applications of the same techniques will allow us to combine the advantages of backscattering spectroscopy on continuous and pulsed sources in order to deliver μeV resolution over a very broad energy transfer range.
Efficient RF energy harvesting by using a fractal structured rectenna system
NASA Astrophysics Data System (ADS)
Oh, Sechang; Ramasamy, Mouli; Varadan, Vijay K.
2014-04-01
A rectenna system delivers, collects, and converts RF energy into direct current to power electronic devices or recharge batteries. It consists of an antenna for receiving RF power, an input filter for processing energy and impedance matching, a rectifier, an output filter, and a load resistor. However, conventional rectenna systems have a drawback in terms of power generation, as the single resonant frequency of an antenna can generate only low power compared to multiple resonant frequencies. A multi-band rectenna system is an optimal solution to generate more power. This paper proposes the design of a novel rectenna system, which involves developing a multi-band rectenna with a fractal-structured antenna to facilitate an increase in energy harvesting from various sources like Wi-Fi, TV signals, mobile networks, and other ambient sources, eliminating the limitation of a single-band technique. The usage of fractal antennas offers certain prominent advantages in terms of size and multiple resonances. Even though a fractal antenna incorporates multiple resonances, controlling the resonant frequencies is an important aspect of generating power from the various desired RF sources. Hence, this paper also describes the design parameters of the fractal antenna and the methods to control the multi-band frequencies.
Kumarathilaka, Prasanna; Seneweera, Saman; Meharg, Andrew; Bundschuh, Jochen
2018-04-21
Rice is the main staple carbohydrate source for billions of people worldwide. Natural geogenic and anthropogenic sources have led to high arsenic (As) concentrations in rice grains, because As is highly bioavailable to rice roots under the conditions in which rice is cultivated. A multifaceted and interdisciplinary understanding, of both short-term and long-term effects, is required to identify spatial and temporal changes in As contamination levels in paddy soil-water systems. During flooding, soil pore waters are elevated in inorganic As compared to dryland cultivation systems, as anaerobism causes poorly mobile As(V) to be reduced to highly mobile As(III). The formation of iron (Fe) plaque on roots, the availability of metal (hydr)oxides (Fe and Mn), organic matter, clay mineralogy, and competing ions and compounds (PO₄³⁻ and Si(OH)₄) are all known to influence As(V) and As(III) mobility in paddy soil-water environments. Microorganisms play a key role in As transformation through oxidation/reduction and methylation/volatilization reactions, but the transformation kinetics are poorly understood. Science-based optimization of all biogeochemical parameters may help to significantly reduce the bioavailability of inorganic As. Copyright © 2018 Elsevier Ltd. All rights reserved.
Source encoding in multi-parameter full waveform inversion
NASA Astrophysics Data System (ADS)
Matharu, Gian; Sacchi, Mauricio D.
2018-04-01
Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case, with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging, as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions is conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
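The core mechanism of source encoding is compact enough to sketch. The function below is a conceptual outline with placeholder solver callbacks, not any particular FWI package's API: it forms one randomly encoded supershot and the corresponding encoded residual. Applying the same random codes to the observed data relies on the fixed-spread acquisition assumption noted at the end of the abstract:

```python
import numpy as np

def encoded_gradient(model, sources, d_obs, simulate, adjoint, rng):
    """One stochastic gradient of a source-encoded FWI objective.

    `simulate(model, src)` and `adjoint(model, src, residual)` are
    assumed solver callbacks (placeholders, not a real package API).
    Random +/-1 polarities make the cross-terms between different
    sources vanish in expectation, so one "supershot" simulation
    replaces one simulation per source.
    """
    codes = rng.choice([-1.0, 1.0], size=len(sources))
    supershot = sum(c * s for c, s in zip(codes, sources))
    d_enc = sum(c * d for c, d in zip(codes, d_obs))  # fixed-spread assumption
    residual = simulate(model, supershot) - d_enc
    return adjoint(model, supershot, residual)
```

Redrawing the codes at every iteration averages out the cross-talk noise over the course of the inversion, which is why the expected source-encoded Hessian converges to the conventional one.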
Study of RCR Catalogue Radio Source Integral Spectra
NASA Astrophysics Data System (ADS)
Zhelenkova, O. P.; Majorova, E. K.
2018-04-01
We present the characteristics of the sources found on the averaged scans of the "Cold" experiment surveys of 1980-1999 in the right-ascension interval 2h < RA < 7h, thereby completing the refinement of the parameters of the RC catalog (RATAN Cold) sources for this interval. To date, the RCR catalog (RATAN Cold Refined) covers the right-ascension interval 2h < RA < 17h and includes 830 sources. Spectra are built for them with the use of new data in the range of 70-230 MHz. The dependence between the spectral indices α_0.5 and α_3.94 and the integral flux density at the frequencies of 74 and 150 MHz and 1.4, 3.94, and 4.85 GHz is discussed. We found that at 150 MHz the spectral index α_0.5 of most sources steepens with increasing flux density. In general, the sources with flat spectra are weaker in terms of flux density than the sources with steep spectra, a difference that is especially pronounced at 150 MHz. We believe that this is due to the brightness of their extended components, which can be determined by the type of accretion and the neighborhood of the source.
Searches for millisecond pulsations in low-mass X-ray binaries
NASA Technical Reports Server (NTRS)
Wood, K. S.; Hertz, P.; Norris, J. P.; Vaughan, B. A.; Michelson, P. F.; Mitsuda, K.; Lewin, W. H. G.; Van Paradijs, J.; Penninx, W.; Van Der Klis, M.
1991-01-01
High-sensitivity search techniques for millisecond periods are presented and applied to data from the Japanese satellite Ginga and from HEAO 1. The search is optimized for pulsed signals whose period, drift rate, and amplitude conform with what is expected for low-mass X-ray binary (LMXB) sources. Consideration is given to how the current understanding of LMXBs guides the search strategy and sets these parameter limits. An optimized one-parameter coherence recovery technique (CRT) developed for recovery of phase coherence is presented. This technique provides a large increase in sensitivity over the method of incoherent summation of Fourier power spectra. The range of spin periods expected from LMXB phenomenology is discussed, the necessary constraints on the application of CRT are described in terms of integration time and orbital parameters, and the residual power unrecovered by the quadratic approximation for realistic cases is estimated.
Experimental Identification and Characterization of Multirotor UAV Propulsion
NASA Astrophysics Data System (ADS)
Kotarski, Denis; Krznar, Matija; Piljek, Petar; Simunic, Nikola
2017-07-01
In this paper, an experimental procedure for the identification and characterization of multirotor Unmanned Aerial Vehicle (UAV) propulsion is presented. The propulsion configuration needs to be defined precisely in order to achieve the required flight performance. Based on an accurate dynamic model and empirical measurements of the physical parameters of multirotor propulsion, it is possible to design diverse configurations with different characteristics for various purposes. As a case study, we investigated design considerations for a micro indoor multirotor which is suitable for control algorithm implementation in a structured environment. It consists of an open source autopilot, sensors for indoor flight, off-the-shelf propulsion components, and a frame. A series of experiments was conducted to show the process of parameter identification and the procedure for analysis and propulsion characterization. Additionally, we explore battery performance in terms of mass and specific energy. Experimental results show the identified and estimated propulsion parameters, through which blade element theory is verified.
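As a toy version of the identification step, static thrust-stand data can be fitted to the quadratic speed laws predicted by blade element/momentum theory. The numbers below are made up for illustration:

```python
import numpy as np

# Static thrust-stand identification (a sketch with hypothetical data):
# blade element/momentum theory predicts T = k_T * omega^2 and
# Q = k_Q * omega^2, so both coefficients follow from a linear
# least-squares fit against the squared rotational speed.
omega  = np.array([300., 400., 500., 600., 700.])        # rad/s
thrust = np.array([1.1, 1.95, 3.1, 4.4, 6.0])            # N
torque = np.array([0.020, 0.036, 0.055, 0.080, 0.108])   # N*m

w2 = (omega ** 2).reshape(-1, 1)
k_T, *_ = np.linalg.lstsq(w2, thrust, rcond=None)
k_Q, *_ = np.linalg.lstsq(w2, torque, rcond=None)
print(f"k_T = {k_T[0]:.3e} N s^2, k_Q = {k_Q[0]:.3e} N m s^2")
```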
Ancient Glass: A Literature Search and its Role in Waste Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strachan, Denis M.; Pierce, Eric M.
2010-07-01
When developing a performance assessment (PA) model for the long-term disposal of immobilized low-activity waste (ILAW) glass, it is desirable to determine the durability of glass forms over very long periods of time. However, testing is limited to short time spans, so experiments are performed under conditions that accelerate the key geochemical processes that control weathering. Verification that the models currently being used can reliably calculate the long-term behavior of ILAW glass is a key component of the overall PA strategy. Therefore, Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to evaluate alternative strategies that can be used for PA source term model validation. One viable alternative strategy is the use of independent experimental data from archaeological studies of ancient or natural glass contained in the literature. These results represent a potential independent experiment dating back to approximately 3600 years ago, or 1600 before the current era (bce), in the case of ancient glass, and 10⁶ years or older in the case of natural glass. The results of this literature review suggest that additional experimental data may be needed before the results from archaeological studies can be used as a tool for model validation of glass weathering and, more specifically, disposal facility performance. This is largely because none of the existing data sets contains all of the information required to conduct PA source term calculations. For example, in many cases the sediments surrounding the glass were not collected and analyzed; without them, comparing computer simulations of concentration flux against observations is not possible. This type of information is important for understanding the element release profile from the glass to the surrounding environment and provides a metric that can be used to calibrate source term models. Although useful, the available literature sources do not contain the information needed to simulate the long-term performance of nuclear waste glasses in near-surface or deep geologic repositories. The information that will be required includes (1) experimental measurements to quantify the model parameters, (2) detailed analyses of altered glass samples, and (3) detailed analyses of the sediment surrounding the ancient glass samples.
NASA Astrophysics Data System (ADS)
Petit, J.-E.; Favez, O.; Sciare, J.; Crenn, V.; Sarda-Estève, R.; Bonnaire, N.; Močnik, G.; Dupont, J.-C.; Haeffelin, M.; Leoz-Garziandia, E.
2015-03-01
Aerosol mass spectrometer (AMS) measurements have been successfully used towards a better understanding of non-refractory submicron (PM1) aerosol chemical properties based on short-term campaigns. The recently developed Aerosol Chemical Speciation Monitor (ACSM) has been designed to deliver quite similar artifact-free chemical information, but at low cost and with robust monitoring over long-term periods. When deployed in parallel with real-time black carbon (BC) measurements, the combined data set allows for a quasi-comprehensive description of the whole PM1 fraction in near real time. Here we present two-year-long ACSM and BC data sets, acquired between mid-2011 and mid-2013 at the French atmospheric SIRTA supersite, which is representative of background PM levels of the region of Paris. This large data set shows intense and time-limited (a few hours) pollution events observed during wintertime in the region of Paris, pointing to local carbonaceous emissions (mainly combustion sources). A non-parametric wind regression analysis was performed on this two-year data set for the major PM1 constituents (organic matter, nitrate, sulfate and source-apportioned BC) and ammonia in order to better refine their geographical origins and assess local/regional/advected contributions, information that is mandatory for efficient mitigation strategies. While ammonium sulfate typically shows a clear advected pattern, ammonium nitrate partially displays a similar feature but, less expectedly, also exhibits a significant contribution of regional and local emissions. The contribution of regional background organic aerosols (OA) is significant in spring and summer, while a more pronounced local origin is evidenced during wintertime, a pattern also observed for BC originating from domestic wood burning. Using time-resolved ACSM and BC information, seasonally differentiated weekly diurnal profiles of these constituents were investigated and helped to identify the main parameters controlling their temporal variations (sources, meteorological parameters). Finally, all the major pollution episodes observed over the region of Paris between 2011 and 2013 were carefully investigated and classified in terms of chemical composition and the BC-to-sulfate ratio, used here as a proxy of the local/regional/advected contribution of PM. In conclusion, these first two-year quality-controlled ACSM measurements clearly demonstrate the instrument's great potential to monitor aerosol sources and their geographical origin on a long-term basis and to provide strategic information in near real time during pollution episodes. They also support the capacity of the ACSM to be proposed as a robust and credible alternative to filter-based sampling techniques for long-term monitoring strategies.
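The non-parametric wind regression used here is easy to prototype. Below is a minimal sketch that kernel-averages a pollutant concentration on a wind-direction/speed grid, wrapping the direction difference at 360°; the data are hypothetical and the kernel widths are illustrative, not the study's configuration:

```python
import numpy as np

def npw_regression(wd_obs, ws_obs, conc, wd_grid, ws_grid,
                   sigma_wd=10.0, sigma_ws=0.5):
    """Non-parametric wind regression (a minimal sketch).

    Kernel-weighted mean concentration on a wind-direction/speed grid;
    the angular kernel wraps around 360 degrees.
    """
    out = np.full((len(ws_grid), len(wd_grid)), np.nan)
    for i, ws0 in enumerate(ws_grid):
        for j, wd0 in enumerate(wd_grid):
            dwd = (wd_obs - wd0 + 180.0) % 360.0 - 180.0  # wrapped difference
            w = (np.exp(-0.5 * (dwd / sigma_wd) ** 2)
                 * np.exp(-0.5 * ((ws_obs - ws0) / sigma_ws) ** 2))
            if w.sum() > 0:
                out[i, j] = np.sum(w * conc) / w.sum()
    return out

# Usage with hypothetical hourly data (a nitrate-like source near 60 deg):
rng = np.random.default_rng(4)
wd, ws = rng.uniform(0, 360, 2000), rng.gamma(2.0, 1.5, 2000)
no3 = 2 + 3 * np.exp(-0.5 * (((wd - 60 + 180) % 360 - 180) / 25) ** 2) * ws
grid = npw_regression(wd, ws, no3, np.arange(0, 360, 10), np.arange(0, 8, 0.5))
```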
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
The research presents an image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the greatest impact in the biophotonics area are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show improved diagnostic results after applying the proposed filter; moreover, the filter does not reduce diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
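A generic homomorphic-style low-frequency correction along these lines (not necessarily the authors' exact filter) can be sketched as follows; the smoothing width plays the role of the empirically tuned filter parameter mentioned in the abstract:

```python
import numpy as np
from scipy import ndimage

def remove_illumination(img, sigma=50.0, eps=1e-6):
    """Suppress low-frequency illumination in a skin-image channel.

    A heavily smoothed copy estimates the slowly varying illumination
    field; dividing by it keeps the high-frequency diagnostic texture.
    `sigma` (in pixels) is an assumed value that must be tuned
    empirically, as the authors note for their own parameters.
    """
    img = img.astype(np.float64)
    illumination = ndimage.gaussian_filter(img, sigma) + eps
    corrected = img / illumination
    # Rescale to the original intensity range for display.
    corrected -= corrected.min()
    corrected *= img.max() / max(corrected.max(), eps)
    return corrected
```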
Magnetic monopole in noncommutative space-time and Wu-Yang singularity-free gauge transformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laangvik, Miklos; Salminen, Tapio; Tureanu, Anca
2011-04-15
We investigate the validity of the Dirac quantization condition for magnetic monopoles in noncommutative space-time. We use an approach which is based on an extension of the method introduced by Wu and Yang. To study the effects of noncommutativity of space-time, we consider the gauge transformations of U_⋆(1) gauge fields and use the corresponding deformed Maxwell's equations. Using a perturbation expansion in the noncommutativity parameter θ, we show that the Dirac quantization condition remains unmodified up to first order in the expansion parameter. The result is obtained for a class of noncommutative source terms, which reduce to the Dirac delta function in the commutative limit.
Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-01-01
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906
Asymptotic expansions of the kernel functions for line formation with continuous absorption
NASA Technical Reports Server (NTRS)
Hummer, D. G.
1991-01-01
Asymptotic expressions are obtained for the kernel functions M₂(τ, a, β) and K₂(τ, a, β) appearing in the theory of line formation with complete redistribution over a Voigt profile with damping parameter a, in the presence of a source of continuous opacity parameterized by β. For a > 0, each coefficient in the asymptotic series is expressed as the product of analytic functions of a and β. For Doppler broadening, only the leading term can be evaluated analytically.
On butterfly effect in higher derivative gravities
NASA Astrophysics Data System (ADS)
Alishahiha, Mohsen; Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid
2016-11-01
We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to the higher-order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of the operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we find two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.
Coherent attacking continuous-variable quantum key distribution with entanglement in the middle
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying
2018-06-01
We suggest an approach to coherent attacks on continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on both links of the quantum system, enabling the eavesdropper to steal more information from the proposed scheme by using the entanglement correlation. Numerical simulation results show the performance of the attacked CVQKD system in terms of the derived secret key rate, with the controllable parameters maximizing the stolen information.
The role of unsteady buoyancy flux on transient eruption plume velocity structure and evolution
NASA Astrophysics Data System (ADS)
Chojnicki, K. N.; Clarke, A. B.; Phillips, J. C.
2010-12-01
Volcanic vent exit velocities, eruption column velocity profiles, and atmospheric entrainment are important parameters that control the evolution of explosive volcanic eruption plumes. New data sets tracking short-term variability in such parameters are becoming more abundant in volcanology and are being used to indirectly estimate eruption source conditions such as vent flux, material properties of the plume, and source mechanisms. However, inadequate theory describing the relationships between time-varying source fluxes and the evolution of unsteady turbulent flows such as eruption plumes limits the interpretation potential of these data sets. In particular, the relative roles of gas-thrust and buoyancy in volcanic explosions are known to generate distinct differences in ascent dynamics. Here we investigate the role of initial buoyancy in unsteady, short-duration eruption dynamics through scaled laboratory experiments and provide an empirical description of the relationship between unsteady source flux and plume evolution. The experiments involved source fluids of various densities (960-1000 kg/m³) injected, with a range of initial momentum and buoyancy, into a tank of fresh water through a range of vent diameters (3-15 mm). A scaling analysis was used to determine the fundamental parameters governing the evolution of the laboratory plumes as a function of unsteady source conditions. The resulting model can be applied to predict flow front propagation speeds and the maximum flow height and width of transient volcanic eruption plumes, which cannot be adequately described by existing steady approximations. In addition, the model describes the relative roles of momentum (gas-thrust) and buoyancy in plume motion, suspected to be a key parameter in quantitatively defining explosive eruption style. The velocity structure of the resulting flows was measured using the Particle Image Velocimetry (PIV) technique, in which velocity vector fields were generated from displacements in time-resolved video images of particles in the flow interior. Cross-sectional profiles of vertical velocity and entrainment of ambient fluid were characterized using the resulting velocity vector maps. These data elucidate the relationship between flow front velocity and internal velocity structure, which may improve interpretations of field measurements of volcanic explosions. The velocity maps also demonstrate the role of buoyancy in enhancing ambient entrainment and converting vertical velocity to horizontal velocity, which may explain why buoyancy at the vent leads to faster deceleration of the flow.
Dosimetric characterization of two radium sources for retrospective dosimetry studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candela-Juan, C., E-mail: ccanjuan@gmail.com; Karlsson, M.; Lundell, M.
2015-05-15
Purpose: During the first part of the 20th century, ²²⁶Ra was the most used radionuclide for brachytherapy. Retrospective accurate dosimetry, coupled with patient follow-up, is important for advancing knowledge on long-term radiation effects. The purpose of this work was to dosimetrically characterize two ²²⁶Ra sources, commonly used in Sweden during the first half of the 20th century, for retrospective dose-effect studies. Methods: An 8 mg ²²⁶Ra tube and a 10 mg ²²⁶Ra needle, used at Radiumhemmet (Karolinska University Hospital, Stockholm, Sweden) from 1925 to the 1960s, were modeled in two independent Monte Carlo (MC) radiation transport codes: GEANT4 and MCNP5. Absorbed dose and collision kerma around the two sources were obtained, from which the TG-43 parameters were derived for the secular equilibrium state. Furthermore, results from this dosimetric formalism were compared with results from a MC simulation of a superficial mould constituted by five needles inside a glass casing, placed over a water phantom, to mimic a typical clinical setup. Calculated absorbed doses using the TG-43 formalism were also compared with previously reported measurements and calculations based on the Sievert integral. Finally, the dose rate at large distances from a ²²⁶Ra point-like source placed in the center of a 1 m radius water sphere was calculated with GEANT4. Results: TG-43 parameters [including g_L(r), F(r, θ), Λ, and s_K] have been uploaded in spreadsheets as additional material, and the fitting parameters of a mathematical curve that provides the dose rate between 10 and 60 cm from the source have been provided. Results from the TG-43 formalism are consistent within the treatment volume with those of a MC simulation of a typical clinical scenario. Comparisons with reported measurements made with thermoluminescent dosimeters show differences up to 13% along the transverse axis of the radium needle. It has been estimated that the uncertainty associated with the absorbed dose within the treatment volume is 10%-15%, whereas the uncertainty of the absorbed dose to distant organs is roughly 20%-25%. Conclusions: The results provided here facilitate retrospective dosimetry studies of ²²⁶Ra using modern treatment planning systems, which may be used to improve knowledge on long-term radiation effects. It is surely important for epidemiologic studies to be aware of the estimated uncertainty provided here before extracting conclusions.
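For readers reusing the tabulated parameters, it may help to recall the standard AAPM TG-43 dose-rate equation into which g_L(r), F(r, θ), Λ, and s_K enter, written here in its generic 2D line-source form (this is the general formalism, not an equation specific to this paper):

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad
G_L(r,\theta) = \frac{\beta}{L\, r \sin\theta},
```

with reference point r₀ = 1 cm, θ₀ = 90°, S_K the air-kerma strength, Λ the dose-rate constant, g_L the radial dose function, F the 2D anisotropy function, and β the angle subtended by the active source of length L at the point (r, θ).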
Oviedo-Ocaña, E R; Torres-Lozada, P; Marmolejo-Rebellon, L F; Torres-López, W A; Dominguez, I; Komilis, D; Sánchez, A
2017-04-01
Biowaste is commonly the largest fraction of municipal solid waste (MSW) in developing countries. Although composting is an effective method to treat source-separated biowaste (SSB), there are certain limitations in terms of operation, partly due to insufficient control of the variability of SSB quality, which affects process kinetics and product quality. This study assesses the variability of the SSB physicochemical quality in a composting facility located in a small town of Colombia, in which SSB collection was performed twice a week. Likewise, the influence of the SSB physicochemical variability on the variability of compost parameters was assessed. Parametric and non-parametric tests (i.e., Student's t-test and the Mann-Whitney test) showed no significant differences in the quality parameters of SSB among collection days, and therefore it was unnecessary to establish specific operation and maintenance regulations for each collection day. Significant variability was found in eight of the twelve quality parameters analyzed in the inlet stream, with corresponding coefficients of variation (CV) higher than 23%. The CVs for the eight parameters analyzed in the final compost (i.e., pH, moisture, total organic carbon, total nitrogen, C/N ratio, total phosphorus, total potassium, and ash) ranged from 9.6% to 49.4%, with significant variations in five of those parameters (CV > 20%). These results indicate that variability in the inlet stream can affect the variability of the end product. They suggest the need to consider the variability of the inlet stream in the operation of composting facilities to achieve a compost of consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
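The statistical comparison described in the abstract can be reproduced with standard tools. The sketch below (hypothetical moisture data, in %) picks Student's t-test or the Mann-Whitney test depending on a normality check and reports the coefficients of variation:

```python
import numpy as np
from scipy import stats

def compare_collection_days(day1, day2, alpha=0.05):
    """Compare one SSB quality parameter between two collection days.

    Uses Student's t-test when both samples look normal (Shapiro-Wilk),
    otherwise the Mann-Whitney test, mirroring the parametric /
    non-parametric choice described in the abstract.
    """
    normal = (stats.shapiro(day1).pvalue > alpha and
              stats.shapiro(day2).pvalue > alpha)
    if normal:
        test, res = "t-test", stats.ttest_ind(day1, day2)
    else:
        test, res = "Mann-Whitney", stats.mannwhitneyu(day1, day2)
    cv = lambda x: 100 * np.std(x, ddof=1) / np.mean(x)  # coefficient of variation
    return test, res.pvalue, cv(day1), cv(day2)

day1 = np.array([71.2, 69.8, 73.5, 70.1, 72.4, 68.9])
day2 = np.array([70.5, 72.1, 69.3, 71.8, 70.9, 73.0])
print(compare_collection_days(day1, day2))
```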
New VLBI2010 scheduling strategies and implications on the terrestrial reference frames
NASA Astrophysics Data System (ADS)
Sun, Jing; Böhm, Johannes; Nilsson, Tobias; Krásná, Hana; Böhm, Sigrid; Schuh, Harald
2014-05-01
In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.
The impacts of non-renewable and renewable energy on CO2 emissions in Turkey.
Bulut, Umit
2017-06-01
As a result of great increases in CO₂ emissions in the last few decades, many papers have examined the relationship between renewable energy and CO₂ emissions in the energy economics literature, because, as a clean energy source, renewable energy can reduce CO₂ emissions and solve environmental problems stemming from increases in CO₂ emissions. A review of these papers shows that they employ fixed-parameter estimation methods, ignoring the time-varying effects of non-renewable and renewable energy consumption/production on greenhouse gas emissions. In order to fill this gap in the literature, this paper examines the effects of non-renewable and renewable energy on CO₂ emissions in Turkey over the period 1970-2013 by employing fixed-parameter and time-varying-parameter estimation methods. The estimations reveal that CO₂ emissions are positively related to both non-renewable and renewable energy in Turkey. Since policy makers expect renewable energy to decrease CO₂ emissions, this paper argues that renewable energy has not been able to satisfy these expectations, even though fewer CO₂ emissions arise from electricity produced from renewable sources. In conclusion, the paper argues that policy makers should implement long-term energy policies in Turkey.
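A common way to obtain the time-varying coefficients that the paper contrasts with fixed-parameter methods is a random-walk state-space regression estimated by a Kalman filter. The sketch below is a generic implementation of that idea, not the author's specific estimator; the noise variances are illustrative:

```python
import numpy as np

def tvp_regression(y, X, q=1e-4, r=1.0):
    """Time-varying parameter regression via a random-walk Kalman filter.

    State: coefficients beta_t, evolving as beta_t = beta_{t-1} + w_t.
    Observation: y_t = X_t @ beta_t + v_t. A fixed-parameter OLS fit
    is the limiting case q -> 0.
    """
    n, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 1e2            # diffuse prior on the coefficients
    Q, betas = np.eye(k) * q, np.zeros((n, k))
    for t in range(n):
        P = P + Q                             # predict step
        x = X[t]
        S = x @ P @ x + r                     # innovation variance
        K = P @ x / S                         # Kalman gain
        beta = beta + K * (y[t] - x @ beta)   # update step
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas

# e.g. columns of X: [1, non-renewable energy use, renewable energy use],
# y: CO2 emissions; betas then traces each effect through time.
```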
Efficient Moment-Based Inference of Admixture Parameters and Sources of Gene Flow
Levin, Alex; Reich, David; Patterson, Nick; Berger, Bonnie
2013-01-01
The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here, we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for Human Genome Diversity Cell Line Panel individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations—including previously undetected admixture in Sardinians and Basques—involving a proportion of 20–40% ancient northern Eurasian ancestry. PMID:23709261
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
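The minimum weighted-norm solution mentioned above has a closed form that is easy to implement once the source-receptor matrix is available. The sketch below (toy dimensions, diagonal weight matrix) computes s = W^-1 A^T (A W^-1 A^T)^-1 mu; choosing weights that satisfy the renormalization condition would make this the renormalized estimate discussed in the paper:

```python
import numpy as np

def renormalized_estimate(A, mu, w):
    """Minimum weighted-norm source estimate (sketch of the discrete form).

    A  : (m, n) source-receptor matrix (adjoint-model sensitivities)
    mu : (m,)  measured concentrations
    w  : (n,)  positive weights (diagonal of W)
    Returns s minimizing s^T W s subject to A s = mu:
        s = W^-1 A^T (A W^-1 A^T)^-1 mu
    """
    Winv_At = A.T / w[:, None]          # W^-1 A^T  (W diagonal)
    gram = A @ Winv_At                  # A W^-1 A^T, shape (m, m)
    return Winv_At @ np.linalg.solve(gram, mu)

# Toy usage: 3 detectors, 50 candidate grid cells, uniform weights.
rng = np.random.default_rng(5)
A = np.abs(rng.normal(size=(3, 50)))
mu = np.array([1.0, 0.4, 0.2])
s = renormalized_estimate(A, mu, w=np.ones(50))
```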
Optimization of a mirror-based neutron source using differential evolution algorithm
NASA Astrophysics Data System (ADS)
Yurov, D. V.; Prikhodko, V. V.
2016-12-01
This study is dedicated to the assessment of the capabilities of the gas-dynamic trap (GDT) and the gas-dynamic multiple-mirror trap (GDMT) as potential neutron sources for subcritical hybrids. In mathematical terms, the problem has been formulated as determining the global maximum of the fusion gain (Q_pl), the latter represented as a function of trap parameters. A differential evolution method has been applied to perform the search. All calculations considered a configuration of the neutron source with a 20 m distance between the mirrors and 100 MW heating power. It is important to mention that the numerical study has also taken into account a number of constraints on plasma characteristics so as to ensure the physical credibility of the searched-for trap configurations. According to the results obtained, the traps considered demonstrate a fusion gain of up to 0.2, depending on the constraints applied. This enables them to be used either as neutron sources within subcritical reactors for minor actinide incineration or as material-testing facilities.
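The optimization wiring itself is standard. The sketch below runs SciPy's differential evolution on a stand-in objective; a real Q_pl evaluation would call the plasma model, and the parameter names, bounds, and penalty constraint here are illustrative assumptions only:

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_fusion_gain(x):
    """Placeholder objective: -Q_pl as a function of trap parameters.

    x = (mirror ratio, plasma radius [m], fuel mix fraction); the smooth
    mock response and the penalty limit below are invented for the demo.
    """
    mirror_ratio, radius, mix = x
    q = (0.2 * np.exp(-((mirror_ratio - 30) / 20) ** 2)
             * np.exp(-((radius - 0.15) / 0.1) ** 2)
             * 4 * mix * (1 - mix))
    limit_violated = radius * mirror_ratio > 12.0   # mock physics constraint
    return -q + (10.0 if limit_violated else 0.0)   # penalty keeps it feasible

result = differential_evolution(
    neg_fusion_gain,
    bounds=[(5, 60), (0.05, 0.4), (0.1, 0.9)],
    maxiter=200, tol=1e-8, seed=6, polish=True)
print("best Q_pl ~", -result.fun, "at", result.x)
```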
A technique to control cross-field diffusion of plasma across a transverse magnetic field
NASA Astrophysics Data System (ADS)
Hazarika, P.; Chakraborty, M.; Das, B. K.; Bandyopadhyay, M.
2016-12-01
A study to control charged-particle transport across a transverse magnetic field (TMF), popularly known as a magnetic filter in negative ion sources, has been carried out in a double plasma device. In the experimental setup, the TMF placed between the two magnetic cages divides the whole plasma chamber into two distinct regions, viz., the source and the target, on the basis of plasma production and the corresponding electron temperature. The plasma produced in the source region by the filament discharge method diffuses into the target region through the TMF. Data are acquired with a Langmuir probe and are compared for different source configurations, in terms of the external bias applied to metallic plates inserted in the TMF plane but in the orthogonal direction. The effect of the direction of current between the two plates, in either polarity of bias in the presence of the TMF, on the plasma parameters and the cross-field transport of charged particles is discussed.
The attenuation of Fourier amplitudes for rock sites in eastern North America
Atkinson, Gail M.; Boore, David M.
2014-01-01
We develop an empirical model of the decay of Fourier amplitudes for earthquakes of M 3–6 recorded on rock sites in eastern North America and discuss its implications for source parameters. Attenuation at distances from 10 to 500 km may be adequately described using a bilinear model with a geometric spreading of 1/R^1.3 to a transition distance of 50 km, and a geometric spreading of 1/R^0.5 at greater distances. For low frequencies and distances less than 50 km, the effective geometric spreading given by the model is perturbed using a frequency- and hypocentral depth-dependent factor defined in such a way as to increase amplitudes at lower frequencies near the epicenter but leave the 1 km source amplitudes unchanged. The associated anelastic attenuation is determined for each event, with an average value being given by a regional quality factor of Q = 525f^0.45. This model provides a match, on average, between the known seismic moment of events and the inferred low-frequency spectral amplitudes at R = 1 km (obtained by correcting for the attenuation model). The inferred Brune stress parameters from the high-frequency source terms are about 600 bars (60 MPa), on average, for events of M > 4.5.
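The stated attenuation model can be written as a small function. This is a sketch only: it assumes a shear velocity of 3.7 km/s and omits the near-source frequency- and depth-dependent perturbation described above.

```python
import numpy as np

def fourier_amplitude_decay(f, R, beta_km_s=3.7):
    """Bilinear attenuation model sketch (after the abstract above).

    Geometric spreading 1/R^1.3 out to 50 km and 1/R^0.5 beyond,
    times the anelastic term exp(-pi f R / (Q(f) beta)) with
    Q(f) = 525 f^0.45. beta (shear velocity) is an assumed value.
    """
    R = np.asarray(R, dtype=float)
    g = np.where(R <= 50.0, R**-1.3, 50.0**-1.3 * (R / 50.0)**-0.5)
    Q = 525.0 * f**0.45
    return g * np.exp(-np.pi * f * R / (Q * beta_km_s))

print(fourier_amplitude_decay(5.0, [10.0, 50.0, 200.0]))
```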
Powder Bed Layer Characteristics: The Overseen First-Order Process Input
NASA Astrophysics Data System (ADS)
Mindt, H. W.; Megahed, M.; Lavery, N. P.; Holmes, M. A.; Brown, S. G. R.
2016-08-01
Powder Bed Additive Manufacturing offers unique advantages in terms of manufacturing cost, lot size, and product complexity compared to traditional processes such as casting, where a minimum lot size is mandatory to achieve economic competitiveness. Many studies—both experimental and numerical—are dedicated to the analysis of how process parameters such as heat source power, scan speed, and scan strategy affect the final material properties. Apart from the general urge to increase the build rate using thicker powder layers, the coating process and how the powder is distributed on the processing table has received very little attention to date. This paper focuses on the first step of every powder bed build process: Coating the process table. A numerical study is performed to investigate how powder is transferred from the source to the processing table. A solid coating blade is modeled to spread commercial Ti-6Al-4V powder. The resulting powder layer is analyzed statistically to determine the packing density and its variation across the processing table. The results are compared with literature reports using the so-called "rain" models. A parameter study is performed to identify the influence of process table displacement and wiper velocity on the powder distribution. The achieved packing density and how that affects subsequent heat source interaction with the powder bed is also investigated numerically.
Source mechanisms of volcanic tsunamis.
Paris, Raphaël
2015-10-28
Volcanic tsunamis are generated by a variety of mechanisms, including volcano-tectonic earthquakes, slope instabilities, pyroclastic flows, underwater explosions, shock waves and caldera collapse. In this review, we focus on the lessons that can be learnt from past events and address the influence of parameters such as volume flux of mass flows, explosion energy or duration of caldera collapse on tsunami generation. The diversity of waves in terms of amplitude, period, form, dispersion, etc. poses difficulties for integration and harmonization of sources to be used for numerical models and probabilistic tsunami hazard maps. In many cases, monitoring and warning of volcanic tsunamis remain challenging (further technical and scientific developments being necessary) and must be coupled with policies of population preparedness. © 2015 The Author(s).
Thermal maturity of type II kerogen from the New Albany Shale assessed by 13C CP/MAS NMR
Werner-Zwanziger, U.; Lis, G.; Mastalerz, Maria; Schimmelmann, A.
2005-01-01
Thermal maturity of oil and gas source rocks is typically quantified in terms of vitrinite reflectance, which is based on optical properties of terrestrial woody remains. This study evaluates 13C CP/MAS NMR parameters in kerogen (i.e., the insoluble fraction of organic matter in sediments and sedimentary rocks) as proxies for thermal maturity in marine-derived source rocks where terrestrially derived vitrinite is often absent or sparse. In a suite of samples from the New Albany Shale (Middle Devonian to Early Mississippian, Illinois Basin), the abundance of aromatic carbon in kerogen determined by 13C CP/MAS NMR correlates linearly with vitrinite reflectance. © 2004 Elsevier Inc. All rights reserved.
Acoustic device and method for measuring gas densities
NASA Technical Reports Server (NTRS)
Shakkottai, Parthasarathy (Inventor); Kwack, Eug Y. (Inventor); Back, Lloyd (Inventor)
1992-01-01
Density measurements can be made in a gas contained in a flow-through enclosure by measuring the sound pressure level at a receiver or microphone located near a dipole sound source which is driven at constant velocity amplitude at low frequencies. Analytical results, which are provided in terms of geometrical parameters, wave numbers, and sound source type for systems of this invention, agree well with published data. The relatively simple designs feature a transmitter transducer at the closed end of a small tube and a receiver transducer on the circumference of the small tube located a small distance away from the transmitter. The transmitter should be a dipole operated at low frequency, with the kL value preferably less than about 0.3.
NASA Astrophysics Data System (ADS)
Ogiso, M.
2017-12-01
Heterogeneous attenuation structure is important not only for understanding earth structure and seismotectonics but also for ground motion prediction. Attenuation of ground motion in the high-frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects its duration. Hence, estimating both attenuation parameters will help refine ground motion prediction. In this study, we estimate both parameters in southwestern Japan in a tomographic manner. We conduct envelope fitting of the seismic coda, since the coda is sensitive to both intrinsic attenuation and the scattering coefficient. Recently, Takeuchi (2016) calculated differential envelopes for the case where these parameters fluctuate; we adopted his equations to compute the partial derivatives with respect to these parameters, since they do not require assuming a homogeneous velocity structure. The matrix for the inversion of structural parameters would be too large to solve in a straightforward manner, so we adopted the ART-type Bayesian reconstruction method (Hirahara, 1998) to project the envelope differences onto the structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with a 0.4 degree interval in the horizontal direction and 20 km in the depth direction. The reconstructed structures reproduced the assumed pattern well in the shallower part but not in the deeper part; since the inversion kernel has large sensitivity around sources and stations, resolution in the deeper part would be limited by the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we have to correct for the source and site amplification terms. We consider these issues in estimating the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.
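For illustration, a generic ART (Kaczmarz-type) row-action update captures the iterative projection idea; the Bayesian, prior-weighted variant of Hirahara (1998) and the envelope sensitivity kernels are not reproduced here, and all arrays are toy data.

```python
import numpy as np

def art(A, d, n_sweeps=200, relax=0.5):
    """Generic ART/Kaczmarz sketch: project data residuals onto the
    model row by row. (The Bayesian, prior-weighted ART variant used
    in the study above is not reproduced here.)"""
    m = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0.0:
                m += relax * (d[i] - A[i] @ m) / row_norm2[i] * A[i]
    return m

rng = np.random.default_rng(2)
A = rng.random((40, 100))        # toy sensitivity kernels (rows = data)
m_true = rng.random(100)
m_est = art(A, A @ m_true)
print(np.linalg.norm(A @ m_est - A @ m_true))   # small data residual
```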
NASA Astrophysics Data System (ADS)
Kooyman, Timothée; Buiron, Laurent; Rimpault, Gérald
2017-09-01
Heterogeneous loading of minor actinides in radial blankets is a potential solution for implementing minor actinide transmutation in fast reactors. However, to compensate for the lower flux level experienced by the blankets, the fraction of minor actinides loaded in the blankets must be increased to maintain acceptable performance. This severely increases the decay heat and neutron source of the blanket assemblies, both before and after irradiation, by more than an order of magnitude in the case of the neutron source, for instance. We propose here an optimization methodology for blanket design with regard to various parameters, such as the local spectrum or the mass to be loaded, with the objective of minimizing the final neutron source of the spent assembly while maximizing the transmutation performance of the blankets. In a first stage, an analysis of the various contributors to the long- and short-term neutron and gamma sources is carried out; in a second stage, relevant estimators are designed for use in the effective optimization process, which is done in the last step. A comparison with core calculations is finally made for completeness and validation purposes. It is found that the use of a moderated spectrum in the blankets can be beneficial in terms of the final neutron and gamma sources, without impacting minor actinide transmutation performance, compared to the more energetic spectrum that could be achieved using metallic fuel, for instance. It is also confirmed that, if possible, the use of hydrides as moderating material in the blankets is a promising option to limit the total minor actinide inventory in the fuel cycle. If not, it appears that focus should be put on an increased residence time for the blankets rather than an increase in the acceptable neutron source for handling and reprocessing.
NASA Astrophysics Data System (ADS)
Blöcher, Johanna; Kuraz, Michal
2017-04-01
In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions, together with metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but it also allows for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in 1D and 2D in the open-source library DRUtES, written in Fortran 2003/2008. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that require neither gradient information nor a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes that accounts for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.
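A minimal global-best PSO sketch conveys the population-based search used for parameter identification; the adapted and bi-objective variants described above are not shown, and the toy objective stands in for the hydraulic model misfit.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO sketch. bounds: (low, high) per dimension."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy use: recover two hypothetical "soil hydraulic" parameters.
print(pso(lambda p: (p[0] - 0.3)**2 + (p[1] - 1.8)**2, [(0, 1), (0, 5)]))
```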
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model, which describes the hyperelastic behavior of brain tissue, are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least-squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty, via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
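For orientation, a one-term incompressible Ogden model under uniaxial stretch has nominal stress P = mu*(lam^(alpha-1) - lam^(-alpha/2-1)). The least-squares sketch below uses synthetic data with invented parameter values and omits the paper's Bayesian/HMC machinery and stability constraint.

```python
import numpy as np
from scipy.optimize import curve_fit

def ogden1_nominal_stress(stretch, mu, alpha):
    """One-term incompressible Ogden model, uniaxial nominal stress.
    Derived from W = (mu/alpha)(l1^a + l2^a + l3^a - 3) with
    l1 = lam, l2 = l3 = lam^-1/2 (standard textbook result)."""
    lam = np.asarray(stretch)
    return mu * (lam**(alpha - 1.0) - lam**(-alpha / 2.0 - 1.0))

# Synthetic "brain tissue" data (hypothetical values, kPa).
lam = np.linspace(0.9, 1.3, 15)
data = ogden1_nominal_stress(lam, mu=0.8, alpha=-4.7)
data += np.random.default_rng(3).normal(0.0, 0.005, lam.size)

(mu_fit, alpha_fit), cov = curve_fit(ogden1_nominal_stress, lam, data,
                                     p0=(1.0, -4.0))
print(mu_fit, alpha_fit, np.sqrt(np.diag(cov)))  # estimates + 1-sigma
```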
Carriers for the Tunable Release of Therapeutics: Etymological Classification and Examples
Uskoković, Vuk; Ghosh, Shreya
2016-01-01
Introduction Physiological processes at the molecular level take place at precise spatiotemporal scales, which vary from tissue to tissue and from one patient to another, implying the need for the carriers that enable tunable release of therapeutics. Areas Covered Classification of all drug release to intrinsic and extrinsic is proposed, followed by the etymological clarification of the term “tunable” and its distinction from the term “tailorable”. Tunability is defined as analogous to tuning a guitar string or a radio receiver to the right frequency using a single knob. It implies changing a structural parameter along a continuous quantitative scale and correlating it numerically with the release kinetics. Examples of tunable, tailorable and environmentally responsive carriers are given, along with the parameters used to achieve these levels of control. Expert Opinion Interdependence of multiple variables defining the carrier microstructure obstructs the attempts to elucidate parameters that allow for the independent tuning of release kinetics. Learning from the tunability of nanostructured materials and superstructured metamaterials can be a fruitful source of inspiration in the quest for the new generation of tunable release carriers. The greater intersection of traditional materials sciences and pharmacokinetic perspectives could foster the development of more sophisticated mechanisms for tunable release. PMID:27322661
Searching for continuous gravitational wave sources in binary systems
NASA Astrophysics Data System (ADS)
Dhurandhar, Sanjeev V.; Vecchio, Alberto
2001-06-01
We consider the problem of searching for continuous gravitational wave (cw) sources orbiting a companion object. This issue is of particular interest because the low-mass X-ray binaries (LMXBs), and among them Sco X-1, the brightest X-ray source in the sky, might be marginally detectable with ~2 y coherent observation time by the Earth-based laser interferometers expected to come on line by 2002, and clearly observable by the second generation of detectors. Moreover, several radio pulsars, which could be deemed cw sources, are found to orbit a companion star or planet, and the LIGO-VIRGO-GEO600 network plans to continuously monitor such systems. We estimate the computational costs for a search launched over the additional five parameters describing generic elliptical orbits (up to e<~0.8) using matched filtering techniques. These techniques provide the optimal signal-to-noise ratio and also a very clear and transparent theoretical framework. Since matched filtering will be implemented in the final and most computationally expensive stage of the hierarchical strategies, the theoretical framework provided here can be used to determine the computational costs. In order to disentangle the computational burden involved in the orbital motion of the cw source from the other source parameters (position in the sky and spin down) and reduce the complexity of the analysis, we assume that the source is monochromatic (there is no intrinsic change in its frequency) and that its location in the sky is exactly known. The orbital elements, on the other hand, are either assumed to be completely unknown or only partly known. We provide ready-to-use analytical expressions for the number of templates required to carry out the searches in the astrophysically relevant regions of the parameter space, and for how the computational cost scales with the ranges of the parameters. We also determine the critical accuracy to which a particular parameter must be known so that no search is needed for it; we provide rigorous statements, based on the geometrical formulation of data analysis, concerning the size of the parameter space such that a particular neutron star is a one-filter target. This result is formulated in a completely general form, independent of the particular kind of source, and can be applied to any class of signals whose waveform can be accurately predicted. We apply our theoretical analysis to Sco X-1 and the 44 neutron stars with binary companions listed in the most up-to-date version of the radio pulsar catalog. For up to ~3 h of coherent integration time, Sco X-1 will need at most a few templates; for 1 week of integration time the number of templates rapidly rises to ~5×10^6. This is due to the rather poor measurements available today of the projected semi-major axis and the orbital phase of the neutron star. If, however, the same search is to be carried out with only a few filters, then more refined measurements of the orbital parameters are called for: an improvement of about three orders of magnitude in the accuracy is required. Further, we show that the five NSs (radio pulsars) for which the upper limits on the signal strength are highest require no more than a few templates each and can be targeted very cheaply in terms of CPU time. Blind searches of the parameter space of orbital elements are, in general, completely unaffordable for present or near-future dedicated computational resources when the coherent integration time is of the order of the orbital period or longer.
For wide binary systems, when the observation covers only a fraction of one orbit, the computational burden reduces enormously, and becomes affordable for a significant region of the parameter space.
Automated system for generation of soil moisture products for agricultural drought assessment
NASA Astrophysics Data System (ADS)
Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are used globally for drought forecasting / early warning and for monitoring drought prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space-based data, ground data and collateral data at short intervals of time, where there may be limitations in terms of processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz., soil moisture. The soil water balance bucket model is commonly used to arrive at soil moisture products and is widely popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open-source libraries for the best possible automation, to fulfill the need for a standard procedure of preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for the generation of soil moisture products, allowing users to concentrate on further enhancements and the application of these parameters in related areas of research without re-discovering the established models. The emphasis of the architecture is mainly on available open-source libraries for GIS and raster IO operations for different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of user-free operation if required, with inbuilt chain processing for everyday generation of products at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters like rainfall and potential evapotranspiration (PET) from the respective servers. It can import file formats like .grd, .hdf, .img, generic binary, etc., perform geometric correction, and re-project the files to the native projection system. The software takes into account the weather, crop and soil parameters to run the designed soil water balance model, as sketched below. The software also has additional features like time compositing of outputs to generate weekly and fortnightly profiles for further analysis. A tool to generate "Area Favorable for Crop Sowing" using the daily soil moisture, with a highly customizable parameter interface, has also been provided. A whole-India analysis now takes a mere 20 seconds for the generation of soil moisture products, which would normally take one hour per day using commercial software.
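A minimal single-layer bucket water balance, of the kind such systems encode, might look as follows. This is a sketch with a daily time step and an assumed scaling of actual evapotranspiration by relative wetness; the capacities and coefficients are placeholders, not the operational model's.

```python
import numpy as np

def bucket_soil_moisture(rain, pet, s_max=150.0, s0=75.0):
    """Single-layer soil water balance bucket sketch (mm, daily step).

    Storage gains rainfall, loses PET scaled by relative wetness, and
    sheds any surplus above capacity as runoff/drainage. The crop and
    soil coefficients of an operational model are omitted here.
    """
    s, out = s0, []
    for p, e in zip(rain, pet):
        s += p
        s -= e * min(s / s_max, 1.0)   # actual evapotranspiration
        s = min(s, s_max)              # surplus lost to runoff/drainage
        out.append(s)
    return np.array(out)

print(bucket_soil_moisture([0.0, 20.0, 0.0, 5.0], [4.0, 4.0, 5.0, 5.0]))
```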
Properties of two-temperature dissipative accretion flow around black holes
NASA Astrophysics Data System (ADS)
Dihingia, Indu K.; Das, Santabrata; Mandal, Samir
2018-04-01
We study the properties of two-temperature accretion flow around a non-rotating black hole in the presence of various dissipative processes, where a pseudo-Newtonian potential is adopted to mimic the effect of general relativity. The flow encounters energy loss by means of radiative processes acting on the electrons and, at the same time, heats up as a consequence of viscous heating effective on ions. We assume that the flow is threaded by stochastic magnetic fields, which lead to synchrotron emission from the electrons; these emissions are further strengthened by Compton scattering. We obtain the two-temperature global accretion solutions in terms of the dissipation parameters, namely viscosity (α) and accretion rate ({\dot{m}}), and find, for the first time in the literature, that such solutions may contain standing shock waves. Solutions of this kind are multitransonic in nature, as they simultaneously pass through both the inner critical point (xin) and the outer critical point (xout) before crossing the black hole horizon. We calculate the properties of shock-induced global accretion solutions in terms of the flow parameters. We further show that the two-temperature shocked accretion flow is not a discrete solution; instead, such solutions exist for a wide range of flow parameters. We identify the effective domain of the parameter space for standing shocks and observe that the parameter space shrinks as the dissipation is increased. Since the post-shock region is hotter due to the effect of shock compression, it naturally emits hard X-rays, and therefore the two-temperature shocked accretion solution has the potential to explain the spectral properties of black hole sources.
Using high frequency CDOM hyperspectral absorption to fingerprint river water sources
NASA Astrophysics Data System (ADS)
Beckler, J. S.; Kirkpatrick, G. J.; Dixon, L. K.; Milbrandt, E. C.
2016-12-01
Quantifying riverine carbon transfer from land to sea is complicated by variability in dissolved organic carbon (DOC), closely-related dissolved organic matter (DOM) and chromophoric dissolved organic matter (CDOM) concentrations, as well as in the composition of the freshwater end members of multiple drainage basins and seasons. Discrete measurements in estuaries have difficulty resolving convoluted upstream watershed dynamics. Optical measurements, however, can provide more continuous data regarding the molecular composition and concentration of the CDOM as it relates to river flow, tidal mixing, and salinity and may be used to fingerprint source waters. For the first time, long-term, hyperspectral CDOM measurements were obtained on filtered Caloosahatchee River estuarine waters using an in situ, long-pathlength spectrophotometric instrument, the Optical Phytoplankton Discriminator (OPD). Through a collaborative monitoring effort among partners within the Gulf of Mexico Coastal Ocean Observing System (GCOOS), ancillary measurements of fluorescent DOM (FDOM) and water quality parameters were also obtained from co-located instrumentation at high frequency. Optical properties demonstrated both short-term (hourly) tidal variations and long-term (daily - weekly) variations corresponding to changes in riverine flow and salinity. The optical properties of the river waters are demonstrated to be a dilution-adjusted linear combination of the optical properties of the source waters comprising the overall composition (e.g. Lake Okeechobee, watershed drainage basins, Gulf of Mexico). Overall, these techniques are promising as a tool to more accurately constrain the carbon flux to the ocean and to predict the optical quality of coastal waters.
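One hedged way to realize the "dilution-adjusted linear combination" view above is non-negative least-squares unmixing of an observed absorption spectrum against end-member spectra; the exponential CDOM shapes and spectral slopes below are invented for the sketch.

```python
import numpy as np
from scipy.optimize import nnls

# End-member CDOM absorption spectra (columns) -- hypothetical exponential
# shapes standing in for e.g. Lake Okeechobee, basin drainage, and Gulf
# of Mexico source waters.
wl = np.arange(400.0, 701.0, 10.0)
E = np.column_stack([np.exp(-s * (wl - 400.0)) for s in (0.010, 0.016, 0.024)])

mix_true = np.array([0.5, 0.3, 0.2])
observed = E @ mix_true
observed += 0.001 * np.random.default_rng(4).normal(size=wl.size)

fractions, resid = nnls(E, observed)      # non-negative source fractions
print(fractions / fractions.sum())
```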
Toward real-time regional earthquake simulation of Taiwan earthquakes
NASA Astrophysics Data System (ADS)
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
NASA Astrophysics Data System (ADS)
Singh, Uday Veer; Abhishek, Amar; Singh, Kunwar P.; Dhakate, Ratnakar; Singh, Netra Pal
2014-06-01
India's growing population places great pressure on groundwater resources. The Ghaziabad region is located in the northern Indo-Gangetic alluvial plain of India. Increased population and industrial activities make it imperative to appraise the quality of the groundwater system to ensure the long-term sustainability of resources. A total of 250 groundwater samples were collected in two different seasons, viz., pre-monsoon and post-monsoon, and analyzed for major physico-chemical parameters. Broad ranges and large standard deviations occur for most parameters, indicating that the chemical composition of groundwater is affected by processes including water-rock interaction and anthropogenic effects. Iron was found to be the predominant heavy metal in the groundwater samples, followed by copper and lead. An exceptionally high concentration of chromium was found at some locations. Industrial activities such as chrome plating and wood preservation are the key sources of metal pollution in the Ghaziabad region. On the basis of classification, the area's water shows normal sulfate, chloride and bicarbonate types, respectively. Base-exchange indices classified 76% of the groundwater sources as the sodium-bicarbonate type. The meteoric genesis indices demonstrated that 80% of the groundwater sources belong to a shallow meteoric water percolation type. Chadha's diagram suggested that the hydro-chemical facies belong to the HCO3--dominant Ca2+-Mg2+ type along with the Cl--dominant Ca2+-Mg2+ type. There was no significant change in pollution parameters between the selected seasons. Comparison of groundwater quality with Indian standards shows that the majority of water samples are suitable for irrigation purposes but not for drinking.
Kumar, Pradeep; Sindhu, Rakesh K; Narayan, Shridhar; Singh, Inderbir
2010-12-01
Different monofloral honeys have a distinctive flavor and color because of differences in physicochemical parameters arising from their principal nectar sources or floral types. Honey samples were collected from Apis mellifera colonies foraged on 10 floras to analyze the quality of honey in terms of the standards laid down by the Honey Grading and Marking Rules (HGMR), India, 2008 and the Codex Alimentarius Commission (CAC), 1969. The honey samples were analyzed for various physicochemical parameters of honey quality control, i.e., pH, total acidity, moisture, reducing sugars, non-reducing sugars, total sugars, water-insoluble solids (WIS), ash content, 5-hydroxymethylfurfural content, and diastase value. The antioxidant potential was estimated using Folin-Ciocalteu reagent. Further, honey samples were assayed for antibacterial activity against clinical isolates of Staphylococcus aureus and Escherichia coli using the hole-plate diffusion method. The physicochemical variation in the composition of honey due to floral source shows Ziziphus honey with high pH and diastase values along with low acidity, whereas Helianthus honey contained high reducing sugar and low moisture content. Amomum, Brassica, Acacia, and Citrus contained the lowest amounts of non-reducing sugars, ash, WIS, and moisture, respectively. The lowest 5-hydroxymethylfurfural (HMF) value was detected in Amomum honey, while the highest HMF value was observed with Eucalyptus. The maximum antibacterial and antioxidant potentials were observed in Azadirachta and Citrus, respectively. The quality of honey produced by local beekeepers met HGMR and CAC standards, and the chemical composition and biological properties of honey were dependent on the floral source from which it was produced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biganzoli, Davide; Potenza, Marco A. C.; Robberto, Massimo, E-mail: robberto@stsci.edu
We discuss the radiative transfer theory for translucent clouds illuminated by an extended background source. First, we derive a rigorous solution based on the assumption that multiple scatterings produce an isotropic flux. Then we derive a more manageable analytic approximation, showing that it nicely matches the results of the rigorous approach. To validate our model, we compare our predictions with accurate laboratory measurements for various types of well-characterized grains, including purely dielectric and strongly absorbing materials representative of astronomical icy and metallic grains, respectively, finding excellent agreement without the need to add free parameters. We use our model to explore the behavior of an astrophysical cloud illuminated by a diffuse source with dust grains having parameters typical of the classic ISM grains of Draine and Lee and protoplanetary disks, with an application to the dark silhouette disk 114–426 in the Orion Nebula. We find that the scattering term modifies the transmitted radiation, both in terms of intensity (extinction) and shape (reddening) of the spectral distribution. In particular, for small optical thickness, our results show that scattering makes reddening almost negligible at visible wavelengths. Once the optical thickness increases enough and the probability of scattering events becomes close to or larger than 1, reddening becomes present but is appreciably modified with respect to the standard expression for line-of-sight absorption. Moreover, variations of the grain refractive index, in particular the amount of absorption, also play an important role in changing the shape of the spectral transmission curve, with dielectric grains showing the minimum amount of reddening.
NASA Astrophysics Data System (ADS)
Donne, Sarah; Bean, Christopher; Craig, David; Dias, Frederic; Christodoulides, Paul
2016-04-01
Microseisms are continuous seismic vibrations which propagate mainly as surface Rayleigh and Love waves. They are generated by the Earth's oceans, and there are two main types: primary and secondary microseisms. Primary microseisms are generated through the interaction of travelling surface gravity ocean waves with the seafloor in waters that are shallow relative to the wavelength of the ocean wave. Secondary microseisms, on the other hand, are generated when two opposing wave trains interact and a non-linear second-order effect produces a pressure fluctuation which is depth independent. The conditions necessary to produce secondary microseisms are presented in Longuet-Higgins (1950): two travelling waves with the same period interacting at an angle of 180 degrees. Equivalent surface pressure density (p2l) is modelled using the numerical ocean wave model WAVEWATCH III, and this term is considered as the microseism source term. This work presents an investigation of the theoretical second-order pressures generated through the interaction of travelling waves with varying wave amplitude, period and angle of incidence. Predicted seafloor pressures calculated off the southwest coast of Ireland are compared with terrestrially recorded microseism records, measured seafloor pressures and oceanographic parameters. The work presented in this study suggests that a broad set of sea states can generate second-order seafloor pressures that are consistent with seafloor pressure measurements. Local seismic arrays throughout Ireland allow us to investigate the temporal covariance of these seafloor pressures with microseism source locations.
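For reference, the textbook form of the Longuet-Higgins result for two opposing wave trains of common period gives a depth-independent pressure oscillation at twice the wave frequency, with amplitude 2*rho*a1*a2*sigma^2. A small sketch (seawater density assumed):

```python
import numpy as np

def lh_pressure_amplitude(a1, a2, T, rho=1025.0):
    """Second-order, depth-independent bottom-pressure amplitude for two
    opposing wave trains of amplitudes a1, a2 (m) and common period T (s),
    after Longuet-Higgins (1950): p2 = 2 rho a1 a2 sigma^2, oscillating
    at angular frequency 2*sigma. rho is an assumed seawater density."""
    sigma = 2.0 * np.pi / T
    return 2.0 * rho * a1 * a2 * sigma**2

print(lh_pressure_amplitude(1.0, 0.5, 8.0))   # Pa, toy sea state
```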
Modelling the process-based controls of long term CO2 exchange in High Arctic heath ecosystems
NASA Astrophysics Data System (ADS)
Zhang, W.; Jansson, P. E.; Elberling, B.
2016-12-01
Frozen organic carbon (C) stored in northern permafrost soils may become vulnerable due to the rapid warming of the Arctic. The loss of C as greenhouse gases may imply a critical warming potential, resulting in positive feedbacks to global climate change. However, how permafrost ecosystem C dynamics are associated with changes in hydrothermal conditions (e.g. extent and duration of snow, soil water content and active layer depth) and with changes in the responses of ecosystem biogeochemistry to climate (e.g. carbon assimilation over the entire growing season, litter-fall rates of plants, and turnover rates of different soil carbon pools) is still unclear and needs to be resolved from site to site. Here, we use a process-oriented model (CoupModel) that couples heat and mass transfer within a high-resolution soil-plant-atmosphere profile to simulate high Arctic Cassiope tetragona heath ecosystems in Northeast Greenland. The 15 years of net ecosystem exchange (NEE) fluxes (2000-2014) measured during the growing season indicate that the ecosystems may be at a transition from a C sink to a C source. We calibrated the model with the NEE flux transformed from hourly data to daily, yearly and total cumulative data to identify ensembles of parameters that best described the various patterns in the observed C fluxes. Only the ensembles for the yearly and total cumulative transformations described reasonably well the seasonal variability, inter-annual variability and long-term trends of the measurements. The correlations between parameters and simulation performance described the relative importance of the physical and biological parameters that contribute to the short- and long-term variation of C fluxes arising from the biogeochemical processes of such ecosystems. The estimated C budget, including internal fluxes and redistribution between various pools, showed that the ecosystem functioned as a C source in the first half of the period and a weak C sink in the second half. The respiration outside the growing season was mainly autotrophic respiration of plants, occupying a considerable portion of the total yearly respiration. The dynamics of soil C fluxes were associated with variations in air temperature, snowfall and soil moisture during the shoulder seasons.
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
Washburn, Richard A.; Szabo, Amanda N.; Lambourne, Kate; Willis, Erik A.; Ptomey, Lauren T.; Honas, Jeffery J.; Herrmann, Stephen D.; Donnelly, Joseph E.
2014-01-01
Background Differences in biological changes from weight loss by energy restriction and/or exercise may be associated with differences in long-term weight loss/regain. Objective To assess the effect of weight loss method on long-term changes in weight, body composition and chronic disease risk factors. Data Sources PubMed and Embase were searched (January 1990-October 2013) for studies with data on the effect of energy restriction, exercise (aerobic and resistance) on long-term weight loss. Twenty articles were included in this review. Study Eligibility Criteria Primary source, peer reviewed randomized trials published in English with an active weight loss period of >6 months, or active weight loss with a follow-up period of any duration, conducted in overweight or obese adults were included. Study Appraisal and Synthesis Methods Considerable heterogeneity across trials existed for important study parameters, therefore a meta-analysis was considered inappropriate. Results were synthesized and grouped by comparisons (e.g. diet vs. aerobic exercise, diet vs. diet + aerobic exercise etc.) and study design (long-term or weight loss/follow-up). Results Forty percent of trials reported significantly greater long-term weight loss with diet compared with aerobic exercise, while results for differences in weight regain were inconclusive. Diet+aerobic exercise resulted in significantly greater weight loss than diet alone in 50% of trials. However, weight regain (∼55% of loss) was similar in diet and diet+aerobic exercise groups. Fat-free mass tended to be preserved when interventions included exercise. PMID:25333384
An evolutive real-time source inversion based on a linear inverse formulation
NASA Astrophysics Data System (ADS)
Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.
2016-12-01
Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, in which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate as new data are added, assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes of the inverted slip rate. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
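The linear time-domain forward model can be made concrete with a convolution matrix for a single subfault and station. This sketch (toy Green's function and slip rate, no causality constraint or adjoint machinery) shows the linearity the formulation relies on.

```python
import numpy as np
from scipy.linalg import toeplitz

def convolution_matrix(green, n_slip):
    """Matrix form of seismogram = green * slip_rate: a one-subfault,
    one-station sketch of the linear forward model above."""
    col = np.r_[green, np.zeros(n_slip - 1)]
    row = np.r_[green[0], np.zeros(n_slip - 1)]
    return toeplitz(col, row)

rng = np.random.default_rng(5)
green = np.exp(-np.arange(40) / 8.0) * np.sin(np.arange(40) / 3.0)
slip_true = np.maximum(0.0, rng.normal(0.5, 0.3, 25))
G = convolution_matrix(green, slip_true.size)
seis = G @ slip_true                     # synthetic "recorded" seismogram
slip_est, *_ = np.linalg.lstsq(G, seis, rcond=None)
print(np.linalg.norm(slip_est - slip_true))
```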
NASA Astrophysics Data System (ADS)
Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.
2018-03-01
In order to study the subsurface structure at the Merapi Lawu anomaly (MLA) using forward modelling or full waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA location, and the stations used to derive the parameters must be outside the MLA in order to avoid the anomaly. At first, the seismograms were processed with the software SEISAN v10 using a few stations from the MERAMEX project. After finding a hypocentre that matched these criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the source parameters are obtained as follows: the event occurred on August 21, 2004 at 23:22:47 Indonesia western standard time (IWST), with epicentre coordinate -7.80°S, 101.34°E, hypocentral depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.
Fernández-Navajas, Ángel; Merello, Paloma; Beltrán, Pedro; García-Diego, Fernando-Juan
2013-01-01
Cultural Heritage preventive conservation requires the monitoring of the parameters involved in the process of deterioration of artworks. Thus, both long-term monitoring of the environmental parameters as well as further analysis of the recorded data are necessary. The long-term monitoring at frequencies higher than 1 data point/day generates large volumes of data that are difficult to store, manage and analyze. This paper presents software which uses a free open source database engine that allows managing and interacting with huge amounts of data from environmental monitoring of cultural heritage sites. It is of simple operation and offers multiple capabilities, such as detection of anomalous data, inquiries, graph plotting and mean trajectories. It is also possible to export the data to a spreadsheet for analyses with more advanced statistical methods (principal component analysis, ANOVA, linear regression, etc.). This paper also deals with a practical application developed for the Renaissance frescoes of the Cathedral of Valencia. The results suggest infiltration of rainwater in the vault and weekly relative humidity changes related with the religious service schedules. PMID:23447005
NASA Astrophysics Data System (ADS)
Aman, Sidra; Khan, Ilyas; Ismail, Zulkhibri; Salleh, Mohd Zuki; Tlili, I.
2018-06-01
In this article, the idea of Caputo time-fractional derivatives is applied to MHD mixed convection Poiseuille flow of nanofluids with graphene nanoparticles in a vertical channel. Applications of nanofluids in solar energy are discussed for various solar thermal systems, and it is argued that using nanofluids is an alternative way to produce solar energy in thermal engineering and in industrial solar energy devices. The problem is modelled in terms of PDEs with initial and boundary conditions and solved analytically via the Laplace transform method. The obtained solutions for velocity, temperature and concentration are expressed in terms of Wright's function. These solutions are significantly controlled by the variations of parameters including the thermal Grashof number, solutal Grashof number and nanoparticle volume fraction. Expressions for the skin friction, Nusselt and Sherwood numbers are also determined on the left and right walls of the vertical channel, with important numerical results in tabular form. It is found that the rate of heat transfer increases with increasing nanoparticle volume fraction and Caputo time-fractional parameter.
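For reference, the Caputo time-fractional derivative of order 0 < α < 1 invoked in such formulations is conventionally defined as (a standard textbook form, not quoted from the article):

```latex
{}^{C}D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau , \qquad 0 < \alpha < 1 .
```

In the limit α → 1 this recovers the ordinary first derivative, which is why the fractional parameter acts as a continuous tuning knob on the flow's memory effects.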
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
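A plain simulated annealing loop illustrates the inner random search that the iterated RISC procedure wraps; the toy misfit and all tuning constants below are invented for this sketch.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995, n=5000):
    """Plain SA sketch (one inner loop of an iterated scheme: RISC would
    restart/iterate this search and assess parameter confidence
    statistically, which is not shown here)."""
    rng = np.random.default_rng(6)
    x = np.asarray(x0, dtype=float)
    fx, t = cost(x), t0
    best, fbest = x.copy(), fx
    for _ in range(n):
        cand = x + rng.normal(0.0, step, x.size)
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc                  # Metropolis acceptance
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest

# Toy misfit: recover a 2-parameter "deformation source" (hypothetical).
print(simulated_annealing(lambda p: (p[0] - 2)**2 + 10 * (p[1] + 1)**2,
                          [0.0, 0.0]))
```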
Rebec, Katja Malovrh; Klanjšek-Gunde, Marta; Bizjak, Grega; Kobav, Matej B
2015-01-01
Ergonomic science at work and living places should appraise human factors concerning the photobiological effects of lighting. Thorough knowledge on this subject has been gained in the past; however, few attempts have been made to propose suitable evaluation parameters. The blue light hazard and its influence on melatonin secretion in age-dependent observers is considered in this paper and parameters for its evaluation are proposed. New parameters were applied to analyse the effects of white light-emitting diode (LED) light sources and to compare them with the currently applied light sources. The photobiological effects of light sources with the same illuminance but different spectral power distribution were determined for healthy 4-76-year-old observers. The suitability of new parameters is discussed. Correlated colour temperature, the only parameter currently used to assess photobiological effects, is evaluated and compared to new parameters.
Gomez, Borja; Mintegi, Santiago; Rubio, Mari Cruz; Garcia, Diego; Garcia, Silvia; Benito, Javier
2012-06-01
The objective of this study was to describe the characteristics of enteroviral meningitis diagnosed in a pediatric emergency department among infants younger than 3 months with fever without source, and its short-term evolution. This was a retrospective, cross-sectional, 6-year descriptive study including all infants younger than 3 months who presented with fever without source and who were diagnosed with enteroviral meningitis. A lumbar puncture was performed at the first emergency visit in 398 (29.5%) of 1348 infants, and 65 (4.8%) were diagnosed with enteroviral meningitis, 33 of them (50.7%) between May and July. Among these 65 infants, 61 were classified as well-appearing; parents reported irritability in 16 (25.3%) of them (without statistical significance when compared with infants without meningitis). Forty-one (63.0%) had no altered infectious parameters (white blood cell [WBC] count between 5000 and 15,000/μL, absolute neutrophil count less than 10,000/μL, and C-reactive protein less than 20 g/L), and 39 (60%) had no pleocytosis. All of the 65 infants recovered well, and none of them developed short-term complications. The symptoms in infants younger than 3 months with enteroviral meningitis were similar to those in infants with a self-limited febrile process without intracranial infection. C-reactive protein and WBC count were not good predictors of enteroviral meningitis. The cerebrospinal fluid WBC count was normal in many of these infants, so performing a viral test is recommended for febrile infants younger than 3 months in whom a lumbar puncture is performed during the warm months. The short-term evolution was benign.
Discriminating Simulated Vocal Tremor Source Using Amplitude Modulation Spectra
Carbonell, Kathy M.; Lester, Rosemary A.; Story, Brad H.; Lotto, Andrew J.
2014-01-01
Objectives/Hypothesis Sources of vocal tremor are difficult to categorize perceptually and acoustically. This paper describes a preliminary attempt to discriminate vocal tremor sources through the use of spectral measures of the amplitude envelope. The hypothesis is that different vocal tremor sources are associated with distinct patterns of acoustic amplitude modulations. Study Design Statistical categorization methods (discriminant function analysis) were used to discriminate signals from simulated vocal tremor with different sources using only acoustic measures derived from the amplitude envelopes. Methods Simulations of vocal tremor were created by modulating parameters of a vocal fold model corresponding to oscillations of respiratory driving pressure (respiratory tremor), degree of vocal fold adduction (adductory tremor) and fundamental frequency of vocal fold vibration (F0 tremor). The acoustic measures were based on spectral analyses of the amplitude envelope computed across the entire signal and within select frequency bands. Results The signals could be categorized (with accuracy well above chance) in terms of the simulated tremor source using only measures of the amplitude envelope spectrum even when multiple sources of tremor were included. Conclusions These results supply initial support for an amplitude-envelope based approach to identify the source of vocal tremor and provide further evidence for the rich information about talker characteristics present in the temporal structure of the amplitude envelope. PMID:25532813
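A hedged sketch of the core measurement: compute the Hilbert amplitude envelope, then its spectrum. The synthetic "tremor" below is a 5 Hz amplitude modulation of a 150 Hz carrier; the band-splitting and discriminant analysis of the study are omitted.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(signal, fs):
    """Amplitude-envelope modulation spectrum sketch: Hilbert envelope,
    mean removed, then FFT magnitude."""
    env = np.abs(hilbert(signal))
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    return freqs, spec

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
voice = (1.0 + 0.3 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 150 * t)
f, s = envelope_spectrum(voice, fs)
print(f[s.argmax()])   # ~5 Hz: the tremor modulation peak in the envelope
```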
Economics of wind energy for utilities
NASA Technical Reports Server (NTRS)
Mccabe, T. F.; Goldenblatt, M. K.
1982-01-01
Utility acceptance of this technology will be contingent upon the establishment of both its technical and economic feasibility. This paper presents preliminary results from a study currently underway to establish the economic value of central station wind energy to certain utility systems. The results for the various utilities are compared specifically in terms of three parameters which have a major influence on the economic value: (1) wind resource, (2) mix of conventional generation sources, and (3) specific utility financial parameters including projected fuel costs. The wind energy is derived from modeling either MOD-2 or MOD-0A wind turbines in wind resources determined by a year of data obtained from the DOE supported meteorological towers with a two-minute sampling frequency. In this paper, preliminary results for six of the utilities studied are presented and compared.
Design of a cardiac monitor in terms of parameters of QRS complex.
Chen, Zhen-cheng; Ni, Li-li; Su, Ke-ping; Wang, Hong-yan; Jiang, Da-zong
2002-08-01
Objective. To design a portable cardiac monitor system, based on an available ordinary ECG machine, that works on the basis of QRS parameters. Method. The 80196 single-chip microcomputer was used as the central microprocessor, and the real-time electrocardiac signal was collected and analyzed by the system. Result. Apart from the performance of an ordinary monitor, this machine also possesses the following functions: arrhythmia analysis, HRV analysis, alarm, freeze, and automatic paper recording. Convenient to carry, the system is powered by AC or DC sources. Stability, low power consumption and low cost are emphasized in the hardware design, and a modularization method is applied in the software design. Conclusion. Ease of use and low cost make the portable monitor system suitable for use under simple conditions.
Anthropogenic seismicity rates and operational parameters at the Salton Sea Geothermal Field.
Brodsky, Emily E; Lajoie, Lia J
2013-08-02
Geothermal power is a growing energy source; however, efforts to increase production are tempered by concern over induced earthquakes. Although increased seismicity commonly accompanies geothermal production, induced earthquake rate cannot currently be forecast on the basis of fluid injection volumes or any other operational parameters. We show that at the Salton Sea Geothermal Field, the total volume of fluid extracted or injected tracks the long-term evolution of seismicity. After correcting for the aftershock rate, the net fluid volume (extracted-injected) provides the best correlation with seismicity in recent years. We model the background earthquake rate with a linear combination of injection and net production rates that allows us to track the secular development of the field as the number of earthquakes per fluid volume injected decreases over time.
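The abstract's linear rate model can be sketched as an ordinary least-squares fit of earthquake rate on injection and net-production volumes; all series below are synthetic placeholders, not Salton Sea data.

```python
import numpy as np

# Sketch of a linear seismicity-rate model: background rate modeled as
# b1 * injection + b2 * net production (all series hypothetical; the
# aftershock correction described above is not applied here).
rng = np.random.default_rng(7)
months = 120
injection = rng.gamma(5.0, 2e5, months)    # injected volume, m^3/month
net = rng.gamma(4.0, 1e5, months)          # extracted minus injected
rate = 2e-6 * injection + 8e-6 * net + rng.normal(0.0, 0.3, months)

X = np.column_stack([injection, net])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(coef)   # recovered sensitivities (earthquakes per unit volume)
```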
Long-term monitoring on environmental disasters using multi-source remote sensing technique
NASA Astrophysics Data System (ADS)
Kuo, Y. C.; Chen, C. F.
2017-12-01
Environmental disasters are extreme events within the earth's system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland and forest. Disaster management requires a large amount of multi-temporal spatial data. Multi-source remote sensing data with different spatial, spectral and temporal resolutions are widely applied to environmental disaster monitoring. With multi-source and multi-temporal high-resolution images, we conduct rapid, systematic and serial observations of economic damage and environmental disasters on earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of UAS technology include great mobility, real-time response and availability under a wider range of weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste and pine pests at different scales. Meanwhile, digital photogrammetry can be applied to the interior and exterior orientation parameters of the camera to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using DSM data and can serve as a reference for disaster recovery in the future.
Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C
2018-01-01
This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
The dark art of light measurement: accurate radiometry for low-level light therapy.
Hadis, Mohammed A; Zainal, Siti A; Holder, Michelle J; Carroll, James D; Cooper, Paul R; Milward, Michael R; Palin, William M
2016-05-01
Lasers and light-emitting diodes are used for a range of biomedical applications, with many studies reporting their beneficial effects. However, three main concerns exist regarding much of the low-level light therapy (LLLT) or photobiomodulation literature: (1) incomplete, inaccurate and unverified irradiation parameters; (2) miscalculation of 'dose'; and (3) the misuse of appropriate light property terminology. The aim of this systematic review was to assess where, and to what extent, these inadequacies exist and to provide an overview of 'best practice' in light measurement methods and the importance of correct light measurement. A review of recent relevant literature was performed in PubMed using the terms LLLT and photobiomodulation (March 2014-March 2015) to investigate the contemporary information available in the LLLT and photobiomodulation literature in terms of reporting light properties and irradiation parameters. A total of 74 articles formed the basis of this systematic review. Although most articles reported beneficial effects following LLLT, the majority contained no information on how light was measured (73%) and relied on manufacturer-stated values. Across the papers reviewed, missing information for specific light parameters included wavelength (3%), light source type (8%), power (41%), pulse frequency (52%), beam area (40%), irradiance (43%), exposure time (16%), radiant energy (74%) and fluence (16%). Frequent use of incorrect terminology was also observed within the reviewed literature. A poor understanding of photophysics is evident, as a significant number of papers neglected to report, or misreported, important radiometric data. These errors affect the repeatability and reliability of studies shared between scientists, manufacturers and clinicians and could degrade the efficacy of patient treatments. Researchers need a physicist or appropriately skilled engineer on the team, and manuscript reviewers should reject papers that do not report beam measurement methods and all ten key parameters: wavelength, power, irradiation time, beam area (at the skin or culture surface; this is not necessarily the same size as the aperture), radiant energy, radiant exposure, pulse parameters, number of treatments, interval between treatments and anatomical location. Inclusion of these parameters will improve the information available to compare and contrast study outcomes and will improve the repeatability and reliability of studies.
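The two radiometric quantities most often miscalculated above follow from simple arithmetic; a minimal sketch (units and example numbers are illustrative only):

```python
def irradiance_mw_cm2(power_mw, beam_area_cm2):
    """Irradiance at the target (mW/cm^2) = radiant power / beam area.
    The beam area at the skin or culture surface must be used, which
    is not necessarily the aperture area."""
    return power_mw / beam_area_cm2

def radiant_exposure_j_cm2(e_mw_cm2, time_s):
    """Radiant exposure, often loosely called 'fluence' or 'dose'
    (J/cm^2) = irradiance x irradiation time."""
    return e_mw_cm2 * 1e-3 * time_s

# Illustrative numbers: 50 mW over a 0.5 cm^2 beam for 60 s
E = irradiance_mw_cm2(50.0, 0.5)        # 100 mW/cm^2
H = radiant_exposure_j_cm2(E, 60.0)     # 6 J/cm^2
print(E, H)
```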
DOE Office of Scientific and Technical Information (OSTI.GOV)
John McCord
2007-09-01
This report documents transport data and data analyses for Yucca Flat/Climax Mine CAU 97. The purpose of the data compilation and related analyses is to provide the primary reference to support parameterization of the Yucca Flat/Climax Mine CAU transport model. Specific task objectives were as follows: • Identify and compile currently available transport parameter data and supporting information that may be relevant to the Yucca Flat/Climax Mine CAU. • Assess the level of quality of the data and associated documentation. • Analyze the data to derive expected values and estimates of the associated uncertainty and variability. The scope of this document includes the compilation and assessment of data and information relevant to transport parameters for the Yucca Flat/Climax Mine CAU subsurface within the context of unclassified source-term contamination. Data types of interest include mineralogy, aqueous chemistry, matrix and effective porosity, dispersivity, matrix diffusion, matrix and fracture sorption, and colloid-facilitated transport parameters.
QCD nature of dark energy at finite temperature: Cosmological implications
NASA Astrophysics Data System (ADS)
Azizi, K.; Katırcı, N.
2016-05-01
The Veneziano ghost field has been proposed as an alternative source of dark energy, whose energy density is consistent with the cosmological observations. In this model, the energy density of the QCD ghost field is expressed in terms of QCD degrees of freedom at zero temperature. We extend this model to finite temperature to follow the model's predictions from late times back to the early universe. We depict the variations with temperature, from zero up to a critical temperature, of the QCD parameters entering the calculations, the dark energy density, the equation of state, and the Hubble and deceleration parameters. We compare our results with the observations and with theoretical predictions for different eras. It is found that this model consistently describes the universe from quark condensation up to now, and its predictions are not in tension with those of standard cosmology. The EoS parameter of dark energy is dynamical and evolves from -1/3 in the presence of radiation to -1 at late times. The finite-temperature ghost dark energy predictions for the Hubble parameter fit well with those of ΛCDM and with observations at late times.
Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark
2014-01-01
In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
Ting, T. O.; Lim, Eng Gee
2014-01-01
In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041
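A minimal sketch of the kind of scalar Kalman filter the abstract describes, with Q and R exposed as the tunable noise covariances a GA would search over (the one-state model, the linearized OCV curve and all constants below are illustrative assumptions, not the paper's battery model):

```python
import numpy as np

# Hypothetical 1-state model: soc' = soc - (dt/capacity)*i + w, with
# measurement v = a*soc + b + noise (linearized open-circuit voltage).
dt, capacity = 1.0, 3600.0      # s, A*s (a 1 Ah cell)
a, b = 0.8, 3.2                 # linear OCV approximation (V)
Q, R = 1e-7, 1e-3               # process/measurement noise (GA-tunable)

rng = np.random.default_rng(1)
soc_true, soc_est, P = 1.0, 0.9, 1e-2
current = 0.5                   # constant 0.5 A discharge
for k in range(600):
    soc_true += -dt / capacity * current + rng.normal(0, Q**0.5)
    v_meas = a * soc_true + b + rng.normal(0, R**0.5)
    soc_est += -dt / capacity * current      # predict state
    P += Q                                   # predict covariance
    K = P * a / (a * P * a + R)              # Kalman gain
    soc_est += K * (v_meas - (a * soc_est + b))  # measurement update
    P *= (1 - K * a)
print(f"true={soc_true:.4f} est={soc_est:.4f}")
```

A GA would wrap this loop, proposing (Q, R) pairs and scoring each by the RMS error of the resulting SoC estimate, exactly once, offline, before deployment.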
NASA Astrophysics Data System (ADS)
Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian
2018-06-01
Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). A stable algorithm for moment tensor inversion for comparatively small mining induced earthquakes, resolving both the double-couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to underground mining system requires accounting for the 3-D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3-D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameters accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double-couple, compensated linear vector dipole and isotropic contributions as well as the strike, dip and rake configurations of the double-couple term were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.
AxonPacking: An Open-Source Software to Simulate Arrangements of Axons in White Matter
Mingasson, Tom; Duval, Tanguy; Stikov, Nikola; Cohen-Adad, Julien
2017-01-01
HIGHLIGHTS AxonPacking: open-source software for simulating white matter microstructure. Validation on a theoretical disk packing problem. Reproducible and stable for various densities and diameter distributions. Can be used to study the interplay between myelin/fiber density and restricted fraction. Quantitative Magnetic Resonance Imaging (MRI) can provide parameters that describe white matter microstructure, such as the fiber volume fraction (FVF), the myelin volume fraction (MVF) or the axon volume fraction (AVF) via the fraction of restricted water (fr). While already being used in clinical applications, the complex interplay between these parameters requires thorough validation via simulations. These simulations require a realistic, controlled and adaptable model of white matter axons with the surrounding myelin sheath. While useful algorithms already exist to perform this task, none of them combine optimization of axon packing, presence of a myelin sheath and availability as free and open-source software. Here, we introduce a novel disk packing algorithm that addresses these issues. The performance of the algorithm is tested in terms of reproducibility over 50 runs, resulting density, and stability over iterations. The tool was then used to derive multiple values of FVF and to study the impact of this parameter on fr and MVF in light of the known microstructure based on histology samples. The standard deviation of the axon density over runs was lower than 10^-3 and the expected hexagonal packing for monodisperse disks was obtained with a density close to the optimal density (obtained: 0.892, theoretical: 0.907). Using an FVF ranging within [0.58, 0.82] and a mean inter-axon gap ranging within [0.1, 1.1] μm, MVF ranged within [0.32, 0.44] and fr ranged within [0.39, 0.71], which is consistent with the histology. The proposed algorithm is implemented in the open-source software AxonPacking (https://github.com/neuropoly/axonpacking) and can be useful for validating diffusion models as well as for enabling researchers to study the interplay between microstructure parameters when evaluating qMRI methods. PMID:28197091
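The quoted theoretical optimum is the density of the hexagonal packing of equal disks; a one-line check (standard geometry, not code from AxonPacking):

```python
import math

# Densest packing of equal disks in the plane (hexagonal lattice):
# each disk of radius r occupies a rhombic cell of area 2*sqrt(3)*r^2,
# giving density pi / (2*sqrt(3)) ~ 0.9069 -- the theoretical value
# (0.907) that the monodisperse AxonPacking runs approach (0.892).
density = math.pi / (2 * math.sqrt(3))
print(f"{density:.4f}")  # 0.9069
```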
NASA Astrophysics Data System (ADS)
Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian
2018-03-01
Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). A stable algorithm for moment tensor inversion for comparatively small mining induced earthquakes, resolving both the double couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameters accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double couple, compensated linear vector dipole and isotropic contributions, as well as the strike, dip and rake configurations of the double couple term, were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.
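The double-couple/CLVD/isotropic split reported above follows from the eigenvalues of the moment tensor; a minimal sketch of one common decomposition convention (conventions vary in the literature, and this is not the authors' inversion code):

```python
import numpy as np

def decompose_mt(M):
    """Split a symmetric 3x3 moment tensor into ISO, DC and CLVD
    fractions. The isotropic part is tr(M)/3; in the deviatoric
    remainder, epsilon = |d_min|/|d_max| (eigenvalues sorted by
    absolute size) gives CLVD = 2*eps and DC = 1 - 2*eps of the
    deviatoric moment."""
    M = np.asarray(M, float)
    iso = np.trace(M) / 3.0
    dev_eig = np.linalg.eigvalsh(M - iso * np.eye(3))
    d = dev_eig[np.argsort(np.abs(dev_eig))]   # |d0| <= |d1| <= |d2|
    eps = abs(d[0]) / abs(d[2])
    m0_dev = abs(d[2])
    total = abs(iso) + m0_dev
    return {"ISO": abs(iso) / total,
            "CLVD": 2 * eps * m0_dev / total,
            "DC": (1 - 2 * eps) * m0_dev / total}

# Pure double-couple example: strike-slip on a vertical fault
print(decompose_mt([[0, 1, 0], [1, 0, 0], [0, 0, 0]]))  # DC ~ 1.0
```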
Modeling the X-Ray Process, and X-Ray Flaw Size Parameter for POD Studies
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2014-01-01
Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called probability of detection (POD) study. In many instances the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaving like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors such as set-up parameters including X-ray energy, exposure, detector sensitivity, and material type that are not accounted for in the flaw size parameter may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conduct the POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of effect of interaction between various geometric parameters on the flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.
Modeling the X-ray Process, and X-ray Flaw Size Parameter for POD Studies
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2014-01-01
Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called probability of detection (POD) study. In many instances, the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaving like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors such as set-up parameters, including X-ray energy, exposure, detector sensitivity, and material type that are not accounted for in the flaw size parameter may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conduct the POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of effect of interaction between various geometric parameters on the flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.
Arques-Orobon, Francisco Jose; Nuñez, Neftali; Vazquez, Manuel; Gonzalez-Posadas, Vicente
2016-01-01
This work analyzes the long-term functionality of HP (high-power) UV-LEDs (ultraviolet light-emitting diodes) as the exciting light source in non-contact, continuous 24/7 real-time fluoro-sensing identification of pollutants in inland water. Fluorescence is an effective alternative for the detection and identification of hydrocarbons. HP UV-LEDs are more advantageous than classical light sources (xenon and mercury lamps) and help in the development of a low-cost, non-contact, and compact system for continuous real-time fieldwork. This work analyzes the wavelength, the output optical power, the effects of the viscosity and temperature of the water pollutants, and the functional consistency of long-term HP UV-LED operation. To accomplish the latter, the degradation of two types of 365 nm HP UV-LEDs under two continuous real-system working modes was analyzed by means of temperature Accelerated Life Tests (ALTs). These tests estimate a mean life of 6200 h under continuous working conditions and of 66,000 h under cycled working conditions (30 s ON and 30 s OFF), i.e., over 7 years of 24/7 operating life for hydrocarbon pollution monitoring. In addition, the durability against variations of internal and external system parameters is evaluated. PMID:26927113
Arques-Orobon, Francisco Jose; Nuñez, Neftali; Vazquez, Manuel; Gonzalez-Posadas, Vicente
2016-02-26
This work analyzes the long-term functionality of HP (high-power) UV-LEDs (ultraviolet light-emitting diodes) as the exciting light source in non-contact, continuous 24/7 real-time fluoro-sensing identification of pollutants in inland water. Fluorescence is an effective alternative for the detection and identification of hydrocarbons. HP UV-LEDs are more advantageous than classical light sources (xenon and mercury lamps) and help in the development of a low-cost, non-contact, and compact system for continuous real-time fieldwork. This work analyzes the wavelength, the output optical power, the effects of the viscosity and temperature of the water pollutants, and the functional consistency of long-term HP UV-LED operation. To accomplish the latter, the degradation of two types of 365 nm HP UV-LEDs under two continuous real-system working modes was analyzed by means of temperature Accelerated Life Tests (ALTs). These tests estimate a mean life of 6200 h under continuous working conditions and of 66,000 h under cycled working conditions (30 s ON and 30 s OFF), i.e., over 7 years of 24/7 operating life for hydrocarbon pollution monitoring. In addition, the durability against variations of internal and external system parameters is evaluated.
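Temperature ALTs of this kind are usually extrapolated to use conditions with an Arrhenius acceleration factor; a minimal sketch under that assumption (the model choice, activation energy and temperatures below are illustrative, not values from the study):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between stress and use temperatures.

    A common ALT model (assumed here; the abstract does not state
    its parameters): AF = exp(Ea/k * (1/T_use - 1/T_stress)),
    temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# Illustrative numbers only: Ea = 0.7 eV, 25 C use vs 85 C stress
print(f"AF = {arrhenius_af(0.7, 25, 85):.1f}")
```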
NASA Astrophysics Data System (ADS)
Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe
2017-01-01
A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic ground motions obtained using the EGF method agree well with the observed motions in terms of acceleration, velocity, and displacement within the frequency range of 0.3-10 Hz. These findings indicate that the 2016 Kumamoto earthquake is a standard event that follows the scaling relationship of crustal earthquakes in Japan.
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out.
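A minimal sketch of Metropolis-Hastings sampling of a two-parameter source posterior, standing in for the PPD integration described above (the toy log-posterior, bounds and step sizes are invented for illustration; the paper's implementation is far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_posterior(theta):
    """Toy log-PPD over (range_km, depth_m), standing in for the
    acoustic data misfit; flat priors inside the stated bounds."""
    r, z = theta
    if not (0 < r < 10 and 0 < z < 200):
        return -np.inf
    return -0.5 * (((r - 4.2) / 0.3) ** 2 + ((z - 55.0) / 5.0) ** 2)

theta = np.array([5.0, 100.0])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.2, 4.0])     # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop                              # accept
    samples.append(theta.copy())
samples = np.array(samples[5000:])                # drop burn-in
print(samples.mean(axis=0), samples.std(axis=0))  # ~[4.2, 55], ~[0.3, 5]
```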
Leptogenesis from gravity waves in models of inflation.
Alexander, Stephon H S; Peskin, Michael E; Sheikh-Jabbari, M M
2006-03-03
We present a new mechanism for creating the observed cosmic matter-antimatter asymmetry which satisfies all three Sakharov conditions from one common thread, gravitational waves. We generate lepton number through the gravitational anomaly in the lepton number current. The source term comes from elliptically polarized gravity waves that are produced during inflation if the inflaton field contains a CP-odd component. The amount of matter asymmetry generated in our model can be of realistic size for the parameters within the range of some inflationary scenarios and grand unified theories.
Comparison of digital signal processing modules in gamma-ray spectrometry.
Lépy, Marie-Christine; Cissé, Ousmane Ibrahima; Pierre, Sylvie
2014-05-01
Commercial digital signal-processing modules have been tested for their applicability to gamma-ray spectrometry. The tests were based on the same n-type high purity germanium detector. The spectrum quality was studied in terms of energy resolution and peak area versus shaping parameters, using a Eu-152 point source. The stability of a reference peak count rate versus the total count rate was also examined. The reliability of the quantitative results is discussed for their use in measurement at the metrological level.
The HEAO-A2 soft X-ray survey of cataclysmic variable stars - EX Hydrae during optical quiescence
NASA Technical Reports Server (NTRS)
Cordova, F. A.; Riegler, G. R.
1979-01-01
Results are reported for HEAO A2 soft X-ray (below 2 keV) scanning observations of the southern dwarf nova EX Hya. An X-ray light curve is presented which shows no apparent orbital modulation. The best-fitting spectral parameters are derived for the source, and the observations are compared with the spectral behavior of the dwarf nova SS Cyg during optical quiescence. The results are discussed in terms of models for X-ray production by accreting white dwarfs.
Dictionary-Based Tensor Canonical Polyadic Decomposition
NASA Astrophysics Data System (ADS)
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. Performances of the proposed algorithms are evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
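One way to enforce a factor to "belong exactly to a known dictionary" is to project each of its columns onto its best-matching atom inside an alternating scheme; a minimal sketch of that projection step only (an illustrative stand-in, not the authors' algorithm):

```python
import numpy as np

def project_to_dictionary(A, D):
    """Replace each column of factor A by its best-correlated atom of
    dictionary D, optimally rescaled -- the projection step of a
    dictionary-constrained CPD, not a full alternating solver."""
    Dn = D / np.linalg.norm(D, axis=0)          # unit-norm atoms
    idx = np.argmax(np.abs(Dn.T @ A), axis=0)   # best atom per column
    scale = np.sum(Dn[:, idx] * A, axis=0)      # least-squares scaling
    return Dn[:, idx] * scale, idx

# 100 candidate spectra, rank-3 factor estimate built from atoms 7, 42, 99
D = np.random.default_rng(3).normal(size=(50, 100))
A = D[:, [7, 42, 99]] + 0.01 * np.random.default_rng(4).normal(size=(50, 3))
A_proj, idx = project_to_dictionary(A, D)
print(idx)  # recovers [ 7 42 99]
```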
Active Galactic Nuclei at All Wavelengths and from All Angles
NASA Astrophysics Data System (ADS)
Padovani, Paolo
2017-11-01
AGN are quite unique astronomical sources emitting over more than 20 orders of magnitude in frequency, with different electromagnetic bands providing windows on different sub-structures and their physics. They come in a large number of flavors only partially related to intrinsic differences. I highlight here the types of sources selected in different bands, the relevant selection effects and biases, and the underlying physical processes. I then look at the "big picture" by describing the most important parameters one needs to describe the variety of AGN classes and by discussing AGN at all frequencies in terms of their sky surface density. I conclude with a look at the most pressing open issues and the main new facilities, which will flood us with new data to tackle them.
Active Galactic Nuclei at all wavelengths and from all angles
NASA Astrophysics Data System (ADS)
Padovani, Paolo
2017-11-01
AGN are quite unique astronomical sources emitting over more than twenty orders of magnitude in frequency, with different electromagnetic bands providing windows on different sub-structures and their physics. They come in a large number of flavors only partially related to intrinsic differences. I highlight here the types of sources selected in different bands, the relevant selection effects and biases, and the underlying physical processes. I then look at the "big picture" by describing the most important parameters one needs to describe the variety of AGN classes and by discussing AGN at all frequencies in terms of their sky surface density. I conclude with a look at the most pressing open issues and the main new facilities, which will flood us with new data to tackle them.
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression used by the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, the algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression used by the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, the algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096
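A common way to realize the "chaotic particle" initialization mentioned above is a logistic map in place of uniform random draws; a minimal sketch (the specific map and constants are assumptions, as the abstract does not name them):

```python
import numpy as np

def chaotic_init(n_flies, dim, lb, ub, seed=0.7):
    """Initialize a fruit-fly swarm with a logistic map instead of
    uniform random numbers (one common 'chaotic particle' choice).
    The seed must avoid the map's fixed points (0, 0.25, 0.5, 0.75)."""
    pop = np.empty((n_flies, dim))
    x = seed
    for i in range(n_flies):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)            # logistic map, r = 4
            pop[i, j] = lb + (ub - lb) * x     # rescale to [lb, ub]
    return pop

print(chaotic_init(5, 2, -1.0, 1.0))
```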
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the variability of seismic hazard, in terms of peak ground acceleration (PGA) at a 475-year return period, in the Southern Apennines of Italy. Uncertainty and parametric sensitivity analyses are presented to quantify the impact of several fault parameters on ground motion predictions for 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing an entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used to capture the uncertainty in the seismic hazard calculations. To generate both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic-tree branch. The logic-tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. In this study, however, we do not investigate the sensitivity of the mean hazard results to the choice of GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
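A minimal sketch of the truncated-normal sampling of fault parameters that drives such a Monte Carlo exercise (assuming SciPy; all means, standard deviations and the ±2σ truncation are illustrative):

```python
import numpy as np
from scipy.stats import truncnorm

def sample_fault_param(mean, sd, n, trunc=2.0, rng=None):
    """Draw fault-parameter realizations from a normal distribution
    truncated at +/- trunc standard deviations about the mean
    (truncnorm takes bounds in units of the standard deviation)."""
    return truncnorm.rvs(-trunc, trunc, loc=mean, scale=sd, size=n,
                         random_state=rng)

# Illustrative values only: 200 Monte Carlo draws per logic-tree branch
rng = np.random.default_rng(5)
length_km = sample_fault_param(30.0, 3.0, 200, rng=rng)
slip_mm_yr = sample_fault_param(0.6, 0.1, 200, rng=rng)
print(length_km.mean(), slip_mm_yr.mean())
```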
Comparison of the WSA-ENLIL model with three CME cone types
NASA Astrophysics Data System (ADS)
Jang, Soojeong; Moon, Y.; Na, H.
2013-07-01
We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.
NASA Astrophysics Data System (ADS)
Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin
2018-02-01
Joint inversion of data sets collected using several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, magnetotelluric (MT) and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase the data quality and the resolution of model parameters at will. For this reason, deep structures cannot be fully resolved by either method used on its own. In this paper, we first focus on the effects of both magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures. Moreover, the conductivity distribution of relatively shallow structures can be solved with high resolution by using the MT algorithm. Therefore, we developed a new joint inversion algorithm based on the cross-gradient function to jointly invert magnetotelluric and local earthquake data sets. In this study, we added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term to the solution. The results show that even in cases where resistivity and velocity boundaries differ, both methods influence each other positively. In addition, regions with common structural boundaries are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even with the minimum number of seismic sources. In this paper, as groundwork for future studies, we discuss the joint inversion of magnetotelluric and local earthquake data sets only in two-dimensional space. In light of these results, and given the progress in three-dimensional modelling and inversion algorithms, it should become easier to identify underground structures with high resolution.
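The cross-gradient function that couples the two models is the cross product of their spatial gradients; a minimal 2-D sketch (illustrative only, not the authors' code):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Cross-gradient t = grad(m1) x grad(m2) on a 2-D grid; t -> 0
    wherever the structural boundaries of the two models align
    (gradients parallel or one of them zero)."""
    g1z, g1x = np.gradient(m1, dz, dx)
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1x * g2z - g1z * g2x     # out-of-plane component in 2-D

# Identical structure -> zero cross-gradient everywhere
z, x = np.mgrid[0:50, 0:80]
resistivity = np.where(x > 40, 100.0, 10.0)   # ohm*m
velocity = np.where(x > 40, 5.0, 3.0)         # km/s
print(np.abs(cross_gradient(resistivity, velocity)).max())  # 0.0
```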
Microcrystalline silicon thin-film transistors for large area electronic applications
NASA Astrophysics Data System (ADS)
Chan, Kah-Yoong; Bunte, Eerke; Knipp, Dietmar; Stiebig, Helmut
2007-11-01
Thin-film transistors (TFTs) based on microcrystalline silicon (µc-Si:H) exhibit high charge carrier mobilities exceeding 35 cm2 V-1 s-1. The devices are fabricated by plasma-enhanced chemical vapor deposition at substrate temperatures below 200 °C. The fabrication process of the µc-Si:H TFTs is similar to the low temperature fabrication of amorphous silicon TFTs. The electrical characteristics of the µc-Si:H-based transistors will be presented. As the device charge carrier mobility of short channel TFTs is limited by the contacts, the influence of the drain and source contacts on the device parameters including the device charge carrier mobility and the device threshold voltage will be discussed. The experimental data will be described by a modified standard transistor model which accounts for the contact effects. Furthermore, the transmission line method was used to extract the device parameters including the contact resistance. The modified standard transistor model and the transmission line method will be compared in terms of the extracted device parameters and contact resistances.
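The transmission line method mentioned above extracts the contact resistance from the length dependence of the total ON-resistance; a minimal sketch with invented numbers:

```python
import numpy as np

# Transmission line method: the total ON-resistance versus channel
# length is R_tot(L) = r_ch * L + R_contact; a linear fit across TFTs
# of several channel lengths gives the contact resistance as the
# intercept. Values below are illustrative, not measured devices.
L_um = np.array([5.0, 10.0, 20.0, 40.0])          # channel lengths
R_tot_kohm = np.array([62.0, 112.0, 213.0, 414.0])  # ON-resistances

r_ch, R_c = np.polyfit(L_um, R_tot_kohm, 1)       # slope, intercept
print(f"channel: {r_ch:.1f} kOhm/um, contacts: {R_c:.1f} kOhm")
```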
Model selection and Bayesian inference for high-resolution seabed reflection inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2009-02-01
This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
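The Bayesian information criterion used for the layer-number selection is a one-line formula; a minimal sketch (the likelihoods and parameter counts below are invented for illustration):

```python
import numpy as np

def bic(log_likelihood, n_params, n_data):
    """Bayesian information criterion: lower is better; the k*ln(n)
    term penalizes adding sediment layers the data do not require."""
    return n_params * np.log(n_data) - 2.0 * log_likelihood

# Toy comparison: a 4th layer barely improves the fit over 3 layers
print(bic(-1200.0, 3 * 4, 500))   # 3 layers, 4 geoacoustic params each
print(bic(-1198.0, 4 * 4, 500))   # 4 layers: higher BIC, rejected
```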
NASA Astrophysics Data System (ADS)
Dasgupta, Arunima; Sastry, K. L. N.; Dhinwa, P. S.; Rathore, V. S.; Nathawat, M. S.
2013-08-01
Desertification risk assessment is important in order to take proper measures for its prevention. The present research aims to identify the areas at risk of desertification, along with the severity of the risk in terms of degradation of natural parameters. An integrated model with fuzzy membership analysis, a fuzzy rule-based inference system and geospatial techniques was adopted, including five specific natural parameters, namely slope, soil pH, soil depth, soil texture and NDVI. Individual parameters were classified according to their deviation from the mean. The membership of each individual value in a certain class was derived using the normal probability density function of that class. Thus, if a single class of a single parameter has mean μ and standard deviation σ, values falling beyond μ + 2σ and μ - 2σ do not represent that class but a transitional zone between two subsequent classes. These are the most important areas in terms of degradation, as they have the lowest probability of being in a certain class and hence the highest probability of being extended into the next class or narrowed into the previous one. Consequently, these are the values that can be most easily altered under exogenic influences, and hence they are identified as risk areas. The overall desertification risk is derived by incorporating the different risk severities of each parameter using a fuzzy rule-based inference system in a GIS environment. Multicriteria-based geostatistics are applied to locate the areas under different severities of desertification risk. The study revealed that, in Kota, various anthropogenic pressures are accelerating land deterioration, coupled with natural erosive forces. The four major sources of desertification in Kota are gully and ravine erosion, inappropriate mining practices, growing urbanization and random deforestation.
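A minimal sketch of the normal-pdf class membership and the μ ± 2σ transition-zone flag described above (assuming SciPy; variable names and example values are illustrative):

```python
import numpy as np
from scipy.stats import norm

def transition_risk(values, mu, sigma, k=2.0):
    """Membership of each value in a class N(mu, sigma), normalized to
    1 at the mean, plus a flag for values beyond mu +/- k*sigma --
    the transitional zones the study treats as most at risk of
    sliding into a neighboring class."""
    membership = norm.pdf(values, mu, sigma) / norm.pdf(mu, mu, sigma)
    at_risk = np.abs(values - mu) > k * sigma
    return membership, at_risk

ndvi = np.array([0.42, 0.55, 0.31, 0.70])
m, risk = transition_risk(ndvi, mu=0.5, sigma=0.08)
print(np.round(m, 3), risk)   # low membership pairs with risk flags
```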
Impact of various operating modes on performance and emission parameters of small heat source
NASA Astrophysics Data System (ADS)
Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef
2016-06-01
This work deals with the measurement of the performance and emission parameters of a small heat source for biomass combustion in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in the air supply, and a method for controlling the performance and emission parameters of heat sources burning woody biomass. The work describes the main factors that affect the combustion process and analyzes the emission measurements at the heat source. The experimental results give the performance and emission parameters for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.
Climate Change Studies over Bangalore using Multi-source Remote Sensing Data and GIS
NASA Astrophysics Data System (ADS)
B, S.; Gouda, K. C.; Laxmikantha, B. P.; Bhat, N.
2014-12-01
Urbanization is a form of metropolitan growth that is a response to often bewildering sets of economic, social, and political forces and to the physical geography of an area. Causes of sprawl include population growth, the economy, and patterns of infrastructure initiatives, such as the construction of roads and the provision of publicly funded infrastructure that encourages development. The direct implication of such urban sprawl is the change in land use and land cover of the region. In this study, long-term climate data from multiple sources, such as NCEP reanalysis, IMD observations and various satellite-derived products from MAIRS, IMD, ERSL and TRMM, are considered and analyzed using the developed algorithms for a better understanding of the variability of climate parameters over Bangalore. These products are further mathematically analyzed to arrive at the desired results by extracting land surface temperature (LST), potential evapotranspiration (PET), rainfall, humidity, etc. Various satellite products derived from NASA (National Aeronautics and Space Administration), Indian meteorological satellites and global satellites are helpful in the large-scale study of urban issues at global and regional scales. Climate change is commonly analyzed using either single-source data, such as temperature or rainfall from IMD (Indian Meteorological Department), or combined data products, as in the case of the MAIRS (Monsoon Asia Integrated Regional Study) program, to obtain rainfall at the regional scale. Finally, all of the above parameters are normalized and analyzed with the help of various available open-source software packages for pre- and post-processing to obtain the desired results. A sample analysis, i.e., the interannual variability of annually averaged temperature over Bangalore, is presented in Figure 1, which clearly shows a rising temperature trend (0.06 °C/year). The land use and land cover (LULC) analysis over Bangalore and daylight hours from satellite-derived products are also analyzed, and the correlation of climate parameters with LULC is presented.
NASA Astrophysics Data System (ADS)
Moruzzi, G.; Murphy, R. J.; Lees, R. M.; Predoi-Cross, A.; Billinghurst, B. E.
2010-09-01
The Fourier transform spectrum of the ? isotopologue of methanol has been recorded in the 120-350 cm-1 far-infrared region at a resolution of 0.00096 cm-1 using synchrotron source radiation at the Canadian Light Source. The study, motivated by astrophysical applications, is aimed at generating a sufficiently accurate set of energy level term values for the ground vibrational state to allow prediction of the centres of the quadrupole hyperfine multiplets for astronomically observable sub-millimetre transitions to within an uncertainty of a few MHz. To expedite transition identification, a new function was added to the Ritz program in which predicted spectral line positions were generated by an adjustable interpolation between the known assignments for the ? and ? isotopologues. By displaying the predictions along with the experimental spectrum on the computer monitor and adjusting the predictions to match observed features, rapid assignment of numerous ? sub-bands was possible. The least squares function of the Ritz program was then used to generate term values for the identified levels. For each torsion-K-rotation substate, the term values were fitted to a Taylor-series expansion in powers of J(J + 1) to determine the substate origin energy and effective B-value. In this first phase of the study we did not attempt a full global fit to the assigned transitions, but instead fitted the sub-band J-independent origins to a restricted Hamiltonian containing the principal torsional and K-dependent terms. These included structural and torsional potential parameters plus quartic distortional and torsion-rotation interaction terms.
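A minimal sketch of the Taylor-series fit of term values in powers of J(J+1) described above (synthetic levels and constants; the actual analysis fits each torsion-K-rotation substate separately):

```python
import numpy as np

def fit_substate(J, E):
    """Fit term values to a Taylor series in x = J(J+1):
    E = E0 + B*x + D*x^2, returning the substate origin E0, the
    effective B-value and the small quadratic coefficient D."""
    x = J * (J + 1.0)
    coeffs = np.polyfit(x, E, 2)        # returns [D, B, E0]
    return coeffs[::-1]                 # E0, B, D

# Synthetic substate levels in cm^-1 (illustrative constants)
J = np.arange(1, 15, dtype=float)
E = 128.0 + 0.81 * J * (J + 1) - 1.1e-6 * (J * (J + 1)) ** 2
E0, B, D = fit_substate(J, E)
print(f"E0={E0:.3f} cm-1, B={B:.5f} cm-1, D={D:.2e} cm-1")
```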
Real-time Forensic Disaster Analysis
NASA Astrophysics Data System (ADS)
Wenzel, F.; Daniell, J.; Khazai, B.; Mühr, B.; Kunz-Plapp, T.; Markus, M.; Vervaeck, A.
2012-04-01
The Center for Disaster Management and Risk Reduction Technology (CEDIM, www.cedim.de) - an interdisciplinary research center founded by the German Research Centre for Geoscience (GFZ) and Karlsruhe Institute of Technology (KIT) - has embarked on a new style of disaster research known as Forensic Disaster Analysis. The notion was coined by the Integrated Research on Disaster Risk initiative (IRDR, www.irdrinternational.org) launched by ICSU in 2010. It has been defined as an approach to studying natural disasters that aims at uncovering the root causes of disasters through in-depth investigations that go beyond the reconnaissance reports and case studies typically conducted after disasters. In adopting this comprehensive understanding of disasters, CEDIM adds a real-time component to the assessment and evaluation process. By comprehensive we mean that most if not all relevant aspects of disasters are considered and jointly analysed. This includes the impact (on people, the economy, and infrastructure), comparisons with recent historic events, social vulnerability, reconstruction, and long-term impacts on livelihoods. The forensic disaster analysis research mode is thus best characterized as "event-based research" through systematic investigation of critical issues arising after a disaster across various inter-related areas. The forensic approach requires (a) availability of global databases on previous earthquake losses, socio-economic parameters, building stock information, etc.; (b) leveraging platforms such as the EERI clearinghouse, ReliefWeb, and the many local and international sources where information is organized; and (c) rapid access to critical information (e.g., crowd-sourcing techniques) to improve our understanding of the complex dynamics of disasters. The main scientific questions being addressed are: What are the critical factors that control loss of life, damage to infrastructure, and economic loss? What are the critical interactions between the hazard, socio-economic systems and technological systems? What were the protective measures, and to what extent did they work? Can we predict patterns of losses and socio-economic implications of future extreme events from simple parameters: hazard parameters, historic evidence, socio-economic conditions? Can we predict implications for reconstruction from the same simple parameters? The M7.2 Van Earthquake (Eastern Turkey) of 23 Oct. 2011 serves as an example of the forensic approach.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Green’s functions for a volume source in an elastic half-space
Zabolotskaya, Evgenia A.; Ilinskii, Yurii A.; Hay, Todd A.; Hamilton, Mark F.
2012-01-01
Green’s functions are derived for elastic waves generated by a volume source in a homogeneous isotropic half-space. The context is sources at shallow burial depths, for which surface (Rayleigh) and bulk waves, both longitudinal and transverse, can be generated with comparable magnitudes. Two approaches are followed. First, the Green’s function is expanded with respect to eigenmodes that correspond to Rayleigh waves. While bulk waves are thus ignored, this approximation is valid on the surface far from the source, where the Rayleigh wave modes dominate. The second approach employs an angular spectrum that accounts for the bulk waves and yields a solution that may be separated into two terms. One is associated with bulk waves, the other with Rayleigh waves. The latter is proved to be identical to the Green’s function obtained following the first approach. The Green’s function obtained via angular spectrum decomposition is analyzed numerically in the time domain for different burial depths and distances to the receiver, and for parameters relevant to seismo-acoustic detection of land mines and other buried objects. PMID:22423682
NASA Astrophysics Data System (ADS)
Malviya, Devesh; Borage, Mangesh Balkrishna; Tiwari, Sunil
2017-12-01
This paper investigates the application of Resonant Immittance Converters (RICs) as a current source for the current-fed symmetrical Capacitor-Diode Voltage Multiplier (CDVM), with the LCL-T Resonant Converter (RC) as an example. First, a detailed characterization of the current-fed symmetrical CDVM is carried out using repeated simulations, followed by normalization of the simulation results to derive closed-form curve-fit equations that predict the operating modes, output voltage, and ripple in terms of the operating parameters. RICs, owing to their ability to convert a voltage source into a current source, are a natural candidate for realizing the current source for the current-fed symmetrical CDVM. Detailed analysis, optimization, and design of the LCL-T RC with CDVM are performed in this paper, and a step-by-step procedure for the design of the CDVM and the converter is proposed. A 5-stage prototype symmetrical CDVM driven by an LCL-T RC to produce a 2.5 kV, 50 mA dc output is designed, built, and tested to validate the findings of the analysis and simulation.
A study on the seismic source parameters for earthquakes occurring in the southern Korean Peninsula
NASA Astrophysics Data System (ADS)
Rhee, H. M.; Sheen, D. H.
2015-12-01
We investigated the characteristics of the seismic source parameters of the southern part of the Korean Peninsula for 599 events with ML≥1.7 from 2001 to 2014. A large number of data were carefully selected by visual inspection in the time and frequency domains. The data set consists of 5,093 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. The corner frequency, stress drop, and moment magnitude of each event were measured using the modified method of Jo and Baag (2001), based on the methods of Snoke (1987) and Andrews (1986). We found that this method could improve the stability of the estimation of source parameters from the S-wave displacement spectrum through an iterative process. We then compared the source parameters with those obtained in previous studies and investigated the source scaling relationship and the regional variations of source parameters in the southern Korean Peninsula.
NASA Astrophysics Data System (ADS)
Tofelde, Stefanie; Sachse, Dirk; Schildgen, Taylor; Strecker, Manfred R.
2015-04-01
The burial of organic matter in marine sediments represents the main long-term sink for reduced carbon in the global carbon cycle, with the fluvial system being the predominant transport mechanism. Organic matter deposited in marine and continental sediments contains valuable information on ecological and climatic conditions, and organic proxy data are thus often used in paleoclimate research. To use sedimentary records to investigate past environmental conditions in the terrestrial realm, the processes dictating the transport of organic matter, including spatial and temporal resolution as well as the influence of climatic and tectonic processes, have to be understood. In this study, we test whether a lipid-biomarker-based approach can be used to trace present-day organic matter sources in a fluvial watershed draining two intermontane basins in the southern-central Andes of NW Argentina, a tectonically active region with pronounced topographic, rainfall, and vegetation gradients. We investigated the distribution of long-chain leaf-wax n-alkanes, a terrestrial plant biomarker (and as such representative of terrestrially sourced carbon), in river sediments and coarse particulate organic matter (CPOM) along two altitudinal and hydrological gradients. We used n-alkane abundances and their stable carbon and hydrogen isotopic values as three independent parameters for source discrimination. Additionally, we analyzed the control of environmental parameters on the isotopic signatures of leaf-wax n-alkanes. The general pattern of n-alkane distribution in river sediment and CPOM samples in our study area suggests that vascular plants are the major source of riverine organic matter. The stable carbon isotopic composition of nC29 alkanes suggests a nearly exclusive input of C3 vegetation. Although C4 plants are present in the lower catchment areas, their total percentage is too low to have a detectable influence on the carbon isotopic composition of river sediment and CPOM samples. Considering environmental parameters, nC29 alkane δ13C values are significantly correlated with mean annual rainfall in the respective catchment area, with less negative δ13C values in drier areas (r = -0.63, p < 0.01). The variability in the stable hydrogen isotopic composition (δD) of nC29 alkanes is determined mostly by the δD value of the source water and by aridity. We find that the apparent fractionation (ε_app), defined as the difference in hydrogen isotopic composition between plant source waters and synthesized leaf-wax n-alkanes, is significantly correlated with aridity (r = -0.65, p < 0.005), with a smaller apparent fractionation in drier areas, as well as with mean annual rainfall (r = -0.59, p < 0.01), relative humidity (r = -0.56, p < 0.02), and actual evapotranspiration (r = -0.53, p < 0.05). Our data indicate that vascular plants are the major source of riverine organic matter, with their stable carbon and hydrogen isotopic compositions influenced by climatic parameters. Thus, on spatial scales covering large gradients in environmental parameters, the analysis of leaf-wax n-alkanes can be used for organic matter source assessment in orogenic settings.
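As a minimal illustration of the kind of correlation analysis reported above (e.g., nC29 δ13C against mean annual rainfall), the following sketch uses scipy; all numbers are hypothetical stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical catchment values standing in for the study's data:
# mean annual rainfall (mm) and nC29 alkane d13C (permil, VPDB).
rainfall = np.array([150, 220, 340, 480, 610, 750, 900])
d13c_nc29 = np.array([-28.1, -29.0, -30.2, -31.0, -31.8, -32.5, -33.1])

r, p = pearsonr(rainfall, d13c_nc29)
print(f"r = {r:.2f}, p = {p:.3g}")  # less negative d13C in drier areas -> negative r
```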
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once released, a DNAPL serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, forms a complex pattern of immobile DNAPL saturation, dissolves into the groundwater and forms a contaminant plume, and slowly depletes and biodegrades in the long term. In industrialized countries, the number of such contaminated sites is so high that a ranking from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities, and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution, and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
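The statistical point at the heart of this abstract, that neglecting the mutual dependence of the key parameter fields inflates the predicted uncertainty, can be illustrated with a deliberately simple Monte Carlo toy model. Everything below (the distributions, the correlation value, and the discharge proxy) is an illustrative assumption, not the paper's simulation framework.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Toy model: contaminant mass discharge ~ permeability * saturation.
# Physically, high-permeability zones tend to hold less trapped DNAPL,
# so log-permeability and saturation are assumed negatively correlated.
mu = np.array([0.0, 0.0])
rho = -0.7  # assumed correlation between log-k and the saturation latent variable
cov_dep = np.array([[1.0, rho], [rho, 1.0]])
cov_indep = np.eye(2)

def mass_discharge(cov):
    z = rng.multivariate_normal(mu, cov, size=n)
    log_k, zs = z[:, 0], z[:, 1]
    sat = 1.0 / (1.0 + np.exp(-zs))   # map latent variable to (0, 1) saturation
    return np.exp(log_k) * sat        # illustrative discharge proxy

for label, cov in [("dependent", cov_dep), ("independent", cov_indep)]:
    md = mass_discharge(cov)
    print(f"{label:12s} mean={md.mean():.3f}  std={md.std():.3f}")
# The independent case shows a larger spread: neglecting the interactions
# overestimates the output uncertainty, as argued in the text.
```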
Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.
Li, Le; Yip, Kevin Y
2016-12-15
Currently, most terms and term-term relationships in the Gene Ontology (GO) are defined manually, which creates cost, consistency, and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means that 1) they cannot use the information contained in the existing GO, 2) the way they integrate biological networks may not optimize accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, when trained with an old version of GO together with biological networks, Unicorn successfully re-discovered some terms and term-term relationships present only in a newer version of GO. Unicorn also inferred some novel terms that were not contained in GO but have biological meanings well supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin-layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE) OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
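A minimal sketch of the MTF estimation step: treat a profile across the reconstructed thin plane source as a line spread function (LSF) and take the normalized magnitude of its Fourier transform. The Gaussian profile and pixel size below are assumptions, not the GE DiscoveryST geometry.

```python
import numpy as np

# Synthetic line spread function (LSF): a Gaussian blur standing in for a
# profile taken perpendicular to the reconstructed TLC plane source.
pixel_mm = 2.0                      # assumed pixel spacing
x = np.arange(-32, 32) * pixel_mm
fwhm_mm = 6.0
sigma = fwhm_mm / 2.355
lsf = np.exp(-0.5 * (x / sigma) ** 2)

# MTF = normalized magnitude of the Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=pixel_mm)   # cycles/mm

for f, m in list(zip(freqs, mtf))[:6]:
    print(f"{f:.3f} cycles/mm -> MTF {m:.3f}")
```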
A novel approach for characterizing broad-band radio spectral energy distributions
NASA Astrophysics Data System (ADS)
Harvey, V. M.; Franzen, T.; Morgan, J.; Seymour, N.
2018-05-01
We present a new broad-band radio frequency catalogue across 0.12 GHz ≤ ν ≤ 20 GHz created by combining data from the Murchison Widefield Array Commissioning Survey, the Australia Telescope 20 GHz survey, and the literature. Our catalogue consists of 1285 sources limited by S20 GHz > 40 mJy at 5σ, and contains flux density measurements (or estimates) and uncertainties at 0.074, 0.080, 0.119, 0.150, 0.180, 0.408, 0.843, 1.4, 4.8, 8.6, and 20 GHz. We fit a second-order polynomial in log-log space to the spectral energy distributions of all these sources in order to characterize their broad-band emission. For the 994 sources that are well described by a linear or quadratic model we present a new diagnostic plot arranging sources by the linear and curvature terms. We demonstrate the advantages of such a plot over the traditional radio colour-colour diagram. We also present astrophysical descriptions of the sources found in each segment of this new parameter space and discuss the utility of these plots in the upcoming era of large area, deep, broad-band radio surveys.
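The second-order fit in log-log space can be sketched in a few lines of numpy; the flux densities below are invented for illustration, not catalogue values.

```python
import numpy as np

# Frequencies (GHz) and hypothetical flux densities (Jy) for one source.
nu = np.array([0.119, 0.150, 0.180, 0.408, 0.843, 1.4, 4.8, 8.6, 20.0])
s = np.array([8.2, 7.1, 6.4, 3.9, 2.6, 1.9, 0.9, 0.62, 0.35])

# log S = c0 + alpha * log nu + q * (log nu)^2
# alpha is the usual spectral index (linear term), q the curvature term.
q, alpha, c0 = np.polyfit(np.log10(nu), np.log10(s), deg=2)
print(f"spectral index alpha = {alpha:.2f}, curvature q = {q:.2f}")
# A source with q ~ 0 is a pure power law; q < 0 indicates a spectral peak
# or steepening, q > 0 an upturn. Arranging sources in the (alpha, q) plane
# gives the diagnostic plot described above.
```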
Anthropics of aluminum-26 decay and biological homochirality
NASA Astrophysics Data System (ADS)
Sandora, McCullen
2017-11-01
Results of a recent experiment restore feasibility to the hypothesis that biomolecular homochirality originated from beta decay. Coupled with hints that this process occurred extraterrestrially, this suggests aluminum-26 as the most likely source. If true, the mechanism's viability depends strongly on the half-life and energy of this decay. Demanding that the mechanism hold places new constraints on the anthropically allowed ranges of multiple parameters, including the electron mass, the difference between the up and down quark masses, the fine structure constant, and the electroweak scale. These new constraints on particle masses are tighter than those previously found. However, one edge of the allowed region is nearly degenerate with an existing bound, which, using what is termed here `the principle of noncoincident peril', is argued to be a strong indicator that the fine structure constant must be an environmental parameter in the multiverse.
NASA Astrophysics Data System (ADS)
Limbach, P.; Müller, T.; Skoda, R.
2015-12-01
Commonly, incompressible flow solvers with VOF-type cavitation models are applied for the simulation of cavitation in centrifugal pumps. Since the source/sink terms of the void-fraction transport equation are based on simplified bubble dynamics, empirical parameters may need to be adjusted to the particular pump operating point. In the present study a barotropic cavitation model, which is based solely on thermodynamic fluid properties and does not include any empirical parameters, is applied to a single flow channel of a pump impeller in combination with a time-explicit viscous compressible flow solver. The suction head curves (head drop) are compared to the results of an incompressible implicit standard industrial CFD tool and are predicted qualitatively correctly by the barotropic model.
Evaluation of stream water quality in Atlanta, Georgia, and the surrounding region (USA)
Peters, N.E.; Kandell, S.J.
1999-01-01
A water-quality index (WQI) was developed from historical data (1986-1995) for streams in the Atlanta Region and augmented with 'new' and generally more comprehensive biweekly data on four small urban streams, representing an industrial area, a developed medium-density residential area, and developing and developed low-density residential areas. Parameter WQIs were derived from percentile ranks of individual water-quality parameter values for each site by normalizing the constituent ranks for values from all sites in the area for a base period, i.e. 1990-1995. WQIs were developed primarily for nutrient-related parameters due to data availability. Site WQIs, computed by averaging the parameter WQIs, ranged from 0.2 (good quality) to 0.8 (poor quality) and increased downstream of known nutrient sources. The annual site WQI also decreased from 1986 to 1995 at most long-term monitoring sites. Annual site WQIs for individual parameters correlated with annual hydrological characteristics, particularly runoff, precipitation quantity, and water yield, reflecting the effect of dilution on parameter values. The WQIs of the four small urban streams were evaluated for the core nutrient-related parameters, parameters for specific dissolved trace metal concentrations and sediment characteristics, and a species diversity index for the macro-invertebrate taxa. The site WQI for the core nutrient-related parameters used in the retrospective analysis was, as expected, worst for the industrial area and best for the low-density residential areas. However, macro-invertebrate data indicate that although the species at the medium-density residential site were diverse, the taxa present were species tolerant of degraded water quality. Furthermore, although the species-diversity index indicates no substantial difference between the two low-density residential areas, the number of macro-invertebrates in the developing area was much lower than in the developed area, consistent with observations of recent sediment problems probably associated with construction in the basin. However, sediment parameters were similar for the two sites, suggesting that the routine biweekly measurements may not capture the short-term increases in sediment transport associated with rainstorms. The WQI technique is limited by the number and types of parameters included in it, the general conditions of those parameters over the range of conditions in area streams, and the effects of external factors, such as hydrology, and therefore should be used with caution.
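A minimal sketch of the percentile-rank construction described above, with hypothetical parameter names and base-period values; the convention (0 = good, approaching 1 = poor) follows the text.

```python
import numpy as np
from scipy.stats import percentileofscore

# Hypothetical base-period observations pooled over all sites (1990-1995)
# for two nutrient-related parameters; higher values = poorer quality.
base = {
    "total_N_mg_L": np.array([0.4, 0.6, 0.9, 1.2, 1.8, 2.5, 3.4, 4.8]),
    "total_P_mg_L": np.array([0.02, 0.04, 0.06, 0.10, 0.15, 0.22, 0.35, 0.60]),
}

def parameter_wqi(param, value):
    """Percentile rank of a site's value within the pooled base period, 0-1."""
    return percentileofscore(base[param], value) / 100.0

def site_wqi(site_values):
    """Site WQI = mean of the parameter WQIs, as in the text."""
    return np.mean([parameter_wqi(p, v) for p, v in site_values.items()])

urban_site = {"total_N_mg_L": 3.0, "total_P_mg_L": 0.30}
print(f"site WQI = {site_wqi(urban_site):.2f}")  # closer to 1 = poorer quality
```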
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul L. Wichlacz
2003-09-01
This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of source-term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then explore the joint a-posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and the propagation of uncertainty from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
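A minimal sketch of the global-exploration step under the stated assumptions: a generalized Brune-type spectral model with a frequency-independent attenuation term, an L2 misfit on the log spectrum, and scipy's basin-hopping. The spectral parameterization, t* value, and noise level are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(1)
f = np.logspace(-1, 1.5, 200)          # 0.1-31.6 Hz
t_star = 0.02                          # assumed travel time / Q (s)

def model(theta, f):
    log_omega0, log_fc, gamma = theta  # low-freq level, corner freq, HF decay
    return (log_omega0
            - np.log10(1.0 + (f / 10.0**log_fc) ** gamma)
            - np.pi * f * t_star / np.log(10.0))   # exp(-pi f t*) in log10

truth = np.array([2.0, np.log10(3.0), 2.0])
obs = model(truth, f) + 0.05 * rng.standard_normal(f.size)

def misfit(theta):                     # L2 norm on the log spectrum
    return np.sum((model(theta, f) - obs) ** 2)

result = basinhopping(misfit, x0=np.array([0.0, 0.0, 1.5]),
                      niter=200, seed=1)
print("best-fit (log10 Omega0, log10 fc, gamma):", result.x.round(3))
```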
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows separating source and path effects. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods for estimating the local seismic response, and it involves solving a strongly non-linear multiparametric problem. In this work, the local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to widen the range of explored solutions in a nonlinear search compared to conventional linear methods. Using velocity records of the VEOX Network collected from August 2007 to March 2009, the source, path, and site parameters corresponding to the S-wave amplitude spectra of the velocity records are estimated. The inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work by several authors.
NASA Astrophysics Data System (ADS)
Miller, Urszula; Grzelka, Agnieszka; Romanik, Elżbieta; Kuriata, Magdalena
2018-01-01
The operation of municipal management facilities is inseparable from the problem of emissions of malodorous compounds to the atmospheric air. Odor nuisance in this case is related to the chemical composition of the waste, sewage, and sludge, as well as to the activity of microorganisms whose metabolic products can be odorous compounds. A significant reduction of odorant emission from many sources can be achieved by optimizing process parameters and conditions. However, it is not always possible to limit the formation of odorants; in such cases it is best to use appropriate deodorizing methods. The choice of method is based on the physical parameters and emission intensity of the polluted gases and, where it can be determined, their composition. Among the solutions used in municipal management, physico-chemical methods such as sorption and oxidation can be distinguished. In cases where the emission source is not encapsulated, odor-masking techniques are used, which consist of spraying preparations that neutralize unpleasant odors. The paper presents the characteristics of selected methods of eliminating odor nuisance and an evaluation of their applicability in municipal management facilities.
Pre-seismic anomalies from optical satellite observations: a review
NASA Astrophysics Data System (ADS)
Jiao, Zhong-Hu; Zhao, Jing; Shan, Xinjian
2018-04-01
Detecting various anomalies using optical satellite data prior to strong earthquakes is key to understanding and forecasting earthquake activities because of its recognition of thermal-radiation-related phenomena in seismic preparation phases. Data from satellite observations serve as a powerful tool in monitoring earthquake preparation areas at a global scale and in a nearly real-time manner. Over the past several decades, many new different data sources have been utilized in this field, and progressive anomaly detection approaches have been developed. This paper reviews the progress and development of pre-seismic anomaly detection technology in this decade. First, precursor parameters, including parameters from the top of the atmosphere, in the atmosphere, and on the Earth's surface, are stated and discussed. Second, different anomaly detection methods, which are used to extract anomalous signals that probably indicate future seismic events, are presented. Finally, certain critical problems with the current research are highlighted, and new developing trends and perspectives for future work are discussed. The development of Earth observation satellites and anomaly detection algorithms can enrich available information sources, provide advanced tools for multilevel earthquake monitoring, and improve short- and medium-term forecasting, which play a large and growing role in pre-seismic anomaly detection research.
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominating error could be distinguished from these degraded reconstructions without any prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and in experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
Coherent molecular transistor: control through variation of the gate wave function.
Ernzerhof, Matthias
2014-03-21
In quantum interference transistors (QUITs), the current through the device is controlled by variation of the gate component of the wave function that interferes with the wave function component joining the source and the sink. Initially, mesoscopic QUITs have been studied and more recently, QUITs at the molecular scale have been proposed and implemented. Typically, in these devices the gate lead is subjected to externally adjustable physical parameters that permit interference control through modifications of the gate wave function. Here, we present an alternative model of a molecular QUIT in which the gate wave function is directly considered as a variable and the transistor operation is discussed in terms of this variable. This implies that we specify the gate current as well as the phase of the gate wave function component and calculate the resulting current through the source-sink channel. Thus, we extend on prior works that focus on the phase of the gate wave function component as a control parameter while having zero or certain discrete values of the current. We address a large class of systems, including finite graphene flakes, and obtain analytic solutions for how the gate wave function controls the transistor.
Sewage contamination in the upper Mississippi River as measured by the fecal sterol, coprostanol
Writer, J.H.; Leenheer, J.A.; Barber, L.B.; Amy, G.L.; Chapra, S.C.
1995-01-01
The molecular sewage indicator, coprostanol, was measured in bed sediments of the Mississippi River for the purpose of determining sewage contamination. Coprostanol is a non-ionic, non-polar, organic molecule that associates with sediments in surface waters, and concentrations of coprostanol in bed sediments provide an indication of long-term sewage loads. Because coprostanol concentrations are dependent on particle size and percent organic carbon, a ratio between coprostanol (sewage sources) and cholestanol + cholesterol (sewage and non-sewage sources) was used to remove the biases related to particle size and percent organic carbon. The dynamics of contaminant transport in the Upper Mississippi River are influenced by both hydrologic and geochemical parameters. A mass balance model incorporating environmental parameters such as river and tributary discharge, suspended sediment concentration, fraction of organic carbon, sedimentation rates, municipal discharges and coprostanol decay rates was developed that describes coprostanol concentrations and therefore, expected patterns of municipal sewage effects on the Upper Mississippi River. Comparison of the computed and the measured coprostanol concentrations provides insight into the complex hydrologic and geochemical processes of contaminant transport and the ability to link measured chemical concentrations with hydrologic characteristics of the Mississippi River.
Constraints on the extremely high-energy cosmic ray accelerators from classical electrodynamics
NASA Astrophysics Data System (ADS)
Aharonian, F. A.; Belyanin, A. A.; Derishev, E. V.; Kocharovsky, V. V.; Kocharovsky, Vl. V.
2002-07-01
We formulate the general requirements, set by classical electrodynamics, on the sources of extremely high-energy cosmic rays (EHECRs). It is shown that the parameters of EHECR accelerators are strongly limited not only by the particle confinement in large-scale magnetic fields or by the difference in electric potentials (generalized Hillas criterion) but also by the synchrotron radiation, the electro-bremsstrahlung, or the curvature radiation of accelerated particles. Optimization of these requirements in terms of an accelerator's size and magnetic field strength results in the ultimate lower limit to the overall source energy budget, which scales as the fifth power of attainable particle energy. Hard γ rays accompanying generation of EHECRs can be used to probe potential acceleration sites. We apply the results to several populations of astrophysical objects-potential EHECR sources-and discuss their ability to accelerate protons to 10^20 eV and beyond. The possibility of gain from ultrarelativistic bulk flows is addressed, with active galactic nuclei and gamma-ray bursts being the examples.
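The fifth-power scaling quoted above can be made plausible with a two-line heuristic (Gaussian units; coefficients and the precise radiation-limited B_max are omitted, so this is a schematic reading of the abstract, not the paper's full derivation): the magnetic energy content of the source bounds the budget, the Hillas criterion ties R to E, and radiation losses cap the field.

```latex
W \;\gtrsim\; \frac{B^2}{8\pi}\cdot\frac{4\pi}{3}R^3 \;=\; \frac{B^2 R^3}{6},
\qquad
R \;\gtrsim\; \frac{E}{eB}
\;\;\Longrightarrow\;\;
W \;\gtrsim\; \frac{E^3}{6\,e^3 B}.
```

Radiative losses (synchrotron or curvature) then impose B_max ∝ E^-2, so the minimal budget scales as W_min ∝ E^3/B_max ∝ E^5.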
NASA Technical Reports Server (NTRS)
Clapp, J. L.
1973-01-01
Research objectives during 1972-73 were to: (1) Ascertain the extent to which special aerial photography can be operationally used in monitoring water pollution parameters. (2) Ascertain the effectiveness of remote sensing in the investigation of nearshore mixing and coastal entrapment in large water bodies. (3) Develop an explicit relationship of the extent of the mixing zone in terms of the outfall, effluent and water body characteristics. (4) Develop and demonstrate the use of the remote sensing method as an effective legal implement through which administrative agencies and courts can not only investigate possible pollution sources but also legally prove the source of water pollution. (5) Evaluate the field potential of remote sensing techniques in monitoring algal blooms and aquatic macrophytes, and the use of these as indicators of lake eutrophication level. (6) Develop a remote sensing technique for the determination of the location and extent of hydrologically active source areas in a watershed.
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.
Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Huerta, E. A.; Gair, Jonathan R.
2009-04-01
We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth, and moment) for earthquakes well recorded on relatively dense seismic networks. For regions covered by sparse stations, however, it is challenging to achieve precise source parameters. In such cases, a moderate earthquake of ~M6 is usually recorded by only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh, and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and we explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.
NASA Astrophysics Data System (ADS)
Ozen, Murat; Guler, Murat
2014-02-01
Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross-sectional images of concrete samples. The samples used in the study were fabricated from six mix designs by varying the aggregate gradation, aggregate source, and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned to obtain cross-sectional images using a desktop flatbed scanner. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections. A procedure is also suggested for determining a suitable particle shape parameter to be used in the analysis of the aggregate size distribution within each cross section. Results of the analyses indicated that the optimum threshold, and hence the pixel distribution functions, may differ even among cross sections of the same concrete sample. In addition, the maximum Feret diameter is the most suitable shape parameter for estimating the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value for practitioners evaluating concrete in terms of the degree of segregation and the bounds of the mixture's gradation achieved during manufacturing.
Studies on the Extraction Region of the Type VI RF Driven H- Ion Source
NASA Astrophysics Data System (ADS)
McNeely, P.; Bandyopadhyay, M.; Franzen, P.; Heinemann, B.; Hu, C.; Kraus, W.; Riedl, R.; Speth, E.; Wilhelm, R.
2002-11-01
IPP Garching has spent several years developing an RF-driven H- ion source intended to be an alternative to the current ITER (International Thermonuclear Experimental Reactor) reference design ion source. An RF-driven source offers a number of advantages to ITER in terms of reduced costs and maintenance requirements. Although the RF-driven ion source has shown itself to be competitive with a standard arc filament ion source for positive ions, many questions still remain on the physics behind the production of the H- ion beam extracted from the source. With the improvements that have been implemented to the BATMAN (Bavarian Test Machine for Negative Ions) facility over the last two years, it is now possible to study both the extracted ion beam and the plasma in the vicinity of the extraction grid in greater detail. This paper shows the effect of changing the extraction and acceleration voltage on both the current and the shape of the beam as measured on the calorimeter some 1.5 m downstream from the source. The extraction voltage required to operate in the plasma limit is 3 kV. The perveance optimum for the extraction system was determined to be 2.2 × 10^-6 A/V^(3/2) and occurs at 2.7 kV extraction voltage. The horizontal and vertical beam half widths vary as a function of the extracted ion current, and the horizontal half width is generally smaller than the vertical. The effect of reducing the co-extracted electron current via plasma grid biasing on the extractable H- current and the beam profile is shown. In the case of a silver-contaminated plasma, it is possible to reduce the co-extracted electron current to 20% of the initial value by applying a bias of 12 V. When argon is present in the plasma, biasing is observed to have minimal effect on the beam half width, but in a pure hydrogen plasma the beam half width increases as the bias voltage increases. New Langmuir probe studies carried out parallel to the plasma grid (in the vicinity of the peak of the external magnetic filter field), and changes to source parameters as a function of power and argon addition, are reported. The behaviour of the electron density is different when the plasma is argon-seeded, showing a strong increase with RF power. The plasma potential is decreased by 2 V when argon is added to the plasma. The effect of unwanted silver, sputtered from the Faraday screen by Ar+ ions, on both the source performance and the plasma parameters is also presented. The silver dramatically downgraded source performance in terms of current density and produced an early saturation of current with applied RF power. Recently, a collaboration was begun with the Technical University of Augsburg to perform spectroscopic measurements on the Type VI ion source. The final results of this analysis are not yet ready, but some interesting initial observations on the gas temperature, dissociation degree, and impurity ions will be presented.
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate or accidental releases of hazardous substances, odour fluctuations, or exceedance of material flammability levels. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well-documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of predictions within a factor of two of the observations. For large time intervals, an exponential correction term was introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals, giving 100% of predictions within a factor of two of the observations.
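The "factor of two of observations" score (often called FAC2) used above is straightforward to compute; the prediction/observation pairs below are illustrative.

```python
import numpy as np

def fac2(predicted, observed):
    """Fraction of pairs with 0.5 <= predicted/observed <= 2.0 (FAC2)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    ratio = predicted / observed
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Hypothetical maximum-exposure predictions vs. wind-tunnel observations.
pred = np.array([1.2, 0.8, 2.5, 0.9, 1.6, 3.0, 0.4])
obs = np.array([1.0, 1.0, 2.0, 1.5, 1.1, 2.0, 1.0])
print(f"FAC2 = {fac2(pred, obs):.2f}")
```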
The concentration-discharge slope as a tool for water quality management.
Bieroza, M Z; Heathwaite, A L; Bechmann, M; Kyllmar, K; Jordan, P
2018-07-15
Recent technological breakthroughs of optical sensors and analysers have enabled matching the water quality measurement interval to the time scales of stream flow changes and led to an improved understanding of spatially and temporally heterogeneous sources and delivery pathways for many solutes and particulates. This new ability to match the chemograph with the hydrograph has promoted renewed interest in the concentration-discharge (c-q) relationship and its value in characterizing catchment storage, time lags and legacy effects for both weathering products and anthropogenic pollutants. In this paper we evaluated the stream c-q relationships for a number of water quality determinands (phosphorus, suspended sediments, nitrogen) in intensively managed agricultural catchments based on both high-frequency (sub-hourly) and long-term low-frequency (fortnightly-monthly) routine monitoring data. We used resampled high-frequency data to test the uncertainty in water quality parameters (e.g. mean, 95th percentile and load) derived from low-frequency sub-datasets. We showed that the uncertainty in water quality parameters increases with reduced sampling frequency as a function of the c-q slope. We also showed that different sources and delivery pathways control c-q relationship for different solutes and particulates. Secondly, we evaluated the variation in c-q slopes derived from the long-term low-frequency data for different determinands and catchments and showed strong chemostatic behaviour for phosphorus and nitrogen due to saturation and agricultural legacy effects. The c-q slope analysis can provide an effective tool to evaluate the current monitoring networks and the effectiveness of water management interventions. This research highlights how improved understanding of solute and particulate dynamics obtained with optical sensors and analysers can be used to understand patterns in long-term water quality time series, reduce the uncertainty in the monitoring data and to manage eutrophication in agricultural catchments. Copyright © 2018 Elsevier B.V. All rights reserved.
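A minimal sketch of how a c-q slope is estimated, as the exponent b in C = aQ^b obtained by linear regression in log-log space, together with a subsampling step mimicking low-frequency monitoring; all data are synthetic.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)

# Synthetic paired discharge (Q, m3/s) and concentration (C, mg/L) data
# following C = a * Q**b with noise; b near 0 indicates chemostatic behaviour.
q = np.exp(rng.normal(0.0, 1.0, 500))
b_true, a_true = -0.1, 0.8                     # mildly chemostatic example
c = a_true * q**b_true * np.exp(rng.normal(0.0, 0.2, 500))

fit = linregress(np.log(q), np.log(c))
print(f"high-frequency c-q slope b = {fit.slope:.3f} +/- {fit.stderr:.3f}")

# Subsample to mimic fortnightly sampling and compare slope uncertainty.
idx = rng.choice(q.size, size=25, replace=False)
fit_lo = linregress(np.log(q[idx]), np.log(c[idx]))
print(f"low-frequency c-q slope b = {fit_lo.slope:.3f} +/- {fit_lo.stderr:.3f}")
```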
Active System for Electromagnetic Perturbation Monitoring in Vehicles
NASA Astrophysics Data System (ADS)
Matoi, Adrian Marian; Helerea, Elena
Nowadays the electromagnetic environment is rapidly expanding in the frequency domain, and wireless services are extending their coverage areas. European electromagnetic compatibility regulations specify limit values for emissions, as well as procedures for determining the susceptibility of a vehicle. The approval procedure for a series of cars is based on determining the emission/immunity levels of a few vehicles picked randomly from the series, assuming that the entire series is compliant. During immunity assessment, the vehicle is not subjected to real perturbation sources but exposed to electric/magnetic fields generated by laboratory equipment. Since the current approach only partially reflects the real situation regarding perturbation sources, this paper proposes an active system for determining the electromagnetic parameters of a vehicle's environment that implements a logical measurement scheme satisfying the imposed requirements. This new and original solution is useful for the EMC assessment of hybrid and electric vehicles.
NASA Astrophysics Data System (ADS)
Taleb, M.; Cherkaoui, M.; Hbib, M.
2018-05-01
Renewable energy sources are increasingly affecting the power quality of grids in terms of frequency and voltage stability, owing to their intermittency and limited forecasting accuracy. Among these sources, wind energy conversion systems (WECS) have received great interest, especially the configuration with a Doubly Fed Induction Generator (DFIG). However, WECS are strongly nonlinear, which makes their control difficult with classical approaches such as a PI controller. In this paper, we deepen the study of the PI controller used in the active and reactive power control of this kind of WECS. Particle Swarm Optimization (PSO) is proposed to improve its dynamic performance and its robustness against parameter variations. This work highlights the performance of the PSO-optimized PI controller against a classical PI tuned with a pole-compensation strategy. Simulations are carried out in the MATLAB-SIMULINK environment.
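As a flavor of the approach, here is a minimal PSO sketch tuning PI gains on a stand-in first-order plant with an ITAE cost. The plant, the cost function, and the PSO hyperparameters are illustrative assumptions, not the paper's DFIG model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plant standing in for the power loop: first-order lag
# G(s) = K / (tau*s + 1), simulated with forward Euler.
K_plant, tau, dt, t_end = 2.0, 0.05, 1e-3, 0.5
n_steps = int(t_end / dt)

def itae(gains):
    """Integral of time-weighted absolute error for a unit step, PI control."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(n_steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (K_plant * u - y) / tau
        cost += (k * dt) * abs(e) * dt
    return cost

# Minimal particle swarm over (kp, ki).
n_part, n_iter, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 200.0])
x = rng.uniform(lo, hi, (n_part, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([itae(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([itae(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"PSO-tuned gains: kp = {gbest[0]:.2f}, ki = {gbest[1]:.2f}")
```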
Bragg-Fresnel optics: New field of applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snigirev, A.
Bragg-Fresnel optics shows excellent compatibility with third-generation synchrotron radiation sources such as the ESRF and is capable of producing monochromatic submicron focal spots with 10^8-10^9 photons/sec in an energy bandwidth of 10^-4-10^-6 and in a photon energy range between 2-100 keV. New types of Bragg-Fresnel lenses, such as modified, ion-implanted, bent, and acoustically modulated lenses, were tested. Microprobe techniques like microdiffraction and microfluorescence based on Bragg-Fresnel optics were realised at the ESRF beamlines. The excellent parameters of the X-ray beam at the ESRF, in terms of low emittance and quite small angular source size, allow Bragg-Fresnel optics to occupy new fields of applications such as high-resolution diffraction, holography, interferometry and phase contrast imaging.
Recycling and source reduction for long duration space habitation
NASA Technical Reports Server (NTRS)
Hightower, T. M.
1992-01-01
A direct mathematical approach has been established for characterizing the performance of closed-loop life support systems. The understanding that this approach gives clearly illustrates the options available for increasing the performance of a life support system by changing various parameters. New terms are defined and utilized, such as Segregation Factor, Resource Recovery Efficiency, Overall Reclamation Efficiency, Resupply Reduction Factor, and Life Support Extension Factor. The effects of increases in expendable system supplies required by increased life support system complexity are shown. Minimizing resupply through increased recycling and source reduction is illustrated, and the effects of recycling upon resupply launch cost are also shown. Finally, material balance analyses have been performed based on quantity and composition data for both supplies and wastes, to illustrate the use of this approach by comparing ten different closed-loop life support system cases.
Keuschnigg, Peter; Kellner, Daniel; Fritscher, Karl; Zechner, Andrea; Mayer, Ulrich; Huber, Philipp; Sedlmayer, Felix; Deutschmann, Heinz; Steininger, Philipp
2017-01-01
Couch-mounted cone-beam computed tomography (CBCT) imaging devices with independently rotatable x-ray source and flat-panel detector arms for acquisitions of arbitrary regions of interest (ROI) have recently been introduced in image-guided radiotherapy (IGRT). This work analyzes mechanical limitations and gravity-induced effects influencing the geometric accuracy of images acquired with arbitrary angular constellations of source and detector in nonisocentric trajectories, which is considered essential for IGRT. In order to compensate for geometric inaccuracies of this modality, a 9-degrees-of-freedom (9-DOF) flexmap correction approach is presented, focusing especially on the separability of the flexmap parameters of the independently movable components of the device. The 9-DOF comprise a 3D translation of the x-ray source focal spot, a 3D translation of the flat-panel's active area center, and three Euler rotations of the detector's row and column vectors. The flexmap parameters are expressed with respect to the angular position of each of the device's arms. Estimation of the parameters is performed using a CT-based structure set of a table-mounted, cylindrical ball-bearing phantom. Digitally reconstructed radiograph (DRR) patches are derived from the structure set, followed by local 2D in-plane registration and subsequent 3D transform estimation by nonlinear regression with outlier detection. Flexmap parameter evaluations for the factory-calibrated system in clockwise and counter-clockwise rotation directions have shown only minor differences for the overall set of flexmap parameters. High short-term reproducibility of the flexmap parameters has been confirmed by experiments over 10 acquisitions for both directions, resulting in standard deviation values of ≤0.183 mm for translational components and ≤0.0219 deg for rotational components, respectively. A comparison of isocentric and nonisocentric flexmap evaluations showed that the mean differences of the parameter curves reside within their standard deviations, confirming the ability of the proposed calibration method to handle both types of trajectories equally well. Reconstructions of 0.1 mm and 0.25 mm steel wires showed similar results for the isocentric and nonisocentric cases. The full-width at half maximum (FWHM) measure indicates an average improvement of the calibrated reconstruction of 85% over the uncalibrated reconstruction. The contrast of the point spread function (PSF) improved by 310% on average over all experiments. Moreover, a reduced amount of artifacts visible in nonisocentric reconstructions of a head phantom and a line-pair phantom has been achieved by separately applying the 9-DOF flexmap to the geometry described by the independently moving source arm and detector arm. Using a 9-DOF flexmap approach for correcting the geometry of projections acquired with a device capable of independent movements of the source and panel arms has been shown to be essential for IGRT use cases such as CBCT reconstruction and 2D/3D registration tasks. The proposed pipeline is able to create flexmap curves which are easy to interpret and useful for mechanical description of the device, repetitive quality assurance, and system-level preventive maintenance. Application of the flexmap has shown improvements of image quality for planar and volumetric imaging, which is crucial for patient alignment accuracy. © 2016 American Association of Physicists in Medicine.
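A minimal sketch of how a 9-DOF flexmap sample, parameterized as in the text (3D focal-spot shift, 3D detector-center shift, and three Euler rotations of the detector row/column vectors), might be applied to a nominal geometry at one arm angle. Names and numbers are illustrative, not the device's calibration data.

```python
import numpy as np

def euler_rotation(rx, ry, rz):
    """Rotation matrix from Euler angles (rad), composed as Rz @ Ry @ Rx."""
    cx, sx, cy, sy, cz, sz = (np.cos(rx), np.sin(rx), np.cos(ry),
                              np.sin(ry), np.cos(rz), np.sin(rz))
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def corrected_geometry(src_nom, det_nom, row_nom, col_nom, flexmap):
    """Apply a 9-DOF flexmap sample (at one arm angle) to nominal geometry.

    flexmap: (dsrc[3], ddet[3], (rx, ry, rz)) -- source focal-spot shift,
    detector-center shift, and detector row/column Euler rotations.
    """
    dsrc, ddet, angles = flexmap
    R = euler_rotation(*angles)
    src = src_nom + np.asarray(dsrc)
    det = det_nom + np.asarray(ddet)
    return src, det, R @ row_nom, R @ col_nom

# Nominal geometry (mm) at one gantry angle -- illustrative numbers only.
src0 = np.array([0.0, 1000.0, 0.0])          # focal spot
det0 = np.array([0.0, -500.0, 0.0])          # detector center
row0, col0 = np.array([1.0, 0, 0]), np.array([0, 0, 1.0])

flex = ([0.1, -0.05, 0.02], [-0.15, 0.1, 0.05], (1e-4, -2e-4, 5e-5))
src, det, row, col = corrected_geometry(src0, det0, row0, col0, flex)
print("corrected source:", src)
print("corrected row vector:", row.round(6))
```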
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, often jointly considering geodetic and seismic data. Bayesian inference is increasingly being used to estimate posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high, and estimation codes are rarely made available along with the published results. Even when the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model in the following source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
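As a flavor of the forward problems BEAT optimizes, here is a minimal sketch of the standard Mogi point-source surface displacements in an elastic half-space; this is the textbook first-order formula, not BEAT's own implementation or API.

```python
import numpy as np

def mogi_surface_displacement(x, y, xs, ys, depth, dV, nu=0.25):
    """Surface displacements of a Mogi point source in an elastic half-space.

    x, y: observation coordinates; (xs, ys, depth): source position;
    dV: volume change; nu: Poisson's ratio. First-order point-source formulas.
    """
    dx, dy = x - xs, y - ys
    r2 = dx**2 + dy**2
    R3 = (r2 + depth**2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    uz = c * depth / R3                  # uplift
    ur = c * np.sqrt(r2) / R3            # radial displacement
    theta = np.arctan2(dy, dx)
    return ur * np.cos(theta), ur * np.sin(theta), uz

# Illustrative source: 1e6 m^3 inflation at 3 km depth, profile along x.
x = np.linspace(-10e3, 10e3, 5)
ux, uy, uz = mogi_surface_displacement(x, np.zeros_like(x),
                                       0.0, 0.0, 3e3, 1e6)
print("uplift (m):", uz.round(4))
```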
NASA Astrophysics Data System (ADS)
Zhu, Yiting; Narendran, Nadarajah; Tan, Jianchuan; Mou, Xi
2014-09-01
The organic light-emitting diode (OLED) has demonstrated its novelty in displays and certain lighting applications. Similar to white light-emitting diode (LED) technology, it also holds the promise of saving energy. Even though the luminous efficacy values of OLED products have been steadily growing, their longevity is still not well understood. Furthermore, there is currently no industry standard for short- and long-term photometric and colorimetric testing of OLEDs. Each OLED manufacturer tests its OLED panels under different electrical and thermal conditions using different measurement methods. In this study, an imaging-based photometric and colorimetric measurement method for OLED panels was investigated. Unlike an LED, which can be considered a point source, the OLED is a large-area source. Therefore, for an area source to satisfy lighting application needs, it is important that it maintain a uniform light level and color properties across the emitting surface of the panel over a long period. This study intended to develop a measurement procedure that can be used to test long-term photometric and colorimetric properties of OLED panels. The objective was to better understand how test parameters such as drive current or luminance and temperature affect the degradation rate. In addition, this study investigated whether data interpolation could allow for determination of degradation and lifetime, L70, at application conditions based on the degradation rates measured at different operating conditions.
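One way to implement the interpolation idea described above is to fit a parametric decay model to measured luminance-maintenance data and extrapolate to the 70% level. A minimal sketch, assuming an exponential decay model of the kind used for LED lumen maintenance (TM-21-style) and invented data points; whether the same form holds for OLED panels is exactly the sort of question the study addresses:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical relative-luminance maintenance data vs. operating hours.
t = np.array([0, 500, 1000, 2000, 4000], dtype=float)
L = np.array([1.00, 0.97, 0.94, 0.88, 0.78])

def decay(t, alpha):
    return np.exp(-alpha * t)          # exponential (TM-21-style) model

(alpha,), _ = curve_fit(decay, t, L, p0=[1e-4])
L70 = np.log(1 / 0.70) / alpha         # time at which luminance hits 70%
print(f"Estimated L70: {L70:.0f} h")
```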
Auroral Proper Motion in the Era of AMISR and EMCCD
NASA Astrophysics Data System (ADS)
Semeter, J. L.
2016-12-01
The term "aurora" is a catch-all for luminosity produced by the deposition of magnetospheric energy in the outer atmosphere. The use of this single phenomenological term occludes the rich variety of sources and mechanisms responsible for the excitation. Among these are electron thermal conduction (SAR arcs), electrostatic potential fields ("inverted-V" aurora), wave-particle resonance (Alfvenic aurora, pulsating aurora), pitch-angle scattering (diffuse aurora), and direct injection of plasma sheet particles (PBIs, substorms). Much information about auroral energization has been derived from the energy spectrum of primary particles, which may be measured directly with an in situ detector or indirectly via analysis of the atmospheric response (e.g., auroral spectroscopy, tomography, ionization). Somewhat less emphasized has been the information in the B_perp dimension. Specifically, the scale-dependent motions of auroral forms in the rest frame of the ambient plasma provide a means of partitioning both the source region and the source mechanism. These results, in turn, affect ionospheric state parameters that control the M-I coupling process-most notably, the degree of structure imparted to the conductance field. This paper describes recent results enabled by the advent of two technologies: high frame-rate, high-resolution imaging detectors, and electronically steerable incoherent scatter radar (the AMISR systems). In addition to contributing to our understanding of the aurora, these results may be used in predictive models of multi-scale energy transfer within the disturbed geospace system.
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
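The dominance of the exponent n at long times follows from its position in the power law: an error in n is amplified by the magnitude of t, while errors in the multiplicative parameters are not. A small numerical illustration, using an invented power-law compliance D(t) = D0 + D1*t^n rather than the actual T300/5208 properties:

```python
import numpy as np

D0, D1, n = 0.5, 0.05, 0.2      # illustrative Schapery-type coefficients
t_long = 1.0e6                  # a long prediction time, arbitrary units

D_true = D0 + D1 * t_long**n
for err in (0.01, 0.05, 0.10):  # equal fractional errors in n and in D1
    via_n  = D0 + D1 * t_long**(n * (1 + err))   # error entering the exponent
    via_D1 = D0 + D1 * (1 + err) * t_long**n     # error entering a multiplier
    print(f"{err:.0%} error: via n -> {via_n/D_true - 1:+.1%}, "
          f"via D1 -> {via_D1/D_true - 1:+.1%}")
```

Because t^n grows with t, the error introduced through n widens without bound as the prediction horizon lengthens, whereas the D1 error stays proportional to the creep term.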
Laser induced heat source distribution in bio-tissues
NASA Astrophysics Data System (ADS)
Li, Xiaoxia; Fan, Shifu; Zhao, Youquan
2006-09-01
During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated, as it constitutes the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or predominant scattering (diffusion approximation). In specific conditions, however, these solutions induce different errors. The commonly used Monte Carlo simulation (MCS) is more universal and exact but has difficulty dealing with dynamic parameters and fast simulation. Its area partition pattern is also limiting when applying the finite element method (FEM) to solve the bio-heat transfer partial differential equation. Laser heat source plots of the above methods differ considerably from MCS. To address this problem, by analyzing the effects of different optical interactions such as reflection, scattering, and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation, showing that the model provides more reasonable predictions of the heat source term distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.
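The two substitutions described above can be sketched in a few lines. This is a schematic one-dimensional illustration with generic (invented) soft-tissue optical properties, not the paper's full beam-broadening model:

```python
import numpy as np

# Generic near-IR soft-tissue optical properties (illustrative values).
mu_a  = 0.3              # absorption coefficient [1/cm]
mu_s  = 100.0            # scattering coefficient [1/cm]
g     = 0.9              # scattering anisotropy factor
mu_sp = mu_s * (1 - g)   # reduced scattering coefficient (1st substitution)

# Effective attenuation coefficient of the diffusion approximation
# (2nd substitution), valid when scattering dominates absorption.
mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_sp))    # [1/cm]

E0 = 1.0                              # incident irradiance [W/cm^2]
z = np.linspace(0.0, 2.0, 200)        # depth into tissue [cm]
phi = E0 * np.exp(-mu_eff * z)        # fluence rate in the diffusion regime
q = mu_a * phi                        # volumetric heat source term [W/cm^3]
```

The product of the local absorption coefficient and the fluence rate gives the heat source term that enters the bio-heat transfer equation.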
Murphy, Heather M; McBean, Edward A; Farahbakhsh, Khosrow
2010-12-01
Point-of-use (POU) technologies have been proposed as solutions for meeting the Millennium Development Goal (MDG) for safe water. They reduce the risk of contamination between the water source and the home, by providing treatment at the household level. This study examined two POU technologies commonly used around the world: BioSand and ceramic filters. While the health benefits in terms of diarrhoeal disease reduction have been fairly well documented for both technologies, little research has focused on the ability of these technologies to treat other contaminants that pose health concerns, including the potential for formation of contaminants as a result of POU treatment. These technologies have not been rigorously tested to see if they meet World Health Organization (WHO) drinking water guidelines. A study was developed to evaluate POU BioSand and ceramic filters in terms of microbiological and chemical quality of the treated water. The following parameters were monitored on filters in rural Cambodia over a six-month period: iron, manganese, fluoride, nitrate, nitrite and Escherichia coli. The results revealed that these technologies are not capable of consistently meeting all of the WHO drinking water guidelines for these parameters.
The Atmospheric Infrared Sounder- An Overview
NASA Technical Reports Server (NTRS)
Lambrigtsen, Bjorn; Fetzer, Eric; Lee, Sung-Yung; Irion, Fredrick; Hearty, Thomas; Gaiser, Steve; Pagano, Thomas; Aumann, Hartmut; Chahine, Moustafa
2004-01-01
The Atmospheric Infrared Sounder (AIRS) was launched in May 2002. Along with two companion microwave sensors, it forms the AIRS Sounding Suite. This system is the most advanced atmospheric sounding system to date, with measurement accuracies far surpassing those available on current weather satellites. The data products are calibrated radiances from all three sensors and a number of derived geophysical parameters, including vertical temperature and humidity profiles, surface temperature, cloud fraction, cloud top pressure, and profiles of ozone. These products are generated under cloudy as well as clear conditions. An ongoing calibration validation effort has confirmed that the system is very accurate and stable, and many of the geophysical parameters have been validated. AIRS is in some cases more accurate than any other source and can therefore be difficult to validate, but this offers interesting new research opportunities. The applications for the AIRS products range from numerical weather prediction to atmospheric research, where the AIRS water vapor products near the surface and in the mid to upper troposphere will make it possible to characterize and model phenomena that are key to both short-term atmospheric processes, such as weather patterns, and long-term processes, such as interannual cycles (e.g., El Nino) and climate change.
An efficient soil water balance model based on hybrid numerical and statistical methods
NASA Astrophysics Data System (ADS)
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement: the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., the linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated using two published studies, three hypothetical examples, and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D, and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. The computational efficiency of the new model makes it particularly suitable for large-scale simulation of soil water movement, because the new model can be used with coarse discretization in space and time.
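Solving the three governing terms separately is, in spirit, an operator-splitting scheme. The sketch below is an illustrative explicit splitting step (periodic wraparound via np.roll, boundary handling omitted for brevity), not the paper's exact hybrid numerical-statistical algorithm; K, ET, and D are caller-supplied constitutive functions:

```python
import numpy as np

def water_balance_step(theta, dt, dz, K, ET, D):
    """Advance volumetric water content theta by one split time step:
    gravity-driven advection, then the evapotranspiration sink, then
    matric-potential diffusion (illustrative sketch only)."""
    # 1) advective term: upwind difference of the gravity flux K(theta)
    flux = K(theta)
    theta = theta - dt / dz * (flux - np.roll(flux, 1))
    # 2) source/sink term: root-zone extraction by evapotranspiration
    theta = theta - dt * ET(theta)
    # 3) diffusive term: explicit update with diffusivity D(theta)
    lap = (np.roll(theta, -1) - 2 * theta + np.roll(theta, 1)) / dz**2
    return theta + dt * D(theta) * lap
```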
NASA Astrophysics Data System (ADS)
Takagi, R.; Obara, K.; Uchida, N.
2017-12-01
Understanding slow earthquake activity improves our knowledge of slip behavior in the brittle-ductile transition zone and of subduction processes, including megathrust earthquakes. In order to understand the overall picture of slow slip activity, it is important to build a comprehensive catalog of slow slip events (SSEs). Although short-term SSEs have been detected systematically from GNSS and tiltmeter records, analysis of long-term SSEs still relies on individual slip inversions. We develop an algorithm to systematically detect long-term SSEs and estimate their source parameters using GNSS data. The algorithm is similar to GRiD-MT (Tsuruoka et al., 2009), which performs grid-based automatic determination of moment tensor solutions. Instead of fitting moment tensors to long-period seismic records, we estimate parameters of a single rectangular fault to fit GNSS displacement time series. First, we construct a two-dimensional grid covering the possible locations of SSEs. Second, we estimate best-fit parameters (length, width, slip, and rake) of the rectangular fault at each grid point by an iterative damped least squares method. Depth, strike, and dip are fixed on the plate boundary. A ramp function with a duration of 300 days is used to express the time evolution of the fault slip. Third, the grid point maximizing variance reduction is selected as a candidate long-term SSE. We also search for the onset of the ramp function by grid search. We applied the method to GNSS data in southwest Japan to detect long-term SSEs in the Nankai subduction zone. With the current selection criteria, we found 13 events with Mw 6.2-6.9 in Hyuga-nada, the Bungo channel, and central Shikoku from 1998 to 2015, which include previously unreported events. A key finding is along-strike migrations of long-term SSEs from Hyuga-nada to the Bungo channel and from the Bungo channel to central Shikoku. In particular, three successive events migrating northward in Hyuga-nada preceded the 2003 Bungo channel SSE, and one event in central Shikoku followed the 2003 SSE in the Bungo channel. The space-time dimensions of the possible along-strike migration are about 300 km in length and 6 years in time. Systematic detection with assumptions of various durations in the time evolution of SSEs may improve the picture of SSE activity and of possible interactions with neighboring SSEs.
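The onset search can be illustrated with a small single-component sketch: a 300-day ramp is fit by least squares at each candidate onset, and the onset maximizing variance reduction is retained. The function names and the one-station simplification are ours, not the authors' code:

```python
import numpy as np

def ramp(t, t0, tau=300.0):
    """Unit ramp starting at onset t0 with a tau-day rise time."""
    return np.clip((t - t0) / tau, 0.0, 1.0)

def best_onset(t, disp, onsets):
    """Grid search over candidate onsets for one GNSS displacement
    component; returns the onset with the highest variance reduction."""
    vr_best, t0_best = -np.inf, None
    for t0 in onsets:
        g = ramp(t, t0)
        a = g.dot(disp) / g.dot(g)            # least-squares ramp amplitude
        res = disp - a * g
        vr = 1.0 - res.dot(res) / disp.dot(disp)  # variance reduction
        if vr > vr_best:
            vr_best, t0_best = vr, t0
    return t0_best, vr_best
```

In the actual algorithm, the same variance-reduction criterion is evaluated jointly over all stations and over the fault parameters at every grid point.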
Does Controlling for Temporal Parameters Change the Levels-of-Processing Effect in Working Memory?
Loaiza, Vanessa M.; Camos, Valérie
2016-01-01
The distinguishability between working memory (WM) and long-term memory has been a frequent and long-lasting source of debate in the literature. One recent method of identifying the relationship between the two systems has been to consider the influence of long-term memory effects, such as the levels-of-processing (LoP) effect, in WM. However, the few studies that have examined the LoP effect in WM have shown divergent results. This study examined the LoP effect in WM by considering a theoretically meaningful methodological aspect of the LoP span task. Specifically, we fixed the presentation duration of the processing component a priori because such fixed complex span tasks have shown differences when compared to unfixed tasks in terms of recall from WM as well as the latent structure of WM. After establishing a fixed presentation rate from a pilot study, the LoP span task presented memoranda in red or blue font that were immediately followed by two processing words that matched the memoranda in terms of font color or semantic relatedness. On presentation of the processing words, participants made deep or shallow processing decisions for each of the memoranda before a cue to recall them from WM. Participants also completed delayed recall of the memoranda. Results indicated that LoP affected delayed recall, but not immediate recall from WM. These results suggest that fixing temporal parameters of the LoP span task does not moderate the null LoP effect in WM, and further indicate that WM and long-term episodic memory are dissociable on the basis of LoP effects. PMID:27152126
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas
This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.
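For illustration, here is a minimal real-coded GA of the kind described: tournament selection, uniform crossover, and Gaussian mutation. It is a generic sketch rather than the authors' optimized implementation; `fitness` is a caller-supplied function to maximize and `bounds` an (n, 2) array of prior parameter ranges:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_search(fitness, bounds, pop=100, gens=200, mut=0.1):
    """Maximize fitness over a box-bounded parameter space with a
    basic genetic algorithm (illustrative sketch)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (pop, len(lo)))       # initial population
    for _ in range(gens):
        f = np.array([fitness(x) for x in X])
        # tournament selection: fitter of two random individuals survives
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((f[i] > f[j])[:, None], X[i], X[j])
        # uniform crossover between consecutive parents
        mask = rng.random(X.shape) < 0.5
        X = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # sparse Gaussian mutation, clipped back into the prior box
        hit = rng.random(X.shape) < 0.05
        X = np.clip(X + mut * (hi - lo) * rng.standard_normal(X.shape) * hit,
                    lo, hi)
    return X[np.argmax([fitness(x) for x in X])]  # best individual found
```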
A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters
NASA Astrophysics Data System (ADS)
Ren, Luchuan
2015-04-01
It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas derive partly from uncertainties of the potential seismic tsunami source parameters. A global sensitivity analysis method on the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude Mw 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. For ranking the importance order of the uncertainties of the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to these parameters, and give qualitative descriptions of their linear or nonlinear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters and the interaction effects among them by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle, and the dip angle; the interaction effects between the sensitive parameters are evident at specific sites in the offshore area; and there are differences in the importance order of the same group of parameters in generating uncertainties of the maximum tsunami wave heights at different sites in the offshore area. These results are helpful for a deeper understanding of the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
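The Morris screening step can be sketched with the SALib library; the `model` below is a trivial stand-in for a COMCOT tsunami simulation, and the parameter bounds are invented for illustration:

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# Screening the sensitivity of one scalar output (a stand-in for the
# maximum tsunami wave height at a single offshore site).
problem = {
    "num_vars": 5,
    "names": ["magnitude", "depth", "strike", "dip", "slip"],
    "bounds": [[7.5, 8.5], [5, 40], [0, 360], [10, 60], [0, 180]],
}

def model(x):                          # placeholder for a COMCOT run
    m, d, st, di, sl = x
    return 0.8 * m - 0.02 * d + 0.001 * st * np.sin(np.radians(di))

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.apply_along_axis(model, 1, X)
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(Si["mu_star"])                   # mean absolute elementary effects
```

The mu_star ranking corresponds to the qualitative importance ordering discussed in the abstract; the extended FAST step would then quantify variance contributions and interactions.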
Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.
Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran
2018-06-15
Monitoring of the gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but the definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that define the physical, chemical, and biological processes in the closed methanogenic landfill, and to make the monitoring process less expensive. The research included development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research, the minimal set of parameters which should be included in the landfill monitoring, is precisely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the results of the parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were the most sensitive for TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
Sweet potato growth parameters, yield components and nutritive value for CELSS applications
NASA Technical Reports Server (NTRS)
Loretan, P. A.; Bonsi, C. K.; Hill, W. A.; Ogbuehi, C. R.; Mortley, D. G.
1989-01-01
Sweet potatoes have been grown hydroponically using the nutrient film technique (NFT) to provide a potential food source for long-term manned space missions. Experiments in both sand and NFT culture have produced up to 1790 g/plant of fresh storage root, with an edible biomass index ranging from 60-89 percent and edible biomass linear growth rates of 39-66 g/sq m day in 105 to 130 days. Experiments with different cultivars, nutrient solution compositions, application rates, air and root temperatures, photoperiods, and light intensities indicate good potential for sweet potatoes in CELSS.
1982-12-01
[Fragmentary excerpt; the recoverable notation and definitions are:] T - switching period; a - AGC control parameter; η - quantum efficiency of photon-to-electron conversion; "1" - binary "one" given in terms of the ... of the photons striking the surface of the detector. This rate is defined as λ(t) = η P(t) A / (h f₀) (Eq. 21), where η is the quantum efficiency of the photon ... mW to 10 mW [Ref 5, Table 1] for infrared wavelengths. Assuming all of the source's output power is detected, the rate is calculated to be an order ...
Quantitative Determination of Vinpocetine in Dietary Supplements.
French, John M T; King, Matthew D; McDougal, Owen M
2016-05-01
Current United States regulatory policies allow for the addition of pharmacologically active substances in dietary supplements if derived from a botanical source. The inclusion of certain nootropic drugs, such as vinpocetine, in dietary supplements has recently come under scrutiny due to the lack of defined dosage parameters and the as yet unproven short- and long-term benefits and risks to human health. This study quantified the concentration of vinpocetine in several commercially available dietary supplements and found that a highly variable range of 0.6-5.1 mg/serving was present across the tested products, with most products providing no specification of vinpocetine concentrations.
Nonimaging optical illumination system
Winston, R.; Ries, H.
1996-12-17
A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source, a light reflecting surface, and a family of light edge rays defined along a reference line with the reflecting surface defined in terms of the reference line as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line, and D is a distance from a point on the reference line to the reflection surface along the desired edge ray through the point. 35 figs.
Nonimaging optical illumination system
Winston, R.; Ries, H.
1998-10-06
A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source, a light reflecting surface, and a family of light edge rays defined along a reference line, with the reflecting surface defined in terms of the reference line as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line, and D is a distance from a point on the reference line to the reflection surface along the desired edge ray through the point. 35 figs.
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
Blind Source Parameters for Performance Evaluation of Despeckling Filters.
Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh
2016-01-01
The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images.
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
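The recursive predict/update cycle at the heart of such a scheme can be sketched for a scalar state. This generic Kalman filter step is an illustration only, not the paper's filter-based expectation-maximization formulation:

```python
def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a scalar-state Kalman filter used to
    forecast the next irradiance sample from noisy measurements.
    x, P: state estimate and variance; z: new measurement;
    A, H: state-transition and observation coefficients;
    Q, R: process and measurement noise variances."""
    # predict the next state and its uncertainty
    x_pred = A * x
    P_pred = A * P * A + Q
    # update with the measurement z
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

Run online, each incoming irradiance or PV power sample refines the state estimate, and the predict step supplies the short-term high-resolution forecast.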
Geist, E.; Yoshioka, S.
1996-01-01
The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline, using the two-dimensional (x-t) Peregrine equations that include the effects of dispersion, and near the source, using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis, but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated from anomalous 'tsunami earthquakes' that rupture within the accretionary wedge, in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.
About the Modeling of Radio Source Time Series as Linear Splines
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-12-01
Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.
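A continuous piecewise-linear (linear spline) parameterization can be posed as a least-squares design matrix with an offset, an initial rate, and a rate change at each interior knot. A minimal sketch with invented epochs, knots, and a synthetic coordinate series (not the VLBI solution setup itself):

```python
import numpy as np

def linear_spline_design(t, knots):
    """Design matrix for a continuous piecewise-linear fit: columns are
    an offset, the elapsed time since the first knot, and a hinge term
    max(t - k, 0) for every interior knot (a rate change at k)."""
    cols = [np.ones_like(t), t - knots[0]]
    cols += [np.maximum(t - k, 0.0) for k in knots[1:-1]]
    return np.column_stack(cols)

t = np.linspace(2000.0, 2016.0, 400)          # observation epochs [yr]
knots = np.array([2000.0, 2004.0, 2008.0, 2012.0, 2016.0])
A = linear_spline_design(t, knots)

# synthetic source-coordinate series with noise, fit by least squares
coords = 0.1 * np.sin(0.5 * (t - 2000.0)) + 0.01 * np.random.randn(t.size)
coeff, *_ = np.linalg.lstsq(A, coords, rcond=None)
```

In the geodetic adjustment, such spline coefficients per source would be estimated alongside the other parameters instead of holding the positions fixed.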
NASA Astrophysics Data System (ADS)
Massa, Corrado
1996-03-01
The consequences of a cosmological Λ term varying as S^-2 in a spatially isotropic universe with scale factor S and conserved matter tensor are investigated. One finds a perpetually expanding universe with positive Λ and a gravitational 'constant' G that increases with time. The 'hard' equation of state 3P > U (U: mass-energy density, P: scalar pressure) applied to the early universe leads to the expansion law S ∝ t (t: cosmic time), which solves the horizon problem with no need of inflation. The flatness problem is also resolved without inflation. The model does not affect the well-known predictions of standard big bang cosmology for the cosmic light-element abundances. In the present, matter-dominated universe one finds dG/dt = 2ΛH/U (H is the Hubble parameter), which is consistent with observations provided Λ < 10^-57 cm^-2. Asymptotically (S → ∞), the Λ term equals GU/2, in agreement with other studies.
NASA Astrophysics Data System (ADS)
Chen, X.; Abercrombie, R. E.; Pennington, C.
2017-12-01
Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, where the corner frequencies fall within ranges similar to those of kappa values, so direct spectrum fitting often leads to systematic biases with corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
Low Reynolds number k-epsilon modelling with the aid of direct simulation data
NASA Technical Reports Server (NTRS)
Rodi, W.; Mansour, N. N.
1993-01-01
The constant C_mu and the near-wall damping function f_mu in the eddy-viscosity relation of the k-epsilon model are evaluated from direct numerical simulation (DNS) data for developed channel and boundary layer flow at two Reynolds numbers each. Various existing f_mu model functions are compared with the DNS data, and a new function is fitted to the high-Reynolds-number channel flow data. The epsilon-budget is computed for the fully developed channel flow. The relative magnitude of the terms in the epsilon-equation is analyzed with the aid of scaling arguments, and the parameter governing this magnitude is established. Models for the sum of all source and sink terms in the epsilon-equation are tested against the DNS data, and an improved model is proposed.
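For reference, the eddy-viscosity relation in question has the standard k-epsilon form (with nu_t the turbulent viscosity, k the turbulent kinetic energy, and epsilon its dissipation rate):

```latex
\nu_t \;=\; C_\mu \, f_\mu \, \frac{k^2}{\varepsilon}
```

The damping function f_mu tends to unity far from walls, recovering the high-Reynolds-number form of the model.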
Unveiling the physics of AGN through X-ray variability
NASA Astrophysics Data System (ADS)
Hernández-García, L.; González-Martín, O.; Masegosa, J.; Márquez, I.
2017-03-01
Although variability is a general property characterizing active galactic nuclei (AGN), it is not well established whether the changes occur in the same way in every nucleus. The main purpose of this work is to study the X-ray variability pattern(s) in AGN selected at optical wavelengths in a large sample, including low ionization nuclear emission line regions (LINERs) and type 1.8, 1.9, and 2 Seyferts, using the public archives of Chandra and/or XMM-Newton. Spectra of the same source gathered at different epochs were simultaneously fitted to study long-term variations; the variability patterns were studied by allowing different parameters to vary during the spectral fit. Whenever possible, short-term variations from the analysis of the light curves and long-term UV flux variability were studied. Variations at X-rays on timescales of months/years are very common in all AGN families, but short-term variations are only found in type 1.8 and 1.9 Seyferts. The main driver of the long-term X-ray variations seems to be related to changes in the nuclear power. Other variability patterns cannot be discarded in a few cases. We discuss the geometry and physics of AGN through the X-ray variability analysis.
Al-Khaza'leh, Ja'far Mansur; Reiber, Christoph; Al Baqain, Raid; Valle Zárate, Anne
2015-01-01
Goat production is an important agricultural activity in Jordan. The country is one of the poorest countries in the world in terms of water scarcity. Provision of sufficient quantity of good quality drinking water is important for goats to maintain feed intake and production. This study aimed to evaluate the seasonal availability and quality of goats' drinking water sources, accessibility, and utilization in different zones in the Karak Governorate in southern Jordan. Data collection methods comprised interviews with purposively selected farmers and quality assessment of water sources. The provision of drinking water was considered as one of the major constraints for goat production, particularly during the dry season (DS). Long travel distances to the water sources, waiting time at watering points, and high fuel and labor costs were the key reasons associated with the problem. All the values of water quality (WQ) parameters were within acceptable limits of the guidelines for livestock drinking WQ with exception of iron, which showed slightly elevated concentration in one borehole source in the DS. These findings show that water shortage is an important problem leading to consequences for goat keepers. To alleviate the water shortage constraint and in view of the depleted groundwater sources, alternative water sources at reasonable distance have to be tapped and monitored for water quality and more efficient use of rainwater harvesting systems in the study area is recommended.
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Revil, A.
2018-04-01
Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltages to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach successfully reveals the presence of the anomaly and inverts its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
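The final per-cell fitting step can be sketched as follows. Note that the exact time-domain Cole-Cole response is a Mittag-Leffler function; the stretched exponential below is a common closed-form surrogate, used here as an assumption of this sketch rather than the paper's exact kernel, with invented gate times and noise:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, m0, tau, c):
    """Stretched-exponential surrogate for the time-domain Cole-Cole
    chargeability decay of one cell."""
    return m0 * np.exp(-(t / tau) ** c)

t = np.geomspace(0.01, 2.0, 20)               # measurement gate times [s]
eta = decay(t, 0.12, 0.5, 0.6) + 0.002 * np.random.randn(t.size)

(m0, tau, c), _ = curve_fit(decay, t, eta, p0=[0.1, 0.3, 0.5],
                            bounds=([0, 1e-3, 0.1], [1, 10, 1]))
print(f"m0={m0:.3f}, tau={tau:.3f} s, c={c:.3f}")
```

Repeating this fit cell by cell yields the 3-D distributions of the relaxation time and exponent described in the abstract.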
An open-source and low-cost monitoring system for precision enology.
Di Gennaro, Salvatore Filippo; Matese, Alessandro; Mancin, Mirko; Primicerio, Jacopo; Palliotti, Alberto
2014-12-05
Winemaking is a dynamic process, where microbiological and chemical effects may strongly differentiate products from the same vineyard and even between wine vats. This high variability means an increase in work in terms of control and process management. The winemaking process therefore requires a site-specific approach in order to optimize cellar practices and quality management, suggesting a new concept of winemaking, identified as Precision Enology. The Institute of Biometeorology of the Italian National Research Council has developed a wireless monitoring system, consisting of a series of nodes integrated in barrel bungs with sensors for the measurement of wine physical and chemical parameters in the barrel. This paper describes an open-source evolution of the preliminary prototype, using Arduino-based technology. Results have shown good performance in terms of data transmission and accuracy, minimal size and power consumption. The system has been designed to create a low-cost product, which allows a remote and real-time control of wine evolution in each barrel, minimizing costs and time for sampling and laboratory analysis. The possibility of integrating any kind of sensors makes the system a flexible tool that can satisfy various monitoring needs.
Laboratory tools and e-learning elements in training of acousto-optics
NASA Astrophysics Data System (ADS)
Barócsi, Attila; Lenk, Sándor; Ujhelyi, Ferenc; Majoros, Tamás; Maák, Pál
2015-10-01
Due to the acousto-optic (AO) effect, the refractive index of an optical interaction medium is perturbed by an acoustic wave induced in the medium, building up a phase grating that will diffract the incident light beam if the condition of constructive interference is satisfied. All parameters of the grating, such as magnitude, period, or phase, can be controlled, which allows the construction of useful devices (modulators, switches, one- or multi-dimensional deflectors, spectrum analyzers, tunable filters, frequency shifters, etc.). The research and teaching of acousto-optics have a long-term tradition at our department. In this presentation, we introduce the related laboratory exercises fitted into an e-learning frame. The BSc-level exercise utilizes a laser source and an AO cell to demonstrate the effect and the principal AO functions, explaining signal-processing terms such as amplitude or frequency modulation, modulation depth, and Fourier transformation, and culminates in building a free-space sound transmission and demodulation system. The setup for MSc level utilizes an AO filter with mono- and polychromatic light sources to teach spectral analysis and synthesis. Smart phones can be used to generate signal inputs or outputs for both setups, as well as to help students' preparation and reporting.
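For reference, the constructive-interference condition mentioned above is the Bragg condition. For light of vacuum wavelength lambda in a medium of refractive index n carrying an acoustic wave of frequency f_a and velocity v_a (so the grating period is v_a / f_a), the Bragg angle satisfies:

```latex
\sin\theta_B \;=\; \frac{\lambda f_a}{2\, n\, v_a}
```

Tuning the acoustic frequency therefore steers the diffracted beam, which is the operating principle behind the deflectors and tunable filters used in the exercises.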
Stability of high-speed boundary layers in oxygen including chemical non-equilibrium effects
NASA Astrophysics Data System (ADS)
Klentzman, Jill; Tumin, Anatoli
2013-11-01
The stability of high-speed boundary layers in chemical non-equilibrium is examined. A parametric study varying the edge temperature and the wall conditions is conducted for boundary layers in oxygen. The edge Mach number and enthalpy ranges considered are relevant to the flight conditions of reusable hypersonic cruise vehicles. Both viscous and inviscid stability formulations are used and the results compared to gain insight into the effects of viscosity and thermal conductivity on the stability. It is found that viscous effects have a strong impact on the temperature and mass fraction perturbations in the critical layer and in the viscous sublayer near the wall. Outside of these areas, the perturbations closely match in the viscous and inviscid models. The impact of chemical non-equilibrium on the stability is investigated by analyzing the effects of the chemical source term in the stability equations. The chemical source term is found to influence the growth rate of the second Mack mode instability but not have much of an effect on the mass fraction eigenfunction for the flow parameters considered. This work was supported by the AFOSR/NASA/National Center for Hypersonic Laminar-Turbulent Transition Research.
AQUATOX Data Sources Documents
Contains the data sources for parameter values of the AQUATOX model including: a bibliography for the AQUATOX data libraries and the compendia of parameter values for US Army Corps of Engineers models.
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei
2016-03-01
In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code, MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improved accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. In order to do this, a forensic approach is being used in which available plant data and release timings are used to inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events, and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from the blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, using the source terms developed by MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, the existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language. The influences of subjective choices of likelihood formulation and acceptability threshold in GLUE on model outputs were also examined. The results indicated the following. (1) Parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better able to accentuate high-likelihood simulations than the exponential function (L2). (3) The combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths and assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance the interests between high modeling efficiency and high bracketing efficiency. Results of this study could provide (1) an option to conduct NPS modeling under one single computing platform, (2) important references for the parameter setting of NPS model development in similar regions, (3) useful suggestions for the application of the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
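The GLUE procedure itself is compact: sample parameter sets from their priors, score each with a likelihood measure, and retain the "behavioral" sets above the acceptability threshold. A minimal sketch using a Nash-Sutcliffe-based likelihood and the 0.55 threshold discussed above; `simulate` and the uniform prior ranges are caller-supplied, and this is a generic Python illustration, not the authors' R code:

```python
import numpy as np

def glue(simulate, obs, priors, n=10000, threshold=0.55, rng=None):
    """Minimal GLUE: Monte Carlo sampling from uniform priors, a
    Nash-Sutcliffe likelihood, retention of behavioral parameter sets,
    and likelihood weights for prediction bounds."""
    rng = rng if rng is not None else np.random.default_rng()
    lo, hi = priors[:, 0], priors[:, 1]
    kept, like = [], []
    for _ in range(n):
        theta = rng.uniform(lo, hi)
        sim = simulate(theta)
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        if nse > threshold:              # behavioral parameter set
            kept.append(theta)
            like.append(nse)
    w = np.array(like)
    return np.array(kept), w / w.sum()   # sets and normalized weights
```

The weighted ensemble of behavioral simulations then yields the uncertainty bands whose width and bracketing efficiency the study compares across likelihood choices.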
Bhalla, Kavi; Harrison, James E
2016-04-01
Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. To provide a simple and open-source software tool that allows estimation of incidence-DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need to only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALYs estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
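The core calculation such a tool performs is simple; here is a sketch of an undiscounted, non-age-weighted incidence-DALY computation (DALY = YLL + YLD) with invented figures purely for illustration:

```python
# Invented example inputs for one age-sex-cause cell.
deaths = 120                      # incident injury deaths
life_expectancy_at_death = 42.0   # residual life expectancy [years]
cases = 5000                      # incident non-fatal injuries
disability_weight = 0.08          # severity weight in [0, 1]
avg_duration = 1.5                # mean duration of disability [years]

yll = deaths * life_expectancy_at_death          # years of life lost
yld = cases * disability_weight * avg_duration   # years lived with disability
print(f"DALYs = {yll + yld:,.0f}")
```

A spreadsheet tool like the one described applies this arithmetic across the age/sex/external-cause table pasted in by the user, with the parameter set supplying the life expectancies, weights, and durations.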
The rotation-powered nature of some soft gamma-ray repeaters and anomalous X-ray pulsars
NASA Astrophysics Data System (ADS)
Coelho, Jaziel G.; Cáceres, D. L.; de Lima, R. C. R.; Malheiro, M.; Rueda, J. A.; Ruffini, R.
2017-03-01
Context. Soft gamma-ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs) are slowly rotating isolated pulsars whose energy reservoir is still a matter of debate. Adopting neutron star (NS) fiducial parameters (mass M = 1.4 M⊙, radius R = 10 km, and moment of inertia I = 10^45 g cm^2), the rotational energy loss, Ėrot, is lower than the observed luminosity (dominated by the X-rays), LX, for many of the sources. Aims: We investigate the possibility that some members of this family could be canonical rotation-powered pulsars, using realistic NS structure parameters instead of fiducial values. Methods: We compute the NS mass, radius, moment of inertia, and angular momentum from numerical integration of the axisymmetric general relativistic equations of equilibrium. We then compute the entire range of allowed values of the rotational energy loss, Ėrot, for the observed values of the rotation period P and spin-down rate Ṗ. We also estimate the surface magnetic field using a general relativistic model of a rotating magnetic dipole. Results: We show that realistic NS parameters lower the estimated values of the magnetic field and of the radiation efficiency, LX/Ėrot, with respect to estimates based on fiducial NS parameters. We show that nine SGRs/AXPs can be described as canonical pulsars driven by the NS rotational energy, for LX computed in the soft (2-10 keV) X-ray band, and we compute the range of NS masses for which LX/Ėrot < 1. We discuss the observed hard X-ray emission in three sources of the group of nine potentially rotation-powered NSs. This additional hard X-ray component dominates over the soft one, leading to LX/Ėrot > 1 in two of them. Conclusions: We show that nine SGRs/AXPs can be rotation-powered NSs if we analyze their X-ray luminosity in the soft 2-10 keV band. Interestingly, four of them show radio emission and six have been associated with supernova remnants (including Swift J1834.9-0846, the first SGR observed with a surrounding wind nebula). These observations give additional support to a natural explanation of these sources in terms of ordinary pulsars. When the hard X-ray emission observed in three sources of the group of potential rotation-powered NSs is included, the number of sources with LX/Ėrot < 1 becomes seven. It remains to verify 1) the accuracy of the estimated distances and 2) the possible contribution of the associated supernova remnants to the hard X-ray emission.
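For orientation, the classical (non-general-relativistic) spin-down luminosity and vacuum-dipole field estimates against which the paper's realistic-parameter results are compared can be computed as follows; the period and spin-down rate below are hypothetical.

```python
# Spin-down luminosity and dipole-field estimate with fiducial NS parameters,
# as used in the LX / Edot_rot comparison (classical, non-GR formulas).
import numpy as np

I = 1e45               # moment of inertia, g cm^2 (fiducial value)
P, Pdot = 5.0, 1e-11   # hypothetical SGR/AXP period [s] and spin-down rate

Edot_rot = 4 * np.pi**2 * I * Pdot / P**3   # rotational energy loss, erg/s
B_dipole = 3.2e19 * np.sqrt(P * Pdot)       # surface field, Gauss (vacuum dipole)
print(f"Edot_rot = {Edot_rot:.2e} erg/s, B = {B_dipole:.2e} G")
# A source is compatible with rotation power when LX / Edot_rot < 1.
```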
Earthquake source parameters determined by the SAFOD Pilot Hole seismic array
Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.
2004-01-01
We estimate the source parameters of microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) and Q from displacement amplitude spectra. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range typical of tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
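A minimal sketch of fitting the omega-square model with whole-path attenuation to a displacement amplitude spectrum, assuming synthetic data and omitting the paper's depth smoothness constraint and joint multi-sensor inversion:

```python
# Sketch: fit spectral level, corner frequency, and attenuation (via t* = T/Q)
# to a synthetic displacement amplitude spectrum.
import numpy as np
from scipy.optimize import curve_fit

def spectrum(f, omega0, fc, t_star):
    # omega-square source model times path attenuation exp(-pi f t*)
    return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** 2)

f = np.linspace(1, 100, 200)
rng = np.random.default_rng(1)
data = spectrum(f, 1.0, 20.0, 0.02) * rng.lognormal(0, 0.1, f.size)

popt, _ = curve_fit(spectrum, f, data, p0=[0.5, 10.0, 0.01])
omega0, fc, t_star = popt
print(f"spectral level={omega0:.3f}, corner frequency={fc:.1f} Hz, t*={t_star:.4f} s")
# Stress drop would then follow from fc via a circular-crack model.
```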
NASA Astrophysics Data System (ADS)
Dogra, Mridula; Singh, K. J.; Kaur, Kulwinder; Anand, Vikas; Kaur, Parminder; Singh, Prabhjot; Bajwa, B. S.
2018-03-01
In the present study, a quaternary system of composition (0.45 + x)Bi2O3-(0.25 - x)BaO-0.15B2O3-0.15Na2O (where 0 ≤ x ≤ 0.2 mol fraction) has been prepared by the melt-quenching technique to investigate its gamma-ray shielding properties. Mass attenuation coefficients and half-value layer parameters have been determined experimentally at 662 keV using a 137Cs source. The experimental results for these parameters are in good agreement with theoretical values. Density, molar volume, XRD, FTIR, Raman, and UV-visible studies have been used to determine the structural properties of the prepared glass samples. The dissolution rate of the samples has also been measured to check their utility as long-term durable glasses.
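The transmission arithmetic underlying such 662 keV measurements is the Beer-Lambert law; a short sketch with hypothetical count rates, thickness, and density:

```python
# Beer-Lambert arithmetic behind a gamma transmission measurement:
# I = I0 * exp(-mu * x); mass attenuation = mu / rho; HVL = ln(2) / mu.
# All numbers are hypothetical, for illustration only.
import numpy as np

I0, I, x = 10000.0, 6200.0, 1.0   # counts without/with sample; thickness in cm
rho = 5.8                         # glass density, g/cm^3 (hypothetical)

mu = np.log(I0 / I) / x           # linear attenuation coefficient, 1/cm
print(f"mu = {mu:.3f} 1/cm, mu/rho = {mu / rho:.4f} cm^2/g, "
      f"HVL = {np.log(2) / mu:.2f} cm")
```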
JET DT Scenario Extrapolation and Optimization with METIS
NASA Astrophysics Data System (ADS)
Urban, Jakub; Jaulmes, Fabien; Artaud, Jean-Francois
2017-10-01
Prospective JET (Joint European Torus) DT operation scenarios are modelled by the fast integrated code METIS. METIS combines scaling laws, e.g. for global and pedestal energy or density peaking, with simplified transport and source models, while retaining fundamental nonlinear couplings, in particular in the fusion power. We have tuned METIS parameters to match JET-ILW high-performance experiments, including baseline and hybrid scenarios. Based on recent observations, we assume a weaker input power scaling than IPB98 and a 10% confinement improvement due to the higher ion mass. The rapidity of METIS is utilized to scan the performance of JET DT scenarios with respect to fundamental parameters, such as plasma current, magnetic field, density, or heating power. Simplified, easily parameterized waveforms are used to study the effect of the ramp-up speed and heating timing. Finally, an efficient Bayesian optimizer is employed to seek the scenarios that perform best in terms of fusion power or gain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Shimin, E-mail: gsm861@126.com; Mei, Liquan, E-mail: lqmei@mail.xjtu.edu.cn
The amplitude modulation of ion-acoustic waves is investigated in an unmagnetized plasma containing positive ions, negative ions, and electrons obeying a kappa-type distribution that is penetrated by a positive ion beam. By considering dissipative mechanisms, including ionization, negative-positive ion recombination, and electron attachment, we introduce a comprehensive model for the plasma with the effects of sources and sinks. Via reductive perturbation theory, the modified nonlinear Schrödinger equation with a dissipative term is derived to govern the dynamics of the modulated waves. The effect of the plasma parameters on the modulation instability criterion for the modified nonlinear Schrödinger equation is numerically investigated in detail. Within the unstable region, first- and second-order dissipative ion-acoustic rogue waves are present. The effect of the plasma parameters on the characteristics of the dissipative rogue waves is also discussed.
Goal-oriented Site Characterization in Hydrogeological Applications: An Overview
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F.; Rubin, Y.
2011-12-01
In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction, and decision support should be satisfied with efficient and rational field campaigns. In this work, we provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference, and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times, etc.) while accounting for parametric and model uncertainty in the geostatistical characterization, in forcing terms, and in measurement error. The appealing aspects of the framework lie in its goal-oriented character and in its direct link to the confidence in a specified decision. We illustrate how these concepts can be applied in a human health risk problem where uncertainty from both hydrogeological and health parameters is accounted for.
Technical Review of SRS Dose Reconstruction Methods Used By CDC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, Ali, A
2005-07-20
At the request of the Centers for Disease Control and Prevention (CDC), a subcontractor, Advanced Technologies and Laboratories International, Inc. (ATL), issued a draft report estimating offsite dose as a result of Savannah River Site (SRS) operations for the period 1954-1992 in support of Phase III of the SRS Dose Reconstruction Project. The doses reported by ATL differed from those previously estimated by SRS dose modelers for a variety of reasons, but primarily because (1) ATL used different source terms, (2) ATL considered trespasser/poacher scenarios, and (3) ATL did not consistently use site-specific parameters or correct usage parameters. The receptors with the highest dose from atmospheric and liquid pathways were within about a factor of four greater than dose values previously reported by SRS. A complete set of technical comments has also been included.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neary, Vincent Sinclair; Yang, Zhaoqing; Wang, Taiping
A wave model test bed is established to benchmark, test, and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 ©2015. Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically which source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and the desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.
High-Sensitivity GaN Microchemical Sensors
NASA Technical Reports Server (NTRS)
Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas
2009-01-01
Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operation parameters should be used for high sensitivity detection.
NASA Astrophysics Data System (ADS)
Bikmaev, I. F.; Nikolaeva, E. A.; Shimansky, V. V.; Galeev, A. I.; Zhuchkov, R. Ya.; Irtuganov, E. N.; Melnikov, S. S.; Sakhibullin, N. A.; Grebenev, S. A.; Sharipova, L. M.
2017-10-01
We present the results of our long-term photometric and spectroscopic observations at the Russian-Turkish RTT-150 telescope of the optical counterpart to IGR J17544-2619, one of the best-known representatives of the class of fast X-ray transients. Based on our optical data, we have determined for the first time the orbital and physical parameters of the binary system by the methods of Doppler spectroscopy. We have calculated theoretical spectra of the optical counterpart, applying non-LTE corrections for selected lines, and obtained the parameters of the stellar atmosphere (Teff = 33,000 K, log g = 3.85, R = 9.5 R⊙, and M = 23 M⊙). The latter suggest that the optical star is not a supergiant, as has been thought previously.
Multisource Estimation of Long-term Global Terrestrial Surface Radiation
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.
2017-12-01
Land surface net radiation is the essential energy source at the earth's surface. It determines the surface energy budget and its partitioning, drives the hydrological cycle by providing available energy, and offers heat, light, and energy for biological processes. Individual components in net radiation have changed historically due to natural and anthropogenic climate change and land use change. Decadal variations in radiation such as global dimming or brightening have important implications for hydrological and carbon cycles. In order to assess the trends and variability of net radiation and evapotranspiration, there is a need for accurate estimates of long-term terrestrial surface radiation. While large progress in measuring top of atmosphere energy budget has been made, huge discrepancies exist among ground observations, satellite retrievals, and reanalysis fields of surface radiation, due to the lack of observational networks, the difficulty in measuring from space, and the uncertainty in algorithm parameters. To overcome the weakness of single source datasets, we propose a multi-source merging approach to fully utilize and combine multiple datasets of radiation components separately, as they are complementary in space and time. First, we conduct diagnostic analysis of multiple satellite and reanalysis datasets based on in-situ measurements such as Global Energy Balance Archive (GEBA), existing validation studies, and other information such as network density and consistency with other meteorological variables. Then, we calculate the optimal weighted average of multiple datasets by minimizing the variance of error between in-situ measurements and other observations. Finally, we quantify the uncertainties in the estimates of surface net radiation and employ physical constraints based on the surface energy balance to reduce these uncertainties. The final dataset is evaluated in terms of the long-term variability and its attribution to changes in individual components. The goal of this study is to provide a merged observational benchmark for large-scale diagnostic analyses, remote sensing and land surface modeling.
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard from long-period ground motions associated with the Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are amplified particularly on basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing the strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering various combinations of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined from 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects; these parameters are important because our preliminary simulations are strongly affected by the rupture directivity. We apply the system called GMS (Ground Motion Simulator), which simulates seismic wave propagation with a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999). The grid spacing for the shallow region is 200 m horizontally and 100 m vertically; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not sufficiently represent short-period components, the reliable period range of the simulation should be interpreted with caution; we therefore consider periods longer than five seconds, rather than two seconds, for further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulation shows a large variation of response spectra at a site, implying that the ground motion is very sensitive to the choice of scenario, and this variation must be studied to understand the seismic hazard. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
Optimizing detection and analysis of slow waves in sleep EEG.
Mensen, Armand; Riedner, Brady; Tononi, Giulio
2016-12-01
Analysis of individual slow waves in EEG recordings during sleep provides both greater sensitivity and specificity compared to spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings, as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, the type of canonical waveform, and the amplitude thresholding. Previously published methods accurately detect large, global waves but are conservative and miss smaller-amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience. Copyright © 2016 Elsevier B.V. All rights reserved.
Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms
Li, Le; Yip, Kevin Y.
2016-01-01
Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency, and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, when trained on an old version of GO together with biological networks, Unicorn successfully re-discovered some terms and term-term relationships present only in a newer version of GO. Unicorn also inferred some novel terms that were not contained in GO but have biological meanings well supported by the literature. Availability: Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/. PMID:27976738
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed almost-parameter-free harmony search-based optimization model can give satisfactory estimates, even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations are considered.
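A minimal harmony search sketch on a toy objective follows; the paper's almost-parameter-free variant adapts the HMCR/PAR controls internally, which is not reproduced here, and the objective below merely stands in for the transport-model misfit.

```python
# Minimal harmony search sketch on a toy 2-D objective (all settings illustrative).
import numpy as np

rng = np.random.default_rng(2)
def objective(x):                       # stand-in for the transport-model misfit
    return np.sum((x - np.array([2.0, -1.0])) ** 2)

dim, hms, hmcr, par, bw, iters = 2, 10, 0.9, 0.3, 0.1, 2000
lo, hi = -5.0, 5.0
memory = rng.uniform(lo, hi, (hms, dim))            # harmony memory
scores = np.array([objective(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                     # pick from memory...
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                  # ...with pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                       # or sample fresh
            new[j] = rng.uniform(lo, hi)
    s = objective(new)
    worst = scores.argmax()
    if s < scores[worst]:                           # replace the worst harmony
        memory[worst], scores[worst] = new, s

print("best estimate:", memory[scores.argmin()])
```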
Cysewski, Piotr; Jeliński, Tomasz
2013-10-01
The electronic spectrum of four different anthraquinones (1,2-dihydroxyanthraquinone, 1-aminoanthraquinone, 2-aminoanthraquinone and 1-amino-2-methylanthraquinone) in methanol solution was measured and used as reference data for theoretical color prediction. The visible part of the spectrum was modeled within the TD-DFT framework using a broad range of DFT functionals. The convoluted theoretical spectra were validated against experimental data by direct color comparison in terms of the CIE XYZ and CIE Lab tristimulus color models. It was found that the 6-31G** basis set provides the most accurate color prediction, and there is no need to extend the basis set since doing so does not improve the prediction. Although different functionals gave the most accurate color prediction for different anthraquinones, it is possible to apply the same DFT approach to the whole set of analyzed dyes. Three functionals in particular seem valuable, namely mPW1LYP, B1LYP, and PBE0, due to their very similar spectral predictions. The major source of discrepancy between theoretical and experimental spectra comes from the L values, representing lightness, and the a parameter, describing the position on the green→magenta axis. Fortunately, the agreement between the computed and observed blue→yellow axis (parameter b) is very good for the studied anthraquinone dyes in methanol solution. Despite the discussed shortcomings, color prediction from first-principles quantum chemistry computations can lead to quite satisfactory results, expressed in terms of color space parameters.
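Validation by direct color comparison reduces to a color difference in Lab space; a sketch of the CIE76 Delta E computation with hypothetical Lab triplets:

```python
# Comparing a predicted and a measured color in CIE Lab space via the
# Euclidean color difference Delta E (CIE76); Lab values are hypothetical.
import numpy as np

lab_experiment = np.array([52.3, 48.1, 30.7])   # measured L*, a*, b*
lab_td_dft = np.array([48.9, 46.0, 31.2])       # L*, a*, b* from convoluted spectrum

delta_e = np.linalg.norm(lab_experiment - lab_td_dft)
print(f"Delta E = {delta_e:.2f}")  # here the L* term dominates, as in the paper
```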
New Boundary Constraints for Elliptic Systems used in Grid Generation Problems
NASA Technical Reports Server (NTRS)
Kaul, Upender K.; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using Green's Theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. The constraints are designed for boundary-clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification in elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. There is thus no need for a parametric study of these decay parameters, since the new constraints fix them uniquely. It is also shown that for Neumann-type boundary conditions, these boundary constraints uniquely determine the solution of the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
NASA Technical Reports Server (NTRS)
Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark
2013-01-01
As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori information to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and nonlinear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response
NASA Astrophysics Data System (ADS)
Floerchinger, Stefan; Wiedemann, Urs Achim
2014-08-01
An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like 1/N^(n-1). This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
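The 1/N^(n-1) scaling for independent sources can be checked numerically for n = 2; a Monte Carlo sketch with a toy one-dimensional "collision" geometry (all modeling choices illustrative, standing in for the transverse-plane density):

```python
# Monte Carlo check that the connected two-point function of the normalized
# density from N independent point sources scales like 1/N.
import numpy as np

rng = np.random.default_rng(3)

def connected_c2(N, events=5000, bins=12):
    # normalized per-event density histograms from N independent sources
    counts = np.stack([np.histogram(rng.normal(0, 1, N), bins=bins,
                                    range=(-3, 3))[0] / N
                       for _ in range(events)])
    i, j = 3, 8                       # two fixed, distinct bins
    return np.cov(counts[:, i], counts[:, j])[0, 1]

for N in (10, 40, 160):
    # N * C2 should be roughly constant, i.e. C2 ~ 1/N
    print(N, N * connected_c2(N))
```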
A new lumped-parameter model for flow in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.
A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.
Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2017-05-26
In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ considering the level of accuracy aimed by future S4 surveys.
NASA Astrophysics Data System (ADS)
Bradley, Larry; Sipocz, Brigitta; Robitaille, Thomas; Tollerud, Erik; Deil, Christoph; Vinícius, Zè; Barbary, Kyle; Günther, Hans Moritz; Bostroem, Azalee; Droettboom, Michael; Bray, Erik; Bratholm, Lars Andersen; Pickering, T. E.; Craig, Matt; Pascual, Sergio; Greco, Johnny; Donath, Axel; Kerzendorf, Wolfgang; Littlefair, Stuart; Barentsen, Geert; D'Eugenio, Francesco; Weaver, Benjamin Alan
2016-09-01
Photutils provides tools for detecting and performing photometry of astronomical sources. It can estimate the background and background rms in astronomical images, detect sources in astronomical images, estimate morphological parameters of those sources (e.g., centroid and shape parameters), and perform aperture and PSF photometry. Written in Python, it is an affiliated package of Astropy (ascl:1304.002).
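A short usage sketch follows; the module paths assume a recent Photutils release (earlier versions exported these names at the package top level), and the synthetic image is purely illustrative.

```python
# Detect sources and measure aperture photometry with Photutils on a
# synthetic image containing one Gaussian "star" over flat noise.
import numpy as np
from photutils.detection import DAOStarFinder
from photutils.aperture import CircularAperture, aperture_photometry

rng = np.random.default_rng(4)
image = rng.normal(100.0, 5.0, (128, 128))          # flat sky plus noise
yy, xx = np.mgrid[:128, :128]
image += 500.0 * np.exp(-((xx - 64) ** 2 + (yy - 40) ** 2) / (2 * 2.0 ** 2))

bkg = np.median(image)                               # crude background estimate
finder = DAOStarFinder(fwhm=4.0, threshold=5.0 * image.std())
sources = finder(image - bkg)                        # table of detected sources

positions = np.transpose([sources['xcentroid'], sources['ycentroid']])
apertures = CircularAperture(positions, r=5.0)
print(aperture_photometry(image - bkg, apertures))
```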
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are concentrations of the released substance registered by a distributed sensor network and arriving on-line. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source: the source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts), and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
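A minimal ABC rejection sketch for this kind of source estimation, with a toy one-dimensional forward function standing in for SCIPUFF and only two of the seven parameters (position and release rate); all names, priors, and tolerances are hypothetical.

```python
# ABC rejection sampling: keep prior draws whose simulated sensor readings
# fall within a tolerance eps of the observed readings.
import numpy as np

rng = np.random.default_rng(5)
sensors = np.linspace(0.0, 10.0, 8)                  # sensor positions

def forward(x0, q):                                  # toy dispersion model
    return q * np.exp(-0.5 * (sensors - x0) ** 2)

observed = forward(4.0, 2.0) + rng.normal(0, 0.05, sensors.size)

n, eps = 20000, 0.5
x0_prior = rng.uniform(0, 10, n)                     # source-position prior
q_prior = rng.uniform(0, 5, n)                       # release-rate prior
dist = np.array([np.linalg.norm(forward(x, q) - observed)
                 for x, q in zip(x0_prior, q_prior)])
posterior = np.column_stack([x0_prior, q_prior])[dist < eps]
print("posterior mean (x0, q):", posterior.mean(axis=0))
```

The Sequential variant described in the paper refines such a posterior over successive rounds with shrinking tolerances rather than a single rejection pass.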
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high-explosive dispersal of (239)Pu. The estimation model is based on a nonlinear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimating ADPIC model input parameters such as the particle-size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as slightly weaker particle-to-cloud coupling, than previously reported.
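A sketch of the bounded nonlinear least squares idea, assuming a hypothetical stand-in forward model for the ARAC/ADPIC chain; the parameter names, bounds, and measurements are illustrative only.

```python
# Bounded nonlinear least squares: vary model inputs within their uncertainty
# ranges to match measured concentrations.
import numpy as np
from scipy.optimize import least_squares

measured = np.array([8.1, 5.9, 3.2, 1.1])           # hypothetical concentrations
x = np.array([1.0, 2.0, 4.0, 8.0])                  # downwind distance, km

def forward(theta):                                 # theta = (mean diameter, sigma_g)
    d, sg = theta
    return 10.0 * np.exp(-x / (d * sg))             # toy decay with distance

def residuals(theta):
    return forward(theta) - measured

fit = least_squares(residuals, x0=[1.0, 2.0],
                    bounds=([0.1, 1.1], [10.0, 4.0]))  # uncertainty ranges
print("estimated (diameter, sigma_g):", fit.x)
```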
Development of a SMA-Based, Slat-Gap Filler for Airframe Noise Reduction
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Long, David L.
2015-01-01
Noise produced by unsteady flow around aircraft structures, termed airframe noise, is an important source of aircraft noise during the approach and landing phases of flight. Conventional leading-edge-slat devices for high lift on typical transport aircraft are a prominent source of airframe noise. Many concepts for slat noise reduction have been investigated. Slat-cove fillers have emerged as an attractive solution, but they maintain the gap flow, leaving some noise production mechanisms unabated, and thus represent a nonoptimal solution. Drooped-leading-edge (DLE) concepts have been proposed as "optimal" because the gap flow is eliminated. The deployed leading edge device is not distinct and separate from the main wing in DLE concepts and the high-lift performance suffers at high angles of attack (alpha) as a consequence. Elusive high-alpha performance and excessive weight penalty have stymied DLE development. The fact that high-lift performance of DLE systems is only affected at high alpha suggests another concept that simultaneously achieves the high-lift of the baseline airfoil and the noise reduction of DLE concepts. The concept involves utilizing a conventional leading-edge slat device and a deformable structure that is deployed from the leading edge of the main wing and closes the gap between the slat and main wing, termed a slat-gap filler (SGF). The deployable structure consists of a portion of the skin of the main wing and it is driven in conjunction with the slat during deployment and retraction. Benchtop models have been developed to assess the feasibility and to study important parameters. Computational models have assisted in the bench-top model design and provided valuable insight in the parameter space as well as the feasibility.
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2012-12-01
Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
Vacuum stress energy density and its gravitational implications
NASA Astrophysics Data System (ADS)
Estrada, Ricardo; Fulling, Stephen A.; Kaplan, Lev; Kirsten, Klaus; Liu, Zhonghai; Milton, Kimball A.
2008-04-01
In nongravitational physics the local density of energy is often regarded as merely a bookkeeping device; only total energy has an experimental meaning—and it is only modulo a constant term. But in general relativity the local stress-energy tensor is the source term in Einstein's equation. In closed universes, and those with Kaluza-Klein dimensions, theoretical consistency demands that quantum vacuum energy should exist and have gravitational effects, although there are no boundary materials giving rise to that energy by van der Waals interactions. In the lab there are boundaries, and in general the energy density has a nonintegrable singularity as a boundary is approached (for idealized boundary conditions). As pointed out long ago by Candelas and Deutsch, in this situation there is doubt about the viability of the semiclassical Einstein equation. Our goal is to show that the divergences in the linearized Einstein equation can be renormalized to yield a plausible approximation to the finite theory that presumably exists for realistic boundary conditions. For a scalar field with Dirichlet or Neumann boundary conditions inside a rectangular parallelepiped, we have calculated by the method of images all components of the stress tensor, for all values of the conformal coupling parameter and an exponential ultraviolet cutoff parameter. The qualitative features of contributions from various classes of closed classical paths are noted. Then the Estrada-Kanwal distributional theory of asymptotics, particularly the moment expansion, is used to show that the linearized Einstein equation with the stress-energy near a plane boundary as source converges to a consistent theory when the cutoff is removed. This paper reports work in progress on a project combining researchers in Texas, Louisiana and Oklahoma. It is supported by NSF Grants PHY-0554849 and PHY-0554926.
77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...
Shellenbarger, G.G.; Athearn, N.D.; Takekawa, John Y.; Boehm, A.B.
2008-01-01
Throughout the world, coastal resource managers are encouraging the restoration of previously modified coastal habitats back into wetlands and managed ponds for their ecosystem value. Because many coastal wetlands are adjacent to urban centers and waters used for human recreation, it is important to understand how wildlife can affect water quality. We measured fecal indicator bacteria (FIB) concentrations, presence/absence of Salmonella, bird abundance, and physico-chemical parameters in two coastal, managed ponds and adjacent sloughs for 4 weeks during the summer and winter in 2006. We characterized the microbial water quality in these waters relative to state water-quality standards and examined the relationship between FIB, bird abundance, and physico-chemical parameters. A box model approach was utilized to determine the net source or sink of FIB in the ponds during the study periods. FIB concentrations often exceeded state standards, particularly in the summer, and microbial water quality in the sloughs was generally lower than in ponds during both seasons. Specifically, the inflow of water from the sloughs to the ponds during the summer, more so than waterfowl use, appeared to increase the FIB concentrations in the ponds. The box model results suggested that the ponds served as net wetland sources and sinks for FIB, and high bird abundances in the winter likely contributed to net winter source terms for two of the three FIB in both ponds. Eight serovars of the human pathogen Salmonella were isolated from slough and pond waters, although the source of the pathogen to these wetlands was not identified. Thus, it appeared that factors other than bird abundance were most important in modulating FIB concentrations in these ponds.
Shibata, Tomoyuki; Solo-Gabriele, Helena M; Sinigalliano, Christopher D; Gidley, Maribeth L; Plano, Lisa R W; Fleisher, Jay M; Wang, John D; Elmir, Samir M; He, Guoqing; Wright, Mary E; Abdelzaher, Amir M; Ortega, Cristina; Wanless, David; Garza, Anna C; Kish, Jonathan; Scott, Troy; Hollenbeck, Julie; Backer, Lorraine C; Fleming, Lora E
2010-11-01
The objectives of this work were to compare enterococci (ENT) measurements based on the membrane filter, ENT(MF), with alternatives that can provide faster results, including alternative enterococci methods (e.g., chromogenic substrate (CS) and quantitative polymerase chain reaction (qPCR)), and results from regression models based upon environmental parameters that can be measured in real-time. ENT(MF) were also compared to source tracking markers (Staphylococcus aureus, Bacteroidales human and dog markers, and Catellicoccus gull marker) in an effort to interpret the variability of the signal. Results showed that concentrations of enterococci based upon MF (<2 to 3320 CFU/100 mL) were significantly different from the CS and qPCR methods (p < 0.01). The correlations between MF and CS (r = 0.58, p < 0.01) were stronger than between MF and qPCR (r ≤ 0.36, p < 0.01). Enterococci levels by the MF, CS, and qPCR methods were positively correlated with turbidity and tidal height. Enterococci by MF and CS were also inversely correlated with solar radiation, but enterococci by qPCR were not. The regression model based on environmental variables provided fair qualitative predictions of enterococci by MF in real-time for daily geometric mean levels, but not for individual samples. Overall, ENT(MF) was not significantly correlated with source tracking markers, with the exception of samples collected during one storm event. The inability of the regression model to predict ENT(MF) levels for individual samples is likely due to the different sources of ENT impacting the beach at any given time, making it particularly difficult to predict short-term variability of ENT(MF) from environmental parameters.
Soria-Hernández, Cintya; Serna-Saldívar, Sergio
2015-01-01
Proteins from vegetable and cereal sources are an excellent alternative to animal-based counterparts because of their reduced cost, abundant supply, and good nutritional value. The objective of this investigation was to study a set of vegetable and cereal proteins in terms of physicochemical and functional properties. Twenty protein sources were studied: five soya bean flour samples, one pea flour, and fourteen newly developed blends of soya bean and maize germ (five concentrates and nine hydrolysates). The physicochemical characterization included pH (5.63 to 7.57), electrical conductivity (1.32 to 4.32 mS/cm), protein content (20.78 to 94.24% on a dry mass basis), free amino nitrogen (0.54 to 2.87 mg/g), and urease activity (0.08 to 2.20). The functional properties showed interesting differences among proteins: the water absorption index ranged from 0.41 to 18.52, with the highest values for the soya and maize concentrates. Nitrogen and water solubility ranged from 10.14 to 74.89% and from 20.42 to 95.65%, respectively. Fat absorption and emulsification activity indices ranged from 2.59 to 4.72 and from 3936.6 to 52 399.2 m2/g, respectively, with the highest values for pea flour. Foam activity (66.7 to 475.0%) was best for the soya and maize hydrolysates. Correlation analyses showed that hydrolysis affected solubility-related parameters, whereas fat-associated indices were inversely correlated with water-linked parameters. Foam properties were better for proteins treated with low heat, which also had high urease activity. Physicochemical and functional characterization of the soya and maize protein concentrates and hydrolysates allowed the identification of differences with respect to other vegetable and cereal protein sources such as pea or soya bean. PMID:27904358
Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform
NASA Astrophysics Data System (ADS)
Wang, Y.; Ni, S.; Chen, W.
2012-12-01
The determination of earthquake source parameters is an essential problem in seismology. Accurate and timely determination of these parameters (such as moment, depth, and the strike, dip, and rake of the fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists as more detailed kinematic analyses are performed. However, some events behave very unusually and intrigue seismologists. These earthquakes usually consist of two similar-size sub-events occurring within a very short time interval, such as the mb4.5 event of Dec. 9, 2003 in Virginia. Studying these special events, including the source parameter determination of each sub-event, will be helpful for understanding earthquake dynamics. However, the seismic signals of two distinct sources are mixed together, making the inversion difficult. For ordinary events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth with a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously determine the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on a GPU-CPU combined architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia event of December 9, 2003, we re-invert for the source parameters, and detailed analysis of the regional waveforms indicates that the earthquake comprised two sub-events of Mw4.05 and Mw4.25 at the same depth of 10 km, with focal mechanism strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, requiring no human intervention.
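The core of a CAP-style inversion is a grid search over focal mechanism; the schematic sketch below uses a toy misfit function, whereas a real CAP misfit sums time-shifted, weighted body- and surface-wave windows against synthetics (and the GPU version evaluates many mechanism pairs in parallel).

```python
# Schematic grid search over (strike, dip, rake) minimizing a waveform misfit.
# The misfit below is a hypothetical stand-in with a minimum near the
# mechanism reported for the Virginia sub-events.
import numpy as np

def misfit(strike, dip, rake):
    return (strike - 65) ** 2 + 4 * (dip - 32) ** 2 + (rake - 135) ** 2

best = min(((s, d, r) for s in range(0, 360, 5)
                      for d in range(0, 91, 5)
                      for r in range(-180, 181, 5)),
           key=lambda m: misfit(*m))
print("best (strike, dip, rake):", best)
```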
Parameter optimization for surface flux transport models
NASA Astrophysics Data System (ADS)
Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.
2017-11-01
Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
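A minimal genetic-algorithm sketch of the kind of calibration described here, with a toy stand-in fitness for the weighted butterfly-diagram misfit; the parameter names, ranges, and GA settings are hypothetical.

```python
# Minimal genetic algorithm: each individual is a parameter set (e.g., flow
# speed, diffusivity, decay time) scored by a toy weighted misfit.
import numpy as np

rng = np.random.default_rng(6)
target = np.array([11.0, 450.0, 10.0])              # hypothetical "true" parameters
lo = np.array([5.0, 100.0, 1.0])
hi = np.array([20.0, 800.0, 30.0])

def fitness(p):                                     # toy stand-in for the
    return -np.sum(((p - target) / (hi - lo)) ** 2) # weighted butterfly misfit

pop = rng.uniform(lo, hi, (40, 3))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-20:]         # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b) # uniform crossover
        child += rng.normal(0, 0.02, 3) * (hi - lo) # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

print("best parameters:", pop[np.argmax([fitness(p) for p in pop])])
```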
Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles
NASA Astrophysics Data System (ADS)
Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.
2018-03-01
A new matching method is proposed for the power supply parameters of a dual energy source electric vehicle, in which the power supply consists of batteries and supercapacitors. First, the power characteristics required to meet the dynamic performance of the EV, the energy characteristics required to meet the mileage requirements, and the physical boundary characteristics required to satisfy the physical constraints of the power supply are analyzed. Secondly, a parameter matching design targeting the highest energy efficiency is adopted, and the optimal parameter group is selected using the matching-deviation method. Finally, a vehicle-level simulation is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy source system are analyzed for different parameter sets, and the rationality of the matching method is verified.
Personal sound zone reproduction with room reflections
NASA Astrophysics Data System (ADS)
Olik, Marek
Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
Extending the Lincoln-Petersen estimator for multiple identifications in one source.
Köse, T; Orman, M; Ikiz, F; Baksh, M F; Gallagher, J; Böhning, D
2014-10-30
The Lincoln-Petersen estimator is one of the most popular estimators used in capture-recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined if a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and not identified by either of the two sources, respectively. However, f00 is unobserved, so that the 2 × 2 table is incomplete and the Lincoln-Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation in which one source provides not only a binary identification outcome but also a count outcome of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln-Petersen's, in terms of bias and efficiency. It is possible to test the homogeneity assumption that is not testable in the Lincoln-Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey. Copyright © 2014 John Wiley & Sons, Ltd.
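For reference, the classical Lincoln-Petersen point estimate follows directly from the incomplete 2 × 2 table (this is the standard estimator the paper generalizes, not its truncated-Poisson extension):

```python
def lincoln_petersen(f11, f10, f01):
    """Classical Lincoln-Petersen population size estimate from an
    incomplete 2x2 identification table (f00 unobserved)."""
    n1 = f11 + f10                        # units identified by source 1
    n2 = f11 + f01                        # units identified by source 2
    N_hat = n1 * n2 / f11                 # estimated total population size
    f00_hat = N_hat - (f11 + f10 + f01)   # implied number missed by both
    return N_hat, f00_hat

# Example: 60 units seen by both, 40 only by source 1, 30 only by source 2
print(lincoln_petersen(60, 40, 30))       # N_hat = 150.0, f00_hat = 20.0
```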
PLEIADES: High Peak Brightness, Subpicosecond Thomson Hard-X-ray source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuba, J; Anderson, S G; Barty, C J
2003-12-15
The Picosecond Laser-Electron Inter-Action for the Dynamic Evaluation of Structures (PLEIADES) facility is a unique, tunable (10-200 keV), ultrafast (ps-fs), hard x-ray source that greatly extends the parameter range reached by existing 3rd-generation sources in terms of x-ray energy range, pulse duration, and peak brightness at high energies. First light was observed at 70 keV early in 2003, and the experimental data agree with 3D codes developed at LLNL. The x-rays are generated by the interaction of a 50 fs Fourier-transform-limited laser pulse produced by the TW-class FALCON CPA laser and a highly focused, relativistic (20-100 MeV), high-brightness (1 nC, 0.3-5 ps, 5 mm.mrad, 0.2% energy spread) photo-electron bunch. The resulting x-ray brightness is expected to exceed 10^20 ph/mm^2/s/mrad^2/0.1% BW. The beam is well-collimated (10 mrad divergence over the full spectrum, 1 mrad for a single color), and the source is a unique tool for time-resolved dynamic measurements in matter, including high-Z materials.
NASA Technical Reports Server (NTRS)
Han, Shin-Chan; Sauber, Jeanne; Riva, Riccardo
2011-01-01
The 2011 great Tohoku-Oki earthquake, apart from shaking the ground, perturbed the motions of satellites orbiting some hundreds of kilometers above the ground, such as GRACE, due to the coseismic change in the gravity field. Significant changes in inter-satellite distance were observed after the earthquake. These unconventional satellite measurements were inverted to examine the earthquake source processes from a radically different perspective that complements the analyses of seismic and geodetic ground recordings. We found the average slip located up-dip of the hypocenter but within the lower crust, as characterized by a limited range of bulk and shear moduli. The GRACE data constrained a group of earthquake source parameters that yield increasing dip (7-16 degrees plus or minus 2 degrees) and, simultaneously, decreasing moment magnitude (9.17-9.02 plus or minus 0.04) with increasing source depth (15-24 kilometers). The GRACE solution includes the cumulative moment released over a month and provides a unique view of the long-wavelength gravimetric response to all mass redistribution processes associated with the dynamic rupture and short-term postseismic mechanisms, improving our understanding of the physics of megathrusts.
Finite-amplitude, pulsed, ultrasonic beams
NASA Astrophysics Data System (ADS)
Coulouvrat, François; Frøysa, Kjell-Eivind
An analytical, approximate solution of the inviscid KZK equation for a nonlinear pulsed sound beam radiated by an acoustic source with a Gaussian velocity distribution is obtained by means of the renormalization method. This method involves two steps. First, the transient, weakly nonlinear field is computed. However, because of cumulative nonlinear effects, that expansion is non-uniform and breaks down at some distance from the source. So, in order to extend its validity, it is re-written in a new frame of co-ordinates, better suited to following the nonlinear distortion of the wave profile. Basically, the nonlinear coordinate transform introduces additional terms in the expansion, which are chosen so as to counterbalance the non-uniform ones. Special care is devoted to the treatment of shock waves. Finally, comparisons with the results of a finite-difference scheme turn out to be favorable, and show the efficiency of the method for a rather large range of parameters.
Direct photolysis of polycyclic aromatic hydrocarbons in drinking water sources.
Sanches, S; Leitão, C; Penetra, A; Cardoso, V V; Ferreira, E; Benoliel, M J; Crespo, M T Barreto; Pereira, V J
2011-09-15
Widely used low-pressure lamps were tested for their efficiency in degrading polycyclic aromatic hydrocarbons listed as priority pollutants by the European Water Framework Directive and the U.S. Environmental Protection Agency, in water matrices with very different compositions (laboratory-grade water, groundwater, and surface water). Using a UV fluence of 1500 mJ/cm^2, anthracene and benzo(a)pyrene were efficiently degraded, with much higher percent removals obtained in groundwater (83-93%) than in surface water (36-48%). The removal percentages obtained for fluoranthene were lower and ranged from 13 to 54% in the different water matrices tested. Several parameters that influence the direct photolysis of polycyclic aromatic hydrocarbons were determined, and their photolysis by-products were identified by mass spectrometry. The formation of photolysis by-products was found to be highly dependent on the source waters tested. Copyright © 2011 Elsevier B.V. All rights reserved.
Two dimensional radial gas flows in atmospheric pressure plasma-enhanced chemical vapor deposition
NASA Astrophysics Data System (ADS)
Kim, Gwihyun; Park, Seran; Shin, Hyunsu; Song, Seungho; Oh, Hoon-Jung; Ko, Dae Hong; Choi, Jung-Il; Baik, Seung Jae
2017-12-01
Atmospheric pressure (AP) operation of plasma-enhanced chemical vapor deposition (PECVD) is one of the promising concepts for high-quality, low-cost processing. Atmospheric plasma discharge requires a narrow-gap configuration, which gives rise to an inherent feature of AP PECVD: two-dimensional radial gas flows, which induce radial variations of mass transport and of substrate temperature. The opposing trends of these variations are a key consideration in the development of a uniform deposition process. Another inherent feature of AP PECVD is the confined plasma discharge, from which the volume power density concept is derived as a key parameter for controlling the deposition rate. We investigated the deposition rate as a function of volume power density, gas flux, source gas partial pressure, hydrogen partial pressure, plasma source frequency, and substrate temperature, and derived a design guideline for deposition tools and process development in terms of deposition rate and uniformity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.
2015-08-15
Purpose: In clinical practice, a single air kerma strength (S_K) value is used in treatment planning system (TPS) calculations for permanent brachytherapy implants with 125I and 103Pd sources; in fact, commercial TPS provide only one S_K input value for all implanted sources, and the certified shipment average is typically used. However, the value of S_K is dispersed: this dispersion is due not only to the manufacturing process and variation between source batches but also to the classification of sources into different classes according to their S_K values. The purpose of this work is to examine the impact of S_K dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both the planning target volume (PTV) and organs at risk (OARs). Methods: The authors developed a new algorithm to compute dose distributions with different S_K values for each source. Three different prostate volumes (20, 30, and 40 cm^3) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for 125I sources; for the palladium, typical implants were simulated. To assess the many different possible S_K values for each source belonging to a class, the authors assigned an S_K value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions, comparing with dose distributions obtained using a uniform S_K value for all the implanted sources. The authors analyzed several dose coverage (V100 and D90) and overdosage parameters for the prostate and PTV, and also the limiting and overdosage parameters for the OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries. PTV and prostate V100 and D90 variations ranged between 0.2% and 1.78% for both sources. Variations in the overdosage parameters V150 and V200 compared to the dose coverage parameters were observed and, in general, variations were larger for parameters related to 125I sources than to 103Pd sources. For OAR dosimetry, variations with respect to the reference D_0.1cm3 were observed for rectum values, ranging from 2% to 3%, compared with urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for the prostate and PTV was practically unaffected by S_K dispersion, as was the maximum dose deposited in the urethra, due to the implant technique geometry. However, the authors observed larger variations for the PTV V150, rectum V100, and rectum D_0.1cm3 values. The variations in rectum parameters were caused by the specific location of sources whose S_K value differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for 125I than for 103Pd. This is because for 103Pd a greater number of sources were used to obtain a valid dose distribution than for 125I, resulting in a lower variation for each S_K value for each source (because the variations average out statistically).
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action by first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of AMIS provides better sampling efficiency by reusing all the generated samples.
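The AMIS machinery (adaptive proposals, sample recycling) is beyond an abstract, but the underlying importance-sampling identity is easy to sketch: weight candidate source locations by likelihood × prior over the proposal density, then self-normalize. A toy static version with a Gaussian measurement model (all names hypothetical; `predict` is a forward dispersion model mapping a location to predicted sensor concentrations):

```python
import numpy as np

def log_likelihood(theta, obs, predict, sigma):
    """Gaussian measurement model: obs ~ N(predict(theta), sigma^2)."""
    resid = obs - predict(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def importance_posterior(obs, predict, prior_logpdf, proposal,
                         n=5000, sigma=1.0):
    """Self-normalized importance sampling estimate of the posterior
    mean of a 2D source-location parameter theta.

    proposal: a frozen distribution with .rvs and .logpdf, e.g.
    scipy.stats.multivariate_normal(mean=[0, 0], cov=1e4 * np.eye(2)).
    """
    thetas = proposal.rvs(n)                       # candidate locations
    logw = np.array([log_likelihood(t, obs, predict, sigma)
                     + prior_logpdf(t) - proposal.logpdf(t)
                     for t in thetas])
    w = np.exp(logw - logw.max())                  # stabilized weights
    w /= w.sum()
    return (w[:, None] * thetas).sum(axis=0)       # posterior mean location
```

AMIS iterates this, re-fitting the proposal to the weighted samples and re-weighting all past draws, which is the recycling the abstract refers to.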
Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.
Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C
2017-08-15
To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
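The closed-form targeting expression is derived in the paper from the reciprocity formalism; purely as a generic illustration, montage design of this kind can be posed as ridge-regularized least squares over zero-sum electrode currents. A sketch (the lead-field matrix A, target field e_target, and regularization lam are assumed inputs; this is not the authors' exact formula):

```python
import numpy as np

def tes_montage(A, e_target, lam=1e-2):
    """Electrode currents I minimizing ||A I - e_target||^2 + lam ||I||^2,
    with sum(I) = 0 (Kirchhoff constraint) enforced by projecting onto
    the zero-mean subspace.

    A: (3 x E) lead field at the target location (field per unit current)
    e_target: (3,) desired field vector at the target
    """
    E = A.shape[1]
    P = np.eye(E) - np.ones((E, E)) / E      # projector enforcing sum(I) = 0
    Ap = A @ P
    I = P @ np.linalg.solve(Ap.T @ Ap + lam * np.eye(E), Ap.T @ e_target)
    return I                                  # injected current per electrode
```

In practice one would add per-electrode and total current bounds, which is the constraint the abstract credits with automatically selecting sparse 4-7 electrode montages.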
Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao
2017-10-01
A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using the pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake
NASA Astrophysics Data System (ADS)
Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano
2016-04-01
On May 11, 2011, a moderate seismic event (Mw = 5.2) struck the city of Lorca (southeast Spain), causing nine casualties, a large number of injured people, and damage to civil buildings. The largest PGA value (360 cm/s^2) ever recorded in Spain was observed at the accelerometric station located in Lorca (LOR), and it was explained as due to source directivity rather than to local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of an average earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration and, as input, four different source models taken from the literature. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground motion variability, in terms of pseudo-spectral velocity (1 s), can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we computed the strong motion at frequencies higher than 1 Hz using Empirical Green Functions and the source model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms fit satisfactorily the signals recorded at the LOR station as well as at the other stations close to the source.
Krall, J. R.; Hackstadt, A. J.; Peng, R. D.
2017-01-01
Exposure to particulate matter (PM) air pollution has been associated with a range of adverse health outcomes, including cardiovascular disease (CVD) hospitalizations and other clinical parameters. Determining which sources of PM, such as traffic or industry, are most associated with adverse health outcomes could help guide future recommendations aimed at reducing harmful pollution exposure for susceptible individuals. Information obtained from multisite studies, which is generally more precise than information from a single location, is critical to understanding how PM impacts health and to informing local strategies for reducing individual-level PM exposure. However, few methods exist to perform multisite studies of PM sources, which are not generally directly observed, and adverse health outcomes. We developed SHARE, a hierarchical modeling approach that facilitates reproducible, multisite epidemiologic studies of PM sources. SHARE is a two-stage approach that first summarizes information about PM sources across multiple sites. Then, this information is used to determine how community-level (i.e. county- or city-level) health effects of PM sources should be pooled to estimate regional-level health effects. SHARE is a type of population value decomposition that aims to separate out regional-level features from site-level data. Unlike previous approaches for multisite epidemiologic studies of PM sources, the SHARE approach allows the specific PM sources identified to vary by site. Using data from 2000–2010 for 63 northeastern US counties, we estimated regional-level health effects associated with short-term exposure to major types of PM sources. We found PM from secondary sulfate, traffic, and metals sources was most associated with CVD hospitalizations. PMID:28098412
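The abstract does not spell out the pooling step; a standard baseline for combining county-level effect estimates into a regional effect is inverse-variance (random-effects) weighting, sketched below. SHARE itself is a hierarchical model that also matches sources across sites, so treat this only as the flavour of the second stage:

```python
import numpy as np

def pooled_effect(beta, se, tau2=0.0):
    """Inverse-variance weighted pooling of site-level health effect
    estimates beta with standard errors se; tau2 is an (assumed known)
    between-site variance for a random-effects version."""
    w = 1.0 / (se ** 2 + tau2)
    beta_pooled = np.sum(w * beta) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return beta_pooled, se_pooled

# Example: three county-level excess-risk estimates (% per IQR of source PM)
print(pooled_effect(np.array([0.8, 1.2, 0.5]), np.array([0.4, 0.6, 0.5])))
```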
NASA Astrophysics Data System (ADS)
Kumar, J.; Jain, A.; Srivastava, R.
2005-12-01
The identification of pollution sources in aquifers is an important area of research, not only for hydrologists but also for local and federal agencies and defense organizations. Once data in the form of pollutant concentration measurements at observation wells become available, it is important to identify the polluting industry in order to implement punitive or remedial measures. Traditionally, hydrologists have relied on conceptual methods for the identification of groundwater pollution sources. Identifying groundwater pollution sources with conceptual methods requires a thorough understanding of groundwater flow and contaminant transport processes, and inverse modeling procedures that are highly complex and difficult to implement. Recently, soft computing techniques, such as artificial neural networks (ANNs) and genetic algorithms, have provided an attractive and easy-to-implement alternative for solving complex problems efficiently, and some researchers have used ANNs for the identification of pollution sources in aquifers. A major problem with most previous studies using ANNs has been the large size of the neural networks needed to model the inverse problem. The breakthrough curves at an observation well may consist of hundreds of concentration measurements, and presenting all of them to the input layer of an ANN not only results in very large networks but also requires large amounts of training and testing data to develop the ANN models. This paper presents the results of a study aimed at using certain characteristics of the breakthrough curves and ANNs to determine the distance of the pollution source from a given observation well. Two different neural network models are developed that differ in the manner of characterizing the breakthrough curves. The first ANN model uses five parameters, similar to the synthetic unit hydrograph parameters, to characterize the breakthrough curves: the peak concentration, the time to peak concentration, the widths of the breakthrough curve at 50% and 75% of the peak concentration, and the time base of the breakthrough curve. The second ANN model employs only the first four parameters, leaving out the time base. The measurement of a breakthrough curve at an observation well involves very high costs in sample collection at suitable time intervals and in analysis for various contaminants; the receding portions of the breakthrough curves are normally very long, and excluding the time base from modeling would result in considerable cost savings. Feed-forward multi-layer perceptron (MLP) neural networks trained using the back-propagation algorithm are employed in this study. The ANN models for the two approaches were developed using simulated data generated for conservative pollutant transport through a homogeneous aquifer. A new approach for ANN training using back-propagation is employed that considers two different error statistics to prevent over-training and under-training of the ANNs. The preliminary results indicate that the ANNs are able to identify the location of the pollution source very efficiently with both methods of breakthrough curve characterization.
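The five-parameter characterization is concrete enough to sketch. Assuming a sampled concentration series, the features are the peak concentration, time to peak, curve widths at 50% and 75% of peak, and the time base (a hypothetical helper, not the authors' code; the 1% cutoff defining the time base is an assumption):

```python
import numpy as np

def btc_features(t, c):
    """Five synthetic-unit-hydrograph-like features of a breakthrough curve.

    t: sample times, c: measured concentrations (same length).
    Widths are measured between the outermost crossings of the given
    fraction of the peak concentration.
    """
    ipk = int(np.argmax(c))
    c_peak, t_peak = c[ipk], t[ipk]

    def width_at(frac):
        above = np.where(c >= frac * c_peak)[0]
        return t[above[-1]] - t[above[0]]

    nonzero = np.where(c > 0.01 * c_peak)[0]      # assumed 1% cutoff
    time_base = t[nonzero[-1]] - t[nonzero[0]]
    return c_peak, t_peak, width_at(0.5), width_at(0.75), time_base
```

The first model would feed all five values to the MLP input layer; the second drops `time_base`.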
Lo, Kam W
2017-03-01
When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
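The retardation-effect geometry admits a compact measurement model: for level flight at speed v and altitude h, with closest point of approach at time t_c, the frequency received at time t is the source tone Doppler-shifted by the range rate at the emission time. A sketch under those assumptions (parameter names hypothetical; assumes a subsonic source so the fixed-point iteration converges):

```python
import numpy as np

def received_frequency(t_rx, f0, v, h, t_c, c=343.0):
    """Instantaneous frequency at a ground sensor from a constant-speed,
    constant-altitude source emitting a tone f0, including retardation.

    For each reception time t, solve t = tau + R(tau)/c for the emission
    time tau, then apply the Doppler factor f = f0 / (1 + Rdot/c).
    """
    f = np.empty_like(np.asarray(t_rx, dtype=float))
    for i, t in enumerate(t_rx):
        tau = t - h / c                          # initial guess
        for _ in range(50):                      # fixed-point iteration
            x = v * (tau - t_c)                  # along-track position
            R = np.hypot(x, h)                   # slant range at emission
            tau = t - R / c
        x = v * (tau - t_c)
        R = np.hypot(x, h)
        Rdot = v * x / R                         # range rate at emission
        f[i] = f0 / (1.0 + Rdot / c)
    return f
```

Fitting this curve (jointly with the DOA model) to measured IF tracks is the kind of estimation the paper's two algorithms perform.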
The HelCat dual-source plasma device.
Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue
2009-10-01
The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n_e ≈ 0.5-50 × 10^18 m^-3 and T_e ≈ 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.
Radiological analysis of plutonium glass batches with natural/enriched boron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainisch, R.
2000-06-22
The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second) and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) of using enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (alpha, n) reactions. The americium-241 and plutonium present in the glass emit alpha particles, which interact with low-Z nuclides like B-11, B-10, and O-17 in the glass to produce neutrons. The low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt% B2O3. Boron-11 was found to strongly support the (alpha, n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (alpha, n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96 wt% B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows, and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.
NASA Astrophysics Data System (ADS)
Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa
2018-05-01
The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm and large reservoirs and irrigation were included. For the Shashe sub-basin incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.
Baghani, Hamid Reza; Lohrabian, Vahid; Aghamiri, Mahmoud Reza; Robatjazi, Mostafa
2016-03-01
125I is one of the important sources frequently used in brachytherapy. Up to now, several different commercial models of this source type have been introduced into clinical radiation oncology. Recently, a new source model, IrSeed-125, has been added to this list. The aim of the present study is to determine the dosimetric parameters of this new source model based on the recommendations of the TG-43 (U1) protocol using Monte Carlo simulation. The dosimetric characteristics of IrSeed-125, including the dose rate constant, radial dose function, 2D anisotropy function and 1D anisotropy function, were determined inside liquid water using the MCNPX code and compared to those of other commercially available iodine sources. The dose rate constant of this new source was found to be 0.983 ± 0.015 cGy h^-1 U^-1, in good agreement with the TLD-measured value (0.965 cGy h^-1 U^-1). The 1D anisotropy function at 3, 5, and 7 cm radial distances was obtained as 0.954, 0.953 and 0.959, respectively. The results of this study show that the dosimetric characteristics of this new brachytherapy source are comparable with those of other commercially available sources. Furthermore, the simulated parameters were in accordance with the previously measured ones. Therefore, the Monte Carlo calculated dosimetric parameters could be employed to obtain the dose distribution around this new brachytherapy source based on the TG-43 (U1) protocol.
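For context, the quantities reported (dose rate constant, radial dose function, anisotropy functions) enter the standard TG-43(U1) 2D dose-rate equation; this is the protocol's general formalism, not anything specific to IrSeed-125:

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1~\mathrm{cm},\quad \theta_0 = \pi/2,
```

where S_K is the air kerma strength, Λ the dose rate constant, G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function.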
The effect of directivity in a PSHA framework
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2012-09-01
We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented towards the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. The first considers the seismic source as an extended source, and is valid when the PSHA seismogenetic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent in the hazard level in the case of dip-slip faults with a uniform distribution of hypocentre locations, in terms of 5 s spectral acceleration response with a 10 per cent probability of exceedance in 50 yr. The second concerns the more general problem of seismogenetic areas, where each point is a seismogenetic source having the same chance of nucleating a seismic event. In our proposition the point source is associated with rupture-related parameters defined using a statistical description. As an example, we consider a source point of an area characterized by a strike-slip faulting style. With the introduction of the directivity correction, the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not increase the hazard level uniformly, but acts more like a redistribution of the estimate that is consistent with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip and orientation of the faults associated with the majority of the seismogenetic zones in present seismic hazard maps. The percentage of variation obtained depends strongly on the model chosen to represent the directivity effect analytically. We therefore emphasize the methodology, by which all the collected information may be converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.
Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta
2017-02-01
The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for the analysis of a large number of samples. A systematic approach, Design of Experiments, was applied to optimize ESI source parameters and to evaluate method robustness; a rapid, stable and cost-effective assay was thereby developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range (5-2500 pg/ml; R^2 > 0.98). The accuracies and intra- and interday precisions were less than 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. The Design of Experiments approach allowed fast and efficient analytical method development and validation, as well as reduced usage of the chemicals necessary for regular method optimization. The proposed technique was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
System calibration method for Fourier ptychographic microscopy.
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Utilization of GPS Tropospheric Delays for Climate Research
NASA Astrophysics Data System (ADS)
Suparta, Wayan
2017-05-01
The tropospheric delay is one of the main error sources in Global Positioning System (GPS) positioning, and its impact plays a crucial role in near real-time weather forecasting. Accessibility and accurate estimation of this parameter are essential for weather and climate research. Advances in GPS applications have allowed the measurement of zenith tropospheric delay (ZTD) in all weather conditions and on a global scale with fine temporal and spatial resolution. In addition, with the rapid advancement of GPS technology and informatics and the development of research in Earth and planetary sciences, GPS data have become available free of charge; the required processing techniques, while sophisticated, have become user friendly. On the other hand, the ZTD parameter obtained from models or measurements needs to be converted into precipitable water vapor (PWV) to make it more useful as a component of weather forecasting and of the analysis of atmospheric hazards such as tropical storms, flash floods, landslides, pollution, and earthquakes, as well as for climate change studies. This paper addresses the determination of ZTD as a signal error or delay source during propagation from the satellite to a receiver on the ground, a key driving force behind atmospheric events. Some results in terms of ZTD and PWV are highlighted in this paper.
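The ZTD-to-PWV conversion mentioned here is standard: subtract a hydrostatic delay model to obtain the wet delay, then scale by a factor depending on the weighted mean temperature Tm. A sketch using the commonly cited Saastamoinen and Bevis formulations (constants taken from that literature; verify against the paper's chosen values before use):

```python
import numpy as np

def ztd_to_pwv(ztd_m, pressure_hpa, temp_k, lat_deg, height_m):
    """Convert GPS zenith total delay (m) to precipitable water vapor (mm).

    ZHD from the Saastamoinen model; weighted mean temperature Tm from
    the Bevis (1992) approximation; dimensionless factor Pi after Bevis.
    """
    # Zenith hydrostatic delay (m), Saastamoinen model
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * np.radians(lat_deg)) - 2.8e-7 * height_m)
    zwd = ztd_m - zhd                       # zenith wet delay (m)

    tm = 70.2 + 0.72 * temp_k               # weighted mean temperature (K)
    k2p = 0.221                             # K/Pa   (22.1 K/hPa)
    k3 = 3739.0                             # K^2/Pa (3.739e5 K^2/hPa)
    rho_w, r_v = 1000.0, 461.5              # kg/m^3, J/(kg K)
    pi_factor = 1e6 / (rho_w * r_v * (k2p + k3 / tm))   # ~0.15
    return pi_factor * zwd * 1000.0         # PWV in mm
```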
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution for drawdown in each zone is expressed as a series in terms of the Theis equation with the unknown volumetric rates from the source points. The rates are then determined based on the aquifer boundary conditions and the continuity requirements. The estimated aquifer drawdown from the present approach agrees well with a finite element solution developed based on the Mathematica function NDSolve. Compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and therefore takes much less computing time to obtain the required results in engineering applications.
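The building block of the series solution is the Theis equation, and drawdown from multiple source points superposes linearly. A sketch of the kernel and the superposition (not the paper's full boundary-matching algorithm):

```python
import numpy as np
from scipy.special import exp1   # exponential integral E1 = Theis well function

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s(r, t) of a fully penetrating well in a confined aquifer.

    Q: pumping rate [m^3/s], T: transmissivity [m^2/s],
    S: storativity [-], r: radial distance [m], t: time [s].
    """
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

def total_drawdown(x, y, t, wells, T, S):
    """Superpose Theis solutions for several source points,
    given as (xw, yw, Qw) tuples."""
    return sum(theis_drawdown(Qw, T, S, np.hypot(x - xw, y - yw), t)
               for xw, yw, Qw in wells)
```

In the paper's scheme the rates Qw of the boundary source points are the unknowns solved for from the boundary and interface conditions.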
Transition to a Source with Modified Physical Parameters by Energy Supply or Using an External Force
NASA Astrophysics Data System (ADS)
Kucherov, A. N.
2017-11-01
The possibility of changing the physical parameters of a source/sink, i.e., the enthalpy, temperature, total pressure, maximum velocity, and minimum dimension, at a constant radial Mach number, by energy or force action on the gas in a bounded zone has been studied. It is shown that the parameters can be controlled at subsonic, supersonic, and transonic (sonic in the limit) radial Mach numbers. In the updated source/sink, all versions of a vortex-source combination can be implemented: into a vacuum, out of a vacuum, into a submerged space, and out of a submerged space, partially or fully.
Comparative Study of Light Sources for Household
NASA Astrophysics Data System (ADS)
Pawlak, Andrzej; Zalesińska, Małgorzata
2017-03-01
The article describes test results that provided the basis for determining and evaluating the fundamental photometric, colorimetric and electric parameters of selected, widely available light sources that are equivalent to a traditional 60 W incandescent light bulb. Overall, one halogen light bulb, three compact fluorescent lamps and eleven LED light sources were tested. In most cases (branded products, in particular), the measured and calculated parameters differed from the values declared by the manufacturers only to a small degree. LED sources proved to be the most beneficial substitute for traditional light bulbs, considering both their operational parameters and their price, which is comparable to that of compact fluorescent lamps or, in some instances, even lower.
Voit, E O; Knapp, R G
1997-08-15
The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.
Accuracy of assessing the level of impulse sound from distant sources.
Wszołek, Tadeusz; Kłaczyński, Maciej
2007-01-01
Impulse sound events are characterised by very high pressures and low frequencies. Lower-frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies. Thus, impulse sounds can be heard over greater distances and are more affected by the environment. To calculate a long-term average immission level it is necessary to apply weighting factors, such as the probability of the occurrence of each weather condition during the relevant time period. This means that, when measuring impulse noise at a long distance, it is necessary to monitor environmental parameters at many points along the propagation path, and also to maintain a long-term database of sound transfer functions. The paper analyses the uncertainty of immission measurement results for impulse sound from the cladding and destruction of explosive materials. The influence of environmental conditions on sound propagation is the focus of this paper.
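The weighting described is an energy average over weather classes; in level terms it takes the following generic form, consistent with long-term rating practice (not the paper's specific procedure):

```python
import numpy as np

def long_term_level(levels_db, probabilities):
    """Long-term average immission level from per-weather-class levels
    L_i (dB) and their occurrence probabilities p_i (summing to 1):
    L_LT = 10 log10( sum_i p_i * 10^(L_i / 10) ).
    """
    p = np.asarray(probabilities, dtype=float)
    L = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.sum(p * 10.0 ** (L / 10.0)))

# Example: favourable propagation (60 dB, 30% of the time) vs unfavourable
print(long_term_level([60.0, 48.0], [0.3, 0.7]))   # ~55.4 dB
```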
Cosmological implications of scalar field dark energy models in f(T,𝒯 ) gravity
NASA Astrophysics Data System (ADS)
Salako, Ines G.; Jawad, Abdul; Moradpour, Hooman
After reviewing f(T,𝒯 ) gravity, in which T is the torsion scalar and 𝒯 is the trace of the energy-momentum tensor, we refer to two cosmological models of this theory that are in agreement with observational data. Thereafter, we consider a flat Friedmann-Robertson-Walker (FRW) universe filled by a pressureless source and treat the terms other than the Einstein terms in the corresponding Friedmann equations as the dark energy (DE) candidate. In addition, some cosmological features of the models, including the equation-of-state and deceleration parameters, are addressed, which help us obtain the accelerated expansion of the universe in the quintessence era. Finally, we extract the scalar field as well as the potential of the quintessence, tachyon, K-essence and dilatonic fields for both f(T,𝒯 ) models. It is observed that the dynamics of the scalar fields as well as the scalar potentials of these models indicate an accelerated expanding universe.
Long-term transport behavior of psychoactive compounds in sewage-affected groundwater
NASA Astrophysics Data System (ADS)
Nham, Hang Thuy Thi; Greskowiak, Janek; Hamann, Enrico; Meffe, Raffaella; Hass, Ulrike; Massmann, Gudrun
2016-11-01
The present study provides a model-based characterization of the long-term transport behavior of five psychoactive compounds (meprobamate, pyrithyldione, primidone, phenobarbital and phenylethylmalonamide) introduced into groundwater via sewage irrigation in Berlin, Germany. Compounds are still present in the groundwater despite the sewage farm closure in the year 1980. Due to the limited information on (i) compound concentrations in the source water and (ii) substance properties, a total of 180 cross-sectional model realizations for each compound were carried out, covering a large range of possible parameter combinations. Results were compared with the present-day contamination patterns in the aquifer and the most likely scenarios were identified based on a number of model performance criteria. The simulation results show that (i) compounds are highly persistent under the present field conditions, and (ii) sorption is insignificant. Thus, back-diffusion from low permeability zones appears as the main reason for the compound retardation.
The Certainty of Uncertainty: Potential Sources of Bias and Imprecision in Disease Ecology Studies.
Lachish, Shelly; Murray, Kris A
2018-01-01
Wildlife diseases have important implications for wildlife and human health, the preservation of biodiversity and the resilience of ecosystems. However, understanding disease dynamics and the impacts of pathogens in wild populations is challenging because these complex systems can rarely, if ever, be observed without error. Uncertainty in disease ecology studies is commonly defined in terms of either heterogeneity in detectability (due to variation in the probability of encountering, capturing, or detecting individuals in their natural habitat) or uncertainty in disease state assignment (due to misclassification errors or incomplete information). In reality, however, uncertainty in disease ecology studies extends beyond these components of observation error and can arise from multiple varied processes, each of which can lead to bias and a lack of precision in parameter estimates. Here, we present an inventory of the sources of potential uncertainty in studies that attempt to quantify disease-relevant parameters from wild populations (e.g., prevalence, incidence, transmission rates, force of infection, risk of infection, persistence times, and disease-induced impacts). We show that uncertainty can arise via processes pertaining to aspects of the disease system, the study design, the methods used to study the system, and the state of knowledge of the system, and that uncertainties generated via one process can propagate through to others because of interactions between the numerous biological, methodological and environmental factors at play. We show that many of these sources of uncertainty may not be immediately apparent to researchers (for example, unidentified crypticity among vectors, hosts or pathogens, a mismatch between the temporal scale of sampling and disease dynamics, demographic or social misclassification), and thus have received comparatively little consideration in the literature to date. Finally, we discuss the type of bias or imprecision introduced by these varied sources of uncertainty and briefly present appropriate sampling and analytical methods to account for, or minimise, their influence on estimates of disease-relevant parameters. This review should assist researchers and practitioners to navigate the pitfalls of uncertainty in wildlife disease ecology studies.
Measurement of erosion in helicon plasma thrusters using the VASIMR® VX-CR device
NASA Astrophysics Data System (ADS)
Del Valle Gamboa, Juan Ignacio; Castro-Nieto, Jose; Squire, Jared; Carter, Mark; Chang-Diaz, Franklin
2015-09-01
The helicon plasma source is one of the principal stages of the high-power VASIMR® electric propulsion system. The VASIMR® VX-CR experiment focuses solely on this stage, exploring the erosion and long-term operation effects of the VASIMR helicon source. We report on the design and operational parameters of the VX-CR experiment, and the development of modeling tools and characterization techniques allowing the study of erosion phenomena in helicon plasma sources in general, and stand-alone helicon plasma thrusters (HPTs) in particular. A thorough understanding of the erosion phenomena within HPTs will enable better predictions of their behavior as well as more accurate estimations of their expected lifetime. We present a simplified model of the plasma-wall interactions within HPTs based on current models of the plasma density distributions in helicon discharges. Results from this modeling tool are used to predict the erosion within the plasma-facing components of the VX-CR device. Experimental techniques to measure actual erosion, including the use of coordinate-measuring machines and microscopy, will be discussed.
Assessment of Noise and Associated Health Impacts at Selected Secondary Schools in Ibadan, Nigeria
Ana, Godson R. E. E.; Shendell, Derek G.; Brown, G. E.; Sridhar, M. K. C.
2009-01-01
Background. Most schools in Ibadan, Nigeria, are located near major roads (mobile line sources). We conducted an initial assessment of noise levels and adverse noise-related health and learning effects. Methods. For this descriptive, cross-sectional study, four schools were selected randomly from the eight participating in the overall project. We administered 200 questionnaires, 50 per school, assessing health and learning-related outcomes. Noise levels (A-weighted decibels, dBA) were measured with calibrated sound level meters. Traffic density was assessed for the school with the highest measured dBA. Observational checklists assessed noise control parameters and the physical attributes of the buildings. Results. Short-term, cross-sectional school-day noise levels ranged from 68.3 to 84.7 dBA. Over 60% of respondents reported that vehicular traffic was the major source of noise, and over 70% complained of being disturbed by noise. Three schools reported tiredness, and one school lack of concentration, as the most prevalent noise-related health problems. Conclusion. Secondary school occupants in Ibadan, Nigeria, were potentially affected by exposure to noise from mobile line sources. PMID:20041025
Gardiner, James; Gunarathne, Nuwan; Howard, David; Kenney, Laurence
2016-01-01
Collecting large datasets of amputee gait data is notoriously difficult. Additionally, collecting data on less prevalent amputations or on gait activities other than level walking and running on hard surfaces is rarely attempted. However, with the wealth of user-generated content on the Internet, the scope for collecting amputee gait data from alternative sources other than traditional gait labs is intriguing. Here we investigate the potential of YouTube videos to provide gait data on amputee walking. We use an example dataset of trans-femoral amputees level walking at self-selected speeds to collect temporal gait parameters and calculate gait asymmetry. We compare our YouTube data with typical literature values, and show that our methodology produces results that are highly comparable to data collected in a traditional manner. The similarity between the results of our novel methodology and literature values lends confidence to our technique. Nevertheless, clear challenges with the collection and interpretation of crowd-sourced gait data remain, including long term access to datasets, and a lack of validity and reliability studies in this area.
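Temporal gait parameters extracted from video support a simple asymmetry measure; a common choice is a Robinson-style symmetry index comparing the two limbs (this particular formula is an assumption, as the abstract does not specify its asymmetry measure):

```python
def symmetry_index(x_intact, x_prosthetic):
    """Symmetry index (%) between two temporal gait parameters,
    e.g. mean step times of the intact and prosthetic limbs.
    0% means perfect symmetry."""
    return 100.0 * abs(x_intact - x_prosthetic) / \
        (0.5 * (x_intact + x_prosthetic))

# Example: step times (s) estimated from video frame counts
print(symmetry_index(0.62, 0.71))   # ~13.5% asymmetry
```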
DC and analog/RF performance optimisation of source pocket dual work function TFET
NASA Astrophysics Data System (ADS)
Raad, Bhagwan Ram; Sharma, Dheeraj; Kondekar, Pravin; Nigam, Kaushal; Baronia, Sagar
2017-12-01
We present a systematic study of a source pocket tunnel field-effect transistor (SP TFET) with a dual work function of a single gate material, using uniform and Gaussian doping profiles in the drain region, for ultra-low-power, high-frequency, high-speed applications. For this, an n+ doped region is created near the source/channel junction to decrease the depletion width, resulting in an improved ON-state current. The dual work function of the double gate is used to enhance the device performance in terms of DC and analog/RF parameters. Further, to improve the high-frequency performance of the device, a Gaussian doping profile is considered in the drain region with different characteristic lengths, which decreases the gate-to-drain capacitance and leads to a drastic improvement in analog/RF figures of merit. Furthermore, the optimisation is performed for different concentrations of the uniform and Gaussian drain doping profiles and for various sectional lengths of the lower work function of the gate electrode. Finally, the effect of temperature variation on the device performance is demonstrated.
A source to deliver mesoscopic particles for laser plasma studies
NASA Astrophysics Data System (ADS)
Gopal, R.; Kumar, R.; Anand, M.; Kulkarni, A.; Singh, D. P.; Krishnan, S. R.; Sharma, V.; Krishnamurthy, M.
2017-02-01
Intense ultrashort-laser-produced plasmas are a source of high-brightness, short bursts of X-rays, electrons, and high-energy ions. Laser energy absorption and its disbursement strongly depend on the laser parameters and also on the initial size and shape of the target. The ability to change the shape, size, and material composition of the matter that absorbs light is of paramount importance not only from a fundamental physics point of view but also for potentially developing laser plasma sources tailored for specific applications. The idea of preparing mesoscopic particles of desired size/shape and suspending them in vacuum for laser plasma acceleration is a sparsely explored domain. In this report we outline the development of a mechanism for delivering microparticles into an effusive jet in vacuum for laser plasma studies. We characterise the device in terms of particle density, particle size distribution, and duration of operation under conditions suitable for laser plasma studies. We also present the first results of x-ray emission from microcrystals of boric acid, which extends to 100 keV even at relatively mild intensities of 10^16 W/cm^2.
Unrecognized astrometric confusion in the Galactic Centre
NASA Astrophysics Data System (ADS)
Plewa, P. M.; Sari, R.
2018-06-01
The Galactic Centre is a crowded stellar field and frequent unrecognized events of source confusion, which involve undetected faint stars, are expected to introduce astrometric noise on a sub-mas level. This confusion noise is the main non-instrumental effect limiting the astrometric accuracy and precision of current near-infrared imaging observations and the long-term monitoring of individual stellar orbits in the vicinity of the central supermassive black hole. We self-consistently simulate the motions of the known and the yet unidentified stars to characterize this noise component and show that a likely consequence of source confusion is a bias in estimates of the stellar orbital elements, as well as the inferred mass and distance of the black hole, in particular if stars are being observed at small projected separations from it, such as the star S2 during pericentre passage. Furthermore, we investigate modelling the effect of source confusion as an additional noise component that is time-correlated, demonstrating a need for improved noise models to obtain trustworthy estimates of the parameters of interest (and their uncertainties) in future astrometric studies.
Understanding the dust cycle at high latitudes: integrating models and observations
NASA Astrophysics Data System (ADS)
Albani, S.; Mahowald, N. M.; Maggi, V.; Delmonte, B.; Winckler, G.; Potenza, M. A. C.; Baccolo, G.; Balkanski, Y.
2017-12-01
Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. Paleodust archives from land, ocean, and ice sheets preserve the history of dust deposition for a range of spatial scales, from close to the major hemispheric sources to remote sinks such as the polar ice sheets. In each hemisphere, common features on the glacial-interglacial time scale mark the baseline evolution of the dust cycle, and inspired the hypothesis that increased dust deposition to the ocean stimulated the glacial biological pump, contributing to the reduction of atmospheric carbon dioxide levels. On the other hand, features at finer geographical and temporal scales are superimposed on these glacial-interglacial trends, offering the chance of a more sophisticated understanding of the dust cycle, for instance allowing distinctions in terms of source availability or transport patterns as recorded by different records. As such, paleodust archives can prove invaluable sources of information, especially when characterized by a quantitative estimation of the mass accumulation rates, and interpreted in connection with climate models. We review our past work and present ongoing research showing how climate models can help in the interpretation of paleodust records, as well as the potential of the same observations for constraining the representation of the global dust cycle embedded in Earth System Models, both in terms of magnitude and of physical parameters related to particle sizes and optical properties. Finally, we show the impacts on climate, based on this kind of observationally constrained model simulations.
Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska
Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.
2016-01-01
Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground-coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope-based techniques in our analyses. For distant sources and planar waves, we use f-k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal-to-noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions from Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short-term average/long-term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013-2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggest that high-resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult-to-monitor regions. We have implemented these GCA detection algorithms into our operational volcano-monitoring algorithms at the Alaska Volcano Observatory.
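A minimal sketch of envelope-based STA/LTA detection in the spirit of the approach above; the synthetic trace, window lengths, and trigger threshold are assumptions, and the study's modified envelope function is not reproduced here.

```python
# Hedged sketch: short-term/long-term average ratio computed on a waveform
# envelope. Sampling rate, windows, and the synthetic "airwave" are assumed.
import numpy as np
from scipy.signal import hilbert

fs = 50.0                                   # sampling rate [Hz], assumed
t = np.arange(0, 120, 1 / fs)
trace = 0.1 * np.random.randn(t.size)       # background noise
trace[3000:3500] += 2.0 * np.sin(2 * np.pi * 5 * t[3000:3500])  # "airwave"

envelope = np.abs(hilbert(trace))           # waveform envelope

def sta_lta(env, n_sta, n_lta):
    """Classic STA/LTA ratio evaluated on an envelope."""
    sta = np.convolve(env, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(env, np.ones(n_lta) / n_lta, mode="same")
    return sta / np.maximum(lta, 1e-12)

ratio = sta_lta(envelope, n_sta=int(1 * fs), n_lta=int(20 * fs))
print("first detection samples:", np.flatnonzero(ratio > 4.0)[:5])
```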
Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei
2007-10-01
Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity
NASA Technical Reports Server (NTRS)
Lin, J. Y.; Mingori, D. L.
1992-01-01
We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.
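The following sketch illustrates the basic trade-off mechanism described above: an LQR state weight augmented by a term aligned with an assumed parameter-error direction, with a scalar knob trading robustness against nominal performance. The plant matrices, the error direction E, and the weight sizes are illustrative, not the paper's.

```python
# Hedged sketch: augmenting the LQR state weight with a rank-one term E E^T
# reflecting an assumed parameter-error structure, scaled by rho.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative plant
B = np.array([[0.0], [1.0]])
E = np.array([[0.0], [1.0]])               # assumed parameter-error direction

def lqr_gain(A, B, Q, R):
    """Solve the continuous algebraic Riccati equation and return K."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

Q0, R0 = np.eye(2), np.array([[1.0]])
for rho in (0.0, 1.0, 10.0):               # size of the robustness term
    K = lqr_gain(A, B, Q0 + rho * (E @ E.T), R0)
    print(f"rho={rho:5.1f}  K={K.ravel()}")
```

Increasing rho drives the gain up along the assumed error direction, mimicking the high-gain branch of the trade-off; shrinking it recovers the nominal design.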
Quantitative determination of vinpocetine in dietary supplements
French, John M. T.; King, Matthew D.
2017-01-01
Current United States regulatory policies allow for the addition of pharmacologically active substances in dietary supplements if derived from a botanical source. The inclusion of certain nootropic drugs, such as vinpocetine, in dietary supplements has recently come under scrutiny due to the lack of defined dosage parameters and yet unproven short- and long-term benefits and risks to human health. This study quantified the concentration of vinpocetine in several commercially available dietary supplements and found that a highly variable range of 0.6–5.1 mg/serving was present across the tested products, with most products providing no specification of vinpocetine concentrations. PMID:27319129
DOE Office of Scientific and Technical Information (OSTI.GOV)
David, M.-L., E-mail: marie-laure.david@univ-poitiers.fr; Pailloux, F.; Canadian Centre for Electron Microscopy, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4M1
We demonstrate that the helium density and corresponding pressure can be modified in single nano-scale bubbles embedded in semiconductors by using the electron beam of a scanning transmission electron microscope as a multifunctional probe: the measurement probe for imaging and chemical analysis, and the irradiation source to concomitantly modify the pressure in a controllable way by fine tuning of the electron beam parameters. The control of the detrapping rate is achieved by varying the experimental conditions. The underlying physical mechanisms are discussed; our experimental observations suggest that the helium detrapping from bubbles can be interpreted in terms of direct ballistic collisions, leading to the ejection of helium atoms from the bubble.
Nonimaging optical illumination system
Winston, Roland; Ries, Harald
2000-01-01
A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.
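A small numerical sketch of the parametric construction quoted in the abstract, R(t) = k(t) + D u(t); the straight reference line, the constant edge-ray angle, and the distance profile D(t) are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the reflector parameterization R(t) = k(t) + D(t) u(t):
# reflector points are offset from a reference line along the edge-ray
# direction. Reference line, edge-ray angle, and D(t) are assumed.
import numpy as np

t = np.linspace(0.0, 1.0, 50)                 # scalar parameter along the line
k = np.stack([t, np.zeros_like(t)], axis=1)   # reference line: the x-axis
theta = np.deg2rad(60.0)                      # assumed edge-ray angle
u = np.array([np.cos(theta), np.sin(theta)])  # unit edge-ray direction
D = 0.5 + 0.3 * t                             # assumed distance profile D(t)

R = k + D[:, None] * u                        # reflector surface points R(t)
print(R[:3])
```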
Nonimaging optical illumination system
Winston, Roland; Ries, Harald
1998-01-01
A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.
Nonimaging optical illumination system
Winston, Roland; Ries, Harald
1996-01-01
A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.
Performance Impact of Deflagration to Detonation Transition Enhancing Obstacles
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Schauer, Frederick; Hopper, David
2012-01-01
A sub-model is developed to account for the drag and heat transfer enhancement resulting from deflagration-to-detonation transition (DDT) inducing obstacles commonly used in pulse detonation engines (PDE). The sub-model is incorporated as a source term in a time-accurate, quasi-one-dimensional, CFD-based PDE simulation. The simulation and sub-model are then validated through comparison with a particular experiment in which limited DDT obstacle parameters were varied. The simulation is then used to examine the relative contributions of drag and heat transfer to the observed thrust reduction. It is found that heat transfer is far more significant than aerodynamic drag in this particular experiment.
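A hedged sketch of what such a distributed drag/heat-transfer source term can look like in a quasi-one-dimensional solver; the drag coefficient, blockage fraction, film coefficient, and wetted-area estimate below are placeholders, not the calibrated sub-model from the paper.

```python
# Illustrative obstacle source terms for a quasi-1D gas dynamics solver.
# All correlation constants are assumed, not the paper's calibrated values.
import numpy as np

def obstacle_source_terms(rho, u, T, cd=1.2, blockage=0.4, h=500.0,
                          T_wall=300.0, spacing=0.05):
    """Return (momentum, energy) source terms per unit volume."""
    # Drag: distributed momentum sink from obstacles of given blockage fraction
    s_mom = -0.5 * rho * u * np.abs(u) * cd * blockage / spacing
    # Heat transfer: Newton cooling toward the obstacle/wall temperature,
    # plus the work done by the drag force on the flow
    wetted_area_per_volume = 4.0 * blockage / spacing   # crude estimate
    s_energy = h * wetted_area_per_volume * (T_wall - T) + s_mom * u
    return s_mom, s_energy

print(obstacle_source_terms(rho=1.2, u=800.0, T=2500.0))
```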
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
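Since BEAT is stated to build on pymc3, the toy sketch below shows the general pymc3 fitting pattern with a fabricated linear forward model G·m for slip on a few fault patches; the matrix G, the data, and the priors are invented for illustration and do not reflect BEAT's actual model setup.

```python
# Toy Bayesian slip inversion in the pymc3 style BEAT builds on.
# G, the observations, and the priors are fabricated for illustration.
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(1)
G = rng.normal(size=(20, 3))            # stand-in "Green's function" matrix
true_slip = np.array([1.0, 0.3, 0.6])
d_obs = G @ true_slip + 0.05 * rng.normal(size=20)

with pm.Model():
    slip = pm.Uniform("slip", lower=0.0, upper=2.0, shape=3)  # assumed prior
    sigma = pm.HalfNormal("sigma", sigma=0.1)                 # noise scale
    pm.Normal("d", mu=pm.math.dot(G, slip), sigma=sigma, observed=d_obs)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(trace["slip"].mean(axis=0))       # posterior mean slip per patch
```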
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-01-01
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both calibration and validation. Additionally, the parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were the most sensitive for TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Calibration was then performed on these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642
MODEST - JPL GEODETIC AND ASTROMETRIC VLBI MODELING AND PARAMETER ESTIMATION PROGRAM
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1994-01-01
Observations of extragalactic radio sources in the gigahertz region of the radio frequency spectrum by two or more antennas, separated by a baseline as long as the diameter of the Earth, can be reduced, by radio interferometry techniques, to yield time delays and their rates of change. The Very Long Baseline Interferometric (VLBI) observables can be processed by the MODEST software to yield geodetic and astrometric parameters of interest in areas such as geophysical satellite and spacecraft tracking applications and geodynamics. As the accuracy of radio interferometry has improved, increasingly complete models of the delay and delay rate observables have been developed. MODEST is a delay model (MOD) and parameter estimation (EST) program that takes into account delay effects such as geometry, clock, troposphere, and the ionosphere. MODEST includes all known effects at the centimeter level in modeling. As the field evolves and new effects are discovered, these can be included in the model. In general, the model includes contributions to the observables from Earth orientation, antenna motion, clock behavior, atmospheric effects, and radio source structure. Within each of these categories, a number of unknown parameters may be estimated from the observations. Since all parts of the time delay model contain nearly linear parameter terms, a square-root-information filter (SRIF) linear least-squares algorithm is employed in parameter estimation. Flexibility (via dynamic memory allocation) in the MODEST code ensures that the same executable can process a wide array of problems. These range from a few hundred observations on a single baseline, yielding estimates of tens of parameters, to global solutions estimating tens of thousands of parameters from hundreds of thousands of observations at antennas widely distributed over the Earth's surface. Depending on memory and disk storage availability, large problems may be subdivided into more tractable pieces that are processed sequentially. MODEST is written in FORTRAN 77, C-language, and VAX ASSEMBLER for DEC VAX series computers running VMS. It requires 6Mb of RAM for execution. The standard distribution medium for this package is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Instructions for use and sample input and output data are available on the distribution media. This program was released in 1993 and is a copyrighted work with all copyright vested in NASA.
Updating national standards for drinking-water: a Philippine experience.
Lomboy, M; Riego de Dios, J; Magtibay, B; Quizon, R; Molina, V; Fadrilan-Camacho, V; See, J; Enoveso, A; Barbosa, L; Agravante, A
2017-04-01
The latest version of the Philippine National Standards for Drinking-Water (PNSDW) was issued in 2007 by the Department of Health (DOH). Due to several issues and concerns, the DOH decided to make an update which is relevant and necessary to meet the needs of the stakeholders. As an output, the water quality parameters are now categorized into mandatory, primary, and secondary. The ten mandatory parameters are core parameters which all water service providers nationwide are obligated to test. These include thermotolerant coliforms or Escherichia coli, arsenic, cadmium, lead, nitrate, color, turbidity, pH, total dissolved solids, and disinfectant residual. The 55 primary parameters are site-specific and can be adopted as enforceable parameters when developing new water sources or when the existing source is at high risk of contamination. The 11 secondary parameters include operational parameters and those that affect the esthetic quality of drinking-water. In addition, the updated PNSDW include new sections: (1) reporting and interpretation of results and corrective actions; (2) emergency drinking-water parameters; (3) proposed Sustainable Development Goal parameters; and (4) standards for other drinking-water sources. The lessons learned and insights gained from the updating of standards are likewise incorporated in this paper.
NASA Astrophysics Data System (ADS)
Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.
2017-12-01
Atmospheric source reconstruction allows for the probabilistic estimation of the source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of the atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing, and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high-frequency observations and less expensive, low-frequency observations.
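A compact sketch of the surrogate-plus-Bayesian-inversion pattern described above, assuming a toy Gaussian-plume-like forward model, a random-forest surrogate, and a simple Metropolis sampler; none of these specific choices are claimed to match the study's implementation.

```python
# Hedged sketch: train a regression surrogate on ensemble "dispersion" runs,
# then invert for source parameters (x0, y0, q) with Metropolis sampling.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
sensors = np.array([[1.0, 0.2], [2.0, -0.3], [3.0, 0.5]])  # assumed layout

def plume(params):                       # toy forward model, not a real one
    x0, y0, q = params
    d2 = ((sensors - [x0, y0]) ** 2).sum(axis=1)
    return q * np.exp(-d2)

lo, hi = np.array([0, -1, 0.5]), np.array([4, 1, 2.0])
ensemble = rng.uniform(lo, hi, size=(2000, 3))             # prior samples
surrogate = RandomForestRegressor(n_estimators=100).fit(
    ensemble, np.array([plume(p) for p in ensemble]))

truth = np.array([1.5, 0.1, 1.2])
obs = plume(truth) + 0.01 * rng.normal(size=3)             # synthetic data

def log_post(p):                          # flat prior inside the bounds
    if np.any(p < lo) or np.any(p > hi):
        return -np.inf
    r = surrogate.predict(p[None])[0] - obs
    return -0.5 * (r @ r) / 0.01 ** 2

p, lp, samples = np.array([2.0, 0.0, 1.0]), -np.inf, []
for _ in range(5000):                     # Metropolis random walk
    q = p + 0.05 * rng.normal(size=3)
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    samples.append(p)
print("posterior mean:", np.mean(samples[1000:], axis=0))
```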
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
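A simplified sketch of the constrained-rearrangement idea (not the DCJ-[C] code itself): inversions and reciprocal translocations are applied to signed-gene chromosomes, with moves rejected if they scatter an assumed constrained gene set across chromosomes. Gene counts, the move mix, and the constrained fraction are illustrative.

```python
# Toy constrained-rearrangement simulation: moves are accepted only if the
# constrained gene set stays linked on a single chromosome.
import random

random.seed(7)
genome = [list(range(1, 101)), list(range(101, 201))]   # two chromosomes
constrained = set(range(1, 8))                          # ~7% of genes, on chr 1

def linked(genome, genes):
    """True if all constrained genes still share one chromosome."""
    return any(genes <= {abs(g) for g in chrom} for chrom in genome)

def inversion(genome):
    c = random.randrange(len(genome))
    i, j = sorted(random.sample(range(len(genome[c]) + 1), 2))
    genome[c][i:j] = [-g for g in reversed(genome[c][i:j])]  # flip a segment

def translocation(genome):
    a, b = random.sample(range(len(genome)), 2)
    i = random.randrange(len(genome[a]) + 1)
    j = random.randrange(len(genome[b]) + 1)
    genome[a][i:], genome[b][j:] = genome[b][j:], genome[a][i:]  # swap tails

for _ in range(1000):
    move = inversion if random.random() < 0.94 else translocation
    trial = [chrom[:] for chrom in genome]
    move(trial)
    if linked(trial, constrained):      # constraint: keep the set linked
        genome = trial

print("chromosome sizes after simulation:", [len(c) for c in genome])
```

The 0.94/0.06 move mix loosely echoes the inversion-to-translocation ratio quoted in the abstract; it is an assumption, not a fitted value.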
The Chaotic Long-term X-ray Variability of 4U 1705-44
NASA Astrophysics Data System (ADS)
Phillipson, R. A.; Boyd, P. T.; Smale, A. P.
2018-04-01
The low-mass X-ray binary 4U1705-44 exhibits dramatic long-term X-ray time variability with a timescale of several hundred days. The All-Sky Monitor (ASM) aboard the Rossi X-ray Timing Explorer (RXTE) and the Japanese Monitor of All-sky X-ray Image (MAXI) aboard the International Space Station together have continuously observed the source from December 1995 through May 2014. The combined ASM-MAXI data provide a continuous time series over fifty times the length of the timescale of interest. Topological analysis can help us identify 'fingerprints' in the phase-space of a system unique to its equations of motion. The Birman-Williams theorem postulates that if such fingerprints are the same between two systems, then their equations of motion must be closely related. The phase-space embedding of the source light curve shows a strong resemblance to the double-welled nonlinear Duffing oscillator. We explore a range of parameters for which the Duffing oscillator closely mirrors the time evolution of 4U1705-44. We extract low period, unstable periodic orbits from the 4U1705-44 and Duffing time series and compare their topological information. The Duffing and 4U1705-44 topological properties are identical, providing strong evidence that they share the same underlying template. This suggests that we can look to the Duffing equation to help guide the development of a physical model to describe the long-term X-ray variability of this and other similarly behaved X-ray binary systems.
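For reference, a minimal integration of the double-welled, damped, driven Duffing oscillator invoked above, x'' + δx' - αx + βx³ = γ cos(ωt); the parameter values are generic textbook choices, not the ones the authors matched to 4U 1705-44.

```python
# Minimal sketch: integrate the double-well Duffing oscillator and recover
# its phase-space trajectory. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

delta, alpha, beta, gamma, omega = 0.3, 1.0, 1.0, 0.5, 1.2

def duffing(t, y):
    x, v = y
    return [v, -delta * v + alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

t_eval = np.linspace(0, 500, 20000)
sol = solve_ivp(duffing, (0, 500), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
x, v = sol.y                       # phase-space embedding (x, x') of the flow
print("excursions between the two wells:", x.min(), x.max())
```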
Dziekońska, A; Fraser, L; Majewska, A; Lecewicz, M; Zasiadczyk, Ł; Kordan, W
2013-01-01
This study aimed to analyze the metabolic activity and membrane integrity of boar spermatozoa following storage in long-term semen extenders. Boar semen was diluted with Androhep EnduraGuard (AeG), DILU-Cell (DC), SafeCell Plus (SCP) and Vitasem LD (VLD) extenders and stored for 10 days at 17 °C. Parameters of the analyzed sperm metabolic activity included total motility (TMOT), progressive motility (PMOT), high mitochondrial membrane potential (MMP) and ATP content, whereas those of membrane integrity included plasma membrane integrity (PMI) and normal apical ridge (NAR) acrosome. Extender type was a significant (P < 0.05) source of variation in all the analyzed sperm parameters except ATP content. Furthermore, storage time had a significant effect (P < 0.05) on sperm metabolic activity and membrane integrity during semen storage. In all extenders the metabolic activity and membrane integrity of the stored spermatozoa decreased continuously over time. Among the four analyzed extenders, AeG and SCP showed the best performance in terms of TMOT and PMI on Days 5, 7 and 10 of storage. Marked differences in the proportions of spermatozoa with high MMP were observed between the extenders, particularly on Day 10 of storage. There were no marked differences in sperm ATP content between the extenders, regardless of storage time. Furthermore, the percentage of spermatozoa with NAR acrosomes decreased during prolonged storage, being markedly lower in DC-diluted semen compared with semen diluted with either the AeG or SCP extender. The results of this study indicated that components of the long-term extenders have different effects on sperm functionality and prolong semen longevity by delaying the processes associated with sperm ageing during liquid storage.
Water Resource Assessment in KRS Reservoir Using Remote Sensing and GIS Modelling
NASA Astrophysics Data System (ADS)
Manubabu, V. H.; Gouda, K. C.; Bhat, N.; Reddy, A.
2014-12-01
In recent times fresh water resources have become increasingly important for various reasons, including population growth, pollution, and over-exploitation of ground water. Efficient measures for recharging ground water are lacking, and climatological impacts on water resources, such as global warming, are exacerbating water shortages while growing populations raise the demand for freshwater in agriculture, industry, and energy production. Analyzing future changes in regional water availability is therefore a necessary and challenging task, and assessing and predicting the fresh water stored in a lake or reservoir supports better decision making on the optimal use of surface water. The present study provides a practical discussion of a methodology for assessing and predicting the amount of surface water available in the future using Remote Sensing (RS) data, Geographical Information System (GIS) techniques, and a General Circulation Model (GCM). The study focuses on one of the largest reservoirs in the state of Karnataka, India, the Krishna Raja Sagara (KRS) reservoir. Multispectral satellite images, IRS LISS III and Landsat 8, from the open-source web portals NRSC-Bhuvan and NASA Earth Explorer, respectively, are used to identify temporal changes in the water quantity of the reservoir for the period 2000 to 2014. Water volumes are calculated using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM over the reservoir basin. Hydrometeorological parameters are also studied using multi-source observed data, and empirical water budget models for the reservoir, in terms of rainfall, temperature, runoff, and water inflow and outflow, are developed and analyzed. Statistical analyses are carried out to quantify the relation between reservoir water volume and the hydrological parameters (Figure 1). A GCM is used to predict major hydrometeorological parameters such as rainfall, and from the GCM predictions the future water availability, in terms of water volume, is simulated using the empirical water budget model.
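A minimal sketch of the empirical water-budget bookkeeping such a study relies on; the storage update and every coefficient and series below are illustrative assumptions, not values for the KRS reservoir.

```python
# Toy monthly reservoir water budget: storage(t+1) = storage(t) + inflow
# - release - evaporation, capped at capacity. All numbers are assumed.
import numpy as np

rain   = np.array([10, 5, 8, 40, 90, 160, 210, 180, 120, 80, 30, 12])  # mm
area_km2, runoff_coeff = 500.0, 0.35        # assumed catchment properties
inflow  = rain * 1e-3 * runoff_coeff * area_km2 * 1e6   # m^3 per month
release = np.full(12, 3.0e7)                # assumed outflow, m^3/month
evap    = np.full(12, 4.0e6)                # assumed evaporation, m^3/month

volume, v = np.empty(12), 8.0e8             # initial storage, m^3
for m in range(12):
    v = np.clip(v + inflow[m] - release[m] - evap[m], 0.0, 1.2e9)  # capacity cap
    volume[m] = v
print("storage by month (1e8 m^3):", np.round(volume / 1e8, 2))
```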
Grba, Nenad; Krčmar, Dejan; Isakovski, Marijana Kragulj; Jazić, Jelena Molnar; Maletić, Snežana; Pešić, Vesna; Dalmacija, Božo
2016-11-01
Surface sediments were subject to systematic long-term monitoring (2002-2014) in the Republic of Serbia (Province of Vojvodina). Eight heavy metals (Ni, Zn, Cd, Cr, Cu, Pb, As and Hg), mineral oils (total petroleum hydrocarbons), 16 EPA PAHs, selected pesticides and polychlorinated biphenyls (PCB) were monitored. As part of this research, this paper presents a spatial and temporal trend study of sediment contamination from diverse pollution sources and the ecological risk status of the alluvial sediments of Carska Bara at three representative sampling sites (S1-S3), in order to establish the status of contamination and recommend substances of interest for more widespread future monitoring. Multivariate statistical methods including factor analysis of principal component analysis (PCA/FA), Pearson correlation and several synthetic indicators were used to evaluate the extent and origin of contamination (anthropogenic or natural, geogenic sources) and potential ecological risks. Hg, Cd, As, mineral oils and PAHs (dominated by dibenzo(a,h)anthracene and benzo(a)pyrene, together contributing 85.7% of the total) are derived from several anthropogenic sources, whereas Ni, Cu, Cr and Zn are convincingly of geogenic origin or exhibit dual origins. Cd and Hg significantly raise the levels of potential ecological risk at all sampling locations, demonstrating the effect of long-term bioaccumulation and biomagnification. Pb is isolated from the other parameters, implying a distinct source. This research suggests that four heavy metals (Zn, Cr, Cu and As) and dibenzo(a,h)anthracene be added to the list of priority pollutants within the context of the application of the European Water Framework Directive (WFD), in accordance with significant national and similar environmental data from countries in the region.
Earthquake Source Parameters Inferred from T-Wave Observations
NASA Astrophysics Data System (ADS)
Perrot, J.; Dziak, R.; Lau, T. A.; Matsumoto, H.; Goslin, J.
2004-12-01
The seismicity of the North Atlantic Ocean has been recorded by two networks of autonomous hydrophones moored within the SOFAR channel on the flanks of the Mid-Atlantic Ridge (MAR). In February 1999, a consortium of U.S. investigators (NSF and NOAA) deployed a 6-element hydrophone array for long-term monitoring of MAR seismicity between 15°-35°N south of the Azores. In May 2002, an international collaboration of French, Portuguese, and U.S. researchers deployed a 6-element hydrophone array north of the Azores Plateau from 40°-50°N. The northern network (referred to as SIRENA) was recovered in September 2003. The low attenuation properties of the SOFAR channel for earthquake T-wave propagation results in a detection threshold reduction from a magnitude completeness level (Mc) of ~4.7 for MAR events recorded by the land-based seismic networks to Mc=3.0 using hydrophone arrays. Detailed focal depth and mechanism information, however, remain elusive due to the complexities of seismo-acoustic propagation paths. Nonetheless, recent analyses (Dziak, 2001; Park and Odom, 2001) indicate fault parameter information is contained within the T-wave signal packet. We investigate this relationship further by comparing an earthquake's T-wave duration and acoustic energy to seismic magnitude (NEIC) and radiation pattern (for events M>5) from the Harvard moment-tensor catalog. First results show earthquake energy is well represented by the acoustic energy of the T-waves, however T-wave codas are significantly influenced by acoustic propagation effects and do not allow a direct determination of the seismic magnitude of the earthquakes. Second, there appears to be a correlation between T-wave acoustic energy, azimuth from earthquake source to the hydrophone, and the radiation pattern of the earthquake's SH waves. These preliminary results indicate there is a relationship between the T-wave observations and earthquake source parameters, allowing for additional insights into T-wave propagation.
Induced Seismicity from different sources in Italy: how to interpret it?
NASA Astrophysics Data System (ADS)
Pastori, M.; De Gori, P.; Piccinini, D.; Bagh, S.; Improta, L.; Chiarabba, C.
2015-12-01
Typically, the term "induced seismicity" refers to minor earthquakes and tremors caused by human activities that alter the stresses and strains in the Earth's crust. In recent years, interest in induced seismicity related to the extraction or injection of fluids (oil and gas, and geothermal resources) has increased, because such operations are believed to trigger earthquakes. Possible sources of induced seismicity include not only oil and gas production but also, for example, changes in the water level of artificial lakes. The aim of this work is to present results from two different sources, wastewater injection and changes in the water level of an artificial reservoir (the Pertusillo lake), that can produce the induced earthquakes observed in the Val d'Agri basin (Italy), and to compare them with variations in crustal elastic parameters. The Val d'Agri basin in the Apennines extensional belt hosts the largest onshore oilfield in Europe and is bordered by NW-SE trending fault systems. Most of the recorded seismicity appears to be related to these structures. We correlated the seismicity rate, injection curves and changes in water levels with temporal variations of the Vp/Vs ratio and anisotropic parameters of the crustal reservoirs and the nearby area. We analysed about 983 high-quality recordings from 2002 to 2014 in the Val d'Agri basin from temporary and permanent networks operated by INGV and the ENI company. 3D high-precision locations and manually revised P- and S-picks are used to estimate anisotropic parameters (delay time and fast polarization direction) and the Vp/Vs ratio. Seismicity is mainly located in two areas: SW of the Pertusillo Lake and near the ENI oil field (SW and NE of the Val d'Agri basin, respectively). Our correlations clearly capture the seismicity diffusion process caused by both water injection and water level changes; these findings could help to model the failure behaviour of active and pre-existing faults.
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.
2015-12-01
Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and a Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). When the DSD parameters are optimized instead, the radar QPE product performs best. However, the resulting optimal Z-R and Z-k relations are considerably different from those obtained from disdrometer observations. As such, the best microphysical parameter set minimizes the overall bias, which besides accounting for DSD variations also corrects for the impact of additional error sources.
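A hedged sketch of the link between normalized gamma DSD parameters and a Z-R power law, in the spirit of the proposal above: Z and R are integrated numerically over the DSD and Z = aR^b is fitted. The DSD normalization and the fall-speed relation are common textbook choices, not necessarily those used in the study.

```python
# Illustrative sketch: derive a Z-R power law from a normalized gamma DSD.
# Nw, Dm, mu ranges and the Atlas-type fall-speed law are assumptions.
import numpy as np
from math import gamma as gamma_fn

D = np.linspace(0.1, 8.0, 400)                 # drop diameter [mm]

def gamma_dsd(Nw, Dm, mu):                     # normalized gamma DSD [mm^-1 m^-3]
    f = 6.0 / 3.67**4 * (3.67 + mu)**(mu + 4) / gamma_fn(mu + 4)
    return Nw * f * (D / Dm)**mu * np.exp(-(3.67 + mu) * D / Dm)

v = 3.78 * D**0.67                             # fall speed [m/s], Atlas-type
Z, R = [], []
for Dm in np.linspace(0.8, 2.5, 30):           # vary mean drop size
    N = gamma_dsd(Nw=8000.0, Dm=Dm, mu=3.0)
    Z.append(np.trapz(N * D**6, D))            # reflectivity [mm^6 m^-3]
    R.append(0.6e-3 * np.pi * np.trapz(N * D**3 * v, D))  # rain rate [mm/h]

b, log_a = np.polyfit(np.log(R), np.log(Z), 1)
print(f"fitted Z = {np.exp(log_a):.0f} R^{b:.2f}")
```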
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. A. Bingham; R. M. Ferrer; A. M. Ougouag
2009-09-01
An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameter computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green's function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage when the transverse integration procedure is applied to hexagonal geometry, and it cannot account for the effects of burnable poisons across nodal boundaries. The test code developed for this document accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green's function solution to the resulting equation to derive a semi-analytical solution.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
An R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset
NASA Astrophysics Data System (ADS)
Zhou, Y. K.
2018-05-01
Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and voluminous data source for phenological analysis. Processing and mining phenological data remain a big challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny based web application for extracting and analyzing vegetation phenological parameters. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observation photography data from the Freemanwood site in 2013 are processed by this system. The results show that: (1) the system is capable of analyzing large datasets using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different combinations of methods in particular study areas. Vegetation with a single growth peak is suitable for fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
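A brief sketch of the double-logistic growth-trajectory fit mentioned above; the synthetic greenness (GCC-like) series, the parameterization, and the starting values are assumptions, not the system's implementation.

```python
# Illustrative double-logistic phenology fit: spring rise at 'sos' and
# autumn fall at 'eos' (days of year). All numbers are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, sos, k1, eos, k2):
    """Baseline plus a rising and a falling logistic transition."""
    return base + amp * (1 / (1 + np.exp(-k1 * (t - sos)))
                         - 1 / (1 + np.exp(-k2 * (t - eos))))

doy = np.arange(1, 366)
truth = double_logistic(doy, 0.32, 0.10, 120, 0.10, 280, 0.08)
gcc = truth + 0.005 * np.random.default_rng(3).normal(size=doy.size)

p0 = (0.3, 0.1, 100, 0.1, 270, 0.1)        # rough initial guesses
popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0)
print(f"start of season ~ day {popt[2]:.0f}, end of season ~ day {popt[4]:.0f}")
```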
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models and lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. In terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to mitigate these errors, several methods have been proposed to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of errors, timing, shape, and volume, the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate whether its parameters can be updated to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Numerical framework for the modeling of electrokinetic flows
NASA Astrophysics Data System (ADS)
Deshpande, Manish; Ghaddar, Chahid; Gilbert, John R.; St. John, Pamela M.; Woudenberg, Timothy M.; Connell, Charles R.; Molho, Joshua; Herr, Amy; Mungal, Godfrey; Kenny, Thomas W.
1998-09-01
This paper presents a numerical framework for design-based analyses of electrokinetic flow in interconnects. Electrokinetic effects, which can be broadly divided into electrophoresis and electroosmosis, are of importance in providing a transport mechanism in microfluidic devices for both pumping and separation. Models for the electrokinetic effects can be derived and coupled to the fluid dynamic equations through appropriate source terms. In the design of practical microdevices, however, accurate coupling of the electrokinetic effects requires knowledge of several material and physical parameters, such as the diffusivity and the mobility of the solute in the solvent. Additionally, wall-based effects such as chemical binding sites might exist that affect the flow patterns. In this paper, we address some of these issues by describing a synergistic numerical/experimental process to extract the required parameters. Experiments were conducted to provide the numerical simulations with a mechanism for extracting these parameters based on quantitative comparisons with each other. These parameters were then applied in predicting further experiments to validate the process. As part of this research, we have created NetFlow, a tool for micro-fluid analyses. The tool can be validated and applied in existing technologies by first creating test structures to extract representations of the physical phenomena in the device, and then applying them in the design analyses to predict correct behavior.
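As a back-of-envelope illustration of the two electrokinetic transport terms named above, the sketch below evaluates the Helmholtz-Smoluchowski electroosmotic velocity and an electrophoretic drift; the property values are typical-order assumptions for aqueous buffers, not parameters extracted in the paper.

```python
# Order-of-magnitude sketch of electroosmotic and electrophoretic velocities.
# All property values below are generic assumptions for an aqueous system.
eps = 80 * 8.854e-12      # permittivity of water [F/m]
mu = 1.0e-3               # viscosity [Pa s]
zeta_wall = -0.05         # wall zeta potential [V] (assumed)
E = 2.0e4                 # applied axial field [V/m] (assumed)
mobility_ep = 3.0e-8      # solute electrophoretic mobility [m^2/(V s)]

u_eo = -eps * zeta_wall * E / mu      # Helmholtz-Smoluchowski electroosmosis
u_ep = mobility_ep * E                # electrophoretic drift of the solute
print(f"u_eo = {u_eo*1e3:.2f} mm/s, u_ep = {u_ep*1e3:.2f} mm/s, "
      f"net = {(u_eo + u_ep)*1e3:.2f} mm/s")
```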
GLOBAL ENERGETICS OF SOLAR FLARES. IV. CORONAL MASS EJECTION ENERGETICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aschwanden, Markus J., E-mail: aschwanden@lmsal.com
2016-11-01
This study entails the fourth part of a global flare energetics project, in which the mass m_cme, kinetic energy E_kin, and gravitational potential energy E_grav of coronal mass ejections (CMEs) are measured in 399 M- and X-class flare events observed during the first 3.5 years of the Solar Dynamics Observatory (SDO) mission, using a new method based on the EUV dimming effect. EUV dimming is modeled in terms of a radial adiabatic expansion process, which is fitted to the observed evolution of the total emission measure of the CME source region. The model derives the evolution of the mean electron density, the emission measure, the bulk plasma expansion velocity, the mass, and the energy in the CME source region. The EUV dimming method is truly complementary to the Thomson scattering method in white light, which probes the CME evolution in the heliosphere at r ≳ 2 R_⊙, while the EUV dimming method tracks the CME launch in the corona. We compare the CME parameters obtained in white light with the LASCO/C2 coronagraph with those obtained from EUV dimming with the Atmospheric Imaging Assembly onboard the SDO for all identical events in both data sets. We investigate correlations between CME parameters, the relative timing with flare parameters, frequency occurrence distributions, and the energy partition between magnetic, thermal, nonthermal, and CME energies. CME energies are found to be systematically lower than the dissipated magnetic energies, which is consistent with a magnetic origin of CMEs.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because minimum information required in regression on chemical data for the estimation of model parameters by regression is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
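An illustrative numerical counterpart to the sensitivities discussed above: central-difference sensitivity of a simplified decaying advective-dispersive front to the first-order decay parameter. The model form and all parameter values are assumptions, not the paper's analytical solutions.

```python
# Hedged sketch: dC/d(lambda) by central differences on a simplified
# decaying front solution. Velocity, dispersion, and decay are assumed.
import numpy as np
from scipy.special import erfc

def conc(x, t, v=0.5, D=0.05, lam=0.01, c0=1.0):
    """Advecting-dispersing front with first-order decay (simplified)."""
    return 0.5 * c0 * np.exp(-lam * t) * erfc((x - v * t) / (2 * np.sqrt(D * t)))

def sensitivity(x, t, lam=0.01, h=1e-6):
    """Sensitivity of concentration to the decay parameter."""
    return (conc(x, t, lam=lam + h) - conc(x, t, lam=lam - h)) / (2 * h)

x = 10.0
times = np.linspace(1, 200, 200)
s = sensitivity(x, times)
print(f"|dC/dlam| peaks at t = {times[np.argmax(np.abs(s))]:.0f}"
      " (late in the front passage, consistent with the analysis above)")
```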
Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application
NASA Astrophysics Data System (ADS)
Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.
2017-12-01
This abstract presents a review of work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, like a deeper source with a large double couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding a most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015) we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid-search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well-resolved. Another benefit of the grid-search is that it proves to be a flexible framework where different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we primarily focus on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and discuss the resolution of interesting and/or important recovered source properties.
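A schematic of the exhaustive grid-search pattern described above: scan a coarse source-parameter grid, compute an L2 waveform misfit, and inspect near-best alternatives to gauge resolution. The "seismogram" generator below is a toy stand-in, not a real Green's function or the Tape and Tape discretization.

```python
# Hedged sketch of grid-search source estimation with misfit tabulation.
# The synthetic generator and grid ranges are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 50, 500)

def synthetic(depth, iso_frac, m0):
    """Toy seismogram: depth shifts the arrival, iso_frac reshapes the pulse."""
    pulse = np.exp(-((t - 10 - 0.5 * depth) ** 2) / 4.0)
    return m0 * ((1 - iso_frac) * np.gradient(pulse, t) + iso_frac * pulse)

d_obs = synthetic(depth=2.0, iso_frac=0.7, m0=1.0) + 0.01 * rng.normal(size=t.size)

grid = itertools.product(np.arange(0, 20, 1.0),     # depth [km]
                         np.linspace(0, 1, 21),     # isotropic fraction
                         np.linspace(0.5, 1.5, 11)) # scalar moment
misfits = sorted((np.sum((synthetic(*g) - d_obs) ** 2), g) for g in grid)

print("best model:", misfits[0][1])
# Resolution check: how different are the near-best alternatives?
print("runners-up:", [g for _, g in misfits[1:4]])
```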
Influence of source parameters on the growth of metal nanoparticles by sputter-gas-aggregation
NASA Astrophysics Data System (ADS)
Khojasteh, Malak; Kresin, Vitaly V.
2017-11-01
We describe the production of size-selected manganese nanoclusters using a magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length), we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way, it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the metal vapor, and argon also plays a crucial role in the formation of condensation nuclei via three-body collisions. However, the argon flow and the discharge power have a relatively weak effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, governed by the helium supply (which raises the pressure and density of the gas column inside the source, resulting in more efficient transport of nanoparticles to the exit) and by the aggregation path length.
Numerical model of a tracer test on the Santa Clara River, Ventura County, California
Nishikawa, Tracy; Paybins, Katherine S.; Izbicki, John A.; Reichard, Eric G.
1999-01-01
To better understand the flow processes, solute-transport processes, and ground-water/surface-water interactions on the Santa Clara River in Ventura County, California, a 24-hour fluorescent-dye tracer study was performed under steady-state flow conditions on a 45-km reach of the river. The study reach includes perennial (uppermost and lowermost) subreaches and ephemeral subreaches of the lower Piru Creek and the middle Santa Clara River. The tracer-test data were used to calibrate a one-dimensional flow model (DAFLOW) and a solute-transport model (BLTM). The dye-arrival times at each sample location were simulated by calibrating the velocity parameters in DAFLOW. The simulations of dye transport indicated that (1) ground-water recharge explains the loss of mass in the ephemeral middle subreaches, and (2) ground-water recharge does not explain the loss of mass in the perennial uppermost and lowermost subreaches. The observed tracer curves in the perennial subreaches were indicative of sorptive dye losses, transient storage, and (or) photodecay; these phenomena were simulated using a linear decay term. However, analysis of the linear decay terms indicated that photodecay was not a dominant source of dye loss.
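The linear decay term mentioned above corresponds to first-order mass loss, M(t) = M0 exp(-kt). The sketch below fits such a rate to hypothetical recovered-mass fractions at successive stations; the numbers are invented, not data from this test.

```python
# Minimal sketch of the linear (first-order) decay used to represent
# sorption/transient-storage/photodecay losses: M(t) = M0 * exp(-k t).
# Station travel times and recovered masses are made up for illustration.
import numpy as np

travel_time_h = np.array([0.0, 3.0, 7.0, 12.0])      # cumulative travel time
mass_recovered = np.array([1.00, 0.88, 0.74, 0.60])  # fraction of injected dye

# Least-squares fit of ln(M/M0) = -k t gives the decay rate k.
k = -np.polyfit(travel_time_h, np.log(mass_recovered), 1)[0]
print(f"first-order decay rate k = {k:.3f} per hour "
      f"(half-life {np.log(2)/k:.1f} h)")
```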
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2014 CFR
2014-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... Conditions of Licenses and Construction Permits, § 50.67 Accident source term. (a) Applicability. The...
Archer, Claire; Noble, Paula; Kreamer, David; Piscopo, Vincenzo; Petitta, Marco; Rosen, Michael R.; Poulson, Simon R.; Piovesan, Gianluca; Mensing, Scott
2017-01-01
Lake Lungo and Lake Ripasottile are two shallow (4-5 m) lakes located in the Rieti Basin, central Italy, that have previously been described as surface outcroppings of the groundwater table. In this work, the two lakes, as well as the springs and rivers that represent their potential source waters, are characterized physicochemically and isotopically using a combination of environmental tracers. Temperature and pH were measured, and water samples were analyzed for alkalinity, major-ion concentrations, and stable isotope (δ2H, δ18O, δ13C of dissolved inorganic carbon, and δ34S and δ18O of sulfate) composition. Chemical data were also examined against local meteorological data (air temperature, precipitation) to determine the sensitivity of lake parameters to changes in the surrounding environment. Groundwater, represented by samples taken from Santa Susanna Spring, was shown to be distinct, with SO42- and Mg2+ contents of 270 and 29 mg/L, respectively, and a heavy sulfate isotopic composition (δ34S = 15.2‰ and δ18O = 10‰). Outflow from the Santa Susanna Spring enters Lake Ripasottile via a canal, and both spring and lake water exhibit the same chemical distinctions and comparatively low seasonal variability. Major-ion concentrations in Lake Lungo are similar to those of the Vicenna Riara Spring and are interpreted to represent groundwater locally recharged within the plain. The δ13CDIC values exhibit the same groupings as the other chemical parameters, providing supporting evidence of the source relationships. Lake Lungo exhibited exceptional ranges of δ13CDIC (±5‰) and δ2H, δ18O (±5‰ and ±7‰, respectively), attributed to sensitivity to seasonal changes. The hydrochemistry results, particularly the major-ion data, highlight how the two lakes, though geographically and morphologically similar, represent distinct hydrochemical facies. These data also show a different response in each lake to temperature and precipitation patterns in the basin, which may be attributed to lake-water retention time. The sensitivity of each lake to meteorological patterns can be used to understand the potential effects of long-term climate variability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Can I conduct short-term experimental production runs that cause parameters to deviate from operating limits? With the approval of the Administrator, you may conduct short-term experimental production runs...
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
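The marginalization-tracking idea can be sketched compactly: sum the PPD over environmental nuisance parameters to get per-segment marginals over source coordinates, then extract the most probable track under the velocity constraint with a dynamic-programming (Viterbi-style) pass. The likelihoods below are synthetic placeholders and the velocity constraint is expressed in grid cells; this illustrates the structure of the method, not the authors' code.

```python
# Sketch of marginalization-tracking: marginalize an environmental
# nuisance axis, then dynamic programming over (range, depth) cells
# with a maximum source-velocity constraint between time segments.
import numpy as np

rng = np.random.default_rng(1)
n_seg, n_r, n_z, n_env = 10, 40, 20, 5

# ppd[t, r, z, e]: joint posterior over source cell (r, z) and environment e.
ppd = rng.random((n_seg, n_r, n_z, n_env))
marg = ppd.sum(axis=3)                        # integrate out the environment
marg /= marg.sum(axis=(1, 2), keepdims=True)

# Velocity constraint: at most max_dr range cells and max_dz depth
# cells of movement between consecutive segments.
max_dr, max_dz = 3, 2

log_p = np.log(marg)
score = log_p[0].copy()
back = np.zeros((n_seg, n_r, n_z, 2), dtype=int)

for t in range(1, n_seg):
    new = np.full((n_r, n_z), -np.inf)
    for r in range(n_r):
        for z in range(n_z):
            r0, r1 = max(0, r - max_dr), min(n_r, r + max_dr + 1)
            z0, z1 = max(0, z - max_dz), min(n_z, z + max_dz + 1)
            window = score[r0:r1, z0:z1]
            i, j = np.unravel_index(np.argmax(window), window.shape)
            new[r, z] = window[i, j] + log_p[t, r, z]
            back[t, r, z] = (r0 + i, z0 + j)
    score = new

# Backtrack the most probable track through the segments.
track = [np.unravel_index(np.argmax(score), score.shape)]
for t in range(n_seg - 1, 0, -1):
    track.append(tuple(back[t][track[-1]]))
track.reverse()
print("most probable (range, depth) cells per segment:", track)
```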
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood and improved exposure classification is required. Dispersion modelling has great potential to improve exposure classification, but it has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input-parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and to the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry, and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.
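A one-at-a-time screening of the kind reported can be sketched with a simple Gaussian-plume surrogate (ADMS itself is proprietary, so the model, parameters, and ranges below are illustrative stand-ins, not those of the study):

```python
# One-at-a-time sensitivity screening of a Gaussian-plume stand-in.
# The surrogate only illustrates the screening method; parameters and
# ranges are invented for illustration.
import numpy as np

def plume_conc(q=1e8, h=2.0, u=3.0, sy=20.0, sz=10.0):
    """Ground-level centreline concentration (arbitrary units).
    q: emission rate, h: source height, u: wind speed, sy/sz:
    dispersion coefficients evaluated at the receptor distance."""
    return (q / (2 * np.pi * u * sy * sz)) * 2 * np.exp(-h**2 / (2 * sz**2))

base = dict(q=1e8, h=2.0, u=3.0, sy=20.0, sz=10.0)
ranges = dict(q=(5e7, 2e8), h=(1.0, 5.0), u=(1.0, 8.0),
              sy=(10.0, 40.0), sz=(5.0, 20.0))

c0 = plume_conc(**base)
for name, (lo, hi) in ranges.items():
    c_lo = plume_conc(**{**base, name: lo})  # vary one parameter at a time
    c_hi = plume_conc(**{**base, name: hi})
    spread = (max(c_lo, c_hi) - min(c_lo, c_hi)) / c0
    print(f"{name}: normalized output spread {spread:.2f}")
```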
Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia
2008-08-01
The availability of two contemporary sources of information about coronary artery bypass graft (CABG) interventions made it possible 1) to verify the feasibility of performing outcome-evaluation studies using administrative data sources, and 2) to compare hospital performance obtained using the CABG Project clinical database with hospital performance derived from current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records were considered for subsequent analyses (46% of the total CABG Project). A new selected population, "clinical card-HDR", was then defined. Two independent risk-adjustment models were applied, each using information derived from one of the two sources. HDR information was then supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their adaptability to the data, and the hospital performances that each model identified as significantly different from the mean were compared. In only 4 of the 13 hospitals considered for analysis did the results obtained using the HDR model not completely overlap with those obtained from the CABG model. When comparing statistical parameters of the HDR model and the HDR model plus patient preoperative conditions, the latter showed better adaptability to the data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.
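The comparison logic, an administrative-only risk model versus the same model augmented with a few clinical covariates judged by goodness of fit, can be sketched as below. The data are simulated and the covariates are placeholders, not the CABG Project's actual variables.

```python
# Sketch of the model comparison: administrative-data logistic model
# versus the same model plus clinical covariates, judged by AUC and
# log-loss. Simulated data; covariate names are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, log_loss

rng = np.random.default_rng(7)
n = 5000
admin = rng.normal(size=(n, 4))        # e.g. age, sex, comorbidity flags
clinical = rng.normal(size=(n, 2))     # e.g. ejection fraction, urgency

# Outcome depends on both blocks, so omitting the clinical block
# should degrade fit, mirroring the paper's finding.
logit = admin @ [0.5, -0.3, 0.4, 0.2] + clinical @ [0.8, 0.6] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

for name, X in [("HDR only", admin),
                ("HDR + clinical", np.hstack([admin, clinical]))]:
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    print(f"{name}: AUC={roc_auc_score(y, p):.3f}, "
          f"log-loss={log_loss(y, p):.3f}")
```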
Ground Truth Events with Source Geometry in Eurasia and the Middle East
2016-06-02
... source properties, including seismic moment, corner frequency, radiated energy, and stress drop have been obtained using spectra for S waves following... Other source parameters, including radiated energy, corner frequency, seismic moment, and static stress drop were calculated using a spectral... technique (Richardson & Jordan, 2002; Andrews, 1986). The process entails separating event and station spectra and median-stacking each event's...
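The spectral approach alluded to is commonly implemented by fitting an omega-squared source model to the event spectrum. The sketch below assumes the Brune form and nominal crustal constants with a synthetic spectrum, so the relations shown are standard but the details are not necessarily those of this report.

```python
# Hedged sketch: fit an omega-squared (Brune) model
# Omega(f) = Omega0 / (1 + (f/fc)^2) to an S-wave displacement
# spectrum, then convert to moment and stress drop with standard
# relations. Synthetic spectrum; nominal crustal constants.
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)

rho, beta, R = 2700.0, 3500.0, 80e3   # density, S speed, hypocentral distance
radpat = 0.6                           # average S-wave radiation pattern

f = np.logspace(-1, 1.3, 60)           # 0.1-20 Hz
true = brune(f, omega0=2e-6, fc=1.5)
spec = true * np.exp(0.1 * np.random.default_rng(3).normal(size=f.size))

(omega0, fc), _ = curve_fit(brune, f, spec, p0=[spec[0], 1.0])

M0 = 4 * np.pi * rho * beta**3 * R * omega0 / radpat   # seismic moment, N m
r = 0.372 * beta / fc                                  # Brune source radius, m
dsigma = 7.0 * M0 / (16.0 * r**3)                      # static stress drop, Pa
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
print(f"M0={M0:.2e} N m (Mw {Mw:.1f}), fc={fc:.2f} Hz, "
      f"stress drop={dsigma/1e6:.2f} MPa")
```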