NASA Technical Reports Server (NTRS)
Fares, Nabil; Li, Victor C.
1986-01-01
An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded halfspaces assuming the infinite space point source is known. Specific cases were worked out and shown to coincide with well known solutions in the literature.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and the porous medium make solute transport in the medium even more complicated. An appropriate method to describe these complex features is essential when studying solute transport and conversion in porous media. Information entropy can measure uncertainty and disorder; we therefore attempted to investigate complexity and to explore the connection between information entropy and the complexity of solute transport in heterogeneous porous media using information entropy theory. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration increased at first and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increase, which results in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the entropy obtained from continuous input was higher than that from instantaneous input. As the average lithofacies length increased, media continuity increased, the complexity of flow and solute transport weakened, and the corresponding information entropy also decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the information entropy reflected changes in the solute distribution. Information entropy thus appears to be a useful tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
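As an illustration of the quantities discussed above, the sketch below computes a two-dimensional information entropy and the first and second spatial moments from a simulated concentration field. The grid, the treatment of concentration as a probability mass, and the Gaussian test plume are assumptions for the example, not the paper's exact definitions.

```python
import numpy as np

def solute_entropy_and_moments(c, dx=1.0, dy=1.0):
    """Shannon entropy and first/second spatial moments of a 2-D
    concentration field c[i, j] (hypothetical definitions for illustration)."""
    mass = c.sum() * dx * dy
    p = (c * dx * dy / mass).ravel()          # treat concentration as a probability mass
    p = p[p > 0]                              # ignore empty cells
    entropy = -(p * np.log(p)).sum()          # two-dimensional information entropy

    ny, nx = c.shape
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dy
    X, Y = np.meshgrid(x, y)
    w = c / c.sum()
    xc, yc = (w * X).sum(), (w * Y).sum()     # centroid (first moment)
    sxx = (w * (X - xc) ** 2).sum()           # second central moment along flow (X)
    syy = (w * (Y - yc) ** 2).sum()
    return entropy, (xc, yc), (sxx, syy)

# toy example: Gaussian plume at two times; entropy grows as the plume spreads
x = np.linspace(0, 100, 201)
X, Y = np.meshgrid(x, x)
for sigma in (5.0, 15.0):
    c = np.exp(-((X - 30) ** 2 + (Y - 50) ** 2) / (2 * sigma ** 2))
    H, centroid, moments = solute_entropy_and_moments(c, dx=0.5, dy=0.5)
    print(f"sigma={sigma:5.1f}  entropy={H:.3f}  centroid={centroid}")
```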
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and the NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the searching space and converge to the global optimal solution.
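A rough sketch of the sample-and-Voronoi idea follows, using scipy. The cost function, the two-dimensional search space, and the percentile-based refinement rule are placeholders rather than the paper's power-system cost or barycentric interpolation scheme.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(0)

def cost(x):
    # placeholder cost over two candidate var-source sizes (normalized to [0, 1])
    return (x[0] - 0.7) ** 2 + 2.0 * (x[1] - 0.3) ** 2

# 1) disperse sample points of potential solutions in the search space
pts = rng.uniform(0.0, 1.0, size=(50, 2))
vals = np.array([cost(p) for p in pts])

# 2) the Voronoi diagram partitions the space into cells around the samples;
#    nearest-sample lookup (cKDTree) is equivalent to Voronoi cell membership
vor = Voronoi(pts)
tree = cKDTree(pts)
print("Voronoi regions constructed:", len(vor.regions))

# 3) refine: draw new candidates and keep those falling in low-cost cells
cand = rng.uniform(0.0, 1.0, size=(500, 2))
_, owner = tree.query(cand)
keep = cand[vals[owner] < np.percentile(vals, 20)]
best = min(list(pts) + list(keep), key=cost)
print("approximate optimum:", best, "cost:", float(cost(best)))
```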
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electrical activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently in each iteration, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
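The iterative re-weighting idea can be sketched in a FOCUSS-style loop where each point's weight is built from its own and its neighbors' previous solution values, as below. The lead-field matrix, the chain neighborhood, the max-over-neighbors rule, and the regularization are illustrative assumptions, not the published CMOSS algorithm.

```python
import numpy as np

def neighbor_weighted_focuss(L, b, neighbors, n_iter=20, lam=1e-3):
    """Sparse source estimate s from measurements b ~ L @ s.
    The weight of each point is the largest |s| over the point and its
    neighbors at the previous iteration (a sketch of the neighbor-weight
    idea, not the published CMOSS algorithm)."""
    n = L.shape[1]
    s = np.ones(n)
    for _ in range(n_iter):
        w = np.array([max([abs(s[i])] + [abs(s[j]) for j in neighbors[i]])
                      for i in range(n)])
        W = np.diag(w)
        A = L @ W
        # re-weighted, Tikhonov-regularized minimum-norm step: s = W (L W)^+ b
        s = W @ A.T @ np.linalg.solve(A @ A.T + lam * np.eye(L.shape[0]), b)
    return s

# toy 1-D "source space" with a chain neighborhood
rng = np.random.default_rng(1)
L = rng.standard_normal((8, 40))
s_true = np.zeros(40)
s_true[12] = 1.0
b = L @ s_true
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < 40] for i in range(40)}
print("strongest reconstructed source at index:",
      int(np.argmax(np.abs(neighbor_weighted_focuss(L, b, nbrs)))))
```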
Controllability of semi-infinite rod heating by a point source
NASA Astrophysics Data System (ADS)
Khurshudyan, A.
2018-04-01
The possibility of controlling the heating of a semi-infinite thin rod by a point source concentrated at an inner point of the rod is studied. Quadratic and piecewise constant solutions of the problem are derived, and the possibilities of solving the corresponding optimal control problems are indicated. Determination of the parameters of the piecewise constant solution is reduced to a problem of nonlinear programming. Numerical examples are considered.
Exact Closed-form Solutions for Lamb's Problem
NASA Astrophysics Data System (ADS)
Feng, Xi; Zhang, Haiming
2018-04-01
In this article, we report on an exact closed-form solution for the displacement at the surface of an elastic half-space elicited by a buried point source that acts at some point underneath that surface. This is commonly referred to as the 3-D Lamb's problem, for which previous solutions were restricted to sources and receivers placed at the free surface. By means of the reciprocity theorem, our solution should also be valid as a means to obtain the displacements at interior points when the source is placed at the free surface. We manage to obtain explicit results by expressing the solution in terms of elementary algebraic expressions as well as elliptic integrals. We anchor our developments on Poisson's ratio 0.25, starting from Johnson's (1974) integral solutions, which must be computed numerically. In the end, our closed-form results agree perfectly with the numerical results of Johnson (1974), which strongly confirms the correctness of our explicit formulas. It is hoped that in due time, these formulas may constitute a valuable canonical solution that will serve as a yardstick against which other numerical solutions can be compared and measured.
Exact closed-form solutions for Lamb's problem
NASA Astrophysics Data System (ADS)
Feng, Xi; Zhang, Haiming
2018-07-01
In this paper, we report on an exact closed-form solution for the displacement at the surface of an elastic half-space elicited by a buried point source that acts at some point underneath that surface. This is commonly referred to as the 3-D Lamb's problem, for which previous solutions were restricted to sources and receivers placed at the free surface. By means of the reciprocity theorem, our solution should also be valid as a means to obtain the displacements at interior points when the source is placed at the free surface. We manage to obtain explicit results by expressing the solution in terms of elementary algebraic expressions as well as elliptic integrals. We anchor our developments on Poisson's ratio 0.25, starting from Johnson's integral solutions, which must be computed numerically. In the end, our closed-form results agree perfectly with the numerical results of Johnson, which strongly confirms the correctness of our explicit formulae. It is hoped that in due time, these formulae may constitute a valuable canonical solution that will serve as a yardstick against which other numerical solutions can be compared and measured.
40 CFR 415.165 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Sodium Chloride Production... bitterns may be returned to the body of water from which the process brine solution was originally... chloride. (b) Any new source subject to this subpart and using the solution brine-mining process must...
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
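For concreteness, the Akaike information criterion comparison between a single- and a double-point-source fit can be sketched as follows, using the Gaussian-residual form of the AIC. The parameter counts and synthetic residuals are placeholders, not the values used in the operational implementation.

```python
import numpy as np

def aic_gaussian(residuals, n_params):
    """AIC for a least-squares fit with Gaussian errors:
    AIC = n * ln(RSS / n) + 2k (additive constants dropped)."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * n_params

def prefer_double_source(res_single, res_double, k_single=6, k_double=12):
    """True if the double point source model has the lower AIC.
    k_single and k_double are illustrative parameter counts (e.g. moment
    tensor components per source); operational counts may differ."""
    return aic_gaussian(res_double, k_double) < aic_gaussian(res_single, k_single)

# synthetic example: the double-source fit reduces the residual scatter
rng = np.random.default_rng(0)
res1 = rng.normal(0.0, 1.0, 500)   # residuals of the single-source inversion
res2 = rng.normal(0.0, 0.7, 500)   # residuals of the double-source inversion
print("double source preferred:", prefer_double_source(res1, res2))
```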
40 CFR 420.95 - Pretreatment standards for existing sources (PSES).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Acid Pickling... for existing sources. (a) Sulfuric acid (spent acid solutions and rinse waters)—(1) Rod, wire, and... pickling (spent acid solutions and rinse waters)—(1) Rod, wire, and coil. Subpart I Pollutant or pollutant...
Improved source inversion from joint measurements of translational and rotational ground motions
NASA Astrophysics Data System (ADS)
Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.
2017-12-01
Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or another constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly. Especially the depth-dependent components show significant improvement. Next to synthetic data of station networks, we also tested sparse-network and single-station cases.
Solutions of Boltzmann's Equation for Mono-energetic Neutrons in an Infinite Homogeneous Medium
DOE R&D Accomplishments Database
Wigner, E. P.
1943-11-30
Boltzmann's equation is solved for the case of monoenergetic neutrons created by a plane or point source in an infinite medium which has spherically symmetric scattering. The customary solution of the diffusion equation appears to be multiplied by a constant factor which is smaller than 1. In addition to this term the total neutron density contains another term which is important in the neighborhood of the source. It varies as 1/r² in the neighborhood of a point source. (auth)
A weighted adjustment of a similarity transformation between two point sets containing errors
NASA Astrophysics Data System (ADS)
Marx, C.
2017-10-01
For an adjustment of a similarity transformation, it is often appropriate to consider that both the source and the target coordinates of the transformation are affected by errors. For the least squares adjustment of this problem, a direct solution is possible in the case of specific weighting schemes for the coordinates. Such a problem is considered in the present contribution and a direct solution is derived in general form for the m-dimensional space. The applied weighting scheme allows (fully populated) point-wise weight matrices for the source and target coordinates; both weight matrices have to be proportional to each other. Additionally, the solutions of two borderline cases of this weighting scheme are derived, which consider errors only in the source or only in the target coordinates. The investigated solution of the rotation matrix of the adjustment is independent of the scaling between the weight matrices of the source and the target coordinates. The mentioned borderline cases, therefore, have the same solution for the rotation matrix. The direct solution method is successfully tested on an example of a 3D similarity transformation using a comparison with an iterative solution based on the Gauß-Helmert model.
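A compact sketch of a weighted least-squares similarity transformation (scale s, rotation R, translation t) is given below for the simplified case of one scalar weight per point, in the style of the Umeyama closed form. The fully populated, proportional point-wise weight matrices treated in the paper require the direct solution derived there; this is only the diagonal-weight analogue.

```python
import numpy as np

def weighted_similarity(X, Y, w):
    """Fit Y ~ s * R @ X + t in m dimensions with one scalar weight per point
    (Umeyama-style closed form; a simplified stand-in for the paper's scheme).
    X, Y are m x n arrays of source and target coordinates."""
    w = w / w.sum()
    mx, my = X @ w, Y @ w                        # weighted centroids
    Xc, Yc = X - mx[:, None], Y - my[:, None]
    S = (Yc * w) @ Xc.T                          # weighted cross-covariance
    U, D, Vt = np.linalg.svd(S)
    E = np.eye(S.shape[0])
    E[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # guard against a reflection
    R = U @ E @ Vt
    s = np.trace(np.diag(D) @ E) / float(((Xc ** 2) * w).sum())
    t = my - s * R @ mx
    return s, R, t

# self-check on synthetic 3-D data
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 10))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))         # ensure a proper rotation
Y = 1.3 * R_true @ X + np.array([[1.0], [2.0], [3.0]])
s, R, t = weighted_similarity(X, Y, np.ones(10))
print("recovered scale:", round(float(s), 3))    # should be close to 1.3
```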
Exact solutions for sound radiation from a moving monopole above an impedance plane.
Ochmann, Martin
2013-04-01
The acoustic field of a monopole source moving with constant velocity at constant height above an infinite locally reacting plane can be expressed in analytical form by combining the Lorentz transformation with the method of superimposing complex or real point sources. For a plane with masslike response, the solution in Lorentz space consists of a superposition of monopoles only and therefore does not differ in principle from the solution for the corresponding stationary boundary value problem. However, by considering a frequency-independent surface impedance, e.g., with purely absorbing behavior, the half-space Green's function is now comprised of not only a line of monopoles but also of dipoles. For certain field points on a special line g, this solution can be written explicitly by using an exponential integral. For arbitrary field points, the method of stationary phase leads to an asymptotic solution for the reflection coefficient which agrees with prior results from the literature.
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.
2016-01-01
Analytical expressions for column number density (CND) are developed for optical line of sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT... from which the process brine solution was originally withdrawn, provided no additional pollutants are... through 125.32, any existing point source subject to this subpart and using the solution brine mining...
Sheppard, Colin J R; Kou, Shan S; Lin, Jiao
2014-12-01
Highly convergent beam modes in two dimensions are considered based on rigorous solutions of the scalar wave (Helmholtz) equation, using the complex source point formalism. The modes are applicable to planar waveguide or surface plasmonic structures and nearly concentric microcavity resonator modes in two dimensions. A novel solution is that of a vortex beam, where the direction of propagation is in the plane of the vortex. The modes also can be used as a basis for the cross section of propagationally invariant beams in three dimensions and bow-tie-shaped optical fiber modes.
40 CFR 415.165 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Sodium Chloride Production... chloride. (b) Any new source subject to this subpart and using the solution brine-mining process must...
40 CFR 420.96 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Acid Pickling...) Sulfuric acid pickling (spent acid solutions and rinse waters)—(1) Rod, wire, coil. Subpart I Pollutant or... operations. (b) Hydrochloric acid pickling (spent acid solutions and rinse waters)—(1) Rod, wire, coil...
Body and Surface Wave Modeling of Observed Seismic Events. Part 2.
1987-05-12
A method is described for generating synthetic point-source seismograms for shear dislocation sources using line source (2-D) theory. It is based on expanding the complete three-dimensional solution of the wave equation, expressed in cylindrical coordinates, in an asymptotic form.
NASA Astrophysics Data System (ADS)
Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.
1995-01-01
The diffusion approximation, which is often used to describe the propagation of light in biological tissues, is only good at a sufficient distance from sources and boundaries. Light-tissue interaction is, however, most intense in the region close to the source. It would therefore be interesting to study this region more closely. Although scattering in biological tissues is predominantly forward peaked, explicit solutions to the transport equation have only been obtained in the case of isotropic scattering. In particular, for the case of an isotropic point source in an unbounded, isotropically scattering medium the solution is well known. We show that this problem can also be solved analytically if the scattering is no longer isotropic, while everything else remains the same.
Web-based Communication of Water Quality Issues and Potential Solution Exploration
Many United States water bodies are impaired, i.e., do not meet applicable water quality standards. Pollutants enter water bodies from point sources (PS) and non-point sources (NPS). Loadings from PS are regulated by the Clean Water Act and permits limit them. Loadings from NPS a...
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Webb, Jay C.
1994-01-01
In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength used is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. Excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition. The Bayliss-Turkel radiation boundary condition was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should be constructed from the asymptotic solution of the finite difference equation instead. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow. The present method of developing a radiation boundary condition is also applicable to higher order finite difference schemes. In all these cases no reflected waves could be detected. The use of the finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all. The effectiveness of the correction factor in providing improvements to the computed solution is demonstrated in this paper.
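The central point, that the boundary closure should be built from the asymptotic behaviour of the finite difference equation rather than of the differential equation, can be illustrated in one dimension: the second-order central difference admits discrete waves exp(±i k* x) with a numerical wavenumber k* that differs slightly from k, and the outgoing-wave closure should use k*. The sketch below follows this idea under illustrative assumptions (1-D, a single point source); it is not the paper's 2-D formulation.

```python
import numpy as np

def helmholtz_1d(k, h, n, src_index):
    """u'' + k^2 u = -delta(x - x_s) with second-order central differences and
    outgoing-wave closures built from the numerical wavenumber k_star."""
    # discrete dispersion relation: 2*(cos(k_star*h) - 1)/h^2 + k^2 = 0
    k_star = np.arccos(1.0 - (k * h) ** 2 / 2.0) / h
    A = np.zeros((n, n), dtype=complex)
    b = np.zeros(n, dtype=complex)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h ** 2
        A[i, i] = -2.0 / h ** 2 + k ** 2
    b[src_index] = -1.0 / h                       # discretized point source
    # left boundary: u_1 = exp(-i k_star h) u_0 (wave leaving toward -x)
    A[0, 0], A[0, 1] = -np.exp(-1j * k_star * h), 1.0
    # right boundary: u_N = exp(+i k_star h) u_{N-1} (wave leaving toward +x)
    A[-1, -2], A[-1, -1] = -np.exp(1j * k_star * h), 1.0
    return np.linalg.solve(A, b)

k, h, n = 2.0 * np.pi, 0.05, 201                  # ~20 grid points per wavelength
u = helmholtz_1d(k, h, n, src_index=n // 2)
print(abs(u[n // 2]))                             # exact free-space value is 1/(2k) ~ 0.0796
```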
Groundwater flow to a horizontal or slanted well in an unconfined aquifer
NASA Astrophysics Data System (ADS)
Zhan, Hongbin; Zlotnik, Vitaly A.
2002-07-01
New semianalytical solutions for evaluation of the drawdown near horizontal and slanted wells with finite-length screens in unconfined aquifers are presented. These fully three-dimensional solutions consider instantaneous drainage or delayed yield and aquifer anisotropy. As a basis, the solution for the drawdown created by a point source in a uniform anisotropic unconfined aquifer is derived in the Laplace domain. Using superposition, the point source solution is extended to the cases of horizontal and slanted wells. The previous solutions for vertical wells can be described as a special case of the new solutions. Numerical Laplace inversion allows effective evaluation of the drawdown in real time. Examples illustrate the effects of well geometry and the aquifer parameters on drawdown. Results can be used to generate type curves from observations in piezometers and partially or fully penetrating observation wells. The proposed solutions and software are useful for parameter identification, design of remediation systems, drainage, and mine dewatering.
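Since the drawdown is obtained in the Laplace domain and then inverted numerically, a small sketch of one widely used inversion route, the Gaver-Stehfest algorithm, may be helpful. The transform inverted below is a textbook test pair, not the point-source drawdown solution itself.

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at time t.
    N must be even; suited to smooth, non-oscillatory time functions."""
    ln2t = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        total += (-1) ** (k + N // 2) * v * F(k * ln2t)
    return ln2t * total

# check against a known pair: F(s) = 1/(s + 1)  <-->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```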
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2016-01-01
Analytical expressions for column number density (CND) are developed for optical line of sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path. Keywords: column number density, plume flows, outgassing, free molecule flow.
NASA Astrophysics Data System (ADS)
Konca, A. O.; Ji, C.; Helmberger, D. V.
2004-12-01
We observed the effect of fault finiteness in the Pnl waveforms recorded at regional distances (4° to 12°) for the Mw 6.5 San Simeon earthquake of 22 December 2003. We aimed to include more of the high frequencies (2 seconds and longer periods) than the studies that use regional data for focal solutions (5 to 8 seconds and longer periods). We calculated 1-D synthetic seismograms for the Pnl portion for both a point source and a finite fault solution. The comparison of the point source and finite fault waveforms with data shows that the first several seconds of the point source synthetics have considerably higher amplitude than the data, while the finite fault does not have a similar problem. This can be explained by reversely polarized depth phases overlapping with the P waves from the later portion of the fault, causing smaller amplitudes in the beginning portion of the seismogram. This is clearly a finite fault phenomenon and therefore cannot be explained by point source calculations. Moreover, the point source synthetics, which are calculated with a focal solution from a long period regional inversion, overestimate the amplitude by three to four times relative to the data, while the finite fault waveforms have amplitudes similar to the data. Hence, a moment estimation based only on the point source solution of the regional data could have been wrong by half a magnitude unit. We have also calculated the shifts of synthetics relative to data to fit the seismograms. Our results reveal that the paths from Central California to the south are faster than the paths to the east and north. The P wave arrival at the TUC station in Arizona is 4 seconds earlier than predicted by the Southern California model, while most stations to the east are delayed by around 1 second. The observed higher uppermost mantle velocities to the south are consistent with some recent tomographic models. Synthetics generated with these models significantly improve the fits and the timing at most stations. This means that regional waveform data can be used to help locate and establish source complexities for future events.
Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann
2012-02-01
A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by time-reversed acoustic focusing solution, its use as a virtual source is limited due to artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce the artifacts for a selected listening point inside an array of arbitrary shape. Results show that energy of convergent waves can be reduced up to 60 dB for a large region including the selected listening point. © 2012 Acoustical Society of America
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green's functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
Evaluation of ground-water quality in the Santa Maria Valley, California
Hughes, Jerry L.
1977-01-01
The quality and quantity of recharge to the Santa Maria Valley, Calif., ground-water basin from natural sources, point sources, and agriculture are expressed in terms of a hydrologic budget, a solute balance, and maps showing the distribution of select chemical constituents. Point sources include a sugar-beet refinery, oil refineries, stockyards, golf courses, poultry farms, solid-waste landfills, and municipal and industrial wastewater-treatment facilities. Pumpage has exceeded recharge by about 10,000 acre-feet per year. The result is a declining potentiometric surface with an accumulation of solutes and an increase in nitrogen in ground water. Nitrogen concentrations have reached as much as 50 milligrams per liter. In comparison to the solutes from irrigation return, natural recharge, and rain, discharge of wastewater from municipal and industrial wastewater-treatment facilities contributes less than 10 percent. The quality of treated wastewater is often lower in select chemical constituents than the receiving water. (Woodard-USGS)
Discretizing singular point sources in hyperbolic wave propagation problems
Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...
2016-06-01
Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
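The moment conditions mentioned above can be made concrete: to place a point source at an off-grid location, one chooses weights on a few neighbouring grid points so that discrete sums reproduce the first few polynomial moments of the delta function. The sketch below imposes moment conditions only; the stencil width is arbitrary and the additional smoothness conditions of the paper are omitted.

```python
import numpy as np

def delta_weights(xs, x_grid, h, n_moments=4):
    """Weights w_j on the n_moments grid points nearest xs such that
    sum_j w_j * (x_j - xs)**q = delta_{q,0} / h for q = 0..n_moments-1,
    i.e. the discretized point source reproduces polynomial moments up to
    degree n_moments-1 (moment conditions only; the paper's additional
    smoothness conditions are omitted)."""
    idx = np.argsort(np.abs(x_grid - xs))[:n_moments]
    idx.sort()
    d = x_grid[idx] - xs
    V = np.vander(d, N=n_moments, increasing=True).T   # V[q, j] = d_j**q
    rhs = np.zeros(n_moments)
    rhs[0] = 1.0 / h                                   # unit "mass" on the grid
    return idx, np.linalg.solve(V, rhs)

x = np.linspace(0.0, 1.0, 101)                         # h = 0.01
idx, w = delta_weights(xs=0.4037, x_grid=x, h=0.01)
# quadrature test: the discrete delta applied to a smooth function
f = np.sin(2 * np.pi * x)
print(np.sum(w * f[idx]) * 0.01, np.sin(2 * np.pi * 0.4037))
```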
NASA Astrophysics Data System (ADS)
Sedghi, Mohammad Mahdi; Samani, Nozar; Sleep, Brent
2009-06-01
The Laplace domain solutions have been obtained for three-dimensional groundwater flow to a well in confined and unconfined wedge-shaped aquifers. The solutions take into account partial penetration effects, instantaneous drainage or delayed yield, vertical anisotropy and the water table boundary condition. As a basis, the Laplace domain solutions for drawdown created by a point source in uniform, anisotropic confined and unconfined wedge-shaped aquifers are first derived. Then, by the principle of superposition the point source solutions are extended to the cases of partially and fully penetrating wells. Unlike the previous solution for the confined aquifer that contains improper integrals arising from the Hankel transform [Yeh HD, Chang YC. New analytical solutions for groundwater flow in wedge-shaped aquifers with various topographic boundary conditions. Adv Water Resour 2006;26:471-80], numerical evaluation of our solution is relatively easy using well known numerical Laplace inversion methods. The effects of wedge angle, pumping well location and observation point location on drawdown and the effects of partial penetration, screen location and delay index on the wedge boundary hydraulic gradient in unconfined aquifers have also been investigated. The results are presented in the form of dimensionless drawdown-time and boundary gradient-time type curves. The curves are useful for parameter identification, calculation of stream depletion rates and the assessment of water budgets in river basins.
Comparison of finite source and plane wave scattering from corrugated surfaces
NASA Technical Reports Server (NTRS)
Levine, D. M.
1977-01-01
The choice of a plane wave to represent incident radiation in the analysis of scatter from corrugated surfaces was examined. The physical optics solution obtained for the scattered fields due to an incident plane wave was compared with the solution obtained when the incident radiation is produced by a source of finite size at a finite distance from the surface. The two solutions are equivalent if the observer is in the far field of the scatterer and the distance from observer to scatterer is large compared to the radius of curvature at the scatter points, a condition not easily satisfied with extended scatterers such as rough surfaces. In general, the two solutions have essential differences, such as in the location of the scatter points and the dependence of the scattered fields on the surface properties. The implication of these differences for the definition of a meaningful radar cross section was examined.
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. 
Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth-orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide, comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.
Subsurface solute transport with one-, two-, and three-dimensional arbitrary shape sources
NASA Astrophysics Data System (ADS)
Chen, Kewei; Zhan, Hongbin; Zhou, Renjie
2016-07-01
Solutions with one-, two-, and three-dimensional arbitrary shape source geometries will be very helpful tools for investigating a variety of contaminant transport problems in the geological media. This study proposed a general method to develop new solutions for solute transport in a saturated, homogeneous aquifer (confined or unconfined) with a constant, unilateral groundwater flow velocity. Several typical source geometries, such as arbitrary line sources, vertical and horizontal patch sources, circular and volumetric sources, were considered. The sources can sit on the upper or lower aquifer boundary to simulate light non-aqueous-phase-liquids (LNAPLs) or dense non-aqueous-phase-liquids (DNAPLs), respectively, or can be located anywhere inside the aquifer. The developed new solutions were tested against previous benchmark solutions under special circumstances and were shown to be robust and accurate. Such solutions can also be used as a starting point for the inverse problem of source zone and source geometry identification in the future. The following findings can be obtained from analyzing the solutions. The source geometry, including shape and orientation, generally played an important role for the concentration profile through the entire transport process. When comparing the inclined line sources with the horizontal line sources, the concentration contours expanded considerably along the vertical direction, and shrank considerably along the groundwater flow direction. A planar source sitting on the upper aquifer boundary (such as a LNAPL pool) would lead to significantly different concentration profiles compared to a planar source positioned in a vertical plane perpendicular to the flow direction. For a volumetric source, its dimension along the groundwater flow direction became less important compared to its other two dimensions.
NASA Astrophysics Data System (ADS)
Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.
2017-12-01
Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: 1. distributed, top-down permafrost degradation, and 2. discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2, and included three distinct types of Arctic landscapes - mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 to 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially-distributed mass balance revealed conservative transport of DOC and nitrogen, and indicates there may be strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.
NASA Astrophysics Data System (ADS)
Lachat, E.; Landes, T.; Grussenmeyer, P.
2018-05-01
Terrestrial and airborne laser scanning, photogrammetry and, more generally, 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets known in local systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open source software as well as programs from the academic community are available. Due to some shortcomings of these solutions in terms of computation transparency and quality assessment, it was decided to develop an open source algorithm, which is presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, involving the possibility of combining 3D data from different sources. Parallel to the presentation of the global registration methodology which has been employed, the aim of this paper is to confront the results achieved this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm to perform the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by two selected software packages used in this study and a reflection on these criteria are also performed to complete the comparison of the obtained results. The final aim of this paper is to validate the current efficiency of the proposed method through these comparisons.
NASA Astrophysics Data System (ADS)
Moore, J.; Bird, D. L.; Dobbis, S. K.; Woodward, G.
2016-12-01
Urban areas and associated impervious surface cover (ISC) are among the fastest growing land use types. Rapid growth of urban lands has significant implications for geochemical cycling and solute sources to streams, estuaries, and coastal waters. However, little work has been done to investigate the impacts of urbanization on Critical Processes, including on the export of solutes from urban watersheds. Despite observed elevated solute concentrations in urban streams in some previous studies, neither solute sources nor total solute fluxes have been quantified due to mixed bedrock geology, lack of a forested reference watershed, or the presence of point sources that confounded separation of anthropologic and natural sources. We investigated the geochemical signal of the urban built environment (e.g., roads, parking lots, buildings) in a set of five USGS-gaged watersheds across a rural (forested) to urban gradient in the Maryland Piedmont. These watersheds have ISC ranging from 0 to 25%, no point sources, and similar felsic bedrock chemistry. Weathering from the urban built environment and ISC produces dramatically higher solute concentrations in urban watersheds than in the forested watershed. Higher solute concentrations result in chemical weathering fluxes from urban watersheds that are 11-13 times higher than the forested watershed and are similar to fluxes from mountainous, weathering-limited watersheds rather than fluxes from transport-limited, dilute streams like the forested watershed. Weathering of concrete in urban watersheds produces geochemistry similar to weathering-limited watersheds with high concentrations of Ca2+, Mg2+, and DIC, which is similar to stream chemistry due to carbonate weathering. Road salt dissolution results in high Na+ and Cl- concentrations similar to evaporite weathering. Quantifying processes causing elevated solute fluxes from urban areas is essential to understanding cycling of Ca2+, Mg2+, and DIC in urban streams and in downgradient estuarine or coastal waters.
Mercury Contaminated Sediment Sites: A Review Of Remedial Solutions
Mercury (Hg) can accumulate in sediment from point and non-point sources, depending on a number of physical, chemical, biological, geological and anthropogenic environmental processes. It is believed that the associated Hg contamination in aquatic systems can be decreased by imp...
The Osher scheme for non-equilibrium reacting flows
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zel'dovich-von Neumann-Döring) detonation problem for which spurious numerical solutions that propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.
40 CFR 415.161 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Sodium Chloride Production... the saturated brine solution remaining after precipitation of sodium chloride in the solar evaporation...
NASA Astrophysics Data System (ADS)
Burinskii, Alexander
2016-01-01
It is known that gravitational and electromagnetic fields of an electron are described by the ultra-extreme Kerr-Newman (KN) black hole solution with extremely high spin/mass ratio. This solution is singular and has a topological defect, the Kerr singular ring, which may be regularized by introducing the solitonic source based on the Higgs mechanism of symmetry breaking. The source represents a domain wall bubble interpolating between the flat region inside the bubble and external KN solution. It was shown recently that the source represents a supersymmetric bag model, and its structure is unambiguously determined by Bogomolnyi equations. The Dirac equation is embedded inside the bag consistently with twistor structure of the Kerr geometry, and acquires the mass from the Yukawa coupling with Higgs field. The KN bag turns out to be flexible, and for parameters of an electron, it takes the form of very thin disk with a circular string placed along sharp boundary of the disk. Excitation of this string by a traveling wave creates a circulating singular pole, indicating that the bag-like source of KN solution unifies the dressed and point-like electron in a single bag-string-quark system.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit underrelaxation, point implicit, and lower upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight order of magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point it is out-performed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
NASA Astrophysics Data System (ADS)
Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.
2015-12-01
Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow for quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for the magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or the rectangular dislocation solutions, are geometrically limited and can barely improve the interpretation of data. We develop a new analytical artefact-free solution for a rectangular dislocation, which also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field, and also performs as a finite dislocation model for planar intrusions in the near field. We show that through calculating Eshelby's shape tensor the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced by using this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source. After evaluating our model by using a boundary element method code, we apply it to ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1 satellite. We infer the parameters of a deflating elongated source located beneath Calbuco, and find significant differences to Mogi-type solutions. The results imply that interpretations based on our model may help us better understand source characteristics, and in the case of Calbuco volcano infer a volcano-tectonic coupling mechanism.
40 CFR 467.26 - Pretreatment standards for new sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... 467.26 Section 467.26 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ALUMINUM FORMING POINT SOURCE CATEGORY Rolling With Emulsions Subcategory § 467.26... parameter) 13.29 13.29 Subpart B Solution Heat Treatment Contact Cooling Water Pollutant or pollutant...
Singularity and Bohm criterion in hot positive ion species in the electronegative ion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aslaninejad, Morteza; Yasserian, Kiomars
2016-05-15
The structure of the discharge for a magnetized electronegative ion source with two species of positive ions is investigated. The thermal motion of hot positive ions and the singularities involved with it are taken into account. By analytical solution of the neutral region, the location of the singular point and also the values of the plasma parameters such as electric potential and ion density at the singular point are obtained. A generalized Bohm criterion is recovered and discussed. In addition, for the non-neutral solution, the numerical method is used. In contrast with cold ion plasma, qualitative changes are observed. The parameter space region within which oscillations in the density and potential can be observed has been scanned and discussed. The space charge behavior in the vicinity of the edge of the ion sources has also been discussed in detail.
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
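The processing scheme described above, a linear waveform inversion in the frequency domain repeated over a grid of trial source locations, can be sketched as follows. The Green's functions, data, and misfit are synthetic placeholders rather than the Stromboli implementation.

```python
import numpy as np

def invert_point_source(d_hat, G_hat):
    """Least-squares source components m(omega) for one trial location,
    frequency by frequency: d_hat[r, w] ~ sum_c G_hat[r, c, w] * m[c, w]."""
    n_rec, n_cmp, n_freq = G_hat.shape
    m = np.zeros((n_cmp, n_freq), dtype=complex)
    misfit = 0.0
    for w in range(n_freq):
        G = G_hat[:, :, w]
        sol = np.linalg.lstsq(G, d_hat[:, w], rcond=None)[0]
        m[:, w] = sol
        r = d_hat[:, w] - G @ sol
        misfit += float(np.vdot(r, r).real)
    return m, misfit

def grid_search(d_hat, greens_by_node):
    """Loop over precomputed Green's functions (one set per trial node, e.g.
    obtained via source-receiver reciprocity) and keep the best-fitting node."""
    return min(((node, *invert_point_source(d_hat, G))
                for node, G in greens_by_node.items()),
               key=lambda item: item[2])

# tiny synthetic test: 9 receivers, 6 source components, 64 frequencies, 10 nodes
rng = np.random.default_rng(3)
greens = {node: rng.standard_normal((9, 6, 64)) + 1j * rng.standard_normal((9, 6, 64))
          for node in range(10)}
m_true = rng.standard_normal((6, 64))
d = np.einsum('rcw,cw->rw', greens[7], m_true)
node, m_est, misfit = grid_search(d, greens)
print("best node:", node, "misfit:", round(misfit, 6))  # should recover node 7
```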
Design of TIR collimating lens for ordinary differential equation of extended light source
NASA Astrophysics Data System (ADS)
Zhan, Qianjing; Liu, Xiaoqin; Hou, Zaihong; Wu, Yi
2017-10-01
LED sources are widely used in daily life. The intensity distribution of a single LED is Lambertian, which does not satisfy many application requirements, so the light must be redistributed to change the LED's angular intensity distribution. The most common approach is a freeform surface. Generally, using ordinary differential equations to calculate a freeform surface is valid only for a point source and leads to large errors for an extended source. This paper proposes an LED collimating lens based on an ordinary differential equation that is combined with the LED's light distribution curve and uses the center of gravity of the extended source to obtain the normal vector. The ordinary differential equations are constructed according to Snell's law, and the curve point coordinates are obtained by solving them with the Runge-Kutta method. The edge point data of the lens are then imported into the optical simulation software TracePro. For a 1mm×1mm single Lambertian emitter, the collimated beam divergence is close to +/-3 degrees and the energy utilization rate is higher than 85%. Compared with a lens calculated by the point-source differential equation method, the simulated collimation is improved by about 1 degree.
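As a simplified illustration of the profile-by-ODE idea (not the paper's extended-source formulation), the sketch below integrates the classical point-source collimator ODE dr/dtheta = r*tan(theta/2) for a reflective collimating surface with a fourth-order Runge-Kutta scheme; its exact solution is the polar parabola r = r0/cos^2(theta/2), which provides a convenient check of the integrator. The radius r0 and angular range are arbitrary.

import numpy as np

def collimator_profile(r0, theta_max, n_steps=200):
    """Integrate dr/dtheta = r*tan(theta/2) with classical RK4.

    This is the point-source reflective-collimator ODE; its exact
    solution is the polar parabola r = r0 / cos(theta/2)**2.
    """
    f = lambda theta, r: r * np.tan(theta / 2.0)
    h = theta_max / n_steps
    theta, r = 0.0, r0
    profile = [(theta, r)]
    for _ in range(n_steps):
        k1 = f(theta, r)
        k2 = f(theta + h / 2, r + h * k1 / 2)
        k3 = f(theta + h / 2, r + h * k2 / 2)
        k4 = f(theta + h, r + h * k3)
        r += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        theta += h
        profile.append((theta, r))
    return np.array(profile)

prof = collimator_profile(r0=5.0, theta_max=np.radians(80))
theta_end, r_end = prof[-1]
exact = 5.0 / np.cos(theta_end / 2) ** 2
print(f"RK4: {r_end:.6f} mm, exact parabola: {exact:.6f} mm")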
NASA Technical Reports Server (NTRS)
Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.
1992-01-01
The present numerical method for accurate and efficient solution of systems of linear equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small size of the scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.
Traveling wavefront solutions to nonlinear reaction-diffusion-convection equations
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Smets, Ruben
2017-08-01
Physically motivated modified Fisher equations are studied in which nonlinear convection and nonlinear diffusion are allowed for besides the usual growth and spread of a population. It is pointed out that in a large variety of cases separable functions in the form of exponentially decaying sharp wavefronts solve the differential equation exactly, provided a co-moving point source or sink is active at the wavefront. The velocity dispersion and front steepness may differ from those of some previously studied exact smooth traveling wave solutions. For an extension of the reaction-diffusion-convection equation, featuring a memory effect in the form of a maturity delay for growth and spread, smooth exact wavefront solutions are also obtained. The stability of the solutions is verified analytically and numerically.
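For concreteness, a generic template of the reaction-diffusion-convection (modified Fisher) equation referred to above can be written as follows; the particular convection and diffusion nonlinearities treated in the paper may differ from this illustrative form.

\[
\frac{\partial u}{\partial t} + \nu\,u\,\frac{\partial u}{\partial x}
  = \frac{\partial}{\partial x}\!\left(D(u)\,\frac{\partial u}{\partial x}\right)
  + r\,u\,(1-u),
\]

where the exponentially decaying sharp wavefronts mentioned in the abstract take the schematic form $u \propto e^{-\lambda (x - ct)}$ ahead of the front, sustained by a point source or sink co-moving with the front position $x = ct$.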
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be expended on modelling dispersion processes more accurately in the future, rather than on accounting for source geometry.
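To make the computational-cost argument above concrete, the sketch below evaluates the standard Gaussian plume solution for a ground-level point source and approximates a crosswind line source by summing point sources along its length; this is the kind of brute-force integration the analytical (hypergeometric) solutions in the paper are designed to avoid. The dispersion parameters and emission rates are illustrative constants, not output of AERMOD or ADMS.

import numpy as np

def gaussian_plume(q, y, z, u, sigma_y, sigma_z, h=0.0):
    """Steady Gaussian plume concentration (g/m^3) at crosswind offset y and
    height z for a point source of strength q (g/s) at height h, with wind
    speed u (m/s); sigma_y, sigma_z (m) are dispersion lengths already
    evaluated at the receptor's downwind distance. Ground reflection included."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def crosswind_line_source(q_per_m, length, z, u, sigma_y, sigma_z, n=400):
    """Approximate a finite ground-level crosswind line source by summing n
    point sources along its length (the brute-force route the analytical
    solutions replace)."""
    ys = np.linspace(-length / 2.0, length / 2.0, n)
    dq = q_per_m * length / n
    return sum(gaussian_plume(dq, y0, z, u, sigma_y, sigma_z) for y0 in ys)

# Receptor on the plume centreline, 1.5 m above ground, downwind of a 200 m
# line source emitting 1 mg/(s*m); sigma values are illustrative only.
c = crosswind_line_source(q_per_m=1e-3, length=200.0, z=1.5,
                          u=4.0, sigma_y=35.0, sigma_z=18.0)
print(f"mean concentration ~ {c * 1e6:.1f} ug/m^3")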
Device for isolation of seed crystals during processing of solution
Montgomery, Kenneth E.; Zaitseva, Natalia P.; Deyoreo, James J.; Vital, Russell L.
1999-01-01
A device for isolation of seed crystals during processing of solutions. The device enables a seed crystal to be introduced into the solution without exposing the solution to contaminants or to sources of drying and cooling. The device constitutes a seed protector which allows the seed to be present in the growth solution during filtration and overheating operations while at the same time preventing the seed from being dissolved by the undersaturated solution. When the solution processing has been completed and the solution cooled to near the saturation point, the seed protector is opened, exposing the seed to the solution and allowing growth to begin.
Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noises.
NASA Astrophysics Data System (ADS)
Wong, Raymond Lee Man
This analytical and experimental study explores concepts for jet noise shielding. Model experiments centre on solid planar shields, simulating engine-over-wing installations, and 'sugar scoop' shields. The tradeoff on effective shielding length is set by interference 'edge noise' as the shield trailing edge approaches the spreading jet. Edge noise is minimized by (i) hyperbolic cutouts which trim off the portions of most intense interference between the jet flow and the barrier and (ii) hybrid shields--a thermal refractive extension (a flame); for (ii) the tradeoff is combustion noise. In general, shielding attenuation increases steadily with frequency, following low frequency enhancement by edge noise. Although broadband attenuation is typically only several dB, the reduction of the subjectively weighted perceived noise levels is higher. In addition, calculated ground contours of peak PN dB show a substantial contraction due to shielding: this reaches 66% for one of the 'sugar scoop' shields for the 90 PN dB contour. The experiments are complemented by analytical predictions. These are divided into an engineering scheme for jet noise shielding and a more rigorous analysis for point source shielding. The former approach combines point source shielding with a suitable jet source distribution. The results are synthesized into a predictive algorithm for jet noise shielding: the jet is modelled as a line distribution of incoherent sources with narrow band frequency proportional to (axial distance)^(-1). The predictive version agrees well with experiment (1 to 1.5 dB) up to moderate frequencies. The insertion loss deduced from the point source measurements for semi-infinite as well as finite rectangular shields agrees rather well with theoretical calculations based on the exact half plane solution and the superposition of asymptotic closed-form solutions. An approximate theory, the Maggi-Rubinowicz line integral, is found to yield reasonable predictions for thin barriers including cutouts if a certain correction is applied. The more exact integral equation approach (solved numerically) is applied to a more demanding geometry: a half round sugar scoop shield. The solutions of the integral equation derived from the Helmholtz formula in normal derivative form show satisfactory agreement with measurements.
40 CFR 409.21 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.21 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.21 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.21 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.21... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.11 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.11 Specialized... or related to the concentration and crystallization of sugar solutions. (c) The term product shall mean crystallized refined sugar. ...
40 CFR 409.11 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.11 Specialized... or related to the concentration and crystallization of sugar solutions. (c) The term product shall mean crystallized refined sugar. ...
40 CFR 409.11 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.11 Specialized... or related to the concentration and crystallization of sugar solutions. (c) The term product shall mean crystallized refined sugar. ...
40 CFR 409.11 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.11 Specialized... or related to the concentration and crystallization of sugar solutions. (c) The term product shall mean crystallized refined sugar. ...
40 CFR 409.31 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.31... (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
Possible Solutions for Financial Crises of the Private Sector of Higher Education.
ERIC Educational Resources Information Center
Bolling, Landrum R.
Our society is at a point where a number of interlocking crises (inflation, ever rising expectations, war, urban problems, youth's discontent) are coming together. Money is needed at every point, and the private college cannot rely on the federal government or private sources to save it from financial disaster. The private college can tackle its…
40 CFR 409.21 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409... raw material (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.11 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.11... associated with or related to the concentration and crystallization of sugar solutions. (c) The term product shall mean crystallized refined sugar. ...
Device for isolation of seed crystals during processing of solution
Montgomery, K.E.; Zaitseva, N.P.; Deyoreo, J.J.; Vital, R.L.
1999-05-18
A device is described for isolation of seed crystals during processing of solutions. The device enables a seed crystal to be introduced into the solution without exposing the solution to contaminants or to sources of drying and cooling. The device constitutes a seed protector which allows the seed to be present in the growth solution during filtration and overheating operations while at the same time preventing the seed from being dissolved by the undersaturated solution. When the solution processing has been completed and the solution cooled to near the saturation point, the seed protector is opened, exposing the seed to the solution and allowing growth to begin. 3 figs.
Open Source Hbim for Cultural Heritage: a Project Proposal
NASA Astrophysics Data System (ADS)
Diara, F.; Rinaudo, F.
2018-05-01
Current technologies are changing the ways Cultural Heritage is researched, analysed, conserved and developed, allowing new innovative approaches. The possibility of integrating Cultural Heritage data, such as archaeological information, inside a three-dimensional environment system (such as Building Information Modelling) brings huge benefits for its management, monitoring and valorisation. Nowadays there are many commercial BIM solutions. However, these tools are conceived and developed mostly for architectural design or technical installations. A better solution could be a dynamic and open platform that treats Cultural Heritage needs as a priority. Suitable solutions for better and more complete data usability and accessibility could be guaranteed by open source protocols. This choice would allow the software to be adapted to Cultural Heritage needs, and not the opposite, thus avoiding methodological stretches. This work focuses on the analysis of, and experimentation with, specific characteristics of such open source software (DBMS, CAD, servers) applied to a Cultural Heritage example, in order to verify their flexibility and reliability and then create a dynamic HBIM open source prototype. Indeed, it may be a starting point for the future creation of a complete HBIM open source solution that could be adapted to other Cultural Heritage research and analyses.
NASA Technical Reports Server (NTRS)
Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.
2016-01-01
Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using point per wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW2).
On concentrated solute sources in faulted aquifers
NASA Astrophysics Data System (ADS)
Robinson, N. I.; Werner, A. D.
2017-06-01
Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source relative to one in a fault-free aquifer is affected by the fault, both before it and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming computational constraints that accompany requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligning with fault faces and normals. Numerical exemplification is given to the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being, fault rotation, aperture and conductivity ratio. New general observations of fault-affected solution plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault, these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
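As background for the least-squares criterion discussed above, the following sketch computes the ordinary least-squares superposition of two complete, already-aligned point sets using the SVD-based Kabsch procedure; it does not implement the expectation-maximization treatment of missing data that THESEUS 2.0 provides, and the point sets are synthetic.

import numpy as np

def kabsch_superpose(P, Q):
    """Least-squares superposition of point set P onto Q (both n x 3).

    Returns the rotation R and translation t minimizing ||(P @ R + t) - Q||^2,
    assuming a one-to-one correspondence with no missing points.
    """
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)              # covariance of centred coordinates
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = Qc - Pc @ R
    return R, t

# Toy check: rotate/translate a structure and recover the transform.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true + np.array([1.0, -2.0, 0.5])
R, t = kabsch_superpose(P, Q)
rmsd = np.sqrt(((P @ R + t - Q) ** 2).sum(axis=1).mean())
print(f"RMSD after superposition: {rmsd:.2e}")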
Finite Element modelling of deformation induced by interacting volcanic sources
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jürgen; Rivalta, Eleonora
2010-05-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system comprises more than one source, the assumption of homogeneity in the half-space is violated and several sources are combined, their respective deformation fields being summed. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and we tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying their relative position. Furthermore we considered the impact of topography, loading, and magma compressibility. To quantify the discrepancies and compare the various models, we calculated the difference between analytical and numerical maximum horizontal or vertical surface displacements. We will demonstrate that for certain conditions combining analytical sources can cause an error of up to 20%. References: McTigue, D. F. (1987), Elastic Stress and Deformation Near a Finite Spherical Magma Body: Resolution of the Point Source Paradox, J. Geophys. Res. 92, 12931-12940. Mogi, K. (1958), Relations between the eruptions of various volcanoes and the deformations of the ground surfaces around them, Bull Earthquake Res Inst, Univ Tokyo 36, 99-134. Okada, Y. (1992), Internal Deformation Due to Shear and Tensile Faults in a Half-Space, Bulletin of the Seismological Society of America 82(2), 1018-1040.
Toward a Nonlinear Acoustic Analogy: Turbulence as a Source of Sound and Nonlinear Propagation
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
An acoustic analogy is proposed that directly includes nonlinear propagation effects. We examine the Lighthill acoustic analogy and replace the Green's function of the wave equation with numerical solutions of the generalized Burgers' equation. This is justified mathematically by using similar arguments that are the basis of the solution of the Lighthill acoustic analogy. This approach is superior to alternatives because propagation is accounted for directly from the source to the far-field observer instead of from an arbitrary intermediate point. Validation of a numerical solver for the generalized Burgers' equation is performed by comparing solutions with the Blackstock bridging function and measurement data. Most importantly, the mathematical relationship between the Navier-Stokes equations, the acoustic analogy that describes the source, and canonical nonlinear propagation equations is shown. Example predictions are presented for nonlinear propagation of jet mixing noise at the sideline angle.
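The generalized Burgers' equation used in the paper includes additional effects such as spreading and atmospheric absorption; as a minimal illustration of the propagation step only, the sketch below advances the plain viscous Burgers' equation u_t + u*u_x = nu*u_xx with a simple explicit upwind/central finite-difference scheme on a periodic grid. The grid, viscosity and initial waveform are arbitrary.

import numpy as np

def burgers_step(u, dx, dt, nu):
    """One explicit time step of u_t + u*u_x = nu*u_xx on a periodic grid.

    Advection uses first-order upwinding, diffusion a central difference.
    Stable only for small dt (CFL and diffusion limits).
    """
    up = np.roll(u, -1)   # u[i+1]
    um = np.roll(u, 1)    # u[i-1]
    dudx = np.where(u > 0, (u - um) / dx, (up - u) / dx)
    d2udx2 = (up - 2 * u + um) / dx**2
    return u + dt * (-u * dudx + nu * d2udx2)

# Steepening of an initially sinusoidal waveform.
n, L, nu = 400, 2 * np.pi, 0.01
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
u = np.sin(x)
dt = 0.2 * min(dx / 1.0, dx**2 / (2 * nu))  # crude stability estimate
for _ in range(int(2.0 / dt)):              # integrate to t ~ 2
    u = burgers_step(u, dx, dt, nu)
print(f"max slope after steepening: {np.abs(np.gradient(u, dx)).max():.2f}")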
Radiation and the classical double copy for color charges
NASA Astrophysics Data System (ADS)
Goldberger, Walter D.; Ridgway, Alexander K.
2017-06-01
We construct perturbative classical solutions of the Yang-Mills equations coupled to dynamical point particles carrying color charge. By applying a set of color to kinematics replacement rules first introduced by Bern, Carrasco and Johansson, these are shown to generate solutions of d -dimensional dilaton gravity, which we also explicitly construct. Agreement between the gravity result and the gauge theory double copy implies a correspondence between non-Abelian particles and gravitating sources with dilaton charge. When the color sources are highly relativistic, dilaton exchange decouples, and the solutions we obtain match those of pure gravity. We comment on possible implications of our findings to the calculation of gravitational waveforms in astrophysical black hole collisions, directly from computationally simpler gluon radiation in Yang-Mills theory.
Techniques for determining physical zones of influence
Hamann, Hendrik F; Lopez-Marrero, Vanessa
2013-11-26
Techniques for analyzing flow of a quantity in a given domain are provided. In one aspect, a method for modeling regions in a domain affected by a flow of a quantity is provided which includes the following steps. A physical representation of the domain is provided. A grid that contains a plurality of grid-points in the domain is created. Sources are identified in the domain. Given a vector field that defines a direction of flow of the quantity within the domain, a boundary value problem is defined for each of one or more of the sources identified in the domain. Each of the boundary value problems is solved numerically to obtain a solution for the boundary value problems at each of the grid-points. The boundary problem solutions are post-processed to model the regions affected by the flow of the quantity on the physical representation of the domain.
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
2010-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
McCollom, Brittany A; Collis, Jon M
2014-09-01
A normal mode solution to the ocean acoustic problem of the Pekeris waveguide with an elastic bottom using a Green's function formulation for a compressional wave point source is considered. Analytic solutions to these types of waveguide propagation problems are strongly dependent on the eigenvalues of the problem; these eigenvalues represent horizontal wavenumbers, corresponding to propagating modes of energy. The eigenvalues arise as singularities in the inverse Hankel transform integral and are specified by roots to a characteristic equation. These roots manifest themselves as poles in the inverse transform integral and can be both subtle and difficult to determine. Following methods previously developed [S. Ivansson et al., J. Sound Vib. 161 (1993)], a root finding routine has been implemented using the argument principle. Using the roots to the characteristic equation in the Green's function formulation, full-field solutions are calculated for scenarios where an acoustic source lies in either the water column or elastic half space. Solutions are benchmarked against laboratory data and existing numerical solutions.
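The root-finding strategy described above relies on the argument principle; the sketch below shows the basic counting step for a generic analytic function, integrating f'/f around a rectangular contour. The characteristic equation of the elastic-bottom Pekeris waveguide is far more involved than this example, so the function and contour here are purely illustrative.

import numpy as np

def count_zeros(f, df, lower_left, upper_right, n_per_side=2000):
    """Count zeros of an analytic function f inside an axis-aligned rectangle
    using the argument principle: N = (1/(2*pi*i)) * contour integral of f'/f.
    Assumes no zeros or poles lie on the contour itself."""
    a, b = lower_left, upper_right
    corners = [a, complex(b.real, a.imag), b, complex(a.real, b.imag), a]
    total = 0.0 + 0.0j
    for z0, z1 in zip(corners[:-1], corners[1:]):
        t = np.linspace(0.0, 1.0, n_per_side)
        z = z0 + (z1 - z0) * t
        g = df(z) / f(z) * (z1 - z0)                         # integrand * dz/dt
        total += np.sum(0.5 * (g[1:] + g[:-1]) * (t[1] - t[0]))  # trapezoid rule
    return int(round((total / (2j * np.pi)).real))

# Example: sin(z) has zeros at -pi, 0 and pi inside the box below.
print(count_zeros(np.sin, np.cos, complex(-4.0, -1.0), complex(4.0, 1.0)))  # -> 3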
Transient pressure analysis of fractured well in bi-zonal gas reservoirs
NASA Astrophysics Data System (ADS)
Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo
2015-05-01
For a hydraulically fractured well, evaluating the properties of the fracture and the formation is difficult, and conventional methods are complex to apply, especially for a partially penetrating fractured well. Although the source function is a very powerful tool for analyzing the transient pressure of wells with complex structure, corresponding reports for gas reservoirs are rare. In this paper, the continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method and Duhamel's principle. Applying the construction method, the continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, the physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted by a numerical inversion algorithm, and the flow periods and sensitive factors are analyzed. The source functions and fractured-well solutions have both theoretical and practical applications in well test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can usually be described with the composite model.
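The abstract does not name the numerical Laplace-inversion algorithm it uses; in well-test analysis the Gaver-Stehfest method is a common choice, and a minimal sketch of it is given below, checked against a transform pair with a known inverse. Treat it as an assumption-laden stand-in, not the paper's implementation.

import numpy as np
from math import factorial, log

def stehfest_weights(n=12):
    """Gaver-Stehfest weights V_k for an even number of terms n."""
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * factorial(2 * j)
                  / (factorial(n // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v.append((-1) ** (k + n // 2) * s)
    return np.array(v)

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    v = stehfest_weights(n)
    k = np.arange(1, n + 1)
    s = k * log(2.0) / t
    return log(2.0) / t * np.sum(v * np.array([F(si) for si in s]))

# Check with F(s) = 1/(s+1)  <->  f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)
print(stehfest_invert(F, t=1.0), np.exp(-1.0))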
A smart market for nutrient credit trading to incentivize wetland construction
NASA Astrophysics Data System (ADS)
Raffensperger, John F.; Prabodanie, R. A. Ranga; Kostel, Jill A.
2017-03-01
Nutrient trading and constructed wetlands are widely discussed solutions to reduce nutrient pollution. Nutrient markets usually include agricultural nonpoint sources and municipal and industrial point sources, but these markets rarely include investors who construct wetlands to sell nutrient reduction credits. We propose a new market design for trading nutrient credits, with both point source and non-point source traders, explicitly incorporating the option of landowners to build nutrient removal wetlands. The proposed trading program is designed as a smart market with centralized clearing, done with an optimization. The market design addresses the varying impacts of runoff over space and time, and the lumpiness of wetland investments. We simulated the market for the Big Bureau Creek watershed in north-central Illinois. We found that the proposed smart market would incentivize wetland construction by assuring reasonable payments for the ecosystem services provided. The proposed market mechanism selects wetland locations strategically taking into account both the cost and nutrient removal efficiencies. The centralized market produces locational prices that would incentivize farmers to reduce nutrients, which is voluntary. As we illustrate, wetland builders' participation in nutrient trading would enable the point sources and environmental organizations to buy low cost nutrient credits.
NASA Astrophysics Data System (ADS)
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
A key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the tsunami source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is treated as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on least squares and truncated singular value decomposition techniques. Tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, the numerical solutions become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view because the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each spatial harmonic used as a source (the unknown tsunami source is represented as a series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectrum of this matrix, one can assess the expected quality of an inversion with a given observational system, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained point to a way of improving the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining good inversion results. Applying the proposed methodology to the 16 September 2015 Chile tsunami successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of tsunami wave propagation.
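The least-squares / truncated-SVD inversion outlined above can be illustrated as follows: A is the matrix whose columns are computed waveforms for each spatial harmonic of the source, d is the vector of tsunameter records, and the truncation level r controls the stability of the recovered coefficients (the "r-solution"). The example below uses a random matrix purely as a stand-in for the precomputed waveform matrix.

import numpy as np

def truncated_svd_solution(A, d, r):
    """Minimum-norm least-squares solution of A x ~= d using only the r
    largest singular values (discarding poorly constrained directions)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :]
    return Vr.T @ ((Ur.T @ d) / sr)

# Stand-in example: 120 "waveform samples" x 20 "source harmonics".
rng = np.random.default_rng(1)
A = rng.normal(size=(120, 20))
x_true = rng.normal(size=20)
d = A @ x_true + 0.05 * rng.normal(size=120)   # noisy synthetic records
for r in (5, 10, 20):
    x_r = truncated_svd_solution(A, d, r)
    print(r, np.linalg.norm(x_r - x_true))      # error vs. truncation level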
Using rare earth elements to trace wind-driven dispersion of sediments from a point source
NASA Astrophysics Data System (ADS)
Van Pelt, R. Scott; Barnes, Melanie C. W.; Strack, John E.
2018-06-01
The entrainment and movement of aeolian sediments are determined by the direction and intensity of erosive winds. Although erosive winds may blow from all directions, in most regions there is a predominant direction. Dust emission preferentially removes soil nutrients and contaminants, which may be transported tens to even thousands of kilometers from the source and deposited into other ecosystems. It would be beneficial to understand, spatially and temporally, how the soil source may be degraded and depositional zones enriched. A stable chemical tracer not found in the soil but applied to the surface of all particles in the surface soil would facilitate this endeavor. This study examined whether solution-applied rare earth elements (REEs) could be used to trace aeolian sediment movement from a point source through space and time at the field scale. We applied erbium nitrate solution to a 5 m2 area in the center of a 100 m diameter (7854 m2) field on the Southern High Plains of Texas. The application resulted in a soil-borne concentration three orders of magnitude greater than that natively found in the field soil. We installed BSNE sampler masts in circular configurations and collected the trapped sediment weekly. We found that REE-tagged sediment was blown into every sampler mast during the course of the study, but that there was a predominant direction of transport during the spring. This preliminary investigation suggests that REEs provide a viable and incisive technique for studying the spatial and temporal variation of aeolian sediment movement from specific sources to identifiable locations of deposition, or to locations through which the sediments were transported as horizontal mass flux, and for estimating the relative contribution of the specific source to the total mass flux.
40 CFR 428.11 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Tire and Inner Tube Plants... black, oils, chemical compounds, fabric and wire used in the manufacture of pneumatic tires and inner... inner tube plants constructed before 1959, discharges from the following: Soapstone solution...
40 CFR 428.11 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Tire and Inner Tube Plants... black, oils, chemical compounds, fabric and wire used in the manufacture of pneumatic tires and inner... inner tube plants constructed before 1959, discharges from the following: Soapstone solution...
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s0 and a0 Lamb waves is vividly apparent in the images.
New solutions with accelerated expansion in string theory
Dodelson, Matthew; Dong, Xi; Silverstein, Eva; ...
2014-12-05
We present concrete solutions with accelerated expansion in string theory, requiring a small, tractable list of stress energy sources. We explain how this construction (and others in progress) evades previous no-go theorems for simple accelerating solutions. Our solutions respect an approximate scaling symmetry and realize discrete sequences of values for the equation of state, including one with an accumulation point at w = –1 and another accumulating near w = –1/3 from below. In another class of models, a density of defects generates scaling solutions with accelerated expansion. Here, we briefly discuss potential applications to dark energy phenomenology, and to holography for cosmology.
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiations from moderate and large earthquakes often exhibit strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the identified fault planes in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.
The sound field of a rotating dipole in a plug flow.
Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H
2018-04-01
An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.
Error Estimation and Compensation in Reduced Dynamic Models of Large Space Structures
1987-04-23
[Extraction residue from the report documentation page and list of figures; recoverable items: contract F33615-84-C-3219 (AFWAL/FIBRA); figure titles include Modes of the Full Model, Comparison of Various Reduced Models, Driving Point Mobilities at the Wing Tip (Z55) and Wing Root Trailing Edge (Z19), AMI Improvement, and Frequency Domain Solution of Driving Point Mobilities at the Wing Tip (Z55).]
40 CFR 415.161 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Sodium Chloride Production... apply to this subpart. (b) The term product shall mean sodium chloride. (c) The term bitterns shall mean the saturated brine solution remaining after precipitation of sodium chloride in the solar evaporation...
Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao
2018-06-05
Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-d), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the previously developed travel time probability density (BTTPD) approach is extended to assess the 3-d deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents the complete age distribution underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-d imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying relatively broad and old age distributions. Further comparison of the simulated susceptibility index maps with known contaminant distributions shows that these maps are generally consistent with the high concentration and rapid evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.
Optimal simultaneous superpositioning of multiple structures with missing data
Theobald, Douglas L.; Steindel, Phillip A.
2012-01-01
Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369
Unsteady solute-transport simulation in streamflow using a finite-difference model
Land, Larry F.
1978-01-01
This report documents a rather simple, general purpose, one-dimensional, one-parameter, mass-transport model for field use. The model assumes a well-mixed conservative solute that may be coming from an unsteady source and is moving in unsteady streamflow. The quantity of solute being transported is in units of concentration, and results are reported as such. An implicit finite-difference technique is used to solve the mass transport equation. It consists of creating a tridiagonal matrix and using the Thomas algorithm to solve the matrix for the unknown concentrations at the new time step. The computer program presented is designed to compute the concentration of a water-quality constituent at any point and at any preselected time in a one-dimensional stream. The model is driven by the inflowing concentration of solute at the upstream boundary and is influenced by the solute entering the stream from tributaries and lateral ground-water inflow and from a source or sink. (Woodard-USGS)
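The solver described above assembles a tridiagonal system at each time step and solves it with the Thomas algorithm; a minimal, generic implementation of that algorithm is sketched below, verified against a dense solver. The coefficients are arbitrary, not the report's transport coefficients.

import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length n; a[0] and c[-1]
    are unused). Returns the solution vector x."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solver.
n = 6
a = np.full(n, -1.0); b = np.full(n, 2.5); c = np.full(n, -1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(A, d)))  # True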
SAVAH: Source Address Validation with Host Identity Protocol
NASA Astrophysics Data System (ADS)
Kuptsov, Dmitriy; Gurtov, Andrei
Explosive growth of the Internet and lack of mechanisms that validate the authenticity of a packet source produced serious security and accounting issues. In this paper, we propose validating source addresses in LAN using Host Identity Protocol (HIP) deployed in a first-hop router. Compared to alternative solutions such as CGA, our approach is suitable both for IPv4 and IPv6. We have implemented SAVAH in Wi-Fi access points and evaluated its overhead for clients and the first-hop router.
40 CFR 420.101 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420.101 Specialized definitions. (a) The term recirculation means those cold rolling operations which include recirculation of rolling solutions at all mill stands. (b) The term combination means those cold rolling...
40 CFR 420.101 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420.101 Specialized definitions. (a) The term recirculation means those cold rolling operations which include recirculation of rolling solutions at all mill stands. (b) The term combination means those cold rolling...
40 CFR 420.101 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420.101 Specialized definitions. (a) The term recirculation means those cold rolling operations which include recirculation of rolling solutions at all mill stands. (b) The term combination means those cold rolling...
40 CFR 420.101 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Cold Forming Subcategory § 420.101 Specialized definitions. (a) The term recirculation means those cold rolling operations which include recirculation of rolling solutions at all mill stands. (b) The term combination means those cold rolling...
STREAM CORRIDOR RESTORATION AND ITS POTENTIAL TO IMPROVE WATER QUALITY
Watershed stream corridors are being degraded by anthropogenic impacts of increased flow from runoff, sediment loading from erosion and contaminants such as nitrate from non-point sources. One solution is to restore stream corridors with bank stabilization and energy dissipation ...
40 CFR 415.91 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Hydrogen Peroxide Production... apply to this subpart. (b) The term product shall mean hydrogen peroxide as a one hundred percent hydrogen peroxide solution. (c) The term Cyanide A shall mean those cyanides amenable to chlorination and...
40 CFR 421.190 - Applicability: Description of the secondary indium subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE... subcategory. The provisions of this subpart are applicable to discharges resulting from the production of indium at secondary indium facilities processing spent electrolyte solutions and scrap indium metal raw...
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
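In the notation of minimum weighted norm solutions discussed above, the generalized-inverse solution can be written explicitly; the sketch below computes x = W^(-1) A^T (A W^(-1) A^T)^+ mu for an underdetermined system A x = mu, which is the standard minimum-weighted-norm construction and reduces to the renormalized solution when the weight matrix satisfies the renormalization condition. The matrices here are random stand-ins, not the wind tunnel data.

import numpy as np

def min_weighted_norm_solution(A, mu, W):
    """Solution of the underdetermined system A x = mu that minimizes the
    weighted norm x^T W x (W symmetric positive definite)."""
    Winv_At = np.linalg.solve(W, A.T)          # W^{-1} A^T
    return Winv_At @ np.linalg.pinv(A @ Winv_At) @ mu

# Stand-in example: 8 detectors, 200 discretized source cells.
rng = np.random.default_rng(2)
A = rng.random((8, 200))                       # adjoint/footprint matrix
mu = rng.random(8)                             # measurements
W = np.diag(1.0 + rng.random(200))             # a positive diagonal weight
x = min_weighted_norm_solution(A, mu, W)
print(np.allclose(A @ x, mu))                  # exact data fit: True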
1974-09-07
ellipticity filter. The source waveforms are recreated by an inverse transform of those complex amplitudes associated with the same azimuth...terms of the three complex data points and the ellipticity. Having solved the equations for all frequency bins, the inverse transform of...transform of those complex amplitudes associated with Source 1, yielding the signal a(t). Similarly, take the inverse transform of all
Organic solutes in ground water at the Idaho National Engineering Laboratory
Leenheer, Jerry A.; Bagby, Jefferson C.
1982-01-01
In August 1980, the U.S. Geological Survey started a reconnaissance survey of organic solutes in drinking water sources, ground-water monitoring wells, perched water table monitoring wells, and in select waste streams at the Idaho National Engineering Laboratory (INEL). The survey was to be a two-phase program. In the first phase, 77 wells and 4 potential point sources were sampled for dissolved organic carbon (DOC). Four wells and several potential point sources were sampled for insecticides and herbicides. Fourteen wells and four potential organic sources were sampled for volatile and semivolatile organic compounds. The results of the DOC analyses indicate no high-level (>20 mg/L DOC) organic contamination of ground water. The only detectable insecticide or herbicide was a DDT concentration of 10 parts per trillion (0.01 microgram per liter) in one observation well. The volatile and semivolatile analyses do not indicate the presence of hazardous organic contaminants in significant amounts (>10 micrograms per liter) in the samples taken. Because no significant organic ground-water contamination was found in this reconnaissance survey, the second phase of the study, which was to follow up the first phase with additional sampling of any contaminated wells, was canceled.
NASA Astrophysics Data System (ADS)
Gibbons, Gary W.; Volkov, Mikhail S.
2017-05-01
We study solutions obtained via applying dualities and complexifications to the vacuum Weyl metrics generated by massive rods and by point masses. Rescaling them and extending to complex parameter values yields axially symmetric vacuum solutions containing singularities along circles that can be viewed as singular matter sources. These solutions have wormhole topology with several asymptotic regions interconnected by throats and their sources can be viewed as thin rings of negative tension encircling the throats. For a particular value of the ring tension the geometry becomes exactly flat although the topology remains non-trivial, so that the rings literally produce holes in flat space. To create a single ring wormhole of one metre radius one needs a negative energy equivalent to the mass of Jupiter. Further duality transformations dress the rings with the scalar field, either conventional or phantom. This gives rise to large classes of static, axially symmetric solutions, presumably including all previously known solutions for a gravity-coupled massless scalar field, as for example the spherically symmetric Bronnikov-Ellis wormholes with phantom scalar. The multi-wormholes contain infinite struts everywhere at the symmetry axes, apart from solutions with locally flat geometry.
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce the concept of origin intensity factors that isolate singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we apply SBM to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. Determining the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory and an iterative elimination of far zones' contributions is applied.
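For readers unfamiliar with fundamental-solution collocation, the sketch below is a minimal MFS example for the 2-D Laplace equation on the unit disk; unlike the SBM described above, the sources are placed on an assumed fictitious circle of radius 1.5 outside the domain, so no origin intensity factors are needed.

    import numpy as np

    # Method of fundamental solutions for Laplace's equation on the unit disk.
    N = 40
    theta = 2 * np.pi * np.arange(N) / N
    xb = np.c_[np.cos(theta), np.sin(theta)]        # collocation nodes on the real boundary
    xs = 1.5 * xb                                   # fictitious source points outside

    def G(x, y):                                    # fundamental solution of Laplace's equation
        return -np.log(np.linalg.norm(x - y)) / (2 * np.pi)

    u_exact = lambda p: p[:, 0] ** 2 - p[:, 1] ** 2  # harmonic test function for the Dirichlet data
    A = np.array([[G(xi, xj) for xj in xs] for xi in xb])
    coef, *_ = np.linalg.lstsq(A, u_exact(xb), rcond=None)

    p = np.array([0.3, 0.2])                        # interior evaluation point
    u_mfs = sum(c * G(p, xj) for c, xj in zip(coef, xs))
    print(u_mfs, p[0] ** 2 - p[1] ** 2)             # the two values should nearly agree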
40 CFR 420.91 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY Acid Pickling Subcategory § 420.91 Specialized definitions. (a) The term sulfuric acid pickling means those operations in which steel products are immersed... steel products are immersed in hydrochloric acid solutions to chemically remove oxides and scale, and...
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
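For contrast with the VFOM, the sketch below implements the conventional residual-based location that the method is compared against: a least-squares fit of source coordinates and origin time to picked arrivals, with one deliberately corrupted pick. All geometry, velocity and error values are invented for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    v = 3000.0                                       # assumed homogeneous velocity, m/s
    sensors = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                        [0, 0, 100], [100, 100, 50.0]])
    true_src = np.array([40.0, 55.0, 20.0])
    t0 = 0.1                                         # unknown origin time, s
    arrivals = t0 + np.linalg.norm(sensors - true_src, axis=1) / v
    arrivals[0] += 0.01                              # one large picking error (LPE)

    def residuals(p):
        x, t = p[:3], p[3]
        return t + np.linalg.norm(sensors - x, axis=1) / v - arrivals

    sol = least_squares(residuals, x0=[10.0, 10.0, 10.0, 0.0])
    print(sol.x[:3])   # biased by the corrupted pick, which motivates LPE-tolerant methods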
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
Wormholes with fluid sources: A no-go theorem and new examples
NASA Astrophysics Data System (ADS)
Bronnikov, K. A.; Baleevskikh, K. A.; Skvortsova, M. V.
2017-12-01
For static, spherically symmetric space-times in general relativity (GR), a no-go theorem is proved: it excludes the existence of wormholes with flat and/or anti-de Sitter asymptotic regions on both sides of the throat if the source matter is isotropic, i.e., the radial and tangential pressures coincide. It explains why in all previous attempts to build such solutions it was necessary to introduce boundaries with thin shells that manifestly violate the isotropy of matter. Under a simple assumption on the behavior of the spherical radius r (x ), we obtain a number of examples of wormholes with isotropic matter and one or both de Sitter asymptotic regions, allowed by the no-go theorem. We also obtain twice asymptotically flat wormholes with anisotropic matter, both symmetric and asymmetric with respect to the throat, under the assumption that the scalar curvature is zero. These solutions may be on equal grounds interpreted as those of GR with a traceless stress-energy tensor and as vacuum solutions in a brane world. For such wormholes, the traversability conditions and gravitational lensing properties are briefly discussed. As a byproduct, we obtain twice asymptotically flat regular black hole solutions with up to four Killing horizons. As another byproduct, we point out intersection points in families of integral curves for the function A (x )=gt t, parametrized by its values on the throat.
40 CFR 409.31 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.31 Specialized... shall mean the addition of pollutants. (c) Melt shall mean that amount of raw material (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.31 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.31 Specialized... shall mean the addition of pollutants. (c) Melt shall mean that amount of raw material (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS RUBBER MANUFACTURING POINT SOURCE CATEGORY Solution...) (1) English units (lb/1,000 lb of product) COD 5.91 3.94 BOD5 0.60 .40 TSS 0.98 .65 Oil and grease 0...
40 CFR 409.31 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.31 Specialized... shall mean the addition of pollutants. (c) Melt shall mean that amount of raw material (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 409.31 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.31 Specialized... shall mean the addition of pollutants. (c) Melt shall mean that amount of raw material (raw sugar) contained within aqueous solution at the beginning of the process for production of refined cane sugar. ...
40 CFR 417.71 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417... all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for final...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.71 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417... all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for final...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.71 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417... all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for final...
40 CFR 417.71 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.71 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417... all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for final...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
Global Situational Awareness with Free Tools
2015-01-15
Client Technical Solutions • Software Engineering Measurement and Analysis • Architecture Practices • Product Line Practice • Team Software Process...multiple data sources • Snort (Snorby on Security Onion) • Nagios • SharePoint RSS • Flow • Others • Leverage standard data formats • Keyhole Markup Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
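A minimal 1-D illustration of the tallying idea (not the MFP KDE of the paper, and not an MCNP tally) is given below: synthetic collision sites are scored with both a histogram and a Gaussian kernel density estimate.

    import numpy as np

    rng = np.random.default_rng(1)
    collisions = rng.exponential(scale=1.0, size=5000)   # fake collision x-coordinates

    def kde(points, x, h=0.1):
        # Gaussian kernel: every collision contributes to every tally point x.
        u = (x[:, None] - points[None, :]) / h
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(points) * h * np.sqrt(2 * np.pi))

    x = np.linspace(0.0, 5.0, 101)
    density_kde = kde(collisions, x)
    density_hist, edges = np.histogram(collisions, bins=25, range=(0, 5), density=True)
    print(density_kde[:3], density_hist[:3])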
METHOD OF PREPARING RADIOACTIVE CESIUM SOURCES
Quinby, T.C.
1963-12-17
A method of preparing a cesium-containing radiation source with physical and chemical properties suitable for high-level use is presented. Finely divided silica is suspended in a solution containing cesium, normally the fission-product isotope cesium 137. Sodium tetraphenyl boron is then added to quantitatively precipitate the cesium. The cesium-containing precipitate is converted to borosilicate glass by heating to the melting point and cooling. Up to 60 weight percent cesium, with a resulting source activity of up to 21 curies per gram, is incorporated in the glass. (AEC)
Scattering of focused ultrasonic beams by cavities in a solid half-space.
Rahni, Ehsan Kabiri; Hajzargarbashi, Talieh; Kundu, Tribikram
2012-08-01
The ultrasonic field generated by a point focused acoustic lens placed in a fluid medium adjacent to a solid half-space, containing one or more spherical cavities, is modeled. The semi-analytical distributed point source method (DPSM) is followed for the modeling. This technique properly takes into account the interaction effect between the cavities placed in the focused ultrasonic field, fluid-solid interface and the lens surface. The approximate analytical solution that is available in the literature for the single cavity geometry is very restrictive and cannot handle multiple cavity problems. Finite element solutions for such problems are also prohibitively time consuming at high frequencies. Solution of this problem is necessary to predict when two cavities placed in close proximity inside a solid can be distinguished by an acoustic lens placed outside the solid medium and when such distinction is not possible.
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. Finally, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
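The toy calculation below is only a schematic of the compressed-sensing idea discussed above, not the authors' algorithm: two nearby point sources are blurred by an assumed Gaussian PSF and recovered by l1-regularized least squares using plain ISTA iterations, with an ad hoc penalty weight.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x_true = np.zeros(n); x_true[95] = 1.0; x_true[105] = 0.8   # two closely spaced sources
    grid = np.arange(n)
    A = np.array([np.exp(-0.5 * ((grid - c) / 8.0) ** 2) for c in grid]).T  # PSF (blur) matrix
    y = A @ x_true + 0.01 * rng.normal(size=n)                  # blurred, noisy measurement

    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the data-fit gradient
    lam = 0.02 * np.max(np.abs(A.T @ y))          # l1 penalty weight (ad hoc)
    x = np.zeros(n)
    for _ in range(5000):                         # ISTA: gradient step + soft threshold
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

    print(np.argsort(x)[-2:])                     # largest coefficients, ideally near 95 and 105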
Distinguishing one from many using super-resolution compressive sensing
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.; ...
2018-05-14
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. Finally, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
On the Motion of Agents across Terrain with Obstacles
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.
2018-01-01
The paper is devoted to finding the time-optimal route of an agent travelling across a region from a given source point to a given target point. At each point of this region, a maximum allowed speed is specified. This speed limit may vary in time. The continuous statement of this problem and the case when the agent travels on a grid with square cells are considered. In the latter case, the time is also discrete, and the number of admissible directions of motion at each point in time is eight. The existence of an optimal solution of this problem is proved, and estimates for the approximate solution obtained on the grid are derived. It is found that decreasing the size of cells below a certain limit does not further improve the approximation. These results can be used to estimate the quasi-optimal trajectory of the agent motion across the rugged terrain produced by an algorithm based on a cellular automaton that was earlier developed by the author.
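A stripped-down version of the discrete problem, with a static speed limit for brevity, is sketched below: time-optimal travel on a grid with the eight admissible directions, solved with Dijkstra's algorithm. The grid and speeds are invented.

    import heapq

    speed = [[1, 1, 1, 1],
             [1, 0.2, 0.2, 1],
             [1, 0.2, 0.2, 1],
             [1, 1, 1, 1]]          # a low-speed block acts as a soft obstacle
    n, m = len(speed), len(speed[0])
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

    def travel_time(src, dst):
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            t, (i, j) = heapq.heappop(heap)
            if (i, j) == dst:
                return t
            if t > best.get((i, j), float("inf")):
                continue
            for dx, dy in moves:
                ni, nj = i + dx, j + dy
                if 0 <= ni < n and 0 <= nj < m:
                    step = (dx * dx + dy * dy) ** 0.5 / min(speed[i][j], speed[ni][nj])
                    if t + step < best.get((ni, nj), float("inf")):
                        best[(ni, nj)] = t + step
                        heapq.heappush(heap, (t + step, (ni, nj)))
        return float("inf")

    print(travel_time((0, 0), (3, 3)))   # the optimal route detours around the slow region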
Interference effects in phased beam tracing using exact half-space solutions.
Boucher, Matthew A; Pluymers, Bert; Desmet, Wim
2016-12-01
Geometrical acoustics provides a correct solution to the wave equation for rectangular rooms with rigid boundaries and is an accurate approximation at high frequencies with nearly hard walls. When interference effects are important, phased geometrical acoustics is employed in order to account for phase shifts due to propagation and reflection. Error increases, however, with more absorption, complex impedance values, grazing incidence, smaller volumes and lower frequencies. Replacing the plane wave reflection coefficient with a spherical one reduces the error but results in slower convergence. Frequency-dependent stopping criteria are then applied to avoid calculating higher order reflections for frequencies that have already converged. Exact half-space solutions are used to derive two additional spherical wave reflection coefficients: (i) the Sommerfeld integral, consisting of a plane wave decomposition of a point source and (ii) a line of image sources located at complex coordinates. Phased beam tracing using exact half-space solutions agrees well with the finite element method for rectangular rooms with absorbing boundaries, at low frequencies and for rooms with different aspect ratios. Results are accurate even for long source-to-receiver distances. Finally, the crossover frequency between the plane and spherical wave reflection coefficients is discussed.
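The sketch below evaluates the simplest phased image-source model above a locally reacting plane, i.e. the plane-wave reflection coefficient that the exact half-space solutions improve upon; the impedance, frequency and geometry are arbitrary illustration values.

    import numpy as np

    k = 2 * np.pi * 500.0 / 343.0               # wavenumber at 500 Hz in air
    zeta = 10 - 5j                              # assumed normalized surface impedance
    src = np.array([0.0, 0.0, 1.0])             # source 1 m above the plane
    img = np.array([0.0, 0.0, -1.0])            # its image below the plane
    rcv = np.array([3.0, 0.0, 1.2])             # receiver position

    def green(a, b):
        r = np.linalg.norm(a - b)
        return np.exp(-1j * k * r) / (4 * np.pi * r)

    cos_t = (src[2] + rcv[2]) / np.linalg.norm(rcv - img)    # incidence angle of reflected path
    R_plane = (zeta * cos_t - 1) / (zeta * cos_t + 1)        # plane-wave reflection coefficient
    p = green(src, rcv) + R_plane * green(img, rcv)          # direct wave plus phased image wave
    print(abs(p))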
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating an auxiliary interior surface that satisfies certain boundary condition into the body surface. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
NASA Astrophysics Data System (ADS)
Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen
2017-04-01
In this study, analytical solutions for one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence is considered linear, exponentially and asymptotically decelerating and accelerating. The proposed analytical solutions are derived for three different situations, depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case represents transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. These capabilities make the proposed method applicable to a wider range of hydrological problems than previously existing analytical solutions. In particular, to the authors' knowledge, no other solution exists for a dispersion coefficient and velocity that vary in both space and time. In this study, existing analytical solutions from previous widely known studies are used as validation tools to verify the proposed analytical solutions as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and a developed 1D finite difference code (FDM). All such solutions show a perfect match with the respective proposed solutions.
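For orientation, the constant-coefficient special case that such variable-coefficient solutions reduce to is easy to state and compute: the 1-D concentration from an instantaneous point source of mass M in uniform flow u with constant dispersion D. The sketch below uses illustrative parameter values, not those of the paper.

    import numpy as np

    def c_instantaneous(x, t, M=1.0, x0=0.0, u=0.5, D=0.1):
        # Green's-function solution of c_t + u c_x = D c_xx for a pulse of mass M at x0.
        return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

    x = np.linspace(0.0, 10.0, 6)
    print(c_instantaneous(x, t=5.0))   # plume centred near x0 + u*t = 2.5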
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS RUBBER MANUFACTURING POINT SOURCE CATEGORY Solution Crumb...) English units (lb/1,000 lb of product) COD 3.12 2.08 BOD5 0.12 .08 TSS 0.24 .16 Oil and grease 0.12 .08 pH...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
Yates, S R
2009-01-01
An analytical solution describing the fate and transport of pesticides applied to soils has been developed. Two pesticide application methods can be simulated: point-source applications, such as an idealized shank or a hot-gas injection method, and a more realistic shank-source application method that includes a vertical pesticide distribution in the soil domain due to a soil fracture caused by a shank. The solutions allow determination of the volatilization rate and other information that could be important for understanding fumigant movement and in the development of regulatory permitting conditions. The solutions can be used to characterize differences in emissions relative to changes in the soil degradation rate, surface barrier conditions, application depth, and soil packing. In some cases, simple algebraic expressions are provided that can be used to obtain the total emissions and total soil degradation. The solutions provide a consistent methodology for determining the total emissions and can be used with other information, such as field and laboratory experimental data, to support the development of fumigant regulations. The uses of the models are illustrated by several examples.
Solution of the three-dimensional Helmholtz equation with nonlocal boundary conditions
NASA Technical Reports Server (NTRS)
Hodge, Steve L.; Zorumski, William E.; Watson, Willie R.
1995-01-01
The Helmholtz equation is solved within a three-dimensional rectangular duct with a nonlocal radiation boundary condition at the duct exit plane. This condition accurately models the acoustic admittance at an arbitrarily-located computational boundary plane. A linear system of equations is constructed with second-order central differences for the Helmholtz operator and second-order backward differences for both local admittance conditions and the gradient term in the nonlocal radiation boundary condition. The resulting matrix equation is large, sparse, and non-Hermitian. The size and structure of the matrix make direct solution techniques impractical; as a result, a nonstationary iterative technique is used for its solution. The theory behind the nonstationary technique is reviewed, and numerical results are presented for radiation from both a point source and a planar acoustic source. The solutions with the nonlocal boundary conditions are invariant to the location of the computational boundary, and the same nonlocal conditions are valid for all solutions. The nonlocal conditions thus provide a means of minimizing the size of three-dimensional computational domains.
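A reduced version of the discretization step is sketched below: a second-order central-difference Helmholtz operator on a cube with a point source. For this toy size a sparse direct solve suffices; realistic grids and the nonlocal exit-plane condition produce the large, sparse, non-Hermitian systems that motivate the nonstationary iterative solver. The grid size and wavenumber here are arbitrary.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n, k = 12, 5.0                                    # interior points per direction, wavenumber
    h = 1.0 / (n + 1)
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
    L = sp.kronsum(sp.kronsum(T, T), T)               # 3-D Laplacian (Dirichlet walls here)
    A = (L + k ** 2 * sp.identity(n ** 3)).tocsc()    # Helmholtz operator

    b = np.zeros(n ** 3)
    b[n ** 3 // 2 + n // 2] = 1.0 / h ** 3            # discrete point source at one node
    p = spsolve(A, b)
    print(p.min(), p.max())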
Stable plume rise in a shear layer.
Overcamp, Thomas J
2007-03-01
Solutions are given for plume rise assuming a power-law wind speed profile in a stably stratified layer for point and finite sources with initial vertical momentum and buoyancy. For a constant wind speed, these solutions simplify to the conventional plume rise equations in a stable atmosphere. In a shear layer, the point of maximum rise occurs further downwind and is slightly lower compared with the plume rise with a constant wind speed equal to the wind speed at the top of the stack. If the predictions with shear are compared with predictions for an equivalent average wind speed over the depth of the plume, the plume rise with shear is higher than plume rise with an equivalent average wind speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James; Kuruganti, Teja
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
Strong anti-gravity: Life in the shock wave
NASA Astrophysics Data System (ADS)
Fabbrichesi, Marco; Roland, Kaj
1992-12-01
Strong anti-gravity is the vanishing of the net force between two massive particles at rest, to all orders in Newton's constant. We study this phenomenon and show that it occurs in any effective theory of gravity which is obtained from a higher-dimensional model by compactification on a manifold with flat directions. We find the exact solution of the Einstein equations in the presence of a point-like source of strong anti-gravity by dimensional reduction of a shock-wave solution in the higher-dimensional model.
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
NASA Astrophysics Data System (ADS)
Sanan, Patrick; May, Dave A.; Schenk, Olaf; Bollhöffer, Matthias
2017-04-01
Geodynamics simulations typically involve the repeated solution of saddle-point systems arising from the Stokes equations. These computations often dominate the time to solution. Direct solvers are known for their robustness and ``black box'' properties, yet exhibit superlinear memory requirements and time to solution. More complex multilevel-preconditioned iterative solvers have been very successful for large problems, yet their use can require more effort from the practitioner in terms of setting up a solver and choosing its parameters. We champion an intermediate approach, based on leveraging the power of modern incomplete factorization techniques for indefinite symmetric matrices. These provide an interesting alternative in situations in between the regimes where direct solvers are an obvious choice and those where complex, scalable, iterative solvers are an obvious choice. That is, much like their relatives for definite systems, ILU/ICC-preconditioned Krylov methods and ILU/ICC-smoothed multigrid methods, the approaches demonstrated here provide a useful addition to the solver toolkit. We present results with a simple, PETSc-based, open-source Q2-Q1 (Taylor-Hood) finite element discretization, in 2 and 3 dimensions, with the Stokes and Lamé (linear elasticity) saddle point systems. Attention is paid to cases in which full-operator incomplete factorization gives an improvement in time to solution over direct solution methods (which may not even be feasible due to memory limitations), without the complication of more complex (or at least, less-automatic) preconditioners or smoothers. As an important factor in the relevance of these tools is their availability in portable software, we also describe open-source PETSc interfaces to the factorization routines.
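As a small self-contained stand-in for the approach championed above, the sketch below runs an incomplete-factorization-preconditioned Krylov solve with SciPy (ILU plus GMRES on a 2-D Poisson matrix); it is not the symmetric indefinite Stokes system, the Q2-Q1 discretization, or the PETSc interfaces of the paper.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spilu, gmres, LinearOperator

    n = 50
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(T, T).tocsc()                  # 2-D Poisson matrix, n*n unknowns
    b = np.ones(n * n)

    ilu = spilu(A, drop_tol=1e-4, fill_factor=10)          # incomplete LU factorization
    M = LinearOperator(A.shape, matvec=ilu.solve)          # used as a preconditioner

    x, info = gmres(A, b, M=M, maxiter=200, atol=1e-8)
    print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))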
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbons, Gary W.; Volkov, Mikhail S., E-mail: gwg1@cam.ac.uk, E-mail: volkov@lmpt.univ-tours.fr
We study solutions obtained via applying dualities and complexifications to the vacuum Weyl metrics generated by massive rods and by point masses. Rescaling them and extending to complex parameter values yields axially symmetric vacuum solutions containing singularities along circles that can be viewed as singular matter sources. These solutions have wormhole topology with several asymptotic regions interconnected by throats and their sources can be viewed as thin rings of negative tension encircling the throats. For a particular value of the ring tension the geometry becomes exactly flat although the topology remains non-trivial, so that the rings literally produce holes in flat space. To create a single ring wormhole of one metre radius one needs a negative energy equivalent to the mass of Jupiter. Further duality transformations dress the rings with the scalar field, either conventional or phantom. This gives rise to large classes of static, axially symmetric solutions, presumably including all previously known solutions for a gravity-coupled massless scalar field, as for example the spherically symmetric Bronnikov-Ellis wormholes with phantom scalar. The multi-wormholes contain infinite struts everywhere at the symmetry axes, apart from solutions with locally flat geometry.
McCleskey, R. Blaine; Nordstrom, D. Kirk; Susong, David D.; Ball, James W.; Holloway, JoAnn M.
2010-01-01
The Gibbon River in Yellowstone National Park (YNP) is an important natural resource and habitat for fisheries and wildlife. However, the Gibbon River differs from most other mountain rivers because its chemistry is affected by several geothermal sources including Norris Geyser Basin, Chocolate Pots, Gibbon Geyser Basin, Beryl Spring, and Terrace Spring. Norris Geyser Basin is one of the most dynamic geothermal areas in YNP, and the water discharging from Norris is much more acidic (pH 3) than other geothermal basins in the upper-Madison drainage (Gibbon and Firehole Rivers). Water samples and discharge data were obtained from the Gibbon River and its major tributaries near Norris Geyser Basin under the low-flow conditions of September 2006. Surface inflows from Norris Geyser Basin were sampled to identify point sources and to quantify solute loading to the Gibbon River. The source and fate of the major solutes (Ca, Mg, Na, K, SiO2, Cl, F, HCO3, SO4, NO3, and NH4) in the Gibbon River were determined in this study and these results may provide an important link in understanding the health of the ecosystem and the behavior of many trace solutes. Norris Geyser Basin is the primary source of Na, K, Cl, SO4, and N loads (35–58%) in the Gibbon River. The largest source of HCO3 and F is in the lower Gibbon River reach. Most of the Ca and Mg originate in the Gibbon River upstream from Norris Geyser Basin. All the major solutes behave conservatively except for NH4, which decreased substantially downstream from Gibbon Geyser Basin, and SiO2, small amounts of which precipitated on mixing of thermal drainage with the river. As much as 9–14% of the river discharge at the gage is from thermal flows during this period.
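The loading calculation behind such synoptic studies is simple: load = concentration x discharge. The sketch below uses invented numbers, not the Gibbon River data, to show how the percentage contribution of each inflow is obtained.

    # Chloride loads from invented discharge (m^3/s) and concentration (mg/L) pairs.
    sites = {
        "upstream":        (1.2, 5.0),
        "geyser inflow":   (0.3, 120.0),
        "other tributary": (0.5, 8.0),
    }
    loads = {k: q * c * 86.4 for k, (q, c) in sites.items()}   # mg/L * m^3/s * 86.4 = kg/day
    total = sum(loads.values())
    for k, load in loads.items():
        print(f"{k}: {load:.0f} kg/day ({100 * load / total:.0f}% of total)")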
NASA Astrophysics Data System (ADS)
Verginelli, Iason; Nocentini, Massimo; Baciocchi, Renato
2017-09-01
Simplified analytical solutions of fate and transport models are often used to carry out risk assessment at contaminated sites, to evaluate long-term air quality in relation to volatile organic compounds in either soil or groundwater. Among the different assumptions employed to develop these solutions, in this work we focus on those used in the ASTM-RBCA "box model" for the evaluation of contaminant dispersion in the atmosphere. In this simple model, it is assumed that the contaminant volatilized from the subsurface is dispersed in the atmosphere within a mixing height equal to two meters, i.e. the height of the breathing zone. In certain cases, this simplification could lead to an overestimation of the outdoor air concentration at the point of exposure. In this paper we first discuss the maximum source lengths (in the wind direction) for which the application of the "box model" can be considered acceptable. Specifically, by comparing the results of the "box model" with the SCREEN3 model of the U.S. EPA we found that under very stable atmospheric conditions (class F) the ASTM-RBCA approach provides acceptable results for source lengths up to 200 m, while for very unstable atmospheric conditions (classes A and B) the overestimation of the concentrations at the point of exposure can already be observed for source lengths of only 10 m. In the latter case, the overestimation of the "box model" can be of more than one order of magnitude for source lengths above 500 m. To overcome this limitation, in this paper we introduce a simple analytical solution that can be used for the calculation of the concentration at the point of exposure for large contaminated sites. The method consists in the introduction of an equivalent mixing zone height that allows one to account for the dispersion of the contaminants along the source length while keeping the simplistic "box model" approach that is implemented in most of the risk assessment tools based on the ASTM-RBCA standard (e.g. RBCA toolkit). Based on our testing, we found that the developed model replicates very well the results of the more sophisticated SCREEN3 dispersion model, with deviations always below 10%. The key advantage of this approach is that it can be very easily incorporated in the current risk assessment screening tools that are based on the ASTM standards while ensuring a more accurate evaluation of the concentration at the point of exposure.
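The sketch below shows the "box model" concentration and the effect of replacing the fixed 2 m mixing height with an equivalent, dispersion-based height. The sigma_z curve used here is a generic Gaussian-plume vertical-dispersion expression chosen only for illustration; it is not the correlation proposed in the paper.

    def box_concentration(J, W, U, delta):
        # Outdoor concentration for flux J (mg/m^2/s), source length W (m),
        # wind speed U (m/s) and mixing height delta (m).
        return J * W / (U * delta)

    def sigma_z(x, a=0.08, b=0.0001):            # illustrative vertical-dispersion curve
        return a * x / (1 + b * x) ** 0.5

    J, U, W = 1e-4, 2.0, 500.0                   # made-up flux, wind speed, source length
    print(box_concentration(J, W, U, delta=2.0))                    # fixed 2 m box
    print(box_concentration(J, W, U, delta=2.0 + sigma_z(W)))       # equivalent mixing height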
NASA Astrophysics Data System (ADS)
Gomez-Gonzalez, J. M.; Mellors, R.
2007-05-01
We investigate the kinematics of the rupture process for the September 27, 2003, Mw7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake striking the Altai mountains within the last 50 years, which provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicated a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688 to 1.196E+20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was followed closely by two aftershocks (Mw5.7, Mw6.4) that occurred the same day; another aftershock (Mw6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw6.4) to assess geometric similarities during their respective rupture processes. This aftershock occurred spatially very close to the mainshock and possesses a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment and source duration, based on point and finite source modeling. The point source approximation results are the departure parameters for the finite source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.
Development of a Low-Cost Spectrophotometric Sensor for ClO2 Gas
NASA Astrophysics Data System (ADS)
Conry, Jessica; Scott, Dane; Apblett, Allen; Materer, Nicholas
2006-04-01
ClO2 is of interest because of its capability to kill biological hazards such as E. coli and mold. However, ClO2 is a toxic, reactive gas that must be generated at the point of use. Gas storage is not possible due to the possibility of an explosion. The need to detect the amount of ClO2 at the point of use necessitates a low-cost sensor. A low-cost spectrophotometric sensor based on a broad-band light source, a photodiode detector and a band-pass filter is proposed. To verify the design, precise determinations of the gas-phase cross-section and characterization of the optical components are necessary. Known concentrations of ClO2(g) are prepared using the equilibrium relationship between an aqueous solution and the gas phase. The aqueous solutions are obtained by generating the gas via a chemical reaction and passing it through water. The concentrations of the aqueous solutions are then determined by chemical titration and UV-visible absorption measurements. For the solutions, a maximum absorption is observed at 359 nm, and the cross-section at this wavelength is determined to be 4.79x10^-18 cm^2, in agreement with previous observations. Using a broad-band source, the absorption of ClO2 gas is successfully analyzed and concentrations as low as 100 ppm are determined. A more recent prototype based on a UV LED can measure concentrations as low as one ppm.
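The conversion from measured transmission to gas concentration follows directly from the Beer-Lambert law with the cross-section quoted above; the path length and transmission below are assumed values for a hypothetical cell, not the prototype's specifications.

    import numpy as np

    sigma = 4.79e-18          # cm^2, ClO2 absorption cross-section at 359 nm
    L = 10.0                  # cm, assumed optical path length
    I_over_I0 = 0.90          # example measured transmission

    N = -np.log(I_over_I0) / (sigma * L)      # number density, molecules per cm^3
    ppm = N / 2.46e19 * 1e6                   # mixing ratio relative to air at 1 atm, 25 C
    print(f"{N:.2e} cm^-3, about {ppm:.0f} ppm")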
Influence of Mean-Density Gradient on Small-Scale Turbulence Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
2000-01-01
A physics-based methodology is described to predict jet-mixing noise due to small-scale turbulence. Both self- and shear-noise source terms of Lilley's equation are modeled and the far-field aerodynamic noise is expressed as an integral over the jet volume of the source multiplied by an appropriate Green's function which accounts for source convection and mean-flow refraction. Our primary interest here is to include transverse gradients of the mean density in the source modeling. It is shown that, in addition to the usual quadrupole type sources which scale to the fourth-power of the acoustic wave number, additional dipole and monopole sources are present that scale to lower powers of wave number. Various two-point correlations are modeled and an approximate solution to noise spectra due to multipole sources of various orders is developed. Mean flow and turbulence information is provided through RANS-k(epsilon) solution. Numerical results are presented for a subsonic jet at a range of temperatures and Mach numbers. Predictions indicated a decrease in high frequency noise with added heat, while changes in the low frequency noise depend on jet velocity and observer angle.
Code of Federal Regulations, 2010 CFR
2010-07-01
... achievable. 467.33 Section 467.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ALUMINUM FORMING POINT SOURCE CATEGORY Extrusion Subcategory § 467.33....25 Aluminum 13.10 6.52 Subpart C Solution Heat Treatment Contact Cooling Water Pollutant or pollutant...
Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
Solutions to inverse plume in a crosswind problem using a predictor-corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
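A bare-bones version of the predictor step is sketched below: for a linear plume model, sensor readings scale with source strength, so two forward simulations at known strengths let the unknown strength be recovered by inverse interpolation of the measured samples. The readings are invented numbers, not results from the study.

    import numpy as np

    Q1, Q2 = 1.0, 2.0                            # source strengths used in the two simulations
    r1 = np.array([0.10, 0.05])                  # simulated sensor readings at Q1
    r2 = np.array([0.20, 0.10])                  # simulated sensor readings at Q2
    measured = np.array([0.143, 0.071])          # field measurements at the same sensors

    # Linear inverse interpolation per sensor, then averaged (the predictor step).
    Q_est = np.mean(Q1 + (measured - r1) * (Q2 - Q1) / (r2 - r1))
    print(Q_est)                                 # about 1.4 for these numbers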
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard
1950-01-01
Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two- and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.
The acoustic field of a point source in a uniform boundary layer over an impedance plane
NASA Technical Reports Server (NTRS)
Zorumski, W. E.; Willshire, W. L., Jr.
1986-01-01
The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.
CosmoQuest Transient Tracker: Open-Source Photometry & Astrometry Software
NASA Astrophysics Data System (ADS)
Myers, Joseph L.; Lehan, Cory; Gay, Pamela; Richardson, Matthew; CosmoQuest Team
2018-01-01
CosmoQuest is moving from online citizen science to observational astronomy with the creation of Transient Tracker. This open-source software is designed to identify asteroids and other transient/variable objects in image sets. Transient Tracker's features in final form will include astrometric and photometric solutions, identification of moving/transient objects, identification of variable objects, and lightcurve analysis. In this poster we present our initial v0.1 release and seek community input. This software builds on the existing NIH-funded ImageJ libraries. Creation of this suite of open-source image manipulation routines is led by Wayne Rasband and is released primarily under the MIT license. In this release, we are building on these libraries to add source identification for point / point-like sources and to do astrometry. Our materials are released under the Apache 2.0 license on GitHub (http://github.com/CosmoQuestTeam) and documentation can be found at http://cosmoquest.org/TransientTracker.
A novel solution for LED wall lamp design and simulation
NASA Astrophysics Data System (ADS)
Ge, Rui; Hong, Weibin; Li, Kuangqi; Liang, Pengxiang; Zhao, Fuli
2014-11-01
A model of the wall-washer lamp and a practical illumination application have been established with a new lens design to meet the uniform-illumination demand for wall-washer lamps based on Lambertian light sources. Our secondary optical design of a freeform-surface lens for the LED wall-washer lamp, based on the conservation of energy and Snell's law, improves the lighting effect toward uniform illumination. Using the relationship between the lens surface and the target surface, a large number of discrete points on the freeform profile curve were obtained through an iterative method. After importing these data into our modeling program, the optical entity was obtained. Finally, to verify the feasibility of the algorithm, the model was simulated with specialized software, using both an LED Lambertian point-source model and an LED panel-source model.
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time; this constitutes a first step toward automating mapping applications such as geometric correction, creation of orthophotos and generation of 3D models. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored for different images and parameter values, and the optimized values are corroborated using independent validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
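As an illustration of tuning SIFT input parameters, the following sketch uses OpenCV; the parameter values, file names and the ratio-test threshold are assumptions for demonstration, not the optimized values reported by the authors.

```python
# Illustrative sketch: biasing SIFT toward larger, more stable features and
# matching two multitemporal images with Lowe's ratio test.
import cv2

img1 = cv2.imread("epoch1.tif", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("epoch2.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create(
    nfeatures=0,             # keep all detected features
    nOctaveLayers=3,
    contrastThreshold=0.02,  # lower threshold -> more candidate keypoints
    edgeThreshold=15,
    sigma=2.0,               # larger sigma favors larger-scale features
)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
print(f"{len(good)} tentative tie-points")
```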
NASA Astrophysics Data System (ADS)
Barkeshli, Sina
A relatively simple and efficient closed-form asymptotic representation of the microstrip dyadic surface Green's function is developed. The large parameter in this asymptotic development is proportional to the lateral separation between the source and field points along the planar microstrip configuration. Surprisingly, this asymptotic solution remains accurate even for very small (almost two-tenths of a wavelength) lateral separation of the source and field points. The present asymptotic Green's function will thus allow a very efficient calculation of the currents excited on microstrip antenna patches/feed lines and monolithic millimeter and microwave integrated circuit (MIMIC) elements based on a moment method (MM) solution of an integral equation for these currents. The kernel of the latter integral equation is the present asymptotic form of the microstrip Green's function. It is noted that the conventional Sommerfeld integral representation of the microstrip surface Green's function is very poorly convergent when used in this MM formulation. In addition, an efficient exact steepest descent path integral form employing a radially propagating representation of the microstrip dyadic Green's function is also derived, which exhibits relatively faster convergence than the conventional Sommerfeld integral representation. The same steepest descent form could also be obtained by deforming the integration contour of the conventional Sommerfeld representation; however, the radially propagating integral representation exhibits better convergence properties for laterally separated source and field points even before the steepest descent path of integration is used. Numerical results based on the efficient closed-form asymptotic solution for the microstrip surface Green's function developed in this work are presented for the mutual coupling between a pair of dipoles on a single-layer grounded dielectric slab. The accuracy of the latter calculations is confirmed by comparison with results based on an exact integral representation for that Green's function.
Martelli, Fabrizio; Sassaroli, Angelo; Pifferi, Antonio; Torricelli, Alessandro; Spinelli, Lorenzo; Zaccanti, Giovanni
2007-12-24
The Green's function of the time-dependent radiative transfer equation for the semi-infinite medium is derived for the first time by a heuristic approach based on the extrapolated boundary condition and on an almost exact solution for the infinite medium. Monte Carlo simulations, performed both in the simple case of isotropic scattering with an isotropic point-like source and in the more realistic case of anisotropic scattering with a pencil-beam source, are used to validate the heuristic Green's function. Except for very early times, the proposed solution has excellent accuracy (>98% for the isotropic case and >97% for the anisotropic case), significantly better than the diffusion equation. The use of this solution could be extremely useful in the biomedical optics field, where it can be directly employed in conditions where the use of the diffusion equation is limited, e.g. small-volume samples, high-absorption and/or low-scattering media, short source-receiver distances and early times. It also represents a first step toward deriving tools for other geometries (e.g. slab and slab with inhomogeneities inside) of practical interest for noninvasive spectroscopy and diffuse optical imaging. Moreover, the proposed solution can be useful in several research fields where the study of a transport process is fundamental.
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Marques, E. R. C.; Lee, S. S.
1986-01-01
The far-field displacements in an infinite transversely isotropic elastic medium subjected to an oscillatory concentrated force are derived. The concepts of velocity surface, slowness surface and wave surface are used to describe the geometry of the wave propagation process. It is shown that the decay of the wave amplitudes depends not only on the distance from the source (as in isotropic media) but also on the direction of the point of interest from the source. As an example, the displacement field is computed for a laboratory-fabricated unidirectional fiberglass-epoxy composite. The solution for the displacements is expressed as an amplitude distribution and is presented in polar diagrams. This analysis has potential usefulness in the acoustic emission (AE) and ultrasonic nondestructive evaluation of composite materials. For example, the transient localized disturbances which are generally associated with AE sources can be modeled via this analysis. In that case, knowledge of the displacement field arriving at a receiving transducer allows inferences regarding the strength and orientation of the source and, consequently, perhaps the degree of damage within the composite.
Detection of ferromagnetic target based on mobile magnetic gradient tensor system
NASA Astrophysics Data System (ADS)
Gang, Y. I. N.; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren
2016-03-01
Attitude change of a mobile magnetic gradient tensor system critically affects the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. First, the unit magnetic moment vector was derived from the geometric invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to both the magnetic moment vector and the source-sensor displacement vector. Second, the unit source-sensor displacement vector was derived using the fact that the angle between the magnetic moment vector and the source-sensor displacement vector is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source-sensor displacement vector were derived theoretically. To deal with the measurement noise present in realistic detection applications, linear equations were formulated using invariants corresponding to several distinct measurement points, and least-squares solutions for the magnetic moment vector and the source-sensor displacement vector were obtained. Results of a simulation and a principle-verification experiment showed the correctness of the analytical method, along with the practicability of the least-squares method.
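A schematic sketch of the noise-handling step, assuming the invariant relations at several measurement points have already been assembled into an overdetermined linear system; the matrices below are random placeholders, not the paper's actual invariant equations.

```python
# Schematic sketch: once the invariants at several measurement points are
# written as A @ x = b in the unknown moment and displacement components,
# a least-squares solve suppresses measurement noise.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 6))     # 12 invariant equations from several points (assumed)
x_true = rng.normal(size=6)
b = A @ x_true + 0.01 * rng.normal(size=12)   # noisy "measurements"

x_ls, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("least-squares estimate:", np.round(x_ls, 3))
```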
AUGUSTO'S Sundial: Image-Based Modeling for Reverse Engineering Purposes
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Barbarella, M.; Del Pizzo, S.; Giannone, F.; Troisi, S.; Piccaro, C.; Marcantonio, D.
2017-02-01
A photogrammetric survey of a unique archaeological site is reported in this paper. The survey was performed using both a panoramic image-based solution and a classical procedure. The panoramic image-based solution was carried out employing a commercial system, the Trimble V10 Imaging Rover (IR). This instrument is an integrated camera system that captures 360-degree digital panoramas, composed of 12 images, with a single push. The direct comparison of the point clouds obtained with the traditional photogrammetric procedure and with the V10 stations, using the same GCP coordinates, was carried out in CloudCompare, an open-source software package that compares two point clouds and provides all the main statistical data. The site is a portion of the dial plate of the "Horologium Augusti", inaugurated in 9 B.C.E. in the area of Campo Marzio and still present, intact, in the same position, in a cellar of a building in Rome, around 7 meters below the present ground level.
Studies of earthquakes and microearthquakes using near-field seismic and geodetic observations
NASA Astrophysics Data System (ADS)
O'Toole, Thomas Bartholomew
The Centroid-Moment Tensor (CMT) method allows an optimal point-source description of an earthquake to be recovered from a set of seismic observations, and, for over 30 years, has been routinely applied to determine the location and source mechanism of teleseismically recorded earthquakes. The CMT approach is, however, entirely general: any measurements of seismic displacement fields could, in theory, be used within the CMT inversion formulation, so long as the treatment of the earthquake as a point source is valid for that data. We modify the CMT algorithm to enable a variety of near-field seismic observables to be inverted for the source parameters of an earthquake. The first two data types that we implement are provided by Global Positioning System receivers operating at sampling frequencies of 1 Hz and above. When deployed in the seismic near field, these instruments may be used as long-period strong-motion seismometers, recording displacement time series that include the static offset. We show that both the displacement waveforms, and static displacements alone, can be used to obtain CMT solutions for moderate-magnitude earthquakes, and that performing analyses using these data may be useful for earthquake early warning. We also investigate using waveform recordings, made by conventional seismometers deployed at the surface or by geophone arrays placed in boreholes, to determine CMT solutions, and their uncertainties, for microearthquakes induced by hydraulic fracturing. A similar waveform inversion approach could be applied in many other settings where induced seismicity and microseismicity occur.
Acoustic propagation in a thermally stratified atmosphere
NASA Technical Reports Server (NTRS)
Vanmoorhem, W. K.
1988-01-01
Acoustic propagation in an atmosphere with a specific form of temperature profile has been investigated by analytical means. The temperature profile used is representative of an actual atmospheric profile and contains three free parameters. Both lapse and inversion cases have been considered. Although ray solutions have been considered, the primary emphasis has been on solutions of the acoustic wave equation with a point source, where the sound speed varies with height above the ground corresponding to the assumed temperature profile. The method used to obtain the solution of the wave equation is based on Hankel transformation of the wave equation, approximate solution of the transformed equation for wavelengths small compared to the scale of the temperature (or sound-speed) profile, and approximate or numerical inversion of the Hankel-transformed solution. The solution displays the characteristics found in experimental data, but extensive comparison between the model and experimental data has not been carried out.
Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.
Firtha, Gergely; Fiala, Péter
2017-08-01
The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
An innovative use of instant messaging technology to support a library's single-service point.
Horne, Andrea S; Ragon, Bart; Wilson, Daniel T
2012-01-01
A library service model that provides reference and instructional services by summoning reference librarians from a single service point is described. The system utilizes Libraryh3lp, an open-source, multioperator instant messaging system. The selection and refinement of this solution and technical challenges encountered are explored, as is the design of public services around this technology, usage of the system, and best practices. This service model, while a major cultural and procedural change at first, is now a routine aspect of customer service for this library.
2009-09-01
investment and breakeven point (BEP). Two analysts could look at the same data and generate different outcomes if they use different assumptions or modeling ... solution. You can then estimate the cost, by year, of a proactive approach to DMSMS management. One principal output of the BCA is the BEP, which shows ... approach. The BEP, the point at which the plot crosses the x-axis (as shown in Figure 4), signifies that the cumulative investment in the proactive
Gangadharan, R; Prasanna, G; Bhat, M R; Murthy, C R L; Gopalakrishnan, S
2009-11-01
Conventional analytical/numerical methods employing the triangulation technique are suitable for locating an acoustic emission (AE) source in a planar structure without structural discontinuities. However, these methods cannot be extended to structures with complicated geometry, and the problem is compounded if the structural material is anisotropic, warranting complex analytical velocity models. A geodesic approach using Voronoi construction is proposed in this work to locate the AE source in a composite structure. The approach is based on the fact that the wave takes the minimum-energy path to travel from the source to any other point in the connected domain. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from the sensors along the geodesic paths and locating the first intersection point of these waves, the AE source location can be obtained. In this work, the geodesic approach is shown to be more suitable for a practicable source-location solution in a composite structure with an arbitrary surface containing finite discontinuities. Experiments have been conducted on composite plate specimens of simple and complex geometry to validate this method.
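A hedged sketch of the geodesic idea in Python: Dijkstra distances on a meshed surface and a simple consistency criterion over candidate vertices. The graph construction, sensor inputs and the consistency measure are illustrative simplifications, not the paper's exact reverse-propagation formulation.

```python
# Dijkstra geodesics on a surface mesh graph, then pick the vertex whose
# geodesic distances from the sensors are most consistent with the measured
# arrival times (a common origin time implies equal t_i - d_i/c).
import heapq
import numpy as np

def dijkstra(n_vertices, edges, start):
    """edges: dict vertex -> list of (neighbor, edge_length)."""
    dist = np.full(n_vertices, np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def locate_source(n_vertices, edges, sensor_vertices, arrival_times, wave_speed):
    # geodesic distance field from every sensor vertex
    D = np.vstack([dijkstra(n_vertices, edges, s) for s in sensor_vertices])
    # origin times implied by each sensor at each candidate vertex
    t0 = arrival_times[:, None] - D / wave_speed
    # the source vertex makes the implied origin times agree best
    return int(np.argmin(np.var(t0, axis=0)))
```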
Harb, Afif; von Horn, Alexander; Gocalek, Kornelia; Schäck, Luisa Marilena; Clausen, Jan; Krettek, Christian; Noack, Sandra; Neunaber, Claudia
2017-07-01
Due to the rising interest in Europe in treating large cartilage defects with osteochondral allografts, research aims to find a suitable solution for long-term storage of osteochondral allografts. This is further encouraged by the fact that legal restrictions currently limit the use of ingredients from animal or human sources that are being used in other regions of the world (e.g. in the USA). Therefore, the aim of this study was A) to analyze whether a Lactated Ringer (LR)-based solution is as efficient as Dulbecco's modified Eagle's minimal essential medium (DMEM) in maintaining chondrocyte viability and B) at which storage temperature (4°C vs. 37°C) chondrocyte survival of the osteochondral allograft is optimally sustained. 300 cartilage grafts were collected from the knees of ten one-year-old Black Head German Sheep. The grafts were stored in four different storage solutions (one of them DMEM-based, the other three based on Lactated Ringer solution), at two different temperatures (4 and 37°C), for 14 and 56 days. At both points in time, chondrocyte survival as well as death rate, glycosaminoglycan (GAG) content, and hydroxyproline (HP) concentration were measured and compared between the grafts stored in the different solutions and at the different temperatures. Independent of the storage solutions tested, chondrocyte survival rates were higher when stored at 4°C compared to storage at 37°C, both after short-term (14 days) and long-term storage (56 days). At no point in time did the DMEM-based solution show superior chondrocyte survival compared to the Lactated Ringer-based solutions. GAG and HP content were comparable across all time points, temperatures and solutions. LR-based solutions that contain only substances approved in Germany may be just as efficient for storing grafts as the DMEM-based solution that is the gold standard in the USA. Moreover, in the present experiment, storage of osteochondral allografts at 4°C was superior to storage at 37°C. Copyright © 2017 Elsevier Ltd. All rights reserved.
Transition and mixing in axisymmetric jets and vortex rings
NASA Technical Reports Server (NTRS)
Allen, G. A., Jr.; Cantwell, B. J.
1986-01-01
A class of impulsively started, axisymmetric, laminar jets produced by a time-dependent point source of momentum is considered. These jets are distinct flows, each starting from rest in an unbounded fluid. The study is conducted at three levels of detail. First, a generalized set of analytic creeping-flow solutions is derived, along with a method of flow classification. Second, from this set, three specific creeping-flow solutions are studied in detail: the vortex ring, the round jet, and the ramp jet. This study involves derivation of the vorticity, stream function, and entrainment diagrams, and the evolution of time lines through computer animation. From the entrainment diagrams, critical points are derived and analyzed. The flow geometry is dictated by the properties and location of the critical points, which undergo bifurcation and topological transformation (a form of transition) with changing Reynolds number. Transition Reynolds numbers were calculated. A state-space trajectory was derived describing the topological behavior of these critical points. This state-space derivation yielded three states of motion which are universal for all axisymmetric jets. Third, the axisymmetric round jet is solved numerically using the unsteady laminar Navier-Stokes equations. These equations were shown to be self-similar for the round jet. Numerical calculations were performed up to a Reynolds number of 30 on a 60x60-point mesh. Animations generated from the numerical solution showed each of the three states of motion for the round jet, including the Re = 30 case.
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
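A minimal sketch, assuming a simple second-order update of the 2D scalar wave equation, of propagating a field on a grid whose spacing can exceed the wavelength; the impulsive source, grid parameters and periodic boundary treatment are assumptions and do not reproduce the paper's specific initial conditions for recovering free-space pathloss.

```python
# Coarse-grid finite-difference time-domain sketch for a scalar field.
import numpy as np

nx, ny = 200, 200
dx = 10.0                                  # grid spacing (m), possibly >> wavelength
c = 3.0e8                                  # propagation speed (m/s)
dt = 0.7 * dx / (c * np.sqrt(2.0))         # CFL-stable time step

u_prev = np.zeros((nx, ny))
u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1.0                  # impulsive point excitation (illustrative)

coef = (c * dt / dx) ** 2
for _ in range(300):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = 2.0 * u - u_prev + coef * lap   # leapfrog update of the wave equation
    u_prev, u = u, u_next

# crude field-ratio "pathloss" probe at a receiver 50 cells from the source
print("field ratio at 50 cells:", abs(u[nx // 2 + 50, ny // 2]))
```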
Rapid Online Non-Enzymatic Protein Digestion Analysis with High Pressure Superheated ESI-MS
NASA Astrophysics Data System (ADS)
Chen, Lee Chuin; Kinoshita, Masato; Noda, Masato; Ninomiya, Satoshi; Hiraoka, Kenzo
2015-07-01
Recently, we reported a new ESI ion source that could electrospray a superheated aqueous solution with a liquid temperature much higher than the normal boiling point (J. Am. Soc. Mass Spectrom. 25, 1862-1869). Boiling of the liquid was prevented by pressurizing the ion source to a pressure greater than atmospheric pressure. The maximum operating pressure in our previous prototype was 11 atm, and the highest achievable temperature was 180°C. In this paper, a more compact prototype that can operate at up to 27 atm and 250°C liquid temperature is constructed, and reproducible MS acquisition can be extended to electrospray temperatures that have never before been tested. Here, we apply this superheated ESI source to rapid online protein-digestion MS. The sample solution is rapidly heated when flowing through a heated ESI capillary, and the digestion products are ionized by ESI in situ when the solution emerges from the tip of the heated capillary. With a weak acid such as formic acid as the solution, the thermally accelerated digestion (acid hydrolysis) cleaves selectively at the aspartate (Asp, D) residue sites. The residence time of the liquid within the active heating region is about 20 s. The online operation eliminates the need to transfer the sample from the digestion reactor, and the output of the digestive reaction can be monitored and manipulated by the solution flow rate and heater temperature on a near real-time basis.
Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part I. User’s Manual.
1979-09-01
[User's-manual command index, partially recovered: RT (translate and/or rotate coordinates), PG, GP, CG, SG (source geometry input), AM, PR, NP, TO (test data generation options), UN (units of input).] ... these points and confirm the validity of the solution. ... The source presently considered in the computer code is an electric ...
NASA Astrophysics Data System (ADS)
Crittenden, P. E.; Balachandar, S.
2018-07-01
The radial one-dimensional Euler equations are often rewritten in what is known as the geometric source form. The differential operator is identical to the Cartesian case, but source terms result. Since the theory and numerical methods for the Cartesian case are well-developed, they are often applied without modification to cylindrical and spherical geometries. However, numerical conservation is lost. In this article, AUSM^+-up is applied to a numerically conservative (discrete) form of the Euler equations labeled the geometric form, a nearly conservative variation termed the geometric flux form, and the geometric source form. The resulting numerical methods are compared analytically and numerically through three types of test problems: subsonic, smooth, steady-state solutions, Sedov's similarity solution for point or line-source explosions, and shock tube problems. Numerical conservation is analyzed for all three forms in both spherical and cylindrical coordinates. All three forms result in constant enthalpy for steady flows. The spatial truncation errors have essentially the same order of convergence, but the rate constants are superior for the geometric and geometric flux forms for the steady-state solutions. Only the geometric form produces the correct shock location for Sedov's solution, and a direct connection between the errors in the shock locations and energy conservation is found. The shock tube problems are evaluated with respect to feature location using an approximation with a very fine discretization as the benchmark. Extensions to second order appropriate for cylindrical and spherical coordinates are also presented and analyzed numerically. Conclusions are drawn, and recommendations are made. A derivation of the steady-state solution is given in the Appendix.
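The "geometric source form" referred to above can be illustrated with the textbook radial Euler source term; this sketch is not the paper's discretely conservative variant, and the state values used in the example are arbitrary.

```python
# Radial 1D Euler equations in geometric source form: the Cartesian flux
# operator is retained and the geometry enters through
# S = -(alpha/r) * (rho*u, rho*u**2, u*(E+p)), alpha = 1 (cylindrical), 2 (spherical).
import numpy as np

def geometric_source(U, r, gamma=1.4, alpha=2):
    """U = [rho, rho*u, E] at radius r; returns the geometric source vector."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
    return -(alpha / r) * np.array([rho * u, rho * u * u, u * (E + p)])

# example: a uniform outward-flow state (rho=1, u=0.3, p=1) at r = 0.5
U = np.array([1.0, 0.3, 2.5 + 0.5 * 0.3**2 / 1.0])
print(geometric_source(U, 0.5))
```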
Naser, Mohamed A.; Patterson, Michael S.
2010-01-01
Reconstruction algorithms are presented for a two-step solution of the bioluminescence tomography (BLT) problem. In the first step, a priori anatomical information provided by x-ray computed tomography or by other methods is used to solve the continuous-wave (cw) diffuse optical tomography (DOT) problem. A Taylor series expansion approximates the dependence of the light fluence rate on the optical properties of each region; first- and second-order direct derivatives of the light fluence rate with respect to the scattering and absorption coefficients are obtained and used for the reconstruction. In the second step, the reconstructed optical properties at different wavelengths are used to calculate the Green's function of the system. Then an iterative minimization solution based on the L1 norm shrinks the permissible regions where the sources are allowed, by selecting points with a higher probability of contributing to the source distribution. This provides an efficient BLT reconstruction algorithm with the ability to determine relative source magnitudes and positions in the presence of noise. PMID:21258486
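A simplified sketch of the second step, assuming the Green's-function matrix G (detector readings by permissible source points) and the measured data y are available; the paper's shrinking of permissible regions is approximated here by plain nonnegative iterative soft-thresholding, not the authors' exact scheme.

```python
# L1-regularized, nonnegative source reconstruction via ISTA.
import numpy as np

def nonneg_ista(G, y, lam=1e-3, n_iter=500):
    step = 1.0 / np.linalg.norm(G, 2) ** 2        # 1 / Lipschitz constant of the gradient
    q = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ q - y)                  # gradient of the data-misfit term
        q = q - step * grad
        q = np.maximum(q - step * lam, 0.0)       # soft-threshold + nonnegativity
    return q
```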
An integral equation formulation for the diffraction from convex plates and polyhedra.
Asheim, Andreas; Svensson, U Peter
2013-06-01
A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.
A fast marching algorithm for the factored eikonal equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first- and second-order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate the recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss-Newton.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Shinn, J. L.
1986-01-01
Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
ERIC Educational Resources Information Center
Mariola, Matt J.
2012-01-01
Water quality trading (WQT) is a market arrangement in which a point-source water polluter pays farmers to implement conservation practices and claims the resulting benefits as credits toward meeting a pollution permit. Success rates of WQT programs nationwide are highly variable. Most of the literature on WQT is from an economic perspective…
Quantifying the errors due to the superposition of analytical deformation sources
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Pascal, K.
2012-04-01
The displacement field due to magma movement in the subsurface is often modelled using a Mogi point source or an Okada dislocation source embedded in a homogeneous elastic half-space. When the magmatic system cannot be modelled by a single source, it is often represented by several sources whose respective deformation fields are superimposed. However, in such a case the assumption of homogeneity in the half-space is violated and the interaction between sources in an elastic medium is neglected. In this investigation we have quantified the effects on the surface deformation field of neglecting the interaction between sources. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite-element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or dislocation of the sources and their relative position. We also investigated three numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. We found that the discrepancies between the simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. The errors induced when neglecting the source interaction are expected to vary greatly with the physical and geometrical parameters of the model. We demonstrated that for certain scenarios these discrepancies can be neglected (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources.
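For orientation, a rough numerical illustration of the superposition being tested: textbook Mogi point-source surface displacements for two pressurized spherical sources in a homogeneous half-space, simply added. All material and source parameters are assumed values, and the finite-element comparison is not reproduced here.

```python
# Superposition of two Mogi sources, ignoring their mechanical interaction.
import numpy as np

def mogi_surface(x, y, xs, ys, depth, dP, a, mu=3e10, nu=0.25):
    """Surface displacements (ux, uy, uz) of a Mogi source at (xs, ys, depth)."""
    C = (1.0 - nu) * dP * a**3 / mu
    dx, dy = x - xs, y - ys
    R3 = (dx**2 + dy**2 + depth**2) ** 1.5
    return C * dx / R3, C * dy / R3, C * depth / R3

x = np.linspace(-10e3, 10e3, 201)   # profile along the surface (m)
y = np.zeros_like(x)

u1 = mogi_surface(x, y, -2e3, 0.0, 3e3, 10e6, 500.0)
u2 = mogi_surface(x, y, +2e3, 0.0, 4e3, 5e6, 500.0)
uz_total = u1[2] + u2[2]            # superposed uplift profile
print("max superposed uplift (m):", uz_total.max())
```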
Benchmark solution for vibrations from a moving point source in a tunnel embedded in a half-space
NASA Astrophysics Data System (ADS)
Yuan, Zonghao; Boström, Anders; Cai, Yuanqiang
2017-01-01
A closed-form semi-analytical solution for the vibrations due to a moving point load in a tunnel embedded in a half-space is given in this paper. The tunnel is modelled as an elastic hollow cylinder and the ground surrounding the tunnel as a linear viscoelastic material. The total wave field in the half-space with a cylindrical hole is represented by outgoing cylindrical waves and down-going plane waves. To apply the boundary conditions on the ground surface and at the tunnel-soil interface, the transformation properties between the plane and cylindrical wave functions are employed. The proposed solution can predict the ground vibration from an underground railway tunnel of circular cross-section with a reasonable computational effort and can serve as a benchmark solution for other computational methods. Numerical results for the ground vibrations on the free surface due to a moving constant load and a moving harmonic load applied at the tunnel invert are presented for different load velocities and excitation frequencies. It is found that Rayleigh waves play an important role in the ground vibrations from a shallow tunnel.
Noniterative three-dimensional grid generation using parabolic partial differential equations
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1985-01-01
A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.
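The scalar tridiagonal inversion required in the third coordinate direction can be sketched with the Thomas algorithm; the coefficient arrays below are placeholders for those produced by the discretized parabolic equation.

```python
# Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal).
import numpy as np

def thomas(a, b, c, d):
    """Solve the tridiagonal system with right-hand side d; a[0], c[-1] unused."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```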
Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners.
Strom, Daniel J; Cerra, Frank
2016-06-01
The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow "pencil" beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source or b) during the traversal of a point source is a unifying concept. The "universal source strength" of air kerma rate at 1 m from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.
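A small helper illustrating the stated distance dependence, with the air kerma rate at 1 m and the critical radius taken as assumed inputs; the matching of the two regimes at the critical radius is a simplification for illustration.

```python
# Piecewise distance dependence for a scanning collimated beam:
# 1/d inside the critical radius, 1/d**2 beyond it.
def air_kerma(distance_m, K1, critical_radius_m):
    if distance_m <= critical_radius_m:
        return K1 / distance_m                       # near field: inverse distance
    # far field: inverse square, matched to the near-field value at the critical radius
    return (K1 / critical_radius_m) * (critical_radius_m / distance_m) ** 2

# conservative alternative mentioned in the abstract: use 1/d everywhere
def air_kerma_conservative(distance_m, K1):
    return K1 / distance_m
```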
A novel molecular index for secondary oil migration distance
Zhang, Liuping; Li, Maowen; Wang, Yang; Yin, Qing-Zhu; Zhang, Wenzheng
2013-01-01
Determining oil migration distances from source rocks to reservoirs can greatly help in the search for new petroleum accumulations. Concentrations and ratios of polar organic compounds are known to change due to preferential sorption of these compounds in migrating oils onto immobile mineral surfaces. However, these compounds cannot be directly used as proxies for oil migration distances because of the influence of source variability. Here we show that for each source facies, the ratio of the concentration of a select polar organic compound to its initial concentration at a reference point is independent of source variability and correlates solely with migration distance from source rock to reservoir. Case studies serve to demonstrate that this new index provides a valid solution for determining source-reservoir distance and could lead to many applications in fundamental and applied petroleum geoscience studies. PMID:23965930
Haney, Matthew M.; Chouet, Bernard A.; Dawson, Phillip B.; Power, John A.
2013-01-01
The 2009 eruption of Redoubt produced several very-long-period (VLP) signals associated with explosions. We invert for the source location and mechanism of an explosion at Redoubt volcano using waveform methods applied to broadband recordings. Such characterization of the source carries information on the geometry of the conduit and the physics of the explosion process. Inversions are carried out assuming the volcanic source can be modeled as a point source, with mechanisms described by a) a set of 3 orthogonal forces, b) a moment tensor consisting of force couples, and c) both forces and moment tensor components. We find that the source of the VLP seismic waves during the explosion is well-described by either a combined moment/force source located northeast of the crater and at an elevation of 1.6 km ASL or a moment source at an elevation of 800 m to the southwest of the crater. The moment tensors for the solutions with moment and force and moment-only share similar characteristics. The source time functions for both moment tensors begin with inflation (pressurization) and execute two cycles of deflation-reinflation (depressurization–repressurization). Although the moment/force source provides a better fit to the data, we find that owing to the limited coverage of the broadband stations at Redoubt the moment-only source is the more robust and reliable solution. Based on the moment-only solution, we estimate a volume change of 19,000 m3 and a pressure change of 7 MPa in a dominant sill and an out-of-phase volume change of 5000 m3 and pressure change of 1.8 MPa in a subdominant dike at the source location. These results shed new light on the magmatic plumbing system beneath Redoubt and complement previous studies on Vulcanian explosions at other volcanoes.
Solution of the weighted symmetric similarity transformations based on quaternions
NASA Astrophysics Data System (ADS)
Mercan, H.; Akyilmaz, O.; Aydin, C.
2017-12-01
A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for the heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate the 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero (0) values assigned to the z-components of both the target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed method and the conventional weighted LS (WLS) method are also presented.
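As a simplified, unweighted counterpart to the estimator described above, the sketch below shows the unit-quaternion rotation parameterization and an ordinary SVD-based similarity estimate; the Gauss-Helmert total-least-squares solution with full covariance matrices is not reproduced here, and the function names are illustrative.

```python
# Unit-quaternion -> rotation matrix, plus an unweighted LS similarity estimate.
import numpy as np

def quat_to_rotmat(q):
    """Unit quaternion q = (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def similarity_ls(src, dst):
    """src, dst: (n, 3) corresponding points; returns scale, R, translation."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S)      # cross-covariance decomposition
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    scale = np.trace(R @ (S.T @ D)) / np.trace(S.T @ S)
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```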
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
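A schematic sketch of one member of the discussed family of methods: sparse, nonnegative release estimation by projected gradient. The sensitivity matrix A and observations y are placeholders for the dispersion-model output and measurements, and the additional linear constraints mentioned above (e.g. bounds on the total release) are omitted.

```python
# Projected gradient for min 0.5*||A q - y||^2 + lam*sum(q) subject to q >= 0.
import numpy as np

def sparse_nonneg_release(A, y, lam=0.1, n_iter=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    q = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ q - y) + lam              # gradient incl. L1 term (q >= 0)
        q = np.maximum(q - step * grad, 0.0)        # project onto the nonnegative orthant
    return q
```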
NASA Astrophysics Data System (ADS)
Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos
2017-12-01
An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement relative to the previous version of the method lies in a two-step segregated approach. First, only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appears to be more robust, giving satisfactory estimates of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution (according to specified criteria) by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
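A hedged sketch of the two-step idea, assuming unit-rate model concentrations at the sensors have been precomputed for each candidate source location; the data layout and function name are illustrative and not the ADREA-HF interface.

```python
# Step 1: rank candidate locations by correlation between measured and
# unit-rate modelled concentrations. Step 2: closed-form minimizer of the
# one-dimensional quadratic cost gives the emission rate for the best candidate.
import numpy as np

def estimate_source(C_unit, c_meas):
    """C_unit: (n_candidates, n_sensors) unit-rate model concentrations."""
    n_candidates = C_unit.shape[0]
    corr = np.array([np.corrcoef(C_unit[k], c_meas)[0, 1] for k in range(n_candidates)])
    best = int(np.nanargmax(corr))                      # step 1: location
    c_mod = C_unit[best]
    rate = float(c_mod @ c_meas / (c_mod @ c_mod))      # step 2: least-squares rate
    return best, rate
```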
NASA Astrophysics Data System (ADS)
Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis
2003-02-01
The response at the surface of an isotropic viscoelastic medium to buried fundamental acoustic sources is studied theoretically, computationally and experimentally. Finite and infinitesimal monopole and dipole sources within the low audible frequency range (40-400 Hz) are considered. Analytical and numerical integral solutions that account for compression, shear and surface wave response to the buried sources are formulated and compared with numerical finite element simulations and experimental studies on finite-dimension phantom models. It is found that at low audible frequencies, compression and shear wave propagation from point sources can both be significant, with shear wave effects becoming less significant as frequency increases. Additionally, it is shown that simple closed-form analytical approximations based on an infinite medium model agree well with numerically obtained "exact" half-space solutions for the frequency range and material of interest in this study. The focus here is on developing a better understanding of how biological soft tissue affects the transmission of vibro-acoustic energy from biological acoustic sources below the skin surface, whose typical spectral content is in the low audible frequency range. Examples include sound radiated from pulmonary, gastro-intestinal and cardiovascular system functions, such as breath sounds, bowel sounds and vascular bruits, respectively.
OnEarth: An Open Source Solution for Efficiently Serving High-Resolution Mapped Image Products
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Plesea, L.; Hall, J. R.; Roberts, J. T.; Cechini, M. F.; Schmaltz, J. E.; Alarcon, C.; Huang, T.; McGann, J. M.; Chang, G.; Boller, R. A.; Ilavajhala, S.; Murphy, K. J.; Bingham, A. W.
2013-12-01
This presentation introduces OnEarth, a server-side software package originally developed at the Jet Propulsion Laboratory (JPL) that facilitates network-based, minimum-latency geolocated image access independent of image size or spatial resolution. The key component in this package is the Meta Raster Format (MRF), a specialized raster file extension to the Geospatial Data Abstraction Library (GDAL) consisting of an internal indexed pyramid of image tiles. Imagery to be served is converted to the MRF format and made accessible online via an expandable set of server modules handling requests in several common protocols, including the Open Geospatial Consortium (OGC)-compliant Web Map Tile Service (WMTS) as well as Tiled WMS and Keyhole Markup Language (KML). OnEarth has recently transitioned to open-source status and is maintained and actively developed as part of GIBS (Global Imagery Browse Services), a collaborative project between JPL and Goddard Space Flight Center (GSFC). The primary function of GIBS is to enhance and streamline the data discovery process and to support near real-time (NRT) applications via the expeditious ingestion and serving of full-resolution imagery representing science products from across the NASA Earth Science spectrum. Open-source software solutions are leveraged where possible in order to utilize existing available technologies, reduce development time, and enlist wider community participation. We will discuss some of the factors and decision points in transitioning OnEarth to a suitable open-source paradigm, including repository and licensing-agreement decision points, institutional hurdles, and perceived benefits. We will also provide examples illustrating how OnEarth is integrated within GIBS and other applications.
Invited Article: Terahertz microfluidic chips sensitivity-enhanced with a few arrays of meta-atoms
NASA Astrophysics Data System (ADS)
Serita, Kazunori; Matsuda, Eiki; Okada, Kosuke; Murakami, Hironaru; Kawayama, Iwao; Tonouchi, Masayoshi
2018-05-01
We present a nonlinear optical crystal (NLOC)-based terahertz (THz) microfluidic chip with a few arrays of split-ring resonators (SRRs) for ultra-trace and quantitative measurements of liquid solutions. The proposed chip operates on the basis of near-field coupling between the SRRs and the local emission from a point-like THz source that is generated by optical rectification in the NLOC on a sub-wavelength scale. The liquid solutions flowing inside the microchannel modify the resonance frequency and peak attenuation in the THz transmission spectra. In contrast to conventional bio-sensing with far/near-field THz waves, our technique is expected to enable a more compact chip design as well as highly sensitive near-field measurements of liquid solutions without any high-power optical/THz source, near-field probes, or prisms. Using this chip, we have succeeded in detecting 31.8 fmol of ions in an actual amount of 318 pl of aqueous solution from the shift of the resonance frequency. The technique opens the door to microanalysis of biological samples with THz waves and accelerates the development of THz lab-on-chip devices.
NASA Astrophysics Data System (ADS)
Bambi, Cosimo; Modesto, Leonardo; Wang, Yixu
2017-01-01
We derive and study an approximate static vacuum solution generated by a point-like source in a higher-derivative gravitational theory with a pair of complex conjugate ghosts. The gravitational theory is local and characterized by a higher-derivative operator compatible with Lee-Wick unitarity. In particular, the tree-level two-point function only shows a pair of complex conjugate poles besides the massless spin-two graviton. We show that singularity-free black holes exist when the mass of the source M exceeds a critical value Mcrit. For M > Mcrit the spacetime structure is characterized by an outer event horizon and an inner Cauchy horizon, while for M = Mcrit we have an extremal black hole with vanishing Hawking temperature. The evaporation process leads to a remnant that approaches the zero-temperature extremal black hole state in an infinite amount of time.
Design and evaluation of an imaging spectrophotometer incorporating a uniform light source.
Noble, S D; Brown, R B; Crowe, T G
2012-03-01
Accounting for light that is diffusely scattered from a surface is one of the practical challenges in reflectance measurement. Integrating spheres are commonly used for this purpose in point measurements of reflectance and transmittance. This solution is not directly applicable to a spectral imaging application for which diffuse reflectance measurements are desired. In this paper, an imaging spectrophotometer design is presented that employs a uniform light source to provide diffuse illumination. This creates the inverse measurement geometry to the directional illumination/diffuse reflectance mode typically used for point measurements. The final system had a spectral range between 400 and 1000 nm with a 5.2 nm resolution, a field of view of approximately 0.5 m by 0.5 m, and millimeter spatial resolution. Testing results indicate illumination uniformity typically exceeding 95% and reflectance precision better than 1.7%.
Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.
Low-Dispersion Scheme for Nonlinear Acoustic Waves in Nonuniform Flow
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Kaushik, Dinesh K.; Idres, Moumen
1997-01-01
The linear dispersion-relation-preserving scheme and its boundary conditions have been extended to the nonlinear Euler equations. This allowed computing a nonuniform flowfield and nonlinear acoustic wave propagation in such a medium with the same scheme. By casting all the equations, boundary conditions, and the solution scheme in generalized curvilinear coordinates, the solutions were made possible for non-Cartesian domains and, for the better deployment of the grid points, nonuniform grid step sizes could be used. It has been tested for a number of simple initial-value and periodic-source problems. A simple demonstration of the difference between linear and nonlinear propagation was conducted. The wall boundary condition, derived from the momentum equations and implemented through a pressure at a ghost point, and the radiation boundary condition, derived from the asymptotic solution to the Euler equations, have proven to be effective for the nonlinear equations and nonuniform flows. The nonreflective characteristic boundary conditions have also shown success, but only for nonlinear waves with no mean flow; they failed for nonlinear waves in nonuniform flow.
On the VHF Source Retrieval Errors Associated with Lightning Mapping Arrays (LMAs)
NASA Technical Reports Server (NTRS)
Koshak, W.
2016-01-01
This presentation examines in detail the standard retrieval method: that of retrieving the (x, y, z, t) parameters of a lightning VHF point source from multiple ground-based Lightning Mapping Array (LMA) time-of-arrival (TOA) observations. The solution is found by minimizing a chi-squared function via the Levenberg-Marquardt algorithm. The associated forward problem is examined to illustrate the importance of signal-to-noise ratio (SNR). Monte Carlo simulated retrievals are used to assess the benefits of changing various LMA network properties. A generalized retrieval method is also introduced that, in addition to TOA data, uses LMA electric field amplitude measurements to retrieve a transient VHF dipole moment source.
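For orientation, a standard least-squares time-of-arrival misfit of the kind minimized here can be written as follows; the exact weighting and notation used in the presentation are not reproduced, so treat this as a generic sketch:

\[
  \chi^2(x,y,z,t) \;=\; \sum_{i=1}^{N} \frac{\left[\,t_i^{\mathrm{obs}} - t - d_i/c\,\right]^2}{\sigma_i^2},
  \qquad
  d_i = \sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2},
\]

where (x_i, y_i, z_i) and t_i^{obs} are the coordinates and measured arrival time at LMA station i, \sigma_i is the timing uncertainty of that station, and c is the speed of light; the Levenberg-Marquardt algorithm then searches for the (x, y, z, t) that minimizes this function.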
Comparative study of superconducting fault current limiter both for LCC-HVDC and VSC-HVDC systems
NASA Astrophysics Data System (ADS)
Lee, Jong-Geon; Khan, Umer Amir; Lim, Sung-Woo; Shin, Woo-ju; Seo, In-Jin; Lee, Bang-Wook
2015-11-01
High Voltage Direct Current (HVDC) systems have been evaluated as the optimum solution for renewable energy transmission and long-distance power grid connections. In spite of the various advantages of HVDC systems, they are still regarded as less reliable than AC systems because of their vulnerability to power system faults. Furthermore, unlike in AC systems, optimum protection and switching devices have not yet been fully developed. Therefore, in order to enhance the reliability of HVDC systems, measures to mitigate power system faults and reliable fault current limiting and switching devices should be developed. In this paper, in order to mitigate HVDC faults in both Line Commutated Converter HVDC (LCC-HVDC) and Voltage Source Converter HVDC (VSC-HVDC) systems, the application of a resistive superconducting fault current limiter (SFCL), which is known as an effective solution for coping with power system faults, was considered. First, simulation models of LCC-HVDC and VSC-HVDC systems with point-to-point connections were developed. From the designed models, the fault current characteristics under fault conditions were analyzed. Second, the application of an SFCL to each type of HVDC system was studied and the resulting modified fault current characteristics were compared. Consequently, it was deduced that applying an AC-SFCL to an LCC-HVDC system with a point-to-point connection is a desirable solution to mitigate fault current stresses and to prevent commutation failure in HVDC electric power systems interconnected with an AC grid.
NASA Astrophysics Data System (ADS)
Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua
2017-03-01
A suitable photon-subtraction operation can be exploited to improve the maximal transmission of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, the photon-subtraction operation must still address the problem of improving transmission in practical quantum networks, where the entangled source is located at a third party, which may be controlled by a malicious eavesdropper, instead of at one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only can lengthen the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) from applying photon subtraction can markedly increase the secure transmission distance in both direct and reverse reconciliations of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, which is a specific attack by an untrusted entangled source in the framework of ESIM.
Convective boundary conditions effect on peristaltic flow of a MHD Jeffery nanofluid
NASA Astrophysics Data System (ADS)
Kothandapani, M.; Prakash, J.
2016-03-01
This work is aimed at describing the influences of MHD, chemical reaction, thermal radiation and a heat source/sink parameter on the peristaltic flow of Jeffery nanofluids in a tapered asymmetric channel with slip and convective boundary conditions. The governing equations of the nanofluid are first formulated and then simplified under the long-wavelength and low-Reynolds-number approximations. The equations for nanoparticle temperature and concentration are coupled; hence, the homotopy perturbation method has been used to obtain solutions for the temperature and concentration of nanoparticles. Analytical solutions for the axial velocity, stream function and pressure gradient have also been constructed. The effects of various influential flow parameters are illustrated with the help of graphs. The analysis indicates that the temperature of the nanofluid decreases for a given increase in the heat transfer Biot number and the chemical reaction parameter, but shows the opposite behavior with respect to the mass transfer Biot number and the heat source/sink parameter.
Inhomogeneity-induced cosmic acceleration in a dust universe
NASA Astrophysics Data System (ADS)
Chuang, Chia-Hsun; Gu, Je-An; Hwang, W.-Y. P.
2008-09-01
It is the common consensus that the expansion of a universe always slows down if the gravity provided by the energy sources therein is attractive, and accordingly one needs to invoke dark energy as a source of anti-gravity to understand the cosmic acceleration. To examine this point, we find counterexamples for a spherically symmetric dust fluid described by the singularity-free Lemaître-Tolman-Bondi solution. Thus, the validity of this naive consensus is indeed doubtful and the effects of inhomogeneities should be restudied. These counter-intuitive examples open a new perspective on the understanding of the evolution of our universe.
Application of a water quality model in the White Cart water catchment, Glasgow, UK.
Liu, S; Tucker, P; Mansell, M; Hursthouse, A
2003-03-01
Water quality models of urban systems have previously focused on point source (sewerage system) inputs. Little attention has been given to diffuse inputs, and research into diffuse pollution has been largely confined to agricultural sources. This paper reports on new research that is aimed at integrating diffuse inputs into an urban water quality model. An integrated model is introduced that is made up of four modules: hydrology, contaminant point sources, nutrient cycling and leaching. The hydrology module, T&T, consists of a TOPMODEL (a TOPography-based hydrological MODEL), which simulates runoff from pervious areas, and a two-tank model, which simulates runoff from impervious urban areas. Linked into the two-tank model, the contaminant point source module simulates the overflow from the sewerage system in heavy rain. The widely known SOILN (SOIL Nitrate model) is the basis of the nitrogen cycle module. Finally, the leaching module consists of two functions: the production function and the transfer function. The production function is based on SLIM (Solute Leaching Intermediate Model), while the transfer function is based on the 'flushing hypothesis', which postulates a relationship between contaminant concentrations in the receiving water course and the extent to which the catchment is saturated. This paper outlines the modelling methodology and the model structures that have been developed. An application of this model in the White Cart catchment (Glasgow) is also included.
Cyanide and migratory birds at gold mines in Nevada, USA
Henny, C.J.; Hallock, R.J.; Hill, E.F.
1994-01-01
Since the mid-1980s, cyanide in heap leach solutions and mill tailings ponds at gold mines in Nevada has killed a large but incompletely documented number of wildlife (>9,500 individuals, primarily migratory birds). This field investigation documents the availability of cyanide at a variety of 'typical' Nevada gold mines during 1990 and 1991, describes wildlife reactions to cyanide solutions, and discusses procedures for eliminating wildlife loss from cyanide poisoning. Substantial progress has been made to reduce wildlife loss. About half of the mill tailings ponds (some up to 150 ha) in Nevada have been chemically treated to reduce cyanide concentrations (the number needing treatment is uncertain) and many of the smaller heap leach solution ponds and channels are now covered with netting to exclude birds and most mammals. The discovery of a cyanide gradient in mill tailings ponds (concentration usually 2-3 times higher at the inflow point than at reclaim point) provides new insight into wildlife responses (mortality) observed in different portions of the ponds. Finding dead birds on the tops of ore heaps and associated with solution puddling is a new problem, but management procedures for eliminating this source of mortality are available. A safe threshold concentration of cyanide to eliminate wildlife loss could not be determined from the field data and initial laboratory studies. New analytical methods may be required to assess further the wildlife hazard of cyanide in mining solutions.
NASA Astrophysics Data System (ADS)
Zander, C.; Plastino, A. R.; Díaz-Alonso, J.
2015-11-01
We investigate time-dependent solutions for a non-linear Schrödinger equation recently proposed by Nassar and Miret-Artés (NM) to describe the continuous measurement of the position of a quantum particle (Nassar, 2013; Nassar and Miret-Artés, 2013). Here we extend these previous studies in two different directions. On the one hand, we incorporate a potential energy term in the NM equation and explore the corresponding wave packet dynamics, while in the previous works the analysis was restricted to the free-particle case. On the other hand, we investigate time-dependent solutions while previous studies focused on a stationary one. We obtain exact wave packet solutions for linear and quadratic potentials, and approximate solutions for the Morse potential. The free-particle case is also revisited from a time-dependent point of view. Our analysis of time-dependent solutions allows us to determine the stability properties of the stationary solution considered in Nassar (2013), Nassar and Miret-Artés (2013). On the basis of these results we reconsider the Bohmian approach to the NM equation, taking into account the fact that the evolution equation for the probability density ρ = |ψ|^2 is not a continuity equation. We show that the effect of the source term appearing in the evolution equation for ρ has to be explicitly taken into account when interpreting the NM equation from a Bohmian point of view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopper, Seth; Evans, Charles R.
2010-10-15
We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that has order of continuity C^{-1} typically) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.
A stochastic chemostat model with an inhibitor and noise independent of population sizes
NASA Astrophysics Data System (ADS)
Sun, Shulin; Zhang, Xiaolu
2018-02-01
In this paper, a stochastic chemostat model with an inhibitor is considered, in which the inhibitor is input from an external source and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing some suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded for relatively small noise. That is, the asymptotic behaviors of the stochastic system around the equilibrium points of the deterministic system are studied. However, sufficiently large noise can make the microorganisms become extinct with probability one, although the solutions to the original deterministic model may be persistent. Finally, the obtained analytical results are illustrated by computer simulations.
Contaminant transport from point source on water surface in open channel flow with bed absorption
NASA Astrophysics Data System (ADS)
Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian
2018-06-01
Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for the typical case of a point-source release on the free water surface in a channel flow with bed absorption is presented by means of Chatwin's long-time asymptotic technique. Five basic characteristics of Taylor dispersion and the vertical mean concentration distribution with skewness and kurtosis modifications are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution shows an asymptotic Gaussian distribution at large time, which is unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are useful for practical applications with strict environmental standards.
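A minimal sketch of the governing problem helps fix ideas; the coordinate orientation, boundary placement and symbols below are conventional assumptions and need not match the paper's notation:

\[
  \frac{\partial C}{\partial t} + u(z)\,\frac{\partial C}{\partial x}
  = D\!\left(\frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial z^2}\right),
  \qquad
  C(x,z,0) = M\,\delta(x)\,\delta(z-h),
\]
\[
  D\,\frac{\partial C}{\partial z} = \beta\,C \ \ \text{at the bed } z=0,
  \qquad
  \frac{\partial C}{\partial z} = 0 \ \ \text{at the free surface } z=h,
\]

where u(z) is the longitudinal velocity profile, D the diffusivity, M the released mass per unit width, and \beta a first-order bed-absorption coefficient; the long-time asymptotics then describe how this absorbing boundary skews the vertical and longitudinal moments of C.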
Seismic Waves, 4th order accurate
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-08-16
SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.
A Survey of Insider Attack Detection Research
2008-08-25
modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events combined through... forms of attack that have been reported. For example: • Unauthorized extraction, duplication, or exfiltration... network level. Schultz pointed out that not one approach will work but solutions need to be based on multiple sensors to be able to find any combination
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution of drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined based on the aquifer boundary conditions and the continuity requirements. The aquifer drawdown estimated by the present approach agrees well with a finite element solution developed based on the Mathematica function NDSolve. As compared with the existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time and therefore takes much less computing time to obtain the required results in engineering applications.
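The building block referred to above is the standard Theis solution for the drawdown caused by a well pumping at rate Q in a confined aquifer of transmissivity T and storativity S, written here in its conventional form (the paper's zone-wise superposition and notation are not reproduced):

\[
  s(r,t) = \frac{Q}{4\pi T}\, W(u), \qquad
  u = \frac{r^2 S}{4 T t}, \qquad
  W(u) = \int_u^{\infty} \frac{e^{-x}}{x}\,dx,
\]

where r is the distance from the well and W(u) is the exponential integral (well function). The approach then superposes such terms for the pumping well and for the fictitious source points placed around each zone boundary, with the unknown source-point rates fixed by the boundary and continuity conditions.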
Properties of two-temperature dissipative accretion flow around black holes
NASA Astrophysics Data System (ADS)
Dihingia, Indu K.; Das, Santabrata; Mandal, Samir
2018-04-01
We study the properties of two-temperature accretion flow around a non-rotating black hole in the presence of various dissipative processes, where a pseudo-Newtonian potential is adopted to mimic the effect of general relativity. The flow encounters energy loss by means of radiative processes acting on the electrons and, at the same time, heats up as a consequence of viscous heating effective on ions. We assume that the flow is exposed to stochastic magnetic fields, which lead to synchrotron emission from the electrons; these emissions are further strengthened by Compton scattering. We obtain the two-temperature global accretion solutions in terms of the dissipation parameters, namely viscosity (α) and accretion rate ({\dot{m}}), and find for the first time in the literature that such solutions may contain standing shock waves. Solutions of this kind are multitransonic in nature, as they simultaneously pass through both the inner critical point (x_in) and the outer critical point (x_out) before crossing the black hole horizon. We calculate the properties of shock-induced global accretion solutions in terms of the flow parameters. We further show that the two-temperature shocked accretion flow is not a discrete solution; instead, such solutions exist for a wide range of flow parameters. We identify the effective domain of the parameter space for standing shocks and observe that the parameter space shrinks as the dissipation is increased. Since the post-shock region is hotter due to the effect of shock compression, it naturally emits hard X-rays, and therefore the two-temperature shocked accretion solution has the potential to explain the spectral properties of black hole sources.
The Prediction of Scattered Broadband Shock-Associated Noise
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.
Chen, Huiting; Reinhard, Martin; Nguyen, Tung Viet; You, Luhua; He, Yiliang; Gin, Karina Yew-Hoong
2017-08-01
Understanding the sources, occurrence and sinks of perfluoroalkyl and polyfluoroalkyl substances (PFASs) in the urban water cycle is important to protect and utilize local water resources. Concentrations of 22 target PFASs and general water quality parameters were determined monthly for a year in filtered water samples from five tributaries and three sampling stations of an urban water body. Of the 22 target PFASs, 17 PFASs were detected with a frequency >93%, including C4-C12 perfluoroalkyl carboxylates (PFCAs); C4, C6, C8, and C10 perfluoroalkane sulfonates; perfluorooctane sulfonamides and perfluorooctane sulfonamide substances (FOSAMs); C10 perfluoroalkyl phosphonic acid (C10 PFPA); 6:2 fluorotelomer sulfonic acid (6:2 FTSA); and C8/C8 perfluoroalkyl phosphinic acid (C8/C8-PFPIA). The most abundant PFASs in water were PFBS (1.4-55 ng/L), PFBA (1.0-23 ng/L), PFOS (1.5-24 ng/L) and PFOA (2.0-21 ng/L). In the tributaries, PFNA concentrations ranged from 1.2 to 87.1 ng/L except in the May 2013 samples of two tributaries, which reached 520 and 260 ng/L. Total PFAS concentrations in the sediment samples ranged from 1.6 to 15 ng/g d.w. with EtFOSAA, PFDoA, PFOS and PFDA being the dominant species. Based on water and sediment data, two types of sources were inferred: one-time or intermittent point sources and continuous non-point sources. FOSAMs and PFOS were released continually from non-point sources, whereas C8/C8 PFPIA, PFDoA and PFUnA were released from point sources. The highly water-soluble short-chain PFASs, including PFBA, PFPeA and PFBS, remained predominantly in the water column. The factors governing solution phase concentrations appear to be compound hydrophobicity and sorption to suspended particles. Correlation of the dissolved phase concentrations with precipitation data suggested stormwater was a significant source of PFBA, PFBS, PFUnA and PFDoA. Negative correlations with precipitation indicated sources feeding FOSAA and FOSA directly into the tributaries. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Quirin, Sean Albert
The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.
Towards a Comprehensive Model of Jet Noise Using an Acoustic Analogy and Steady RANS Solutions
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2013-01-01
An acoustic analogy is developed to predict the noise from jet flows. It contains two source models that independently predict the noise from turbulence and shock wave shear layer interactions. The acoustic analogy is based on the Euler equations and separates the sources from propagation. Propagation effects are taken into account by calculating the vector Green's function of the linearized Euler equations. The sources are modeled following the work of Tam and Auriault, Morris and Boluriaan, and Morris and Miller. A statistical model of the two-point cross-correlation of the velocity fluctuations is used to describe the turbulence. The acoustic analogy attempts to take into account the correct scaling of the sources for a wide range of nozzle pressure and temperature ratios. It does not make assumptions regarding fine- or large-scale turbulent noise sources, self- or shear-noise, or convective amplification. The acoustic analogy is partially informed by three-dimensional steady Reynolds-Averaged Navier-Stokes solutions that include the nozzle geometry. The predictions are compared with experiments of jets operating subsonically through supersonically and at unheated and heated temperatures. Predictions generally capture the scaling of both mixing noise and BBSAN for the conditions examined, but some discrepancies remain that are due to the accuracy of the steady RANS turbulence model closure, the equivalent sources, and the use of a simplified vector Green's function solver of the linearized Euler equations.
Gossip-based solutions for discrete rendezvous in populations of communicating agents.
Hollander, Christopher D; Wu, Annie S
2014-01-01
The objective of the rendezvous problem is to construct a method that enables a population of agents to agree on a spatial (and possibly temporal) meeting location. We introduce the buffered gossip algorithm as a general solution to the rendezvous problem in a discrete domain with direct communication between decentralized agents. We compare the performance of the buffered gossip algorithm against the well known uniform gossip algorithm. We believe that a buffered solution is preferable to an unbuffered solution, such as the uniform gossip algorithm, because the use of a buffer allows an agent to use multiple information sources when determining its desired rendezvous point, and that access to multiple information sources may improve agent decision making by reinforcing or contradicting an initial choice. To show that the buffered gossip algorithm is an actual solution for the rendezvous problem, we construct a theoretical proof of convergence and derive the conditions under which the buffered gossip algorithm is guaranteed to produce a consensus on rendezvous location. We use these results to verify that the uniform gossip algorithm also solves the rendezvous problem. We then use a multi-agent simulation to conduct a series of simulation experiments to compare the performance between the buffered and uniform gossip algorithms. Our results suggest that the buffered gossip algorithm can solve the rendezvous problem faster than the uniform gossip algorithm; however, the relative performance between these two solutions depends on the specific constraints of the problem and the parameters of the buffered gossip algorithm.
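To make the idea concrete, below is a minimal Python sketch of a buffered gossip update. The bounded buffer size and the majority (most frequent value) decision rule are illustrative assumptions, not the authors' exact algorithm or communication model.

```python
import random
from collections import Counter

def buffered_gossip(initial_choices, buffer_size=5, rounds=1000, seed=0):
    """Illustrative sketch of buffered gossip consensus on a rendezvous point.

    Each agent keeps a small buffer of values heard from peers and adopts the
    most frequent value in its buffer (ties broken arbitrarily). This is a
    simplified stand-in for the algorithm described in the abstract.
    """
    rng = random.Random(seed)
    choices = list(initial_choices)
    buffers = [[c] for c in choices]            # each agent starts with its own choice
    n = len(choices)
    for _ in range(rounds):
        sender, receiver = rng.randrange(n), rng.randrange(n)
        if sender == receiver:
            continue
        buffers[receiver].append(choices[sender])             # push sender's current choice
        buffers[receiver] = buffers[receiver][-buffer_size:]  # keep a bounded buffer
        choices[receiver] = Counter(buffers[receiver]).most_common(1)[0][0]
        if len(set(choices)) == 1:                            # consensus reached
            break
    return choices

print(buffered_gossip([1, 2, 3, 2, 5, 2, 7, 2]))
```

With the example call above, the agents typically converge on the most common initial choice, illustrating how buffered information from multiple sources reinforces a decision.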
Fast sweeping method for the factored eikonal equation
NASA Astrophysics Data System (ADS)
Fomel, Sergey; Luo, Songting; Zhao, Hongkai
2009-09-01
We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
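Schematically, and in notation commonly used for this problem (which may differ from the paper's), the factorization reads:

\[
  |\nabla T(\mathbf{x})| = s(\mathbf{x}), \qquad
  T(\mathbf{x}) = T_0(\mathbf{x})\,\tau(\mathbf{x})
  \;\;\Longrightarrow\;\;
  \bigl|\,\tau\,\nabla T_0 + T_0\,\nabla\tau\,\bigr| = s(\mathbf{x}),
\]

where s is the slowness and T_0 is a known factor (for example, the distance from the point source scaled by the slowness at the source) that carries the source singularity, so the sweeping iterations only have to resolve the smooth correction factor \tau.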
Gravitational lensing by ring-like structures
NASA Astrophysics Data System (ADS)
Lake, Ethan; Zheng, Zheng
2017-02-01
We study a class of gravitational lensing systems consisting of an inclined ring/belt, with and without an added point mass at the centre. We show that a common feature of such systems are so-called pseudo-caustics, across which the magnification of a point source changes discontinuously and yet remains finite. Such a magnification change can be associated with either a change in image multiplicity or a sudden change in the size of a lensed image. The existence of pseudo-caustics and the complex interplay between them and the formal caustics (which correspond to points of infinite magnification) can lead to interesting consequences, such as truncated or open caustics and a non-conservation of total image parity. The origin of the pseudo-caustics is found to be the non-differentiability of the solutions to the lens equation across the ring/belt boundaries, with the pseudo-caustics corresponding to ring/belt boundaries mapped into the source plane. We provide a few illustrative examples to understand the pseudo-caustic features, and in a separate paper consider a specific astronomical application of our results to study microlensing by extrasolar asteroid belts.
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
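As a rough sketch of the idea, for Poisson counts n_i, a known background b, and a candidate template s_i (e.g., flux times PSF), the Neyman-Pearson log-likelihood ratio reduces to sum_i n_i*ln(1 + s_i/b) minus a data-independent constant, which for a spatially constant background can be evaluated at every trial position by cross-correlating the image with ln(1 + s/b). The Python sketch below assumes a constant background and a fixed trial flux; the paper's exact normalization, thresholding and treatment of variable backgrounds are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def poisson_template_statistic(counts, psf, background, flux):
    """Sketch of a Neyman-Pearson detection statistic for a known template
    (e.g., a point source of assumed flux) in a Poisson-noise image.

    counts     : 2-D array of observed photon counts
    psf        : 2-D normalized point-spread function (the template shape)
    background : assumed constant background level per pixel
    flux       : trial source flux (an assumption for this sketch)
    """
    kernel = np.log1p(flux * psf / background)
    # cross-correlation = convolution with the flipped kernel
    stat = fftconvolve(counts, kernel[::-1, ::-1], mode="same")
    # subtract the data-independent term sum_i s_i
    return stat - np.sum(flux * psf)
```

Pixels where the returned statistic exceeds a threshold set by the desired false-alarm probability are flagged as detections; in this sketch the threshold choice is left to the user.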
Ruusmann, Villu; Maran, Uko
2013-07-01
The scientific literature is an important source of experimental and chemical structure data. Very often these data have been harvested into smaller or larger data collections, leaving data quality and curation issues on the shoulders of users. The current research presents a systematic and reproducible workflow for collecting series of data points from the scientific literature and assembling a database that is suitable for the purposes of high-quality modelling and decision support. The quality assurance aspect of the workflow is concerned with the curation of both chemical structures and associated toxicity values at (1) the single-data-point level and (2) the collection-of-data-points level. The assembly of a database employs a novel "timeline" approach. The workflow is implemented as a software solution and its applicability is demonstrated on the example of the Tetrahymena pyriformis acute aquatic toxicity endpoint. A literature collection of 86 primary publications for T. pyriformis was found to contain 2,072 chemical compounds and 2,498 unique toxicity values, which divide into 2,440 numerical and 58 textual values. Every chemical compound was assigned a preferred toxicity value. Examples of the most common chemical and toxicological data curation scenarios are discussed.
Modular Advanced Oxidation Process Enabled by Cathodic Hydrogen Peroxide Production
2015-01-01
Hydrogen peroxide (H2O2) is frequently used in combination with ultraviolet (UV) light to treat trace organic contaminants in advanced oxidation processes (AOPs). In small-scale applications, such as wellhead and point-of-entry water treatment systems, the need to maintain a stock solution of concentrated H2O2 increases the operational cost and complicates the operation of AOPs. To avoid the need for replenishing a stock solution of H2O2, a gas diffusion electrode was used to generate low concentrations of H2O2 directly in the water prior to its exposure to UV light. Following the AOP, the solution was passed through an anodic chamber to lower the solution pH and remove the residual H2O2. The effectiveness of the technology was evaluated using a suite of trace contaminants that spanned a range of reactivity with UV light and hydroxyl radical (HO•) in three different types of source waters (i.e., simulated groundwater, simulated surface water, and municipal wastewater effluent) as well as a sodium chloride solution. Irrespective of the source water, the system produced enough H2O2 to treat up to 120 L of water per day. The extent of transformation of trace organic contaminants was affected by the current density and the concentrations of HO• scavengers in the source water. The electrical energy per order (EEO) ranged from 1 to 3 kWh per m3, with the UV lamp accounting for most of the energy consumption. The gas diffusion electrode exhibited high efficiency for H2O2 production over extended periods and did not show a diminution in performance in any of the matrices. PMID:26039560
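For reference, the electrical energy per order figure of merit quoted above is conventionally defined, for a flow-through system, as

\[
  E_{EO} \;=\; \frac{P}{F\,\log_{10}\!\left(C_0/C\right)},
\]

where P is the electrical power drawn (kW), F the flow rate (m3/h), and C_0 and C the influent and effluent contaminant concentrations; the batch form replaces F with the treated volume divided by the treatment time. A value of 1 to 3 kWh per m3 per order thus corresponds to a ten-fold contaminant reduction for every 1 to 3 kWh expended per cubic meter treated.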
Tinkelman, Igor; Melamed, Timor
2005-06-01
In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.
NASA Astrophysics Data System (ADS)
Pascal, K.; Neuberg, J. W.; Rivalta, E.
2011-12-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system is represented by several sources, their respective deformation fields are summed, and the assumption of homogeneity in the half-space is violated. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and we tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or opening of the sources and their relative position. We also investigated various numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. In the former case, the dike opening was either defined as two boundaries displaced from a central location, or as one boundary displaced relative to the other. We finally considered two case studies based on Soufrière Hills Volcano (Montserrat, West Indies) and the Dabbahu rift segment (Afar, Ethiopia) magmatic systems. We found that the discrepancies between simple superposition of the displacement field and a fully interacting numerical solution depend mostly on the source types and on their spacing. Their magnitude may be comparable with the errors due to neglecting the topography, the inhomogeneities in crustal properties or more realistic rheologies. In the models considered, the errors induced when neglecting the source interaction can be neglected (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources. Furthermore, this study underlines fundamental issues related to the numerical method chosen to model a dike or a magma chamber. It clearly demonstrates that, while the magma compressibility can be neglected to model the deformation due to one source or distant sources, it is necessary to take it into account in models combining close sources.
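For context, the Mogi point-source surface displacements from which such superpositions are built can be written, in one common form, assuming a small spherical source of radius a with pressure change ΔP at depth d in a homogeneous elastic half-space of shear modulus G and Poisson ratio ν:

\[
  u_z(r) = \frac{(1-\nu)\,\Delta P\,a^3}{G}\,\frac{d}{\left(r^2+d^2\right)^{3/2}},
  \qquad
  u_r(r) = \frac{(1-\nu)\,\Delta P\,a^3}{G}\,\frac{r}{\left(r^2+d^2\right)^{3/2}},
\]

where r is the horizontal distance from the point directly above the source. Summing several such terms (or Okada dislocation terms) implicitly assumes that each source deforms an otherwise unperturbed homogeneous half-space, which is precisely the assumption tested against the fully interacting finite element models above.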
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts, are a realistic possibility.
Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach
NASA Astrophysics Data System (ADS)
Schumacher, Thomas; Straub, Daniel; Higgins, Christopher
2012-09-01
Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
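As an illustration of the Bayesian/MCMC idea, the sketch below samples the posterior of a 2-D source location and origin time from time-of-arrival data with a simple Metropolis walker. The constant wave speed, Gaussian noise model, flat priors and step sizes are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def metropolis_toa(sensors, t_obs, v, sigma, n_iter=20000, step=0.01, seed=1):
    """Minimal Metropolis sampler for an AE source (x, y, t0) from arrival times.

    sensors : (N, 2) array of sensor coordinates
    t_obs   : (N,) array of observed arrival times
    v       : assumed wave speed; sigma : assumed timing noise std. dev.
    """
    rng = np.random.default_rng(seed)

    def log_post(theta):
        x, y, t0 = theta
        d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        t_pred = t0 + d / v
        return -0.5 * np.sum(((t_obs - t_pred) / sigma) ** 2)  # flat prior assumed

    theta = np.array([0.0, 0.0, 0.0])
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```

Histograms of the returned samples then play the role of the posterior PDFs for the source parameters discussed above, conveying location uncertainty rather than a single best-fit point.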
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot
2014-03-01
The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs)-a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as `committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
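A small sketch of how a committee of MDN outputs can be turned into an approximate posterior is given below. Each member is assumed to return the parameters of a one-dimensional Gaussian mixture for a single source parameter, and members are combined by equal-weight averaging; both are simplifying assumptions relative to the paper, where the networks describe several source parameters jointly.

```python
import numpy as np

def mixture_pdf(m, weights, means, sigmas):
    """Evaluate a 1-D Gaussian mixture density at the points m."""
    m = np.atleast_1d(m)[:, None]
    comps = weights * np.exp(-0.5 * ((m - means) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2.0 * np.pi))
    return comps.sum(axis=1)

def committee_pdf(m, members):
    """Average the mixture densities output by several MDNs (a 'committee').

    members : list of (weights, means, sigmas) tuples, one per trained network,
              each produced for the same observation d.
    """
    return np.mean([mixture_pdf(m, *mem) for mem in members], axis=0)
```

Because evaluating the mixture parameters for a new observation is a single forward pass per network, this style of inversion runs in milliseconds, which is what makes it attractive for early-warning settings.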
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cvetic, Mirjam; Papadimitriou, Ioannis
2016-12-02
Here, we construct the holographic dictionary for both running and constant dilaton solutions of the two dimensional Einstein-Maxwell-Dilaton theory that is obtained by a circle reduction from Einstein-Hilbert gravity with negative cosmological constant in three dimensions. This specific model ensures that the dual theory has a well defined ultraviolet completion in terms of a two dimensional conformal field theory, but our results apply qualitatively to a wider class of two dimensional dilaton gravity theories. For each type of solutions we perform holographic renormalization, compute the exact renormalized one-point functions in the presence of arbitrary sources, and derive the asymptotic symmetries and the corresponding conserved charges. In both cases we find that the scalar operator dual to the dilaton plays a crucial role in the description of the dynamics. Its source gives rise to a matter conformal anomaly for the running dilaton solutions, while its expectation value is the only non trivial observable for constant dilaton solutions. The role of this operator has been largely overlooked in the literature. We further show that the only non trivial conserved charges for running dilaton solutions are the mass and the electric charge, while for constant dilaton solutions only the electric charge is non zero. However, by uplifting the solutions to three dimensions we show that constant dilaton solutions can support non trivial extended symmetry algebras, including the one found by Compère, Song and Strominger, in agreement with the results of Castro and Song. Finally, we demonstrate that any solution of this specific dilaton gravity model can be uplifted to a family of asymptotically AdS2 × S2 or conformally AdS2 × S2 solutions of the STU model in four dimensions, including non extremal black holes. As a result, the four dimensional solutions obtained by uplifting the running dilaton solutions coincide with the so called ‘subtracted geometries’, while those obtained from the uplift of the constant dilaton ones are new.
Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments
2017-06-13
reference frames enable a system designer to describe the position of any sensor or platform at any point of time. This section introduces the...analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In...structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values ( f ). A systems designer must
Performance testing of 3D point cloud software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-10-01
LiDAR systems are being used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirements involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and CPU usage. However, it is not as strong as commercial suites in the working set and commit size tests.
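As a rough illustration of the kind of performance test described above, the sketch below times the loading of an ASCII point cloud and records the change in resident memory; the file name, format, and the use of psutil are assumptions, not part of the published methodology.

```python
# Minimal sketch of a load-time / working-set benchmark for point cloud software.
# The input file name and ASCII XYZ format are assumed for illustration only.
import time
import numpy as np
import psutil

def benchmark_load(path):
    proc = psutil.Process()
    rss_before = proc.memory_info().rss          # resident set size before loading
    t0 = time.perf_counter()
    points = np.loadtxt(path, usecols=(0, 1, 2)) # load x, y, z columns
    elapsed = time.perf_counter() - t0
    rss_after = proc.memory_info().rss
    return {
        "points": len(points),
        "load_time_s": elapsed,
        "working_set_mb": (rss_after - rss_before) / 2**20,
    }

if __name__ == "__main__":
    print(benchmark_load("cloud.xyz"))           # hypothetical input file
```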
The prediction of the flash point for binary aqueous-organic solutions.
Liaw, Horng-Jang; Chiu, Yi-Yu
2003-07-18
A mathematical model, which may be used for predicting the flash point of aqueous-organic solutions, has been proposed and subsequently verified by experimentally-derived data. The results reveal that this model is able to precisely predict the flash point over the entire composition range of binary aqueous-organic solutions by way of utilizing the flash point data pertaining to the flammable component. The derivative of flash point with respect to composition (the effect of solution composition upon flash point) can be applied to process safety design/operation in order to identify whether the dilution of a flammable liquid solution with water is effective in reducing the fire and explosion hazard of the solution at a specified composition. Such a derivative equation was thus derived based upon the flash point prediction model referred to above and then verified by the application of experimentally-derived data.
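One common formulation of this type of model, not necessarily the authors' exact equations, takes the mixture flash point as the temperature at which the flammable component's partial vapour pressure equals its pure-component saturation pressure at the pure flash point, x·γ·Psat(T) = Psat(Tfp). A minimal sketch under simplifying assumptions (ideal liquid phase, illustrative Antoine constants):

```python
# Sketch of a flash-point model for a binary aqueous-organic solution, under
# simplifying assumptions: ideal liquid phase (activity coefficient = 1) and
# illustrative (assumed) Antoine constants. Not the paper's parameterization.
from scipy.optimize import brentq

def p_sat(T_c, A, B, C):
    """Antoine equation, pressure in mmHg, temperature in deg C."""
    return 10 ** (A - B / (T_c + C))

# Assumed constants and pure-component flash point for a generic flammable
# solute; replace with measured values for a real system.
A, B, C = 8.204, 1642.9, 230.3
T_FP_PURE = 13.0  # deg C, assumed pure-component flash point

def mixture_flash_point(x_flammable, gamma=1.0):
    """Solve x * gamma * Psat(T) = Psat(T_fp_pure) for T in deg C."""
    target = p_sat(T_FP_PURE, A, B, C)
    f = lambda T: x_flammable * gamma * p_sat(T, A, B, C) - target
    return brentq(f, -20.0, 120.0)  # bracket assumed to contain the root

print(mixture_flash_point(0.5))  # dilution -> higher predicted flash point
```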
NASA Astrophysics Data System (ADS)
Bradley, A. M.; Segall, P.
2012-12-01
We describe software, in development, to calculate elastostatic displacement Green's functions and their derivatives for point and polygonal dislocations in three-dimensional homogeneous elastic layers above an elastic or a viscoelastic halfspace. The steps to calculate a Green's function for a point source at depth zs are as follows. 1. A grid in wavenumber space is chosen. 2. A six-element complex rotated stress-displacement vector x is obtained at each grid point by solving a two-point boundary value problem (2P-BVP). If the halfspace is viscoelastic, the solution is inverse Laplace transformed. 3. For each receiver, x is propagated to the receiver depth zr (often zr = 0) and then, in step 4, inverse Fourier transformed, with the Fourier component corresponding to the receiver's horizontal position. 5. The six elements are linearly combined into displacements and their derivatives. The dominant work is in step 2. The grid is chosen to represent the wavenumber-space solution with as few points as possible. First, the wavenumber space is transformed to increase sampling density near 0 wavenumber. Second, a tensor-product grid of Chebyshev points of the first kind is constructed in each quadrant of the transformed wavenumber space. Moment-tensor-dependent symmetries further reduce work. The numerical solution of the 2P-BVP problem in step 2 involves solving a linear equation A x = b. Half of the elements of x are of geophysical interest; the subset depends on whether zr ≤ zs. Denote these x̂. As wavenumber k increases, x̂ can become inaccurate in finite precision arithmetic for two reasons: 1. The condition number of A becomes too large. 2. The norm-wise relative error (NWRE) in x̂ is large even though it is small in x. To address this problem, a number of researchers have used determinants to obtain x. This may be the best approach for 6-dimensional or smaller 2P-BVP, where the combinatorial increase in work is still moderate. But there is an alternative. Let Ā be the matrix after scaling its columns to unit infinity norm and x̄ the scaled x. If Ā is well conditioned, as it often is in (visco)elastostatic problems, then using determinants is unnecessary. Multiply each side of A x = b by a propagator matrix to the computation depth zcd prior to storing the matrix in finite precision. zcd is determined by the rule that zr and zcd must be on opposite sides of zs. Let the resulting matrix be A(zcd). Three facts imply that this rule controls the NWRE in x̂: 1. Diagonally scaling a matrix changes the accuracy of an element of the solution by about one ULP (unit in the last place). 2. If the NWRE of x̄ is small, then the largest elements are accurate. 3. zcd controls the magnitude of elements in x̄. In step 4, to avoid numerically Fourier transforming the (nearly) non-square-integrable functions that arise when the receiver and source depths are (nearly) the same, a function is divided into an analytical part and a numerical part that goes quickly to 0 as k → ∞. Our poster will describe these calculations, present a preliminary interface to a C-language package in development, and show some physical results.
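The column-scaling idea described above can be sketched in a few lines: scale the columns of the 2P-BVP matrix to unit infinity norm, solve the scaled system, and compare condition numbers. The matrix below is a random stand-in for an actual wavenumber-domain system, so the numbers are illustrative only.

```python
# Sketch of the column-scaling diagnostic: scale the columns of A to unit
# infinity norm and compare the condition numbers of A and the scaled matrix.
import numpy as np

def column_scaled(A):
    scale = np.max(np.abs(A), axis=0)      # infinity norm of each column
    return A / scale, scale

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) * 10.0 ** rng.integers(-6, 6, size=6)  # badly scaled columns
b = rng.normal(size=6)

A_bar, scale = column_scaled(A)
x_bar = np.linalg.solve(A_bar, b)          # solve the scaled system
x = x_bar / scale                          # undo the scaling to recover x

print("cond(A)     =", np.linalg.cond(A, p=np.inf))
print("cond(A_bar) =", np.linalg.cond(A_bar, p=np.inf))
```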
NASA Astrophysics Data System (ADS)
Milić, Ivan; Atanacković, Olga
2014-10-01
State-of-the-art methods in multidimensional NLTE radiative transfer are based on the use of a local approximate lambda operator within either Jacobi or Gauss-Seidel iterative schemes. Here we propose another approach to the solution of 2D NLTE RT problems, Forth-and-Back Implicit Lambda Iteration (FBILI), developed earlier for 1D geometry. In order to present the method and examine its convergence properties we use the well-known instance of the two-level atom line formation with complete frequency redistribution. In the formal solution of the RT equation we employ short characteristics with a two-point algorithm. Using an implicit representation of the source function in the computation of the specific intensities, we compute and store the coefficients of the linear relations J=a+bS between the mean intensity J and the corresponding source function S. The use of iteration factors in the ‘local’ coefficients of these implicit relations in two ‘inward’ sweeps of the 2D grid, along with the update of the source function in the other two ‘outward’ sweeps, leads to a solution four times faster than the Jacobi one. Moreover, the update made in all four consecutive sweeps of the grid leads to an acceleration by a factor of 6-7 compared to the Jacobi iterative scheme.
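For readers unfamiliar with accelerated lambda iteration, the underlying update can be sketched for the two-level atom: the source function S = (1 − ε)J + εB is iterated either classically or with a Jacobi-type local (diagonal) approximate operator. The Λ matrix below is a normalized toy stand-in, not the short-characteristics formal solver used in the paper.

```python
# Toy comparison of classical Lambda iteration and a Jacobi-type accelerated
# update for the two-level atom, S = (1 - eps) * Lambda[S] + eps * B.
# The Lambda operator is an assumed normalized kernel, purely for illustration.
import numpy as np

n, eps = 64, 1e-3
B = np.ones(n)                                # Planck function (normalized)
x = np.linspace(-3.0, 3.0, n)
Lam = np.exp(-np.abs(x[:, None] - x[None, :]))
Lam /= Lam.sum(axis=1, keepdims=True) * 1.05  # row sums < 1 (escaping photons)

def iterate(accelerated, n_iter=200):
    S = B.copy()
    lam_star = np.diag(Lam)                   # local (diagonal) approximate operator
    for _ in range(n_iter):
        J = Lam @ S                           # formal solution: J = Lambda[S]
        residual = (1 - eps) * J + eps * B - S
        if accelerated:
            S = S + residual / (1 - (1 - eps) * lam_star)
        else:
            S = S + residual                  # classical Lambda iteration
    return S

# Both converge to the same S, but the accelerated update needs far fewer sweeps.
print(np.max(np.abs(iterate(True) - iterate(False, n_iter=20000))))
```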
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
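The moment-type equations mentioned at the end can be illustrated with a DerSimonian-Laird-style estimate of the between-method variance and the resulting combined mean; this is a related textbook moment estimator, not the paper's Groebner-basis maximum likelihood solution.

```python
# Sketch of a moment-type estimate of the between-method variance and the
# resulting random-effects combined mean (DerSimonian-Laird style).
import numpy as np

def combine(means, variances):
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                        # fixed-effect weights
    mu_fe = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - mu_fe) ** 2)       # Cochran's Q statistic
    k = len(means)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)     # between-method variance estimate
    w_re = 1.0 / (variances + tau2)            # random-effects weights
    mu_re = np.sum(w_re * means) / np.sum(w_re)
    return mu_re, tau2

# Example: three methods reporting means and within-method variances (assumed values).
print(combine([10.1, 10.6, 9.7], [0.04, 0.09, 0.02]))
```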
Evaluation of several non-reflecting computational boundary conditions for duct acoustics
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Zorumski, William E.; Hodge, Steve L.
1994-01-01
Several non-reflecting computational boundary conditions that meet certain criteria and have potential applications to duct acoustics are evaluated for their effectiveness. The same interior solution scheme, grid, and order of approximation are used to evaluate each condition. Sparse matrix solution techniques are applied to solve the matrix equation resulting from the discretization. Modal series solutions for the sound attenuation in an infinite duct are used to evaluate the accuracy of each non-reflecting boundary condition. The evaluations are performed for sound propagation in a softwall duct, for several sources, sound frequencies, and duct lengths. It is shown that a recently developed nonlocal boundary condition leads to sound attenuation predictions that are considerably more accurate for short ducts. This leads to a substantial reduction in the number of grid points when compared to other non-reflecting conditions.
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. These long period (LP) seismic events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upwards movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Although applying a point source model to synthetic seismograms that represent an extended source process does not recover the real source mechanism, it can still yield apparent moment tensor elements which can then be compared to previous results in the literature. Therefore, this study follows the proposed concepts of Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we will present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double couple source. Furthermore, the best inversion results yield a solution comprising positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when wrongly assuming a point source. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique where the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that can then be temporally and spatially extended.
Practical Use Of It In Traceability In Food Value Chains
NASA Astrophysics Data System (ADS)
Ratcliff, Jon; Boddington, Michael
Traceability is today considered an essential requirement for the food value chain due to the need to provide consumers with accurate information in the event of food safety recalls, to provide assurance with regard to the source and production systems for food products and, in certain countries, to comply with government legislation. Within an individual business, traceability can be quite simple to implement; however, in a global trading market, traceability of the entire supply chain, including logistics, is extremely complex. For this reason IT solutions such as TraceTracker have been developed which not only provide electronic solutions for complete traceability but also allow products to be tracked at any point in the supply chain.
Curcumin based optical sensing of fluoride in organo-aqueous media using irradiation technique
NASA Astrophysics Data System (ADS)
Venkataraj, Roopa; Radhakrishnan, P.; Kailasnath, M.
2017-06-01
The present work describes the degradation of the natural dye Curcumin in organic-aqueous media upon irradiation by a multi-wavelength source of light such as a mercury lamp. The presence of anions in the solution leads to degradation of Curcumin and this degradation is especially enhanced in the case of the fluoride ion. The degradation of Curcumin is investigated by studying the change in its absorption and fluorescence characteristics in organo-aqueous solution upon irradiation. A broad fluoride detection range of 2.3×10⁻⁶ to 2.22×10⁻³ M points to the potential of the method of visible light irradiation enabling aqueous-based sensing of fluoride using Curcumin.
ON THE LAUNCHING AND STRUCTURE OF RADIATIVELY DRIVEN WINDS IN WOLF–RAYET STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ro, Stephen; Matzner, Christopher D., E-mail: ro@astro.utoronto.ca
Hydrostatic models of Wolf–Rayet (WR) stars typically contain low-density outer envelopes that inflate the stellar radii by a factor of several and are capped by a denser shell of gas. Inflated envelopes and density inversions are hallmarks of envelopes that become super-Eddington as they cross the iron-group opacity peak, but these features disappear when mass loss is sufficiently rapid. We re-examine the structures of steady, spherically symmetric wind solutions that cross a sonic point at high optical depth, identifying the physical mechanism through which the outflow affects the stellar structure, and provide an improved analytical estimate for the critical mass-loss rate above which extended structures are erased. Weak-flow solutions below this limit resemble hydrostatic stars even in supersonic zones; however, we infer that these fail to successfully launch optically thick winds. WR envelopes will therefore likely correspond to the strong, compact solutions. We also find that wind solutions with negligible gas pressure are stably stratified at and below the sonic point. This implies that convection is not the source of variability in WR stars, as has been suggested; however, acoustic instabilities provide an alternative explanation. Our solutions are limited to high optical depths by our neglect of Doppler enhancements to the opacity, and do not account for acoustic instabilities at high Eddington factors; yet, they do provide useful insights into WR stellar structures.
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%
Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.
2016-01-01
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
Improved Cluster Method Applied to the InSAR data of the 2007 Piton de la Fournaise eruption
NASA Astrophysics Data System (ADS)
Cayol, V.; Augier, A.; Froger, J. L.; Menassian, S.
2016-12-01
Interpretation of surface displacement induced by reservoirs, whether magmatic, hydrothermal or gaseous, can be done at reduced numerical cost and with little a priori knowledge using cluster methods, where reservoirs are represented by point sources embedded in an elastic half-space. Most of the time, the solution representing the best trade-off between the data fit and the model smoothness (L-curve criterion) is chosen. This study relies on synthetic tests to improve cluster methods in several ways. Firstly, to solve problems involving steep topographies, we construct unit sources numerically. Secondly, we show that the L-curve criterion leads to several plausible solutions where the most realistic are not necessarily the best fitting. We determine that the cross-validation method, with data geographically grouped, is a more reliable way to determine the solution. Thirdly, we propose a new method, based on source ranking according to their contribution and minimization of the Akaike information criterion, to retrieve reservoirs' geometry more accurately and to better reflect the information contained in the data. We show that the solution is robust in the presence of correlated noise and that the reservoir complexity that can be retrieved decreases with increasing noise. We also show that it is inappropriate to use cluster methods for pressurized fractures. Finally, the method is applied to the summit deflation recorded by InSAR after the caldera collapse which occurred at Piton de la Fournaise in April 2007. Comparison with other data indicates that the deflation is probably related to poro-elastic compaction and fluid flow subsequent to the crater collapse.
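The building block of such cluster methods can be sketched as a design matrix of point-source surface displacements and a damped least-squares inversion whose damping is scanned to trace an L-curve. Mogi-type half-space sources and all geometry and noise values below are assumptions; the paper instead constructs unit sources numerically to handle steep topography.

```python
# Minimal sketch of a cluster-style inversion: a design matrix of point-source
# vertical displacements (assumed Mogi sources) and damped least squares.
import numpy as np

NU = 0.25  # Poisson's ratio (assumed)

def mogi_uz(xs, ys, d, x, y):
    """Vertical surface displacement per unit volume change of a Mogi point source."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1.0 - NU) / np.pi * d / (r2 + d ** 2) ** 1.5

# Assumed observation grid and candidate source cluster (metres).
obs_x, obs_y = np.meshgrid(np.linspace(-5e3, 5e3, 20), np.linspace(-5e3, 5e3, 20))
obs_x, obs_y = obs_x.ravel(), obs_y.ravel()
sources = [(xs, ys, 1500.0) for xs in np.linspace(-2e3, 2e3, 5)
                            for ys in np.linspace(-2e3, 2e3, 5)]

G = np.column_stack([mogi_uz(xs, ys, d, obs_x, obs_y) for xs, ys, d in sources])
true_dv = np.full(len(sources), -4.0e4)       # deflation: negative volume changes
d_obs = G @ true_dv + 1e-4 * np.random.default_rng(1).normal(size=obs_x.size)

GtG = G.T @ G
scale = np.trace(GtG) / len(sources)
for factor in (1e-3, 1e-1, 1e1):              # scan the damping to trace an L-curve
    m = np.linalg.solve(GtG + factor * scale * np.eye(len(sources)), G.T @ d_obs)
    print(f"damping={factor:g}  misfit={np.linalg.norm(G @ m - d_obs):.3e}  "
          f"|m|={np.linalg.norm(m):.3e}")
```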
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for the power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
NASA Astrophysics Data System (ADS)
Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.
2016-12-01
It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.
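As a generic illustration of how breakthrough-curve statistics of this kind are extracted (not the paper's closed-form predictive expressions), the sketch below computes the temporal moments of a synthetic curve, i.e. the mean arrival time and arrival-time variance from which late-time dispersion estimates are commonly derived.

```python
# Sketch of temporal-moment post-processing of a breakthrough curve c(t).
# The curve below is synthetic, with an assumed inverse-Gaussian-like shape.
import numpy as np
from scipy.integrate import trapezoid

def temporal_moments(t, c):
    m0 = trapezoid(c, t)                               # zeroth moment (recovered mass)
    mean_t = trapezoid(t * c, t) / m0                  # mean arrival time
    var_t = trapezoid((t - mean_t) ** 2 * c, t) / m0   # variance of arrival times
    return m0, mean_t, var_t

t = np.linspace(1e-3, 60.0, 3000)
c = t ** -1.5 * np.exp(-((10.0 - t) ** 2) / (2.0 * t))
m0, mean_t, var_t = temporal_moments(t, c)
print(f"mean arrival time = {mean_t:.2f}, arrival-time variance = {var_t:.2f}")
```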
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low values of source frequencies, but at higher values of source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than that required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring (domain decomposition) formulation to achieve parallel computation, where different substructures are handled by different parallel processors.
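The generation/assembly/solve pattern described above can be sketched with a toy symmetric system: accumulate element contributions as COO triplets, convert to a compressed sparse format, and call a sparse direct solver. SciPy is used here purely as an illustrative stand-in for the paper's sparse kernels, and the 1D "elements" are assumed for brevity.

```python
# Sketch of sparse assembly (COO triplets summed into CSC) and a direct solve.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n_nodes = 1000
rows, cols, vals = [], [], []
for e in range(n_nodes - 1):                  # 1D chain of 2-node "elements"
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
    dof = [e, e + 1]
    for a in range(2):
        for b in range(2):
            rows.append(dof[a]); cols.append(dof[b]); vals.append(ke[a, b])

K = sp.coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tocsc()
K = K + sp.eye(n_nodes, format="csc") * 1e-3  # shift to keep the system nonsingular
f = np.zeros(n_nodes)
f[0] = 1.0                                    # point source at the first node
u = spsolve(K, f)
print(u[:5])
```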
An approach to detect afterslips in giant earthquakes in the normal-mode frequency band
NASA Astrophysics Data System (ADS)
Tanimoto, Toshiro; Ji, Chen; Igarashi, Mitsutsugu
2012-08-01
An approach to detect afterslips in the source process of giant earthquakes is presented in the normal-mode frequency band (0.3-2.0 mHz). The method is designed to avoid a potential systematic bias problem in the determination of earthquake moment by a typical normal-mode approach. The source of bias is the uncertainties in Q (modal attenuation parameter) which varies by up to about ±10 per cent among published studies. A choice of Q values within this range affects amplitudes in synthetic seismograms significantly if a long time-series of about 5-7 d is used for analysis. We present an alternative time-domain approach that can reduce this problem by focusing on a shorter time span with a length of about 1 d. Application of this technique to four recent giant earthquakes is presented: (1) the Tohoku, Japan, earthquake of 2011 March 11, (2) the 2010 Maule, Chile earthquake, (3) the 2004 Sumatra-Andaman earthquake and (4) the Solomon earthquake of 2007 April 1. The Global Centroid Moment Tensor (GCMT) solution for the Tohoku earthquake explains the normal-mode frequency band quite well. The analysis for the 2010 Chile earthquake indicates that the moment is about 7-10 per cent higher than the moment determined by its GCMT solution but further analysis shows that there is little evidence of afterslip; the deviation in moment can be explained by an increase of the dip angle from 18° in the GCMT solution to 19°. This may be a simple trade-off problem between the moment and dip angle but it may also be due to a deeper centroid in the normal-mode frequency band data, as a deeper source could have a steeper dip angle due to changes in the geometry of the Benioff zone. For the 2004 Sumatra-Andaman earthquake, the five point-source solution by Tsai et al. explains most of the signals but a sixth point-source with long duration improves the fit to the normal-mode frequency band data. The 2007 Solomon earthquake shows that the high-frequency part of our analysis (above 1 mHz) is compatible with the GCMT solution but the low-frequency part requires afterslip to explain the increasing amplitude ratios towards lower frequency. The required slip has a moment of about 19 per cent of the GCMT solution and a rise time of 260 s. The total moments of these earthquakes are 5.31 × 10²² N m (Tohoku), (1.86-1.96) × 10²² N m (Chile), 1.33 × 10²³ N m (Sumatra) and 1.86 × 10²¹ N m (Solomon). The moment magnitudes are 9.08, 8.78-8.79, 9.35 and 8.11, respectively, using Kanamori's original formula relating the moment and the moment magnitude. However, the trade-off problem between the moment and dip angle can modify these estimates for the moment by up to about 40-50 per cent and the corresponding magnitude by ±0.1.
Human performance on the traveling salesman problem.
MacGregor, J N; Ormerod, T
1996-05-01
Two experiments on performance on the traveling salesman problem (TSP) are reported. The TSP consists of finding the shortest path through a set of points, returning to the origin. It appears to be an intransigent mathematical problem, and heuristics have been developed to find approximate solutions. The first experiment used 10-point, the second, 20-point problems. The experiments tested the hypothesis that complexity of TSPs is a function of number of nonboundary points, not total number of points. Both experiments supported the hypothesis. The experiments provided information on the quality of subjects' solutions. Their solutions clustered close to the best known solutions, were an order of magnitude better than solutions produced by three well-known heuristics, and on average fell beyond the 99.9th percentile in the distribution of random solutions. The solution process appeared to be perceptually based.
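The comparison described above is easy to reproduce in miniature: build a nearest-neighbour heuristic tour for a random 10-point problem and compare its length with a sample of random tours. The geometry and sample sizes below are purely illustrative.

```python
# Sketch: nearest-neighbour heuristic tour versus random tours for a 10-point TSP.
import numpy as np

rng = np.random.default_rng(42)
pts = rng.random((10, 2))                    # a 10-point problem on the unit square

def tour_length(order):
    closed = list(order) + [order[0]]        # return to the origin point
    p = pts[closed]
    return float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))

def nearest_neighbour(start=0):
    unvisited, tour = set(range(1, len(pts))), [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[tour[-1]] - pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

nn_len = tour_length(nearest_neighbour())
random_lens = np.array([tour_length(list(rng.permutation(10))) for _ in range(10000)])
print("nearest-neighbour tour length:", round(nn_len, 3))
print("best of 10000 random tours:  ", round(float(random_lens.min()), 3))
print("fraction of random tours beaten:", float(np.mean(nn_len < random_lens)))
```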
A Generic analytical solution for modelling pumping tests in wells intersecting fractures
NASA Astrophysics Data System (ADS)
Dewandel, Benoît; Lanini, Sandra; Lachassagne, Patrick; Maréchal, Jean-Christophe
2018-04-01
The behaviour of transient flow due to pumping in fractured rocks has been studied for at least the past 80 years. Analytical solutions were proposed for solving the issue of a well intersecting and pumping from one vertical, horizontal or inclined fracture in homogeneous aquifers, but their domain of application, even if covering various fracture geometries, was restricted to isotropic or anisotropic aquifers whose potential boundaries had to be parallel or orthogonal to the fracture direction. The issue thus remains unsolved for many field cases: for example, a well intersecting and pumping a fracture in a multilayer or a dual-porosity aquifer, where intersected fractures are not necessarily parallel or orthogonal to aquifer boundaries, where several fractures with various orientations intersect the well, or where pumping occurs not only in fractures but also in the aquifer through the screened interval of the well. Using a mathematical demonstration, we show that integrating the well-known Theis analytical solution (Theis, 1935) along the fracture axis is identical to the equally well-known analytical solution of Gringarten et al. (1974) for a uniform-flux fracture fully penetrating a homogeneous aquifer. This result implies that any existing line- or point-source solution can be used for implementing one or more discrete fractures that are intersected by the well. Several theoretical examples are presented and discussed: a single vertical fracture in a dual-porosity aquifer or in a multi-layer system (with a partially intersecting fracture); one and two inclined fractures in a leaky-aquifer system with pumping either only from the fracture(s), or also from the aquifer between fracture(s) in the screened interval of the well. For the cases with several pumping sources, analytical solutions of the flowrate contribution from each individual source (fractures and well) are presented, and the drawdown behaviour according to the length of the pumped screened interval of the well is discussed. Other advantages of this proposed generic analytical solution are also given. The application of this solution to field data should provide additional field information on fracture geometry, as well as identifying the connectivity between the pumped fractures and other aquifers.
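The central result, that integrating the Theis point-source solution along the fracture axis reproduces the uniform-flux fracture solution, can be checked numerically by superposing Theis drawdowns from point sources distributed along the fracture. All parameter values below are illustrative.

```python
# Numerical sketch: drawdown of a uniform-flux fracture by superposing Theis
# solutions of point sources distributed along the fracture axis.
import numpy as np
from scipy.special import exp1

T, S = 1e-3, 1e-4          # transmissivity (m^2/s) and storativity (assumed)
Q = 1e-3                   # total pumping rate (m^3/s)
xf = 50.0                  # fracture half-length (m)

def theis(r, t, q):
    u = r ** 2 * S / (4.0 * T * t)
    return q / (4.0 * np.pi * T) * exp1(u)   # Theis well function W(u) = E1(u)

def fracture_drawdown(x, y, t, n_src=200):
    xs = np.linspace(-xf, xf, n_src)         # point sources along the fracture axis
    q = Q / n_src                            # uniform-flux assumption
    r = np.hypot(x - xs, y)
    return float(np.sum(theis(r, t, q)))

# Drawdown 10 m from the fracture plane after one day of pumping.
print(fracture_drawdown(0.0, 10.0, 86400.0))
```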
The Mean Curvature of the Influence Surface of Wave Equation With Sources on a Moving Surface
NASA Technical Reports Server (NTRS)
Farassat, F.; Farris, Mark
1999-01-01
The mean curvature of the influence surface of the space-time point (x, t) appears in linear supersonic propeller noise theory and in the Kirchhoff formula for a supersonic surface. Both these problems are governed by the linear wave equation with sources on a moving surface. The influence surface is also called the Sigma-surface in the aeroacoustic literature. This surface is the locus, in a frame fixed to the quiescent medium, of all the points of a radiating surface f(x, t) = 0 whose acoustic signals arrive simultaneously at an observer at position x and at the time t. Mathematically, the Sigma-surface is produced by the intersection of the characteristic conoid of the space-time point (x, t) and the moving surface. In this paper, we derive the expression for the local mean curvature of the Sigma-surface of the space-time point for a moving rigid or deformable surface f(x, t) = 0. This expression is a complicated function of the geometric and kinematic parameters of the surface f(x, t) = 0. Using the results of this paper, the solution of the governing wave equation of high-speed propeller noise radiation, as well as the Kirchhoff formula for a supersonic surface, can be written as a very compact analytic expression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kos, L.; Tskhakaya, D. D.; Jelic, N.
2011-05-15
A plasma-sheath transition analysis requires a reliable mathematical expression for the plasma potential profile Φ(x) near the sheath edge x_s in the limit ε ≡ λ_D/l = 0 (where λ_D is the Debye length and l is a proper characteristic length of the discharge). Such expressions have been explicitly calculated for the fluid model and the singular (cold ion source) kinetic model, where exact analytic solutions for the plasma equation (ε = 0) are known, but not for the regular (warm ion source) kinetic model, where no analytic solution of the plasma equation has ever been obtained. For the latter case, Riemann [J. Phys. D: Appl. Phys. 24, 493 (1991)] only predicted a general formula assuming relatively high ion-source temperatures, i.e., much higher than the plasma-sheath potential drop. Riemann's formula, however, according to him, never was confirmed in explicit solutions of particular models (e.g., that of Bissell and Johnson [Phys. Fluids 30, 779 (1987)] and Scheuer and Emmert [Phys. Fluids 31, 3645 (1988)]) since "the accuracy of the classical solutions is not sufficient to analyze the sheath vicinity" [Riemann, in Proceedings of the 62nd Annual Gaseous Electronic Conference, APS Meeting Abstracts, Vol. 54 (APS, 2009)]. Therefore, for many years, there has been a need for an explicit calculation that might confirm Riemann's general formula regarding the potential profile at the sheath edge in the cases of regular very warm ion sources. Fortunately, we are now able to achieve a very high accuracy of results [see, e.g., Kos et al., Phys. Plasmas 16, 093503 (2009)]. We perform this task by using both the analytic and the numerical method with explicit Maxwellian and "water-bag" ion source velocity distributions. We find the potential profile near the plasma-sheath edge in the whole range of ion source temperatures of general interest to plasma physics, from zero to "practical infinity." Within the limits of "very low" and "relatively high" ion source temperatures, the potential is proportional to the space coordinate raised to the rational powers α = 1/2 and α = 2/3, respectively, while for medium ion source temperatures we find α between these values to be a non-rational number strongly dependent on the ion source temperature. The range of the non-rational power law turns out to be a very narrow one, at the expense of the extension of the α = 2/3 region towards unexpectedly low ion source temperatures.
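A minimal sketch of how such a power-law exponent α would be extracted from a numerical potential profile, assuming synthetic Φ(x) data and a simple log-log regression (not the analytic or kinetic machinery of the paper):

```python
# Sketch: fit the exponent alpha in Phi ~ x**alpha near the sheath edge by
# log-log regression. The profile below is synthetic, for illustration only.
import numpy as np

x = np.logspace(-4, -2, 50)                  # distance from the sheath edge
alpha_true = 0.61                            # e.g. a non-rational intermediate value
noise = 1.0 + 0.01 * np.random.default_rng(0).normal(size=x.size)
phi = 3.2 * x ** alpha_true * noise

slope, intercept = np.polyfit(np.log(x), np.log(phi), 1)
print(f"fitted alpha = {slope:.3f}")
```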
Poynting-Flux-Driven Bubbles and Shocks Around Merging Neutron Star Binaries
NASA Astrophysics Data System (ADS)
Medvedev, M. V.; Loeb, A.
2013-04-01
Merging binaries of compact relativistic objects are thought to be progenitors of short gamma-ray bursts. Because of the strong magnetic field of one or both binary members and the high orbital frequencies, these binaries are strong sources of energy in the form of Poynting flux. The steady injection of energy by the binary forms a bubble filled with matter with a relativistic equation of state, which pushes on the surrounding plasma and can drive a shock wave in it. Unlike the Sedov-von Neumann-Taylor blast wave solution for a point-like explosion, the shock wave here is continuously driven by the ever-increasing pressure inside the bubble. We calculate from first principles the dynamics and evolution of the bubble and the shock surrounding it, demonstrate that it exhibits a finite-time singularity, and find the corresponding analytical solution. We predict that such binaries can be observed as radio sources a few hours before and after the merger.
METHOD AND APPARATUS FOR THE DETECTION OF LEAKS IN PIPE LINES
Jefferson, S.; Cameron, J.F.
1961-11-28
A method is described for detecting leaks in pipe lines carrying fluid. The steps include the following: injecting a radioactive solution into a fluid flowing in the line; flushing the line clear of the radioactive solution; introducing a detector-recorder unit, comprising a radioactivity radiation detector and a recorder which records the detector signal over a time period at a substantially constant speed, into the line in association with a go-devil capable of propelling the detector-recorder unit through the line in the direction of the fluid flow at a substantially constant velocity; placing a series of sources of radioactivity at predetermined distances along the downstream part of the line to make a characteristic signal on the recorder record at intervals corresponding to the location of said sources; recovering the detector-recorder unit at a downstream point along the line; transcribing the recorder record of any radioactivity detected during the travel of the detector-recorder unit in terms of distance along the line. (AEC)
Freezing point depression in model Lennard-Jones solutions
NASA Astrophysics Data System (ADS)
Koschke, Konstantin; Jörg Limbach, Hans; Kremer, Kurt; Donadio, Davide
2015-09-01
Crystallisation of liquid solutions is of utmost importance in a wide variety of processes in materials, atmospheric and food science. Depending on the type and concentration of solutes the freezing point shifts, thus allowing control of the thermodynamics of complex fluids. Here we investigate the basic principles of solute-induced freezing point depression by computing the melting temperature of a Lennard-Jones fluid with low concentrations of solutes, by means of equilibrium molecular dynamics simulations. The effect of solvophilic and weakly solvophobic solutes at low concentrations is analysed, scanning systematically the size and the concentration. We identify the range of parameters that produce deviations from the linear dependence of the freezing point on the molal concentration of solutes, expected for ideal solutions. Our simulations also allow us to link the shifts in coexistence temperature to the microscopic structure of the solutions.
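For reference, the ideal-solution linear law that the simulations probe deviations from is ΔT_f = K_f·b, with cryoscopic constant K_f = R·T_m²·M_solvent/ΔH_fus. The sketch below evaluates it for water as a familiar real-fluid check; the Lennard-Jones system of the paper is in reduced units, so the numbers are not directly comparable.

```python
# Worked example of the ideal freezing-point-depression law, Delta T_f = K_f * b,
# with K_f = R * T_m**2 * M_solvent / dH_fus, evaluated for water as a check.
R = 8.314          # J / (mol K)
T_m = 273.15       # K, melting point of the pure solvent
M = 0.018015       # kg/mol, molar mass of water
dH_fus = 6010.0    # J/mol, enthalpy of fusion of ice

K_f = R * T_m ** 2 * M / dH_fus
print(f"K_f = {K_f:.2f} K kg/mol")           # ~1.86 for water

b = 0.5                                      # molality, mol solute / kg solvent
print(f"ideal freezing point depression: {K_f * b:.2f} K")
```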
Impact Delivery of Reduced Greenhouse Gases on Early Mars
NASA Technical Reports Server (NTRS)
Haberle, R. M.; Zahnle, K.; Barlow, N.
2017-01-01
Reducing greenhouse gases are once again the latest trend in finding solutions to the early Mars climate dilemma. In its current form - as proposed by Ramirez et al. [1], later refined by Wordsworth et al. [2], and confirmed by Ramirez [3] - collision-induced absorptions between CO2-H2 or CO2-CH4 provide enough extra greenhouse power to raise global mean surface temperatures to the melting point of water, provided the atmosphere is thick enough and the reduced gases are abundant enough. To raise surface temperatures significantly by this mechanism, surface pressures must be at least 500 mb and H2 and/or CH4 concentrations must be at or above the several percent level. Both Wordsworth et al. [2] and Ramirez [3] show that the melting point can be reached in atmospheres with 1-2 bars of CO2 and 2-10% H2; smaller concentrations of H2 will suffice if CH4 is also present. If thick weakly reducing atmospheres are the solution to the faint young Sun paradox, then plausible mechanisms must be found to generate and sustain the gases. Possible sources of reducing gases include volcanic outgassing, serpentinization, and impact delivery; sinks include photolysis, oxidation, and escape to space. The viability of the reduced greenhouse hypothesis depends, therefore, on the strength of these sources and sinks.
NASA Astrophysics Data System (ADS)
Corrales, Lia
2015-05-01
X-ray bright quasars might be used to trace dust in the circumgalactic and intergalactic medium through the phenomenon of X-ray scattering, which is observed around Galactic objects whose light passes through a sufficient column of interstellar gas and dust. Of particular interest is the abundance of gray dust larger than 0.1 μm, which is difficult to detect at other wavelengths. To calculate X-ray scattering from large grains, one must abandon the traditional Rayleigh-Gans approximation in favor of the Mie solution. The X-ray scattering optical depth of the universe is ∼1%. This presents a great difficulty for distinguishing dust-scattered photons from the point source image of Chandra, which is currently unsurpassed in imaging resolution. The variable nature of AGNs offers a solution to this problem, as scattered light takes a longer path and thus experiences a time delay with respect to non-scattered light. If an AGN dims significantly (≳ 3 dex) due to a major feedback event, the Chandra point source image will be suppressed relative to the scattering halo, and an X-ray echo or ghost halo may become visible. I estimate the total number of scattering echoes visible by Chandra over the entire sky: N_ech ∼ 10³ (ν_fb/yr⁻¹), where ν_fb is the characteristic frequency of feedback events capable of dimming an AGN quickly.
NASA Astrophysics Data System (ADS)
Radomski, Bartosz; Ćwiek, Barbara; Mróz, Tomasz M.
2017-11-01
The paper presents a multicriteria decision-aid analysis of the choice of a PV installation providing electric energy to a public utility building. From the energy management point of view, electricity obtained from solar radiation has become a crucial renewable energy source. Application of PV installations may prove a profitable solution from the energy, economic and ecological points of view for both existing and newly erected buildings. The featured variants of PV installations have been assessed by multicriteria analysis based on the ANP (Analytic Network Process) method. Technical, economic, energy and environmental criteria have been identified as the main decision criteria. The defined set of decision criteria has an open character and can be modified in the dialogue process between the decision-maker and the expert - in the present case, an expert in planning the development of energy supply systems. The proposed approach has been used to evaluate three variants of PV installation acceptable for an existing educational building located in Poznań, Poland - the building of the Faculty of Chemical Technology, Poznań University of Technology. Multi-criteria analysis based on the ANP method and the Super Decisions calculation software has proven to be an effective tool for energy planning, leading to the indication of the recommended variant of PV installation in existing and newly erected public buildings. The achieved results show the prospects and possibilities of rational renewable energy usage as a comprehensive solution for public utility buildings.
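A core step in AHP/ANP-style analyses is deriving priority weights for the criteria from a pairwise comparison matrix via its principal eigenvector; the sketch below does this for the four criterion groups named above, with comparison values that are assumptions. The full network (ANP) computation is what the Super Decisions software performs.

```python
# Sketch of AHP/ANP-style priority weights from an assumed pairwise comparison
# matrix, via the principal eigenvector, plus a simple consistency index.
import numpy as np

criteria = ["technical", "economic", "energy", "environmental"]
A = np.array([
    [1.0, 2.0, 1.0, 3.0],
    [0.5, 1.0, 0.5, 2.0],
    [1.0, 2.0, 1.0, 3.0],
    [1/3, 0.5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                 # normalized priority weights

ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
for name, weight in zip(criteria, w):
    print(f"{name:13s} {weight:.3f}")
print(f"consistency index = {ci:.3f}")
```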
VizieR Online Data Catalog: The VLBA Extragalactic Proper Motion Catalog (Truebenbach+, 2017)
NASA Astrophysics Data System (ADS)
Truebenbach, A. E.; Darling, J.
2017-11-01
We created our catalog of extragalactic radio proper motions using the 2017a Goddard VLBI global solution. The 2017a solution is computed from more than 30 years of dual-band VLBI observations (1979 August 3 to 2017 March 27). We also observed 28 objects with either no redshift or a "questionable" Optical Characteristic of Astrometric Radio Sources (OCARS; Malkin 2016ARep...60..996M) redshift at the Apache Point Observatory (APO) 3.5m telescope and/or at Gemini North. We conducted observations on the 3.5m telescope at Apache Point Observatory with the Dual Imaging Spectrograph (DIS) from 2015 April 18 to 2016 June 30. We chose two objects for additional observations with the Gemini Multi-Object Spectrograph-North (GMOS-N) at Gemini North Observatory. 2021+317 was observed on 2016 June 26 and 28, while 0420+417 was observed on 2016 November 8 and 26. We also observed 42 radio sources with the Very Long Baseline Array (VLBA) in the X-band (3.6cm/8.3GHz). Our targets had all been previously observed by VLBI. Our VLBA observations were conducted in two campaigns from 2015 September to 2016 January and 2016 October to November. The final extragalactic proper motion catalog (created primarily from archival Goddard VLBI data, with redshifts obtained from OCARS) contains 713 proper motions with average uncertainties of 24μas/yr. (5 data files).
Directory of aerospace safety specialized information sources
NASA Technical Reports Server (NTRS)
Fullerton, E. A.; Rubens, L. S.
1973-01-01
A directory is presented to make available to the aerospace safety community a handbook of organizations and experts in specific, well-defined areas of safety technology. It is designed for the safety specialist as an aid for locating both information sources and individual points of contact (experts) in engineering-related fields. The file covers sources of data in aerospace design and tests, as well as information in hazard and failure cause identification, accident analysis, materials characteristics, and other related subject areas. These 171 organizations and their staff members should, it is hoped, provide technical information in the form of documentation, data and consulting expertise. These will be sources that have assembled and collated their information, so that it will be useful in the solution of engineering problems. One of the goals of the project is to identify organizations in the United States that have, and are willing to share, data of value to the aerospace safety community.
Aureole radiance field about a source in a scattering-absorbing medium.
Zachor, A S
1978-06-15
A technique is described for computing the aureole radiance field about a point source in a medium that absorbs and scatters according to an arbitrary phase function. When applied to an isotropic source in a homogeneous medium, the method uses a double-integral transform which is evaluated recursively to obtain the aureole radiances contributed by successive scattering orders, as in the Neumann solution of the radiative transfer equation. The normalized total radiance field distribution and the variation of flux with field of view and range are given for three wavelengths in the uv and one in the visible, for a sea-level model atmosphere assumed to scatter according to a composite of the Rayleigh and modified Henyey-Greenstein phase functions. These results have application to the detection and measurement of uncollimated uv and visible sources at short ranges in the lower atmosphere.
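The composite phase function referenced above can be sketched as a weighted mix of the Rayleigh and Henyey-Greenstein phase functions, each normalized over the sphere; the weight and asymmetry parameter below are assumptions, and the "modified" form of the paper is not reproduced.

```python
# Sketch of a composite Rayleigh + Henyey-Greenstein scattering phase function,
# with an assumed mixing weight and asymmetry parameter.
import numpy as np

def rayleigh(mu):
    return 3.0 / (16.0 * np.pi) * (1.0 + mu ** 2)

def henyey_greenstein(mu, g=0.7):
    return (1.0 - g ** 2) / (4.0 * np.pi * (1.0 + g ** 2 - 2.0 * g * mu) ** 1.5)

def composite(mu, w_hg=0.9, g=0.7):
    return w_hg * henyey_greenstein(mu, g) + (1.0 - w_hg) * rayleigh(mu)

# Check normalization over the sphere: 2*pi * integral over mu in [-1, 1] ~ 1.
mu = np.linspace(-1.0, 1.0, 200001)
dmu = mu[1] - mu[0]
print(2.0 * np.pi * np.sum(composite(mu)) * dmu)
```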
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campigotto, M.C.; Diaferio, A.; Hernandez, X.
We discuss the phenomenology of gravitational lensing in the purely metric f(χ) gravity, an f(R) gravity where the action of the gravitational field depends on the source mass. We focus on the strong lensing regime in galaxy-galaxy lens systems and in clusters of galaxies. By adopting point-like lenses and using an approximate metric solution accurate to second order of the velocity field v/c, we show how, in the f(χ) = χ^(3/2) gravity, the same light deflection can be produced by lenses with masses smaller than in General Relativity (GR); this mass difference increases with increasing impact parameter and decreasing lens mass. However, for sufficiently massive point-like lenses and small impact parameters, f(χ) = χ^(3/2) and GR yield indistinguishable light deflection angles: this regime occurs both in observed galaxy-galaxy lens systems and in the central regions of galaxy clusters. In the former systems, the GR and f(χ) masses are compatible with the mass of standard stellar populations and little or no dark matter, whereas, on the scales of the core of galaxy clusters, the presence of substantial dark matter is required by our point-like lenses both in GR and in our approximate f(χ) = χ^(3/2) solution. We thus conclude that our approximate metric solution of f(χ) = χ^(3/2) is unable to describe the observed phenomenology of the strong lensing regime without the aid of dark matter.
Avoidance of singularities in asymptotically safe Quantum Einstein Gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kofinas, Georgios; Zarikas, Vasilios
2015-10-30
New general spherically symmetric solutions have been derived with a cosmological “constant” Λ as a source. This Λ term is not constant but it satisfies the properties of the asymptotically safe gravity at the ultraviolet fixed point. The importance of these solutions comes from the fact that they may describe the near to the centre region of black hole spacetimes as this is modified by the Renormalization Group scaling behaviour of the fields. The consistent set of field equations which respect the Bianchi identities is derived and solved. One of the solutions (with conventional sign of temporal-radial metric components) is timelike geodesically complete, and although there is still a curvature divergent origin, this is never approachable by an infalling massive particle which is reflected at a finite distance due to the repulsive origin. Another family of solutions (of both signatures) range from a finite radius outwards, they cannot be extended to the centre of spherical symmetry, and the curvature invariants are finite at the minimum radius.
Decentralized DC Microgrid Monitoring and Optimization via Primary Control Perturbations
NASA Astrophysics Data System (ADS)
Angjelichinoski, Marko; Scaglione, Anna; Popovski, Petar; Stefanovic, Cedomir
2018-06-01
We treat the emerging power systems with direct current (DC) MicroGrids, characterized by a high penetration of power electronic converters. We rely on the power electronics to propose a decentralized solution for autonomous learning of and adaptation to the operating conditions of the DC MicroGrids; the goal is to eliminate the need to rely on an external communication system for this purpose. The solution works within the primary droop control loops and uses only local bus voltage measurements. Each controller is able to estimate (i) the generation capacities of power sources, (ii) the load demands, and (iii) the conductances of the distribution lines. To define a well-conditioned estimation problem, we employ a decentralized strategy where the primary droop controllers temporarily switch between operating points in a coordinated manner, following amplitude-modulated training sequences. We study the use of the estimator in a decentralized solution of the Optimal Economic Dispatch problem. The evaluations confirm the usefulness of the proposed solution for autonomous MicroGrid operation.
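To make the flavour of the local estimation step concrete, the sketch below fits a Thevenin-equivalent model (conductance g and open-circuit voltage e of the rest of the grid) to local voltage/current samples gathered while the droop setpoints follow a training sequence. This is only an illustrative least-squares reading of the idea, not the paper's estimator, and all numerical values are placeholders.

```python
import numpy as np

# Hypothetical local (voltage, current) samples collected while the droop
# setpoints follow an amplitude-modulated training sequence.
rng = np.random.default_rng(0)
g_true, e_true = 2.0, 48.0            # assumed Thevenin conductance [S] and voltage [V]
v = 48.0 + 0.5 * np.sin(np.linspace(0, 8 * np.pi, 200))          # perturbed bus voltage
i = g_true * (e_true - v) + 0.01 * rng.standard_normal(v.size)   # measured local current

# The model i = g*e - g*v is linear in the parameters (g*e, g); fit by least squares.
A = np.column_stack([np.ones_like(v), -v])
(ge, g), *_ = np.linalg.lstsq(A, i, rcond=None)
print(f"estimated g = {g:.3f} S, e = {ge / g:.2f} V")
```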
Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms
NASA Astrophysics Data System (ADS)
Kumar, K.; Ledoux, H.; Stoter, J.
2016-06-01
Point cloud data are an important source for 3D geoinformation. Modern day 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, which is confirmed by the initial implementations in Oracle Spatial SDO PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e., we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face and edge based) and compact (star based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3) with a point density of over 10 points/m². The PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
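As a minimal illustration of the compact, star-based representation discussed above, the Python sketch below builds the star of each vertex from a list of triangles; a real DBMS implementation would store ordered (counter-clockwise) stars in table rows, which is not shown here, and the tiny triangulation is hypothetical.

```python
from collections import defaultdict

def build_stars(triangles):
    """Build a star-based TIN: for each vertex, the set of its incident
    vertices (its 'star').  Triangles are given as triples of vertex ids."""
    star = defaultdict(set)
    for a, b, c in triangles:
        star[a].update((b, c))
        star[b].update((a, c))
        star[c].update((a, b))
    return star

# Tiny hypothetical TIN of four triangles sharing vertex 0.
tin = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
stars = build_stars(tin)
print(stars[0])   # neighbours of vertex 0: {1, 2, 3, 4}
# A production star-based structure keeps each star in counter-clockwise order,
# so that the triangles incident to a vertex are implicit in consecutive pairs.
```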
Viscous remanent magnetization model for the Broken Ridge satellite magnetic anomaly
NASA Technical Reports Server (NTRS)
Johnson, B. D.
1985-01-01
An equivalent source model solution of the satellite magnetic field over Australia obtained by Mayhew et al. (1980) showed that the satellite anomalies could be related to geological features in Australia. When the processing and selection of the Magsat data over the Australian region had progressed to the point where interpretation procedures could be initiated, it was decided to start by attempting to model the Broken Ridge satellite anomaly, which represents one of the very few relatively isolated anomalies in the Magsat maps, with an unambiguous source region. Attention is given to details concerning the Broken Ridge satellite magnetic anomaly, the modeling method used, the Broken Ridge models, modeling results, and characteristics of magnetization.
Activity measurements of 55Fe by two different methods
NASA Astrophysics Data System (ADS)
da Cruz, Paulo A. L.; Iwahara, Akira; da Silva, Carlos J.; Poledna, Roberto; Loureiro, Jamir S.; da Silva, Monica A. L.; Ruzzarin, Anelise
2018-03-01
A calibrated germanium detector and the CIEMAT/NIST liquid scintillation method were used in the standardization of a 55Fe solution from a BIPM key comparison. Commercial cocktails were used in source preparation for activity measurements with the CIEMAT/NIST method, and the measurements were performed in a liquid scintillation counter. In the germanium counting method, standard point sources were prepared to obtain the atomic number versus efficiency curve of the detector, from which the efficiency for the 5.9 keV KX-ray of 55Fe was obtained by interpolation. The activity concentrations obtained were 508.17 ± 3.56 kBq/g and 509.95 ± 16.20 kBq/g for the CIEMAT/NIST and germanium methods, respectively.
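A minimal sketch of the interpolation step is given below; it uses an energy-versus-efficiency calibration with placeholder values (the paper quotes an atomic-number-versus-efficiency curve, and none of its actual calibration data are reproduced here).

```python
import numpy as np

# Hypothetical calibration: photon energy [keV] vs full-energy-peak efficiency,
# measured with standard point sources (placeholder values, not the paper's).
energy_kev = np.array([4.5, 5.4, 6.4, 8.0, 10.5])
efficiency = np.array([0.012, 0.015, 0.017, 0.018, 0.016])

# Interpolate the efficiency at the 5.9 keV KX-ray of 55Fe.
eff_59 = np.interp(5.9, energy_kev, efficiency)
print(f"interpolated efficiency at 5.9 keV: {eff_59:.4f}")

# Schematically, the activity concentration then follows as
# A = N / (eff * I * t * m) for net peak counts N, emission intensity I,
# live time t [s] and solution mass m [g].
```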
A priori Estimates for 3D Incompressible Current-Vortex Sheets
NASA Astrophysics Data System (ADS)
Coulombel, J.-F.; Morando, A.; Secchi, P.; Trebeschi, P.
2012-04-01
We consider the free boundary problem for current-vortex sheets in ideal incompressible magneto-hydrodynamics. It is known that current-vortex sheets may be at most weakly (neutrally) stable due to the existence of surface wave solutions to the linearized equations. The existence of such waves may yield a loss of derivatives in the energy estimate of the solution with respect to the source terms. However, under a suitable stability condition satisfied at each point of the initial discontinuity and a flatness condition on the initial front, we prove an a priori estimate in Sobolev spaces for smooth solutions with no loss of derivatives. The result of this paper gives some hope for proving the local existence of smooth current-vortex sheets without resorting to a Nash-Moser iteration. Such a result would be a rigorous confirmation of the stabilizing effect of the magnetic field on Kelvin-Helmholtz instabilities, which is well known in astrophysics.
NASA Astrophysics Data System (ADS)
Pada Das, Krishna; Roy, Prodip; Ghosh, Subhabrata; Maiti, Somnath
This paper deals with an eco-epidemiological model in which disease circulates through the predator species. Disease can circulate in the predator species both by contact and through external sources. Here, we discuss the role of an external source of infection, along with nutritional value, in the system dynamics. To establish our findings, we carry out local and global stability analyses of the equilibrium points, together with a Hopf bifurcation analysis of the interior equilibrium point. The ecological and disease basic reproduction numbers (basic reproductive ratios) are obtained, and the community structure of the particular system is analyzed with their help. We further pay attention to the chaotic dynamics produced by contact-driven disease circulation in the predator species. Our numerical simulations reveal that the eco-epidemiological system without an external source of infection exhibits chaotic dynamics as the force of infection due to contact increases, whereas in the presence of an external source of infection it exhibits stable solutions. It is also observed that nutritional value can prevent chaotic dynamics. We conclude that chaotic dynamics can be controlled by the external source of infection as well as by nutritional value. We apply basic tools of nonlinear dynamics, such as the Poincaré section and the maximum Lyapunov exponent, to investigate the chaotic behavior of the system.
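The maximum Lyapunov exponent mentioned above can be estimated with the classical two-trajectory (Benettin) renormalisation method. The sketch below applies it to a hypothetical three-species ODE that merely stands in for the paper's eco-epidemiological model; all parameter values and initial conditions are chosen for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical eco-epidemiological ODE (prey x, susceptible predator y,
# infected predator z); the parameters are illustrative, not the paper's.
def rhs(t, u, lam=0.1):
    x, y, z = u
    dx = x * (1 - x) - 0.8 * x * (y + z)
    dy = 0.6 * x * y - 0.1 * y - lam * y * z - 0.05 * y   # lam: contact infection
    dz = lam * y * z + 0.02 * y - 0.3 * z                 # 0.02*y: external source
    return [dx, dy, dz]

def max_lyapunov(u0, t_total=500.0, dt=1.0, d0=1e-8):
    """Benettin two-trajectory estimate of the largest Lyapunov exponent."""
    u, v = np.array(u0, float), np.array(u0, float) + d0
    s = 0.0
    for _ in range(int(t_total / dt)):
        u = solve_ivp(rhs, (0, dt), u, rtol=1e-8, atol=1e-10).y[:, -1]
        v = solve_ivp(rhs, (0, dt), v, rtol=1e-8, atol=1e-10).y[:, -1]
        d = np.linalg.norm(v - u)
        s += np.log(d / d0)
        v = u + (v - u) * d0 / d        # renormalise the separation
    return s / t_total

print("estimated lambda_max ~", max_lyapunov([0.5, 0.3, 0.1]))
```

A positive estimate indicates chaos, a non-positive one regular dynamics; in practice the estimate should be checked against the Poincaré section as the abstract suggests.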
Bayat, Pouriya; Rezai, Pouya
2018-05-21
One of the common operations in sample preparation is to separate specific particles (e.g. target cells, embryos or microparticles) from non-target substances (e.g. bacteria) in a fluid and to wash them into clean buffers for further processing like detection (called solution exchange in this paper). For instance, solution exchange is widely needed in preparing fluidic samples for biosensing at the point-of-care and point-of-use, but still conducted via the use of cumbersome and time-consuming off-chip analyte washing and purification techniques. Existing small-scale and handheld active and passive devices for washing particles are often limited to very low throughputs or require external sources of energy. Here, we integrated Dean flow recirculation of two fluids in curved microchannels with selective inertial focusing of target particles to develop a microfluidic centrifuge device that can isolate specific particles (as surrogates for target analytes) from bacteria and wash them into a clean buffer at high throughput and efficiency. We could process micron-size particles at a flow rate of 1 mL min⁻¹ and achieve throughputs higher than 10⁴ particles per second. Our results reveal that the device is capable of singleplex solution exchange of 11 μm and 19 μm particles with efficiencies of 86 ± 2% and 93 ± 0.7%, respectively. A purity of 96 ± 2% was achieved in the duplex experiments where 11 μm particles were isolated from 4 μm particles. Application of our device in biological assays was shown by performing duplex experiments where 11 μm or 19 μm particles were isolated from an Escherichia coli bacterial suspension with purities of 91-98%. We envision that our technique will have applications in point-of-care devices for simultaneous purification and solution exchange of cells and embryos from smaller substances in high-volume suspensions at high throughput and efficiency.
Synthesis of nanocrystalline CeO{sub 2} particles by different emulsion methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supakanapitak, Sunisa; Boonamnuayvitaya, Virote; Jarudilokkul, Somnuk, E-mail: somnuk.jar@kmutt.ac.th
2012-05-15
Cerium oxide nanoparticles were synthesized using three different emulsion methods: (1) reversed micelle (RM); (2) emulsion liquid membrane (ELM); and (3) colloidal emulsion aphrons (CEAs). Ammonium cerium nitrate and polyoxyethylene-4-lauryl ether (PE4LE) were used as the cerium and surfactant sources in this study. The powder was calcined at 500 °C to obtain CeO₂. The effect of the preparation procedure on the particle size, surface area, and morphology of the prepared powders was investigated. The obtained powders are highly crystalline and nearly spherical in shape. The average particle size and the specific surface area of the powders from the three methods were in the range of 4-10 nm and 5.32-145.73 m²/g, respectively. The CeO₂ powders synthesized by the CEAs method have the smallest average particle size and the highest surface area. Finally, CeO₂ prepared by the CEAs method using different cerium sources and surfactant types was studied. It was found that the surface tension of the cerium solution and the type of surfactant affect the particle size of CeO₂. Graphical Abstract: The emulsion droplet size distribution and the TEM images of CeO₂ prepared by different methods: reversed micelle (RM), emulsion liquid membrane (ELM) and colloidal emulsion aphrons (CEAs). Highlights: Nano-sized CeO₂ was successfully prepared by three different emulsion methods. The colloidal emulsion aphrons method produces CeO₂ with the highest surface area. The surface tension of the cerium solution has a slight effect on the particle size. The size control could be interpreted in terms of the adsorption of the surfactant.
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.
2011-09-01
Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions have been discussed and compared, taking into account their advantages and drawbacks from both the physics and engineering points of view.
NASA Astrophysics Data System (ADS)
Makoveeva, Eugenya V.; Alexandrov, Dmitri V.
2018-01-01
This article is concerned with a new analytical description of the nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of the non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue `From atomistic interfaces to dendritic patterns'.
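For orientation, the leading-order saddle-point (Laplace) approximation used for integrals of this type is the standard result below; the paper's specific integrand and kinetics are not reproduced here, so this is only the generic form.

\[
I(p) = \int_a^b g(t)\, e^{p f(t)}\, dt \;\approx\; g(t_0)\, e^{p f(t_0)} \sqrt{\frac{2\pi}{p\,\lvert f''(t_0)\rvert}}, \qquad f'(t_0) = 0,\; f''(t_0) < 0,\; p \to \infty,
\]

where t_0 is the interior maximum of f on (a, b) and p is the large parameter.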
Computation of transonic viscous-inviscid interacting flow
NASA Technical Reports Server (NTRS)
Whitfield, D. L.; Thomas, J. L.; Jameson, A.; Schmidt, W.
1983-01-01
Transonic viscous-inviscid interaction is considered using the Euler and inverse compressible turbulent boundary-layer equations. Certain improvements in the inverse boundary-layer method are mentioned, along with experiences in using various Runge-Kutta schemes to solve the Euler equations. Numerical conditions imposed on the Euler equations at a surface for viscous-inviscid interaction using the method of equivalent sources are developed, and numerical solutions are presented and compared with experimental data to illustrate essential points. Previously announced in STAR N83-17829
A computer program to evaluate optical systems
NASA Technical Reports Server (NTRS)
Innes, D.
1972-01-01
A computer program is used to evaluate a 25.4 cm X-ray telescope at a field angle of 20 minutes of arc by geometrical analysis. The object is regarded as a point source of electromagnetic radiation, and the optical surfaces are treated as boundary conditions in the solution of the electromagnetic wave propagation equation. The electric field distribution is then determined in the region of the image and the intensity distribution inferred. A comparison of wave analysis results and photographs taken through the telescope shows excellent agreement.
Potential and Innovations in Rooftop Photovoltaics
NASA Astrophysics Data System (ADS)
Bierman, Ben
2011-11-01
Photovoltaic technology has reached a point where its cost and capability make it one of a handful of carbon-free sources of electrical energy that could meet a meaningful fraction of US energy demand. In this paper we will first compare Photovoltaics with several other carbon free energy technologies, then look at the economics of Solyndra's rooftop photovoltaic solution as an example of the current state of the art, as well as the market dynamics that have resulted in dramatically faster adoption in Germany vs. the United States.
Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre
2013-01-01
The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N20-P20 somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are then obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N20-P20 SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N20-P20 SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N20-P20 SEP source localization.
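A minimal sketch of the single-dipole simplex fit is shown below; it replaces the realistic BEM transfer matrices with the analytic potential of a dipole in an infinite homogeneous conductor and uses fabricated electrode positions, so it only illustrates the optimisation step, not the study's forward model.

```python
import numpy as np
from scipy.optimize import minimize

SIGMA = 0.33  # assumed conductivity [S/m]

def forward(r_dip, q, electrodes):
    """Potential of a current dipole q at r_dip in an infinite homogeneous
    conductor -- a placeholder for the realistic BEM transfer matrices."""
    d = electrodes - r_dip
    return (d @ q) / (4 * np.pi * SIGMA * np.linalg.norm(d, axis=1) ** 3)

rng = np.random.default_rng(1)
electrodes = rng.normal(size=(32, 3))
electrodes *= 0.09 / np.linalg.norm(electrodes, axis=1, keepdims=True)  # ~9 cm sphere
true_pos = np.array([0.03, 0.00, 0.05])        # hypothetical dipole position [m]
true_q   = np.array([0.0, 1.0, 0.0])           # moment in arbitrary units
measured = forward(true_pos, true_q, electrodes)

def cost(p):                                    # p = [x, y, z, qx, qy, qz]
    return np.sum((forward(p[:3], p[3:], electrodes) - measured) ** 2)

x0 = np.r_[true_pos + 0.02, 0.5 * true_q]       # start nearby; simplex search is local
fit = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 20000})
print("recovered dipole position [m]:", np.round(fit.x[:3], 4))
```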
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity and directivity, etc., are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on such functions. This parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude M_L ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
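The Bayesian sampling step can be illustrated with a plain random-walk Metropolis sampler over a discretised, non-negative source time function; the sketch below uses a synthetic Green function and synthetic data, and does not reproduce the paper's pseudo-spectral parameterization or empirical Green function technique.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: the seismogram is the Green function convolved with the
# source time function (STF); here both are short synthetic vectors.
green = np.exp(-np.arange(40) / 6.0)                        # placeholder Green function
stf_true = np.maximum(0, np.sin(np.linspace(0, np.pi, 20)))
data = np.convolve(green, stf_true)[:50] + 0.02 * rng.standard_normal(50)

def misfit(stf):
    return np.sum((np.convolve(green, stf)[:50] - data) ** 2)

def metropolis(n_iter=20000, step=0.02, noise_var=0.02 ** 2):
    stf = np.full(20, 0.5)                                   # initial non-negative model
    samples, m_old = [], misfit(stf)
    for _ in range(n_iter):
        prop = np.abs(stf + step * rng.standard_normal(stf.size))  # positivity by reflection
        m_new = misfit(prop)
        # Metropolis acceptance for a Gaussian likelihood, done in log space.
        if np.log(rng.random()) < -(m_new - m_old) / (2 * noise_var):
            stf, m_old = prop, m_new
        samples.append(stf.copy())
    return np.array(samples[n_iter // 2:])                   # discard burn-in

post = metropolis()
print("posterior mean STF:", np.round(post.mean(axis=0), 2))
print("posterior std  STF:", np.round(post.std(axis=0), 2))
```

The posterior standard deviations play the role of the a posteriori error estimates discussed above.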
"Closing the Loop": Overcoming barriers to locally sourcing food in Fort Collins, Colorado
NASA Astrophysics Data System (ADS)
DeMets, C. M.
2012-12-01
Environmental sustainability has become a focal point for many communities in recent years, and restaurants are seeking creative ways to become more sustainable. As many chefs realize, sourcing food locally is an important step towards sustainability and towards building a healthy, resilient community. Review of literature on sustainability in restaurants and the local food movement revealed that chefs face many barriers to sourcing their food locally, but that there are also many solutions for overcoming these barriers that chefs are in the early stages of exploring. Therefore, the purpose of this research is to identify barriers to local sourcing and investigate how some restaurants are working to overcome those barriers in the city of Fort Collins, Colorado. To do this, interviews were conducted with four subjects who guide purchasing decisions for restaurants in Fort Collins. Two of these restaurants have created successful solutions and are able to source most of their food locally. The other two are interested in and working towards sourcing locally but have not yet been able to overcome barriers, and therefore only source a few local items. Findings show that there are four barriers and nine solutions commonly identified by each of the subjects. The research found differences between those who source most of their food locally and those who have not made as much progress in local sourcing. Based on these results, two solution flowcharts were created, one for primary barriers and one for secondary barriers, for restaurants to assess where they are in the local food chain and how they can more successfully source food locally. As there are few explicit connections between this research question and climate change, it is important to consider the implicit connections that motivate and justify this research. The question of whether or not greenhouse gas emissions are lower for locally sourced food is a topic of much debate, and while there are major developments for quantitatively determining a generalized answer, it is "currently impossible to state categorically whether or not local food systems emit fewer greenhouse gases than non-local food systems" (Edwards-Jones et al, 2008). Even so, numerous researchers have shown that "83 percent of emissions occur before food even leaves the farm gate" (Weber and Matthews, Garnett, cited in DeWeerdt, 2011); while this doesn't provide any information in terms of local vs. non-local, it is significant when viewed in light of the fact that local farmers tend to have much greater transparency and accountability in their agricultural practices. In other words, "a farmer who sells in the local food economy might be more likely to adopt or continue sustainable practices in order to meet…customer demand" (DeWeerdt, 2011), among other reasons such as environmental concern and desire to support the local economy (DeWeerdt, 2009). In identifying solutions to barriers to locally sourcing food, this research will enable restaurants to overcome these barriers and source their food locally, thereby supporting farmers and their ability to maintain sustainable practices.
Li, Li-Guan; Yin, Xiaole; Zhang, Tong
2018-05-24
Antimicrobial resistance (AMR) is a worldwide public health concern. Current widespread AMR pollution poses a major challenge to accurately disentangling the source-sink relationship, which is further confounded by point and non-point sources, as well as by endogenous and exogenous cross-reactivity under complicated environmental conditions. Because of their limited capability to identify the source-sink relationship within a quantitative framework, traditional source-tracking methods based on antibiotic resistance gene (ARG) signatures are hardly a practical solution. By combining broad-spectrum ARG profiling with the machine-learning classifier SourceTracker, we present a novel way to address this question in the era of high-throughput sequencing. Its potential for broad application was first validated on 656 global-scale samples covering diverse environmental types (e.g., human/animal gut, wastewater, soil, ocean) and broad geographical regions (e.g., China, USA, Europe, Peru). Its potential and limitations in source prediction, as well as the effect of parameter adjustment, were then rigorously evaluated using artificial configurations with representative source proportions. When SourceTracker was applied in region-specific analysis, excellent performance was achieved by ARG profiles in two sample types with obviously different source compositions, i.e., the influent and effluent of a wastewater treatment plant. Two environmental metagenomic datasets along an anthropogenic interference gradient further supported its potential for practical application. To complement general-profile-based source tracking in distinguishing continuous gradient pollution, a few generalist and specialist indicator ARGs across ecotypes were identified in this study. We demonstrate for the first time that the developed source-tracking platform, when coupled with proper experimental design and efficient metagenomic analysis tools, has significant implications for assessing AMR pollution. Based on predicted source contributions, risk ranking of different sources of ARG dissemination becomes possible, thereby paving the way for establishing priorities in mitigating ARG spread and designing effective control strategies.
Momentum and energy transport by waves in the solar atmosphere and solar wind
NASA Technical Reports Server (NTRS)
Jacques, S. A.
1977-01-01
The fluid equations for the solar wind are presented in a form which includes the momentum and energy flux of waves in a general and consistent way. The concept of conservation of wave action is introduced and is used to derive expressions for the wave energy density as a function of heliocentric distance. The explicit form of the terms due to waves in both the momentum and energy equations are given for radially propagating acoustic, Alfven, and fast mode waves. The effect of waves as a source of momentum is explored by examining the critical points of the momentum equation for isothermal spherically symmetric flow. We find that the principal effect of waves on the solutions is to bring the critical point closer to the sun's surface and to increase the Mach number at the critical point. When a simple model of dissipation is included for acoustic waves, in some cases there are multiple critical points.
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using the Canny edge extractor and the Hough transform, and compared with the current model edges for necessary refinement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the combined reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
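The line-extraction step maps directly onto standard OpenCV calls; the sketch below runs the Canny detector followed by the probabilistic Hough transform and keeps near-horizontal and near-vertical segments as candidate model edges. The image path and all thresholds are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Load a facade photograph (path is a placeholder) and extract strong lines.
img = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=50, threshold2=150, apertureSize=3)

# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)

# Keep only near-vertical/near-horizontal segments, the likely candidates
# for comparison with the facade model edges.
kept = []
for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
    if min(angle, 180 - angle) < 10 or abs(angle - 90) < 10:
        kept.append((x1, y1, x2, y2))
print(f"{len(kept)} candidate model-edge segments")
```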
All-fiber intensity bend sensor based on photonic crystal fiber with asymmetric air-hole structure
NASA Astrophysics Data System (ADS)
Budnicki, Dawid; Szostkiewicz, Lukasz; Szymanski, Michal O.; Ostrowski, Lukasz; Holdynski, Zbigniew; Lipinski, Stanislaw; Murawski, Michal; Wojcik, Grzegorz; Makara, Mariusz; Poturaj, Krzysztof; Mergo, Pawel; Napierala, Marek; Nasilowski, Tomasz
2017-10-01
Monitoring the geometry of a moving element is a crucial task, for example in robotics. Robots equipped with a fiber bend sensor integrated in their arms can be a promising solution for medicine, physiotherapy and also for applications in computer games. We report an all-fiber intensity bend sensor, which is based on a microstructured multicore optical fiber. It allows measurement of the bending radius as well as the bending orientation. The reported solution has a special air-hole structure which makes the sensor only bend-sensitive. Our solution is an intensity based sensor, which measures the power transmitted along the fiber as influenced by bending. The sensor is based on a multicore fiber with a special air-hole structure that allows detection of the bending orientation in a range of 360°. Each core in the multicore fiber is sensitive to bending in a specified direction. The principle behind the sensor operation is to differentiate the confinement loss of the fundamental mode propagating in each core. Thanks to the received power differences one can distinguish not only the bend direction but also its amplitude. The multicore fiber is designed to utilize the most common light sources that operate at 1.55 μm, thus ensuring high stability of operation. The sensitivity of the proposed solution is equal to 29.4 dB/cm and the accuracy of the bend direction for the fiber end point is up to 5 degrees for a 15 cm fiber length. Such sensitivity allows end-point detection with millimeter precision.
MSWT-01, flood disaster water treatment solution from common ideas
NASA Astrophysics Data System (ADS)
Ananto, Gamawan; Setiawan, Albertus B.; Z, Darman M.
2013-06-01
Indonesia has many places with potential for flood disasters, where clean water problems are faced. Various programs are regularly initiated by the government, corporate CSR efforts, and sporadic community actions to provide clean water, each with its own advantages and disadvantages. For instance, one solution may be easy to operate but not provide adequate capacity, whereas another performs well but is more costly. This situation inspired the development of a water treatment machine that could serve as an alternative. There are many methods to choose from, whether simple, intermediate or high technology, depending on the quality of the input water source and the required output quality. MSWT, Mobile Surface Water Treatment, is a design for raw water in flood areas, basically made for 1 m³ per hour. This water treatment design is adopted from a combination of existing technologies and the related literature. Using common ideas, the highlight is how to package such a modular process into an elegant, compact design, equipped with a mobile feature to make operation easier. Through prototype-level experimental trials, the machine is capable of producing clean water suitable for sanitation and cooking/drinking purposes even when using a contaminated water input source. From the investment point of view, such a machine could also be treated as an asset that will be used from time to time when needed, instead of being built for a one-off project approach.
NASA Astrophysics Data System (ADS)
Rose, Seth
2007-07-01
A comprehensive network of stream data (n = 50) was used to assess the effects of urbanization on the hydrochemical variation within base flow in the Chattahoochee River Basin (CRB), Georgia (USA). Base flow solute concentrations (particularly sulfate, chloride, bicarbonate alkalinity, and sodium) increase with the degree of urbanization, and any degree of urbanization within the Atlanta Metropolitan Region (AMR) results in elevated base flow solute concentrations. This suggests that there are pervasive low-level non-point sources of contamination, such as septic tank systems and leaky sewer lines, affecting the chemistry of shallow groundwater throughout much of the AMR and CRB. Six groups or subsets representing the "rural-to-urban gradient" were defined, characterized by the following order of increasing solute concentrations: rural basins < Chattahoochee River < semi-urbanized basins < urbanized basins < urban basins with main sewer trunk lines < urbanized basins directly receiving treated effluent and combined sewer overflow (CSO) basins. There is a strong and unusual basin-wide correlation (r² values > 0.79) between Na-K-Cl within the CRB that likely reflects the widespread input of electrolytes present in human wastes and wastewater. The most likely source and pathway for contaminant input involves the mobilization of salts, originally present in wastewater, within the riparian or hyporheic zone.
Relativistic Self-similar Equilibria and Non-axisymmetric Neutral Modes
NASA Astrophysics Data System (ADS)
Cai, Mike J.; Shu, F. H.
2002-05-01
We have constructed semi-analytic axisymmetric scale-free solutions to the Einstein field equations with a perfect fluid matter source. These spacetimes are self-similar under the simultaneous transformation r' = ar and t' = a^(1-n) t. We explored the two-dimensional solution space parameterized by the rescaling index n and the isothermal sound speed γ^(1/2). The isopycnic surfaces are in general toroids. As the equilibrium configuration rotates faster, an ergo region develops in the form of the exterior of a cone centered on the symmetry axis. The sequence of solutions terminates when frame dragging becomes infinite and the ergo cone closes onto the axis. In the extreme flattening limit, we have also searched for non-axisymmetric neutral modes in a self-similar disk. Two separate sets of tracks are discovered in the solution space. One corresponds to the bifurcation points to non-axisymmetric equilibria, which are confined to the non-ergo solutions. The other track signals the onset of instability driven by gravitational radiation. These solutions are formally infinite in extent and thus cannot represent realistic astrophysical systems. However, if these properties do not change qualitatively when the self-similar configurations are truncated, then these solutions may serve as initial data for dynamical collapse in supermassive black hole formation.
Cosmological rotating black holes in five-dimensional fake supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nozawa, Masato; Maeda, Kei-ichi; Waseda Research Institute for Science and Engineering, Okubo 3-4-1, Shinjuku, Tokyo 169-8555
2011-01-15
In recent series of papers, we found an arbitrary dimensional, time-evolving, and spatially inhomogeneous solution in Einstein-Maxwell-dilaton gravity with particular couplings. Similar to the supersymmetric case, the solution can be arbitrarily superposed in spite of nontrivial time-dependence, since the metric is specified by a set of harmonic functions. When each harmonic has a single point source at the center, the solution describes a spherically symmetric black hole with regular Killing horizons and the spacetime approaches asymptotically to the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. We discuss in this paper that in 5 dimensions, this equilibrium condition traces back to the first-order 'Killing spinor' equation in 'fake supergravity' coupled to arbitrary U(1) gauge fields and scalars. We present a five-dimensional, asymptotically FLRW, rotating black-hole solution admitting a nontrivial 'Killing spinor', which is a spinning generalization of our previous solution. We argue that the solution admits nondegenerate and rotating Killing horizons in contrast with the supersymmetric solutions. It is shown that the present pseudo-supersymmetric solution admits closed timelike curves around the central singularities. When only one harmonic is time-dependent, the solution oxidizes to 11 dimensions and realizes the dynamically intersecting M2/M2/M2-branes in a rotating Kasner universe. The Kaluza-Klein-type black holes are also discussed.
The solusphere-its inferences and study
Rainwater, F.H.; White, W.F.
1958-01-01
Water is a fundamental geologic agent active in rock decomposition, erosion, and synthesis. Solutes in water are of particular interest to geochemists as sources of raw material for synthesis or as products of decomposition. When geochemical studies move from the laboratory into natural environment many variables relating to solute hydrology must be considered. As a focal point there has been designed a graphical representation of solute hydrology, the solusphere, which embodies the concepts of land-water occurrence and movement on which are superimposed geologic, biologic, physical, chemical, and cultural processes affecting solutes. The solusphere is demonstrated by passing an imaginary plane through the centre of the earth. This plane intercepts concentric zones designated as rock flowage, saturation, aeration, surface activity, and atmosphere. Transport processes carry solutes within and between zones without alteration or conversion. However, whether stationary or in motion, the water's solute character is constantly subject to (1) alteration processes that change concentration by addition or subtraction of solutes or solvent without loss of solute identities, and (2) conversion processes that change the chemical state and form of solutes. The geochemist is concerned with specific conversion processes, but he also must consider transport, alteration, and other conversion processes that are continually modifying the materials with which he is dealing in nature. The solusphere is an attempt to organize processes affecting the chemical quality of land waters into a unified field of science much like the field of marine chemistry. © 1958.
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
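As a point of reference for how intensities and spectral indices are related, a two-band estimate of the spectral index follows from the power-law model below; note that the sign convention (ν^α versus ν^(-α)) varies between papers, and that MT-MFS in practice fits a polynomial expansion in frequency across the whole band rather than a two-point ratio.

\[
S_\nu \propto \nu^{\alpha} \quad\Longrightarrow\quad \alpha = \frac{\ln\!\left(S_{\nu_1}/S_{\nu_2}\right)}{\ln\!\left(\nu_1/\nu_2\right)} .
\]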
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed using the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
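A compact stand-in for the neural-network inversion is sketched below with scikit-learn's MLPRegressor: a multi-layer perceptron is trained on synthetic (concentration, source-strength) pairs generated from a random source-receptor transition matrix. The network architecture and all data are illustrative, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Hypothetical source-receptor model: concentrations c = T @ s, with a fixed
# transition matrix T (6 receptors x 4 area sources); values are illustrative.
T = rng.uniform(0.1, 1.0, size=(6, 4))

# Training set: random source-strength vectors and their simulated observations.
S_train = rng.uniform(0, 10, size=(2000, 4))
C_train = S_train @ T.T + 0.05 * rng.standard_normal((2000, 6))

# The multi-layer perceptron learns the inverse mapping: concentrations -> strengths.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(C_train, S_train)

s_true = np.array([2.0, 7.5, 1.0, 4.0])
print("estimated source strengths:", np.round(mlp.predict((T @ s_true)[None, :]), 2))
```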
Sources and methods to reconstruct past masting patterns in European oak species.
Szabó, Péter
2012-01-01
The irregular occurrence of good seed years in forest trees is known in many parts of the world. Mast year frequency in the past few decades can be examined through field observational studies; however, masting patterns in the more distant past are equally important in gaining a better understanding of long-term forest ecology. Past masting patterns can be studied through the examination of historical written sources. These pose considerable challenges, because data in them were usually not recorded with the aim of providing information about masting. Several studies examined masting in the deeper past, however, authors hardly ever considered the methodological implications of using and combining various source types. This paper provides a critical overview of the types of archival written that are available for the reconstruction of past masting patterns for European oak species and proposes a method to unify and evaluate different types of data. Available sources cover approximately eight centuries and can be put into two basic categories: direct observations on the amount of acorns and references to sums of money received in exchange for access to acorns. Because archival sources are highly different in origin and quality, the optimal solution for creating databases for past masting data is a three-point scale: zero mast, moderate mast, good mast. When larger amounts of data are available in a unified three-point-scale database, they can be used to test hypotheses about past masting frequencies, the driving forces of masting or regional masting patterns.
BioFed: federated query processing over life sciences linked open data.
Hasnain, Ali; Mehmood, Qaiser; Sana E Zainab, Syeda; Saleem, Muhammad; Warren, Claude; Zehra, Durre; Decker, Stefan; Rebholz-Schuhmann, Dietrich
2017-03-15
Biomedical data, e.g. from knowledge bases and ontologies, is increasingly made available following open linked data principles, at best as RDF triple data. This is a necessary step towards unified access to biological data sets, but it still requires solutions to query multiple endpoints for their heterogeneous data to eventually retrieve all the meaningful information. Suggested solutions are based on query federation approaches, which require the submission of SPARQL queries to endpoints. Due to the size and complexity of the available data, these solutions have to be optimised for efficient retrieval times and for users in life sciences research. Last but not least, over time, the reliability of data resources in terms of access and quality has to be monitored. Our solution (BioFed) federates data over 130 SPARQL endpoints in life sciences and tailors query submission according to the provenance information. BioFed has been evaluated against the state-of-the-art solution FedX and forms an important benchmark for the life science domain. The efficient cataloguing approach of the federated query processing system BioFed, the triple-pattern-wise source selection and the semantic source normalisation form the core of our solution. It gathers and integrates data from newly identified public endpoints for federated access. Basic provenance information is linked to the retrieved data. Last but not least, BioFed makes use of the latest SPARQL standard (i.e., 1.1) to leverage the full benefits of query federation. The evaluation is based on 10 simple and 10 complex queries, which address data in 10 major and very popular data sources (e.g., DrugBank, SIDER). BioFed is a solution for a single point of access to a large number of SPARQL endpoints providing life science data. It facilitates efficient query generation for data access and provides basic provenance information in combination with the retrieved data. BioFed fully supports SPARQL 1.1 and gives access to the endpoints' availability based on the EndpointData graph. Our evaluation of BioFed against FedX is based on 20 heterogeneous federated SPARQL queries and shows competitive execution performance in comparison to FedX, which can be attributed to the provision of provenance information for the source selection. Developing and testing federated query engines for life sciences data is still a challenging task. According to our findings, it is advantageous to optimise the source selection. The cataloguing of SPARQL endpoints, including type and property indexing, leads to efficient querying of data resources over the Web of Data. This could be further improved through the use of ontologies, e.g., for abstract normalisation of query terms.
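For readers unfamiliar with SPARQL endpoints, the sketch below shows a minimal query submission with the SPARQLWrapper library; the endpoint URL and the query are generic examples and do not reflect BioFed's catalogue, source selection, or provenance handling.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Minimal illustration of submitting a SPARQL 1.1 query to a life-science
# endpoint (example endpoint; a federation engine would dispatch sub-queries
# to many such endpoints based on its source-selection step).
endpoint = SPARQLWrapper("https://sparql.uniprot.org/sparql")
endpoint.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 5
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```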
Generic guide concepts for the European Spallation Source
NASA Astrophysics Data System (ADS)
Zendler, C.; Martin Rodriguez, D.; Bentley, P. M.
2015-12-01
The construction of the European Spallation Source (ESS) faces many challenges from the neutron beam transport point of view: the spallation source is specified as being driven by a 5 MW beam of protons, each with 2 GeV energy, and yet the requirements in instrument background suppression relative to measured signal vary between 10⁻⁶ and 10⁻⁸. The energetic particles, particularly above 20 MeV, which are expected to be produced in abundance in the target, have to be filtered in order to make the beamlines safe, operational and provide good quality measurements with low background. We present generic neutron guides of short and medium length instruments which are optimised for good performance at minimal cost. Direct line of sight to the source is avoided twice, with either the first point out of line of sight or both being inside the bunker (20 m) to minimise shielding costs. These guide geometries are regarded as a baseline to define standards for instruments to be constructed at ESS. They are used to find commonalities and develop principles and solutions for common problems. Lastly, we report the impact of employing the over-illumination concept to mitigate losses from random misalignment passively, and that over-illumination should be used sparingly in key locations to be effective. For more widespread alignment issues, a more direct, active approach is likely to be needed.
Balanced Central Schemes for the Shallow Water Equations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We present a two-dimensional, well-balanced, central-upwind scheme for approximating solutions of the shallow water equations in the presence of a stationary bottom topography on triangular meshes. Our starting point is the recent central scheme of Kurganov and Petrova (KP) for approximating solutions of conservation laws on triangular meshes. In order to extend this scheme from systems of conservation laws to systems of balance laws one has to find an appropriate discretization of the source terms. We first show that for general triangulations there is no discretization of the source terms that corresponds to a well-balanced form of the KP scheme. We then derive a new variant of a central scheme that can be balanced on triangular meshes. We note in passing that it is straightforward to extend the KP scheme to general unstructured conformal meshes. This extension allows us to recover our previous well-balanced scheme on Cartesian grids. We conclude with several simulations, verifying the second-order accuracy of our scheme as well as its well-balanced properties.
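For context, the one-dimensional form of the balance law and the steady state that a well-balanced scheme must preserve are recalled below; the paper itself treats the two-dimensional system on triangular meshes, so this is only the simplest setting.

\[
\partial_t h + \partial_x(hu) = 0, \qquad
\partial_t(hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2} g h^2\right) = -\,g\, h\, \partial_x B,
\]

with water depth h, velocity u, gravitational acceleration g and bottom topography B. The "lake at rest" state u = 0, h + B = const is an exact steady solution, and a scheme is called well-balanced precisely when its discrete source-term quadrature cancels the discrete flux gradient on this state.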
Health Information Research Platform (HIReP)--an architecture pattern.
Schreiweis, Björn; Schneider, Gerd; Eichner, Theresia; Bergh, Björn; Heinze, Oliver
2014-01-01
Secondary use or single source is still far from routine in healthcare, although lots of data are available, either structured or unstructured. As data are stored in multiple systems, using them for biomedical research is difficult. Clinical data warehouses already help to overcome this issue, but currently they are only used for certain parts of biomedical research. A comprehensive research platform based on a generic architecture pattern could increase the benefits of existing data warehouses for both patient care and research by meeting two objectives: serving as a so-called single point of truth and acting as a mediator between them, strengthening interaction and close collaboration. Another effect is to reduce barriers to the implementation of data warehouses. Taking further settings into account, the architecture of a clinical data warehouse supporting patient care and biomedical research needs to be integrated with biomaterial banks and other sources. This work provides a solution conceptualizing a comprehensive architecture pattern of a Health Information Research Platform (HIReP) derived from use cases of the patient care and biomedical research domains. It serves as a single IT infrastructure providing solutions for any type of use case.
Scaling of plane-wave functions in statistically optimized near-field acoustic holography.
Hald, Jørgen
2014-11-01
Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.
NASA Astrophysics Data System (ADS)
Yuan, Li-Yun; Xiang, Yu; Lu, Jing; Jiang, Hong-Hua
2015-12-01
Based on the transfer matrix method for analysing a circular cylindrical shell treated with active constrained layer damping (i.e., ACLD), combined with the analytical solution of the Helmholtz equation for a point source, a multi-point multipole virtual source simulation method is proposed for the first time for solving the acoustic radiation problem of a submerged ACLD shell. This approach, wherein some virtual point sources are assumed to be evenly distributed on the axial line of the cylindrical shell and the sound pressure is written as a sum of a wave function series with undetermined coefficients, is demonstrated to accurately reproduce the radiated acoustic pressure of pulsating and oscillating spheres, respectively. Meanwhile, this approach is proved to accurately obtain the radiated acoustic pressure of a stiffened cylindrical shell. Then, the number of distributed virtual point sources and the truncation of the wave function series required to approximate the radiated acoustic pressure of an ACLD cylindrical shell are discussed. Applying this method, the radiated acoustic pressures of a submerged ACLD cylindrical shell with different boundary conditions, different thickness values of the viscoelastic and piezoelectric layers, different feedback gains for the piezoelectric layer and different coverage of the ACLD are discussed in detail. Results show that a thicker piezoelectric layer, a larger velocity gain for the piezoelectric layer and a larger coverage of the ACLD layer generally produce a better damping effect for the whole structure. However, laying a thicker viscoelastic layer is not always a better treatment for achieving better acoustic characteristics. Project supported by the National Natural Science Foundation of China (Grant Nos. 11162001, 11502056, and 51105083), the Natural Science Foundation of Guangxi Zhuang Autonomous Region, China (Grant No. 2012GXNSFAA053207), the Doctor Foundation of Guangxi University of Science and Technology, China (Grant No. 12Z09), and the Development Project of the Key Laboratory of Guangxi Zhuang Autonomous Region, China (Grant No. 1404544).
Framing and sources: a study of mass media coverage of climate change in Peru during the V ALCUE.
Takahashi, Bruno
2011-07-01
Studies about mass media framing have found divergent levels of influence on public opinion; moreover, the evidence suggests that issue attributes can contribute to this difference. In the case of climate change, studies have focused exclusively on developed countries, suggesting that media influence perceptions about the issue. This study presents one of the first studies of media coverage in a developing country. It examines newspapers' reporting in Peru during the Fifth Latin America, Caribbean and European Union Summit in May 2008. The study focuses on the frames and the sources to provide an initial exploratory assessment of the coverage. The results show that the media relied mostly on government sources, giving limited access to dissenting voices such as environmentalists. Additionally, a prominence of "solutions" and "effects" frames was found, while "policy" and "science" frames were limited. The results could serve as a reference point for more comprehensive studies.
NASA Astrophysics Data System (ADS)
Mahanthesh, B.; Gireesha, B. J.; Shashikumar, N. S.; Hayat, T.; Alsaedi, A.
2018-06-01
The present work investigates the features of an exponential space-dependent heat source (ESHS) and cross-diffusion effects in Marangoni convective heat and mass transfer flow due to an infinite disk. The flow analysis includes magnetohydrodynamic (MHD) effects, and the effects of Joule heating, viscous dissipation and solar radiation are also accounted for. The thermal and solute fields on the disk surface vary in a quadratic manner. The ordinary differential equations are obtained by applying the Von Kármán transformations, and the resulting problem is solved numerically via a Runge-Kutta-Fehlberg based shooting scheme. The effects of the pertinent flow parameters are explored through graphical illustrations. The results point out that the ESHS effect dominates the temperature-dependent heat source effect on thermal boundary layer growth. The concentration and temperature distributions and their associated layer thicknesses are enhanced by the Marangoni effect.
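The shooting idea can be illustrated on a toy two-point boundary value problem with an exponentially decaying source term, f'' = f - 0.3 e^(-η) with f(0) = 1 and f(η_∞) = 0. The sketch below uses SciPy's RK45 integrator (a related embedded Runge-Kutta pair rather than the Fehlberg pair) and a bracketing root finder for the missing initial slope; the ODE is purely illustrative and is not the paper's transformed Marangoni system.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_INF = 5.0   # truncated "infinity" of the boundary layer coordinate

def rhs(eta, y):                      # y = [f, f']
    return [y[1], y[0] - 0.3 * np.exp(-eta)]

def end_value(slope):
    """Integrate from eta = 0 with a guessed initial slope and return f(eta_inf)."""
    sol = solve_ivp(rhs, (0.0, ETA_INF), [1.0, slope], method="RK45",
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

# Bracket the missing initial slope and refine it with a root finder.
slope = brentq(end_value, -5.0, 5.0)
print(f"shooting converged: f'(0) = {slope:.5f}, f(eta_inf) = {end_value(slope):.2e}")
```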
NASA Astrophysics Data System (ADS)
Voronin, S. V.; Gureev, D. M.; Zolotarevskiĭ, A. V.
1990-06-01
An investigation was made of some characteristics of the formation of the structure of Al-Si alloys containing 10%, 12% and 20 % Si, and also of the commercial alloy V124 under conditions of surface fusion by laser-arc and laser sources. It was established that as a result of local fusion there was a change in the silicon deposition morphology, the α solid solution became oversaturated, and the eutectic point was shifted toward high silicon concentrations. It was found that the hardened layer retained its high hardness when treated at temperatures up to 250 °C. The commercial alloy V124 was used as an example to show that an alloyed layer with a controlled silicon concentration can be obtained on the surface by using a laser-arc or laser source.
Coherent pulses in the diffusive transport of charged particles
NASA Technical Reports Server (NTRS)
Kota, J.
1994-01-01
We present exact solutions to the diffusive transport of charged particles following impulsive injection for a simple model of scattering. A modified, two-parameter relaxation-time model is considered that simulates the low rate of scattering through perpendicular pitch angle. Scattering is taken to be isotropic within each of the forward- and backward-pointing hemispheres, respectively, but, at the same time, a reduced rate of scattering is assumed from one hemisphere to the other. By applying Fourier and Laplace transform techniques, the inverse transformation can be performed and exact solutions can be reached. In contrast with the first, and so far only, exact solutions of Federov and Shakov, this wider class of solutions allows coherent pulses to appear. The present work addresses omnidirectional densities for isotropic injection from an instantaneous and localized source. The dispersion relations are briefly discussed. We find, for this particular model, that two diffusive modes exist up to a certain limiting wavenumber. The corresponding eigenvalues are real at the lowest wavenumbers. Complex eigenvalues, which are responsible for coherent pulses, appear at higher wavenumbers.
Long-term changes in nitrate conditions over the 20th century in two Midwestern Corn Belt streams
Kelly, Valerie J.; Stets, Edward G.; Crawford, Charles G.
2015-01-01
Long-term changes in nitrate concentration and flux between the middle of the 20th century and the first decade of the 21st century were estimated for the Des Moines River and the Middle Illinois River, two Midwestern Corn Belt streams, using a novel weighted regression approach that is able to detect subtle changes in solute transport behavior over time. The results show that the largest changes in flow-normalized concentration and flux occurred between 1960 and 1980 in both streams, with smaller or negligible changes between 1980 and 2004. Contrasting patterns were observed between (1) nitrate export linked to non-point sources, specifically runoff of synthetic fertilizer or other surface sources, and (2) nitrate export presumably associated with point sources such as urban wastewater or confined livestock feeding facilities, with each of these modes of transport important under different domains of streamflow. Surface runoff was estimated to be consistently most important under high-flow conditions during the spring in both rivers. Nitrate export may also have been considerable in the Des Moines River even under some conditions during the winter when flows are generally lower, suggesting the influence of point sources during this time. Similar results were shown for the Middle Illinois River, which is subject to significant influence of wastewater from the Chicago area, where elevated nitrate concentrations were associated with the lowest flows during the winter and fall. By modeling concentration directly, this study highlights the complex relationship between concentration and streamflow that has evolved in these two basins over the last 50 years. This approach provides insights about changing conditions that only become observable when stationarity in the relationship between concentration and streamflow is not assumed.
Omirou, M; Dalias, P; Costa, C; Papastefanou, C; Dados, A; Ehaliotis, C; Karpouzas, D G
2012-07-01
The high wastewater volumes produced during citrus production at the pre- and post-harvest level present a serious risk of pesticide point-source pollution for groundwater bodies. Biobeds are used for preventing such point-source pollution occurring at the farm level. We explored the potential of biobeds for the depuration of wastewaters produced through the citrus production chain following a lab-to-field experimentation. The dissipation of pesticides used pre- or post-harvest was studied in compost-based biomixtures, soil, and a straw-soil mixture. A biomixture of composted grape seeds and skins (GSS-1) showed the highest dissipation capacity. In subsequent column studies, GSS-1 restricted pesticide leaching even at the highest water load (462 L m-3). Ortho-phenylphenol was the most mobile compound. Studies in an on-farm biobed filled with GSS-1 showed that pesticides were fully retained and partially or fully dissipated. Overall, biobeds could be a valuable solution for the depuration of wastewaters produced at the pre- and post-harvest level by citrus fruit industries. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irimie, I.I.; Tulbure, I.
1996-12-31
The present paper addresses the following subjects regarding atmospheric pollution in the Jiu-Valley coal mining region of Romania: identifying polluting sources, pointing out the conditions that favor pollution, the pollution impacts, and short-, medium-, and long-term measures that could be taken in order to achieve sustainable future development of this region. The importance of the problems presented in this paper is emphasized by the fact that, besides coking and fuel coal reserves, this region has high year-round touristic potential.
An Invitation to Collaborate: The SPIRIT Open Source Health Care Portal
Bray, Brian; Molin, Joseph Dal
2001-01-01
The SPIRIT portal is a web site resulting from a joint project of the European Commission 5th Framework Research Programme for Information Society Technologies, Minoru Development (France), Conecta srl (Italy), and Sistema Information Systems (Italy). The portal indexes and disseminates free software, serves as a meeting point for health care informatics researchers, and provides collaboration services to health care innovators. This poster session describes the services of the portal and invites researchers to join a worldwide collaborative community developing evidence-based health care solutions.
NASA Technical Reports Server (NTRS)
Baird, J. K.
1986-01-01
The Ostwald-ripening theory is deduced and discussed starting from fundamental principles such as the Ising model concept, the Mayer cluster expansion, Langer condensation point theory, the Ginzburg-Landau free energy, the Stillinger cutoff-pair potential, LSW theory, and MLSW theory. Mathematical intricacies are reduced to an understandable form. A comparison of selected works from 1949 to 1984 on solutions of the diffusion equation with and without sink/source term(s) is presented. Kahlweit's 1980 work and Marqusee-Ross' 1954 work are emphasized. Odijk and Lekkerkerker's 1985 work on rodlike macromolecules is introduced in order to stimulate interested investigators.
Ultra-high resolution of radiocesium distribution detection based on Cherenkov light imaging
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Ogata, Yoshimune; Kawachi, Naoki; Suzui, Nobuo; Yin, Yong-Gen; Fujimaki, Shu
2015-03-01
After the nuclear disaster in Fukushima, radiocesium contamination became a serious scientific concern and research on its effects on plants increased. In such plant studies, high resolution images of radiocesium are required without contacting the subjects. Cherenkov light imaging of beta radionuclides has inherently high resolution and is promising for plant research. Since 137Cs and 134Cs emit beta particles, Cherenkov light imaging will be useful for imaging radiocesium distribution. Consequently, we developed and tested a Cherenkov light imaging system. We used a high sensitivity cooled charge coupled device (CCD) camera (Hamamatsu Photonics, ORCA2-ER) for imaging Cherenkov light from 137Cs. A bright lens (Xenon, F-number: 0.95, lens diameter: 25 mm) was mounted on the camera and placed in a black box. With a 100-μm 137Cs point source, we obtained 220-μm spatial resolution in the Cherenkov light image. With a 1-mm diameter, 320-kBq 137Cs point source, the source was distinguished within 2 s. We successfully obtained Cherenkov light images of a plant whose root was dipped in a 137Cs solution, of radiocesium-containing samples, and of line and character phantoms with our imaging system. Cherenkov light imaging is promising for high resolution imaging of radiocesium distribution without contacting the subject.
NASA Astrophysics Data System (ADS)
Coppola, A.; Comegna, V.; de Simone, L.
2009-04-01
Non-point source (NPS) pollution in the vadose zone is a global environmental problem. The knowledge and information required to address the problem of NPS pollutants in the vadose zone cross several technological and sub-disciplinary lines: spatial statistics, geographic information systems (GIS), hydrology, soil science, and remote sensing. The main issues encountered in NPS groundwater vulnerability assessment, as discussed by Stewart [2001], are the large spatial scales, the complex processes that govern fluid flow and solute transport in the unsaturated zone, the absence of unsaturated-zone measurements of diffuse pesticide concentrations in 3-D regional-scale space (these are difficult, time consuming, and prohibitively costly), and the computational effort required for solving the nonlinear equations of physically-based modeling for regional-scale, heterogeneous applications. As an alternative, an approach based on the coupling of transfer function and GIS modeling is presented here that: a) is capable of estimating solute concentration at a depth of interest within a known error confidence class; b) uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application; and c) can dynamically support decision making through thematic mapping and 3D scenarios. This result was pursued through 1) the design and building of a spatial database containing environmental and physical information regarding the study area, 2) the development of the transfer function procedure for layered soils, and 3) the final representation of results through digital mapping and 3D visualization. On one side, GIS was used to model environmental data in order to characterize, at the regional scale, soil profile texture and depth, land use, climatic data, water table depth, and potential evapotranspiration; on the other side, this information was implemented in the up-scaling procedure of Jury's transfer function model (TFM), resulting in a set of texture-based travel-time probability density functions for layered soils, each describing a characteristic leaching behavior for soil profiles with similar hydraulic properties. This behavior, expressed in terms of solute travel time to the water table, was then imported back into GIS, and finally the estimated groundwater vulnerability for each soil unit was represented in a map and visualized in 3D.
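Jury's transfer function model, referenced above, treats solute arrival at a control depth as the convolution of the surface input history with a travel-time probability density function, which is often taken to be lognormal. The sketch below shows only that convolution step for an arbitrary pulse input; the lognormal parameters, time step, and input series are illustrative placeholders, not values derived in the study.

```python
# Transfer-function sketch in the spirit of Jury's TFM (illustrative parameters only):
# outflux concentration at a control depth = convolution of the input with a
# lognormal travel-time probability density function.
import numpy as np
from scipy.stats import lognorm

dt = 1.0                                   # time step (days)
t = np.arange(0.0, 400.0, dt)

# lognormal travel-time pdf f(t); mu and sigma are placeholder calibration parameters
mu, sigma = 4.0, 0.6
travel_pdf = lognorm.pdf(t, s=sigma, scale=np.exp(mu))

# placeholder input concentration flux at the surface: a 10-day pulse
c_in = np.zeros_like(t)
c_in[5:15] = 1.0

# superposition integral C_L(t) = int c_in(t - t') f(t') dt'
c_out = np.convolve(c_in, travel_pdf)[: t.size] * dt

print("peak input at day", t[np.argmax(c_in)], "-> peak arrival at day", t[np.argmax(c_out)])
```

In the regional application the pdf parameters would change from soil unit to soil unit (the "texture-based" families mentioned above), and the resulting travel times would be mapped back into the GIS layer.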
Tunneling dynamics in relativistic and nonrelativistic wave equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgado, F.; Muga, J. G.; Ruschhaupt, A.
2003-09-01
We obtain the solution of a relativistic wave equation and compare it with the solution of the Schroedinger equation for a source with a sharp onset and excitation frequencies below cutoff. A scaling of position and time reduces to a single case all the (below cutoff) nonrelativistic solutions, but no such simplification holds for the relativistic equation, so that qualitatively different "shallow" and "deep" tunneling regimes may be identified relativistically. The nonrelativistic forerunner at a position beyond the penetration length of the asymptotic stationary wave does not tunnel; nevertheless, it arrives at the traversal (semiclassical or Buettiker-Landauer) time τ. The corresponding relativistic forerunner is more complex: it oscillates due to the interference between two saddle-point contributions and may be characterized by two times for the arrival of the maxima of lower and upper envelopes. There is in addition an earlier relativistic forerunner, right after the causal front, which does tunnel. Within the penetration length, tunneling is more robust for the precursors of the relativistic equation.
A double-observer approach for estimating detection probability and abundance from point counts
Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.
2000-01-01
Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
NASA Astrophysics Data System (ADS)
Kostrzewski, J. M.; Brooks, P. D.
2005-12-01
We assessed the impacts of vegetative cover and water source on water quality in the Valles Caldera National Preserve (VCNP). Within the preserve we selected three montane watersheds based on their vegetative and physical characteristics. Redondo Creek, with an area of 11.7 mi², is a higher elevation (7,000 to 11,200 ft) watershed with a vegetation transition from aspen to ponderosa pine to meadow. La Jara Creek is a bedrock-confined watershed with an area of 1.5 mi², an elevation range of 8,500 to 11,200 ft, and a predominant vegetative cover of mixed conifer. Jaramillo Creek is a lower elevation (8,500 to 10,500 ft) alluvial watershed with an area of 4.5 mi² which is dominated by grassland vegetation. In the spring, early summer, and late summer we performed stream and tributary synoptic sampling combined with regular fixed-point sampling. Our experimental design includes analysis of conservative solutes (F-, Br-, Cl-, SO42-), water isotopes, and biogeochemical nutrients to quantify water sources, age, and biological influence within each catchment. Preliminary analysis of dissolved organic carbon (DOC) data suggests an early flushing of DOC in all three catchments to a reduced concentration in the early summer months. Elevated chloride and sulfate concentrations in Redondo Creek indicate a deeper water source than in La Jara Creek. This difference in water source contributes to the higher variation of DOC concentrations in La Jara Creek (x=2.33 mg/L, s.d.=1.22) and a lower variation in Redondo Creek (x=2.72 mg/L, s.d.=0.49). A continuation of conservative solute and isotopic analyses will constrain hydrologic flow paths to evaluate the effects of vegetation and water source on water quality.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weakness of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For multi-point-source and multi-variable cases, some errors arise in the computed results because many combinations of the pollution sources are possible. However, when prior experience is used to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
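To make the optimization idea concrete, the sketch below pairs a very small genetic algorithm with the standard one-dimensional analytical solution for an instantaneous point release in a uniform stream, C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)^2 / (4*D*t)), and searches for the release mass and position that best reproduce synthetic observations. The river parameters, observation layout, and GA settings are all placeholder assumptions, not the authors' configuration.

```python
# Genetic-algorithm source identification sketch (illustrative, not the paper's BGA setup):
# fit the mass M and position x0 of an instantaneous point release using the 1-D
# analytical advection-dispersion solution as the forward model.
import numpy as np

rng = np.random.default_rng(0)
u, D, A = 0.5, 10.0, 50.0                    # velocity (m/s), dispersion (m^2/s), area (m^2)
x_obs, t_obs = 2000.0, np.array([1800.0, 2700.0, 3600.0, 5400.0])   # one station, four times

def forward(M, x0):
    arg = (x_obs - x0 - u * t_obs) ** 2 / (4.0 * D * t_obs)
    return M / (A * np.sqrt(4.0 * np.pi * D * t_obs)) * np.exp(-arg)

c_meas = forward(500.0, 300.0)               # synthetic "measurements" (true M and x0)

def fitness(pop):                            # negative sum of squared residuals
    return -np.array([np.sum((forward(M, x0) - c_meas) ** 2) for M, x0 in pop])

# population of candidate (M, x0) pairs within assumed search bounds
lo, hi = np.array([10.0, 0.0]), np.array([2000.0, 1500.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(200):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)[-20:]]                       # truncation selection
    kids = parents[rng.integers(0, 20, 40)].copy()
    mask = rng.random(kids.shape) < 0.5                        # uniform crossover
    kids[mask] = parents[rng.integers(0, 20, 40)][mask]
    kids += rng.normal(0.0, 0.02, kids.shape) * (hi - lo)      # Gaussian mutation
    pop = np.clip(kids, lo, hi)

best = pop[np.argmax(fitness(pop))]
print("recovered (mass, position):", best)   # should approach (500, 300)
```

Multi-source identification amounts to enlarging the chromosome with additional (M, x0) pairs, which is where the combinatorial ambiguity mentioned in the abstract appears.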
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting localized remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources, and pollutant removal processes, so as to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting the existing landscape structure and/or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into the planning, in order to improve urban landscape heterogeneity and to control urban non-point source pollution.
Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm
NASA Astrophysics Data System (ADS)
Selig, Marco; Enßlin, Torsten A.
2015-02-01
The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74
The Sedov Blast Wave as a Radial Piston Verification Test
Pederson, Clark; Brown, Bart; Morgan, Nathaniel
2016-06-22
The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.
Aqueous electrolytes for redox flow battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tianbiao; Li, Bin; Wei, Xiaoliang
An aqueous redox flow battery system includes an aqueous catholyte and an aqueous anolyte. The aqueous catholyte may comprise (i) an optionally substituted thiourea or a nitroxyl radical compound and (ii) a catholyte aqueous supporting solution. The aqueous anolyte may comprise (i) metal cations or a viologen compound and (ii) an anolyte aqueous supporting solution. The catholyte aqueous supporting solution and the anolyte aqueous supporting solution independently may comprise (i) a proton source, (ii) a halide source, or (iii) a proton source and a halide source.
Design principles for radiation-resistant solid solutions
NASA Astrophysics Data System (ADS)
Schuler, Thomas; Trinkle, Dallas R.; Bellon, Pascal; Averback, Robert
2017-05-01
We develop a multiscale approach to quantify the increase in the recombined fraction of point defects under irradiation resulting from dilute solute additions to a solid solution. This methodology provides design principles for radiation-resistant materials. Using an existing database of solute diffusivities, we identify Sb as one of the most efficient solutes for this purpose in a Cu matrix. We perform density-functional-theory calculations to obtain binding and migration energies of Sb atoms, vacancies, and self-interstitial atoms in various configurations. The computed data inform the self-consistent mean-field formalism used to calculate transport coefficients, allowing us to make quantitative predictions of the recombined fraction of point defects as a function of temperature and irradiation rate using homogeneous rate equations. We identify two different mechanisms by which solutes lead to an increase in the recombined fraction of point defects: at low temperature, solutes slow down vacancies (kinetic effect), while at high temperature, solutes stabilize vacancies in the solid solution (thermodynamic effect). Extensions to other metallic matrices and solutes are discussed.
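The homogeneous rate-equation step can be illustrated with a generic vacancy/interstitial balance: production at a rate K0, mutual recombination with coefficient K_iv, and loss to fixed sinks. The coefficients below are arbitrary order-of-magnitude placeholders rather than the DFT-informed transport coefficients computed in the paper; the recombined fraction is simply read off the steady state.

```python
# Point-defect rate-equation sketch (placeholder coefficients, not the paper's DFT-informed values):
# dC_v/dt = K0 - K_iv*C_i*C_v - k_s2*D_v*C_v,  dC_i/dt = K0 - K_iv*C_i*C_v - k_s2*D_i*C_i
import numpy as np
from scipy.integrate import solve_ivp

K0 = 1.0e-6        # defect production rate (per site per s), illustrative
K_iv = 1.0e8       # recombination coefficient, illustrative
k_s2 = 1.0e2       # sink strength (lumped with geometry), illustrative
D_v, D_i = 1.0e-4, 1.0e0   # relative vacancy / interstitial mobilities, illustrative

def rhs(t, c):
    cv, ci = c
    rec = K_iv * ci * cv
    return [K0 - rec - k_s2 * D_v * cv,
            K0 - rec - k_s2 * D_i * ci]

sol = solve_ivp(rhs, (0.0, 1.0e6), [0.0, 0.0], method="LSODA", rtol=1e-8, atol=1e-12)
cv, ci = sol.y[:, -1]
recombined_fraction = K_iv * ci * cv / K0   # share of produced defects lost to recombination
print(f"steady-state C_v = {cv:.3e}, C_i = {ci:.3e}, recombined fraction = {recombined_fraction:.2f}")
```

Adding a solute in this picture amounts to modifying the effective defect mobilities and adding trapping terms, which is how the kinetic and thermodynamic mechanisms described above would enter the balance.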
NASA Astrophysics Data System (ADS)
Ahmed, E.; El-Sayed, A. M. A.; El-Saka, H. A. A.
2007-01-01
In this paper we are concerned with the fractional-order predator-prey model and the fractional-order rabies model. Existence and uniqueness of solutions are proved. The stability of the equilibrium points is studied. Numerical solutions of these models are given. An example is given where the equilibrium point is a centre for the integer-order system but locally asymptotically stable for its fractional-order counterpart.
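A common way to integrate such fractional-order systems numerically is the explicit Grünwald-Letnikov scheme, in which the fractional derivative of order alpha is replaced by a weighted history sum with generalized binomial coefficients. The sketch below applies that scheme to a generic Lotka-Volterra style predator-prey right-hand side; the value alpha = 0.9, the model coefficients, and the initial state are illustrative and are not taken from the paper.

```python
# Explicit Gruenwald-Letnikov scheme for a fractional-order predator-prey system
# (illustrative coefficients; alpha, a, b, c, d are not taken from the paper).
import numpy as np

alpha = 0.9                 # fractional order (0 < alpha <= 1)
a, b, c, d = 1.0, 0.5, 0.2, 0.6
h, n_steps = 0.01, 5000

def f(y):
    x, z = y                # prey x, predator z
    return np.array([a * x - b * x * z,
                     c * x * z - d * z])

# Gruenwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), via the standard recursion
w = np.empty(n_steps + 1)
w[0] = 1.0
for j in range(1, n_steps + 1):
    w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

y = np.empty((n_steps + 1, 2))
y[0] = [1.0, 0.5]           # initial prey and predator densities (placeholders)
for n in range(1, n_steps + 1):
    history = w[1:n + 1][::-1] @ y[:n]        # sum over j = 1..n of w_j * y_{n-j}
    y[n] = h ** alpha * f(y[n - 1]) - history
print("final state (prey, predator):", y[-1])
```

For quantitative work a Caputo-based predictor-corrector (Adams-Bashforth-Moulton) method is usually preferred; this explicit form is only meant to show the structure of the memory term that distinguishes the fractional system from its integer-order counterpart.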
The modified semi-discrete two-dimensional Toda lattice with self-consistent sources
NASA Astrophysics Data System (ADS)
Gegenhasi
2017-07-01
In this paper, we derive the Grammian determinant solutions to the modified semi-discrete two-dimensional Toda lattice equation, and then construct the semi-discrete two-dimensional Toda lattice equation with self-consistent sources via source generation procedure. The algebraic structure of the resulting coupled modified differential-difference equation is clarified by presenting its Grammian determinant solutions and Casorati determinant solutions. As an application of the Grammian determinant and Casorati determinant solution, the explicit one-soliton and two-soliton solution of the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources are given. We also construct another form of the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources which is the Bäcklund transformation for the semi-discrete two-dimensional Toda lattice equation with self-consistent sources.
Ion release from, and fluoride recharge of a composite with a fluoride-containing bioactive glass.
Davis, Harry B; Gwinner, Fernanda; Mitchell, John C; Ferracane, Jack L
2014-10-01
Materials that are capable of releasing ions such as calcium and fluoride, which are necessary for remineralization of dentin and enamel, have been the topic of intensive research for many years. The source of calcium has most often been some form of calcium phosphate, and that for fluoride has been one of several metal fluoride or hexafluorophosphate salts. Fluoride-containing bioactive glass (BAG) prepared by the sol-gel method acts as a single source of both calcium and fluoride ions in aqueous solutions. The objective of this investigation was to determine if BAG, when added to a composite formulation, can be used as a single source for calcium and fluoride ion release over an extended time period, and to determine if the BAG-containing composite can be recharged upon exposure to a solution of 5000 ppm fluoride. BAG 61 (61% Si; 31% Ca; 4% P; 3% F; 1% B) and BAG 81 (81% Si; 11% Ca; 4% P; 3% F; 1% B) were synthesized by the sol-gel method. The composite used was composed of 50/50 Bis-GMA/TEGDMA, 0.8% EDMAB, 0.4% CQ, and 0.05% BHT, combined with a mixture of BAG (15%) and strontium glass (85%) to a total filler load of 72% by weight. Disks were prepared, allowed to age for 24 h, abraded, then placed into DI water. Calcium and fluoride release was measured by atomic absorption spectroscopy and fluoride ion selective electrode methods, respectively, after 2, 22, and 222 h. The composite samples were then soaked for 5 min in an aqueous 5000 ppm fluoride solution, after which calcium and fluoride release was again measured at the 2, 22, and 222 h time points. Prior to fluoride recharge, release of fluoride ions was similar for the BAG 61 and BAG 81 composites after 2 h, and also similar after 22 h. At the four subsequent time points, one prior to, and three following, fluoride recharge, the BAG 81 composite released significantly more fluoride ions (p<0.05). Both composites were recharged by exposure to 5000 ppm fluoride, although the BAG 81 composite was recharged more than the BAG 61 composite. The BAG 61 composite released substantially more calcium ions prior to fluoride recharge during each of the 2 and 22 h time periods. Thereafter, the release of calcium at the four subsequent time points was not significantly different (p>0.05) for the two composites. These results show that, when added to a composite formulation, fluoride-containing bioactive glass made by the sol-gel route can function as a single source for both calcium and fluoride ions, and that the composite can be readily recharged with fluoride. Copyright © 2014 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bozza, V.; Postiglione, A., E-mail: valboz@sa.infn.it, E-mail: postiglione@fis.uniroma3.it
The metric outside an isolated object made up of ordinary matter is bound to be the classical Schwarzschild vacuum solution of General Relativity. Nevertheless, some solutions are known (e.g. Morris-Thorne wormholes) that do not match Schwarzschild asymptotically. From a phenomenological point of view, gravitational lensing in metrics falling as 1/r^q has recently attracted great interest. In this work, we explore the conditions on the source matter for constructing static spherically symmetric metrics exhibiting an arbitrary power law as Newtonian limit. For such space-times we also derive the expressions of gravitational redshift and force on probe masses, which, together with light deflection, can be used in astrophysical searches for non-Schwarzschild objects made up of exotic matter. Interestingly, we prove that even a minimally coupled scalar field with a power-law potential can support non-Schwarzschild metrics with arbitrary asymptotic behaviour.
Fingerprints of Both Watson-Crick and Hoogsteen Isomers of the Isolated (Cytosine-Guanine)H+ Pair.
Cruz-Ortiz, Andrés F; Rossa, Maximiliano; Berthias, Francis; Berdakin, Matías; Maitre, Philippe; Pino, Gustavo A
2017-11-16
The gas-phase protonated guanine-cytosine (CGH+) pair was generated using an electrospray ionization source from solutions at two different pH values (5.8 and 3.2). Consistent evidence from MS/MS fragmentation patterns and differential ion mobility spectra (DIMS) points toward the presence of two isomers of the CGH+ pair, whose relative populations depend strongly on the pH of the solution. Gas-phase infrared multiphoton dissociation (IRMPD) spectroscopy in the 900-1900 cm-1 spectral range further confirms that the Watson-Crick isomer is preferentially produced (91%) at pH = 5.8, while the Hoogsteen isomer predominates (66%) at pH = 3.2. These fingerprint signatures are expected to be useful for the development of new analytical methodologies and to trigger isomer-selective photochemical studies of protonated DNA base pairs.
NASA Astrophysics Data System (ADS)
Kuhlman, K. L.; Neuman, S. P.
2006-12-01
Furman and Neuman (2003) proposed a Laplace Transform Analytic Element Method (LT-AEM) for transient groundwater flow. LT-AEM applies the traditionally steady-state AEM to the Laplace-transformed groundwater flow equation, and back-transforms the resulting solution to the time domain using a Fourier series numerical inverse Laplace transform method (de Hoog et al., 1982). We have extended the method so it can compute hydraulic head and flow velocity distributions due to any two-dimensional combination and arrangement of point, line, circular and elliptical area sinks and sources, nested circular or elliptical regions having different hydraulic properties, and areas of specified head, flux or initial condition. The strengths of all sinks and sources, and the specified head and flux values, can all vary in both space and time in an independent and arbitrary fashion. Initial conditions may vary from one area element to another. A solution is obtained by matching heads and normal fluxes along the boundary of each element. The effect which each element has on the total flow is expressed in terms of generalized Fourier series which converge rapidly (<20 terms) in most cases. As there are more matching points than unknown Fourier terms, the matching is accomplished in Laplace space using least squares. The method is illustrated by calculating the resulting transient heads and flow velocities due to an arrangement of elements in both finite and infinite domains. The 2D LT-AEM elements already developed and implemented are currently being extended to solve the 3D groundwater flow equation.
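The de Hoog et al. (1982) Fourier-series inversion used by LT-AEM is one of several numerical inverse Laplace transforms. To give a feel for the back-transformation step alone, the sketch below uses the simpler Gaver-Stehfest algorithm (shown as an alternative, not the de Hoog scheme itself) on the Laplace-domain drawdown of a continuously pumped well, s̄(p) = Q K0(r sqrt(pS/T)) / (2πTp), whose time-domain counterpart is the classical Theis solution. The aquifer parameters are placeholders.

```python
# Gaver-Stehfest numerical inverse Laplace transform, tested on the Theis well problem
# (a simpler alternative to the de Hoog Fourier-series method cited in the abstract).
from math import factorial, log
import numpy as np
from scipy.special import k0, exp1

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def invert(F, t, N=12):
    """Approximate f(t) = L^{-1}[F](t) by the Gaver-Stehfest formula."""
    V = stehfest_weights(N)
    p = np.arange(1, N + 1) * log(2.0) / t
    return log(2.0) / t * np.dot(V, F(p))

# Laplace-domain drawdown for a continuously pumped well (Theis problem), used as a test
Q, T, S, r = 1.0e-3, 1.0e-3, 1.0e-4, 10.0      # placeholder aquifer parameters (SI units)
F = lambda p: Q * k0(r * np.sqrt(p * S / T)) / (2.0 * np.pi * T * p)

t = 3600.0                                      # 1 hour
u = r * r * S / (4.0 * T * t)
print("Stehfest:", invert(F, t), " Theis:", Q / (4.0 * np.pi * T) * exp1(u))
```

In LT-AEM the Laplace-domain field is built by superposing element influence functions before the inversion step; the inversion itself operates on a handful of p-values per output time, which is why the approach stays cheap.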
1986-07-01
pure water. Dissolved ions in the soil solution lower the freezing point; this is called freezing point depression. Many of the early studies of... them in the remaining soil solution. The temperature and concentration of this solution affect the chemical reactions and the forms of ions in... in the soil solution freezes, more concentrated solutes will be present in the soil solution. 3. Water will travel even in frozen soils and sediments
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
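A stripped-down version of the hierarchy described above can be sketched as follows: partition the unit square (here by a plain uniform grid rather than a random dyadic tiling), build a coarse tour over cluster barycenters with a nearest-neighbour heuristic, order the points inside each cluster the same way, and concatenate. This is only meant to convey the structure of the approach; it omits the edge midpoints, the near-optimal cluster tours, and the partition optimization used by the authors.

```python
# Hierarchical TSP heuristic sketch (uniform-grid clusters instead of a random dyadic tiling;
# nearest-neighbour tours instead of the near-optimal tours used in the paper).
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((2000, 2))                 # i.i.d. uniform points in the unit square
g = 8                                       # g x g grid of tiles

def nn_tour(points):
    """Greedy nearest-neighbour ordering of a point set, returned as index order."""
    order, left = [0], list(range(1, len(points)))
    while left:
        last = points[order[-1]]
        nxt = min(left, key=lambda i: np.sum((points[i] - last) ** 2))
        order.append(nxt)
        left.remove(nxt)
    return order

# cluster points by grid cell and compute barycenters
cell = np.minimum((pts * g).astype(int), g - 1)
clusters = {}
for i, key in enumerate(map(tuple, cell)):
    clusters.setdefault(key, []).append(i)
cluster_lists = list(clusters.values())
centers = np.array([pts[idx].mean(axis=0) for idx in cluster_lists])

# coarse tour over barycenters, then expand each cluster with its own local tour
tour = []
for ci in nn_tour(centers):
    idx = np.array(cluster_lists[ci])
    tour.extend(idx[nn_tour(pts[idx])])

tour = np.array(tour)
length = np.sum(np.linalg.norm(pts[tour] - pts[np.roll(tour, -1)], axis=1))
print(f"hierarchical tour length over {len(pts)} points: {length:.2f}")
```

Replacing the per-cluster nearest-neighbour ordering with an exact or 2-opt-improved tour, and stitching clusters through midpoints as in the paper, is what closes most of the gap to the reported 8-13% excess over optimal.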
Electrical distribution studies for the 200 Area tank farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisler, J.B.
1994-08-26
This is an engineering study providing reliability numbers for various design configurations as well as computer analyses (Captor/Dapper) of the existing distribution system to the 480V side of the unit substations. The objective of the study was to assure the adequacy of the existing electrical system components from the connection at the high voltage supply point through the transformation and distribution equipment to the point where it is reduced to its useful voltage level. It also evaluated the reasonableness of proposed solutions to identified deficiencies and recommended possible alternative solutions. The electrical utilities are normally considered the most vital of the utility systems on a site because all other utility systems depend on electrical power. The system accepts electric power from the external sources, reduces it to a lower voltage, and distributes it to end-use points throughout the site. By classic definition, all utility systems extend to a point 5 feet from the facility perimeter. An exception is made to this definition for the electric utilities at this site. The electrical utility system ends at the low voltage section of the unit substation, which reduces the voltage from 13.8 kV to 2,400, 480, 277/480 or 120/208 volts. These transformers are located at various distances from existing facilities. The adequacy of the distribution system which transports the power from the main substation to the individual area substations and other load centers is evaluated and factored into the impact of the future load forecast.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, David F.
A reciprocity theorem is an explicit mathematical relationship between two different wavefields that can exist within the same space-time configuration. Reciprocity theorems provide the theoretical underpinning for modern full waveform inversion solutions, and also suggest practical strategies for speeding up large-scale numerical modeling of geophysical datasets. In the present work, several previously-developed electromagnetic reciprocity theorems are generalized to accommodate a broader range of medium, source, and receiver types. Reciprocity relations enabling the interchange of various types of point sources and point receivers within a three-dimensional electromagnetic model are derived. Two numerical modeling algorithms in current use are successfully tested for adherence to reciprocity. Finally, the reciprocity theorem forms the point of departure for a lengthy derivation of electromagnetic Frechet derivatives. These mathematical objects quantify the sensitivity of geophysical electromagnetic data to variations in medium parameters, and thus constitute indispensable tools for solution of the full waveform inverse problem. ACKNOWLEDGEMENTS: Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Significant portions of the work reported herein were conducted under a Cooperative Research and Development Agreement (CRADA) between Sandia National Laboratories (SNL) and CARBO Ceramics Incorporated. The author acknowledges Mr. Chad Cannan and Mr. Terry Palisch of CARBO Ceramics, and Ms. Amy Halloran, manager of SNL's Geophysics and Atmospheric Sciences Department, for their interest in and encouragement of this work. Special thanks are due to Dr. Lewis C. Bartel (recently retired from Sandia National Laboratories and now a geophysical consultant) and Dr. Chester J. Weiss (recently rejoined with Sandia National Laboratories) for many stimulating (and reciprocal!) discussions regarding the topic at hand.
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
Phase-plane analysis to an “anisotropic” higher-order traffic flow model
NASA Astrophysics Data System (ADS)
Wu, Chun-Xiu
2018-04-01
The qualitative theory of differential equations is applied to investigate the traveling wave solution of an "anisotropic" higher-order viscous traffic flow model under the Lagrange coordinate system. The types and stabilities of the equilibrium points are discussed in the phase plane. Through numerical simulation, the overall distribution structures of the trajectories are drawn to analyze the relation between the phase diagram and the selected conservative solution variables, and the influences of the parameters on the system are studied. Limit cycle, limit cycle-spiral point, saddle-spiral point and saddle-nodal point solutions are obtained. These steady-state solutions provide a good explanation of the phenomena of oscillatory and homogeneous congestion in real-world traffic.
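The phase-plane workflow described above (locate the equilibria, classify them from the eigenvalues of the Jacobian, and trace trajectories around them) can be sketched for a generic planar system. The system below, a damped pendulum-like pair of ODEs, is only a stand-in and is not the traveling-wave reduction of the traffic model.

```python
# Generic phase-plane analysis sketch (stand-in planar system, not the traffic-flow reduction):
# classify equilibria from the Jacobian eigenvalues and integrate nearby trajectories.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    x, v = y
    return [v, -0.4 * v - np.sin(x)]        # damped pendulum-like system

def jacobian(x, v):
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -0.4]])

def classify(eq):
    lam = np.linalg.eigvals(jacobian(*eq))
    if np.all(np.isreal(lam)):
        if lam.real.max() * lam.real.min() < 0:
            return "saddle point"
        return "node (stable)" if lam.real.max() < 0 else "node (unstable)"
    return "spiral (stable)" if lam.real.max() < 0 else "spiral (unstable)"

for eq in [(0.0, 0.0), (np.pi, 0.0)]:       # equilibria of the stand-in system
    print(eq, "->", classify(eq))

# a few trajectories around the origin, as would be drawn in the phase plane
for x0 in ([0.5, 0.0], [2.0, 0.0], [3.0, 0.5]):
    sol = solve_ivp(rhs, (0.0, 30.0), x0, max_step=0.05)
    print("start", x0, "-> end", sol.y[:, -1].round(3))
```

In the traffic-flow setting the right-hand side would come from the traveling-wave reduction of the model in Lagrange coordinates, and the same eigenvalue test distinguishes the spiral, saddle, and nodal structures reported in the abstract.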
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
A method to approximate a closest loadability limit using multiple load flow solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong
A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle node bifurcation point, using a pair of multiple load flow solutions. More strictly, the points obtainable by the method are the stationary points, including not only the CLL but also farthest and saddle points. An operating solution and a low voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. They can be used in monitoring the loadability margin, in identification of weak spots in a power system, and in the examination of an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair. The remaining computation time is less than that of an ordinary load flow.
Chen, Chenglong; Gao, Ming; Xie, Deti; Ni, Jiupai
2016-04-01
Losses of agricultural pollutants from small catchments are a major issue for water quality in the Three Gorges Region, and solutions are urgently needed. Before pollutant losses can be controlled, however, information about spatial and temporal variations in pollutant losses is needed. The study was carried out in the Wangjiagou catchment, a small agricultural catchment in Fuling District, Chongqing, where data on non-point source losses of nitrogen and phosphorus were collected. Water samples were collected daily by an automatic water sampler at the outlets of two subcatchments from 2012 to 2014. In addition, samples of surface runoff from 28 sampling sites distributed through the subcatchments were collected during 12 rainfall events in 2014. A range of water quality variables were analyzed for all samples and were used to characterize the variation in non-point losses of nitrogen and phosphorus over a range of temporal and spatial scales and for different types of rainfall in the catchment. Results showed a significant linear correlation between the mass concentrations of total nitrogen (TN) and nitrate (NO3-N) in surface runoff, and this relationship was maintained over time. Concentrations of TN and NO3-N peaked after fertilizer was applied to crops in spring and autumn; concentrations decreased rapidly after the peak values in spring but declined slowly in autumn. N and P concentrations fluctuated more and showed a greater degree of dispersion during the spring crop cultivation period than in autumn. Concentrations of TN and NO3-N in surface runoff were significantly and positively correlated with the proportion of the area planted with corn and mustard tubers, but were negatively correlated with the proportion of the area occupied by rice and mulberry plantations. The average concentrations of TN and NO3-N in surface runoff were highest at the sampling points at the bottom of land used only for corn, and lowest in rice fields. Slope gradient had a significant positive correlation with the loss concentrations of TN and total phosphorus (TP). Concentrations of TN, NO3-N, and TP were significantly correlated with rainfall. Peak concentrations of ammoniacal nitrogen occurred during the fertilizer application periods in spring and autumn. Different land use structures had a significant influence on the loss concentrations of nitrogen and phosphorus; thus, appropriate adjustment of the land use structure and spatial arrangement of the whole catchment is an effective way to control non-point source pollution in the Three Gorges Region.
The scattering of Lyα radiation in the intergalactic medium: numerical methods and solutions
NASA Astrophysics Data System (ADS)
Higgins, Jonathan; Meiksin, Avery
2012-11-01
Two methods are developed for solving the steady-state spherically symmetric radiative transfer equation for resonance line radiation emitted by a point source in the intergalactic medium, in the context of the Wouthuysen-Field mechanism for coupling the hyperfine structure spin temperature of hydrogen to the gas temperature. One method is based on solving the ray and moment equations using finite differences. The second uses a Monte Carlo approach incorporating methods that greatly improve the accuracy compared with previous approaches in this context. Several applications are presented serving as test problems for both a static medium and an expanding medium, including inhomogeneities in the density and velocity fields. Solutions are obtained in the coherent scattering limit and for Doppler RII redistribution with and without recoils. We find generally that the radiation intensity is linear in the cosine of the azimuthal angle with respect to radius to high accuracy over a broad frequency region across the line centre for both linear and perturbed velocity fields, yielding the Eddington factors fν ≃ 1/3 and gν ≃ 3/5. The radiation field produced by a point source divides into three spatial regimes for a uniformly expanding homogeneous medium. The regimes are governed by the fraction of the distance r from the source in terms of the distance r* required for a photon to redshift from line centre to the frequency needed to escape from the expanding gas. For a standard cosmology, before the Universe was reionized r* takes on the universal value independent of redshift of 1.1 Mpc, depending only on the ratio of the baryon to dark matter density. At r/r* < 1, the radiation field is accurately described in the diffusion approximation, with the scattering rate declining with the distance from the source as r-7/3, except at r/r* ≪ 1 where frequency redistribution nearly doubles the mean intensity around line centre. At r/r* > 1, the diffusion approximation breaks down and the decline of the mean intensity near line centre and the scattering rate approach the geometric dilution scaling 1/r2. The mean intensity and scattering rate are found to be very sensitive to the gradient of the velocity field, growing exponentially with the amplitude of the perturbation as the limit of a vanishing velocity gradient is approached near the source. We expect the 21-cm signal from the epoch of reionization to thus be a sensitive probe of both the density and the peculiar velocity fields. The solutions for the mean intensity are made available in machine-readable format.
Relative Water Uptake as a Criterion for the Design of Trickle Irrigation Systems
NASA Astrophysics Data System (ADS)
Communar, G.; Friedman, S. P.
2008-12-01
Previously derived analytical solutions to the 2- and 3-dimensional water flow problems describing trickle irrigation are not widely used in practice because those formulations either ignore root water uptake or treat it as a known input. In this lecture we describe a new modeling approach and demonstrate its applicability for designing the geometry of trickle irrigation systems, namely the spacing between the emitters and drip lines. The major difference between our approach and previous modeling approaches is that we treat the root water uptake as the unknown solution of the problem and not as a known input. We postulate that the solution to the steady-state water flow problem with a root sink acting under constant, maximum suction defines an upper bound to the relative water uptake (water use efficiency) in actual transient situations, and we propose to use it as a design criterion. Following previous derivations of analytical solutions, we assume that the soil hydraulic conductivity increases exponentially with its matric head, which allows the linearization of the Richards equation, formulated in terms of the Kirchhoff matric flux potential. Since the transformed problem is linear, the relative water uptake for any given configuration of point or line sources and sinks can be calculated by superposition of the Green's functions of all relevant water sources and sinks. In addition to evaluating the relative water uptake, we also derived analytical expressions for the stream functions. The streamlines separating the water uptake zone from the percolating water provide insight into the dependence of the shape and extent of the actual rooting zone on the source-sink geometry and soil properties. A minimal number of just three system parameters, Gardner's (1958) alpha as a soil type quantifier and the depth and diameter of the pre-assumed active root zone, is sufficient to characterize the interplay between capillary and gravitational effects on water flow and the competition between the processes of root water uptake and percolation. For accounting also for evaporation from the soil surface, when significant, another parameter is required, adopting the solution of Lomen and Warrick (1978).
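The superposition step can be made concrete for the simplest building block: the steady matric flux potential of a buried point source in a Gardner-type soil (conductivity K = Ks exp(alpha*h)), for which a commonly quoted unbounded-domain form is Phi = Q exp[alpha(z - rho)/2] / (4*pi*rho), with rho the distance from the source and z the downward offset. The sketch below superposes one emitter source and one lumped root sink of this form; treat the formula normalization, the parameter values, and the sink representation as assumptions for illustration rather than the lecture's exact solution, which also handles the soil surface and the stream functions.

```python
# Superposition sketch for linearized steady flow from buried point sources/sinks in a
# Gardner-type soil (unbounded-domain building block; parameter values are illustrative).
import numpy as np

alpha = 2.0          # Gardner soil parameter, 1/m (illustrative)

def phi_point(x, y, z, x0, y0, z0, Q):
    """Matric flux potential of a buried point source of strength Q at (x0, y0, z0).
    z is depth, positive downward; commonly quoted unbounded-domain form."""
    rho = np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2)
    return Q * np.exp(alpha * ((z - z0) - rho) / 2.0) / (4.0 * np.pi * rho)

# one emitter (positive Q) and one lumped root-uptake "sink" (negative Q)
sources = [(0.0, 0.0, 0.02, 2.0e-3),     # dripper just below the surface
           (0.0, 0.0, 0.30, -1.2e-3)]    # root sink at 0.30 m depth

# evaluate the superposed potential on a vertical transect slightly off the axis
z = np.linspace(0.05, 1.0, 20)
phi = sum(phi_point(0.01, 0.0, z, xs, ys, zs, Q) for xs, ys, zs, Q in sources)
for zi, pi in zip(z[::4], phi[::4]):
    print(f"depth {zi:4.2f} m : Phi = {pi: .3e}")
```

Because the transformed problem is linear, additional emitters along a drip line, or additional root sinks, simply add more terms of the same form, which is what makes the spacing-design calculations described above tractable.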
Focal points and principal solutions of linear Hamiltonian systems revisited
NASA Astrophysics Data System (ADS)
Šepitka, Peter; Šimon Hilscher, Roman
2018-05-01
In this paper we present a novel view on the principal (and antiprincipal) solutions of linear Hamiltonian systems, as well as on the focal points of their conjoined bases. We present a new and unified theory of principal (and antiprincipal) solutions at a finite point and at infinity, and apply it to obtain new representation of the multiplicities of right and left proper focal points of conjoined bases. We show that these multiplicities can be characterized by the abnormality of the system in a neighborhood of the given point and by the rank of the associated T-matrix from the theory of principal (and antiprincipal) solutions. We also derive some additional important results concerning the representation of T-matrices and associated normalized conjoined bases. The results in this paper are new even for completely controllable linear Hamiltonian systems. We also discuss other potential applications of our main results, in particular in the singular Sturmian theory.
An IoT-Based Solution for Monitoring a Fleet of Educational Buildings Focusing on Energy Efficiency.
Amaxilatis, Dimitrios; Akrivopoulos, Orestis; Mylonas, Georgios; Chatzigiannakis, Ioannis
2017-10-10
Raising awareness among young people and changing their behaviour and habits concerning energy usage is key to achieving sustained energy saving. Additionally, young people are very sensitive to environmental protection so raising awareness among children is much easier than with any other group of citizens. This work examines ways to create an innovative Information & Communication Technologies (ICT) ecosystem (including web-based, mobile, social and sensing elements) tailored specifically for school environments, taking into account both the users (faculty, staff, students, parents) and school buildings, thus motivating and supporting young citizens' behavioural change to achieve greater energy efficiency. A mixture of open-source IoT hardware and proprietary platforms on the infrastructure level, are currently being utilized for monitoring a fleet of 18 educational buildings across 3 countries, comprising over 700 IoT monitoring points. Hereon presented is the system's high-level architecture, as well as several aspects of its implementation, related to the application domain of educational building monitoring and energy efficiency. The system is developed based on open-source technologies and services in order to make it capable of providing open IT-infrastructure and support from different commercial hardware/sensor vendors as well as open-source solutions. The system presented can be used to develop and offer new app-based solutions that can be used either for educational purposes or for managing the energy efficiency of the building. The system is replicable and adaptable to settings that may be different than the scenarios envisioned here (e.g., targeting different climate zones), different IT infrastructures and can be easily extended to accommodate integration with other systems. The overall performance of the system is evaluated in real-world environment in terms of scalability, responsiveness and simplicity.
Naked shell singularities on the brane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seahra, Sanjeev S.
By utilizing nonstandard slicings of 5-dimensional Schwarzschild and Schwarzschild-AdS manifolds based on isotropic coordinates, we generate static and spherically-symmetric braneworld spacetimes containing shell-like naked null singularities. For planar slicings, we find that the brane-matter sourcing the solution is a perfect fluid with an exotic equation of state and a pressure singularity where the brane crosses the bulk horizon. From a relativistic point of view, such a singularity is required to maintain matter infinitesimally above the surface of a black hole. From the point of view of the AdS/CFT conjecture, the singular horizon can be seen as one possible quantum correction to a classical black hole geometry. Various generalizations of planar slicings are also considered for a Ricci-flat bulk, and we find that singular horizons and exotic matter distributions are common features.
Is thermal dispersivity significant for the use of heat as a tracer?
NASA Astrophysics Data System (ADS)
Rau, G. C.; Andersen, M. S.; Acworth, I.
2011-12-01
Heat profiles are regularly used to estimate sediment thermal parameters and to quantify vertical water flow velocity in fully saturated porous media. However, it has been pointed out by several authors that there is disagreement regarding the use of thermal dispersivity in heat transport models [e.g. Anderson, 2005]. Some researchers argue that this term should be treated analogously to solute transport [e.g. de Marsily, 1986], whilst others state that because heat diffusion is much faster than solute diffusion the dispersivity term can be neglected [e.g. Ingebritsen and Sanford, 1998]. This issue has never been properly addressed experimentally for environmentally relevant conditions. In order to address this question, a hydraulic laboratory experiment was designed to investigate heat transport for different steady-state uniform flow velocities in the Darcy range (between 0 and 100 m/d) through homogeneous sand. For each flow velocity a point heat source at the center of the tank was instantaneously activated, and the thermal response was measured at 27 different locations using high resolution temperature probes. For the same flow velocities, a solute slug was injected in the center of the tank and the solute slug breakthrough was measured using 3 fluid EC sensors at different distances downstream of the injection point. This enabled direct comparison of solute and heat transport under identical conditions. The recorded temperature time-series data were used to calculate the thermal properties of the sand for conduction only and to estimate water flow velocity and thermal dispersion. The recorded EC time-series data were used to independently estimate water flow velocity as well as solute dispersivity. The analytical solution for the solute transport case [Hunt, 1978] was adapted for heat transport and extended to account for slightly non-ideal experiment conditions. Velocity results independently derived from solute and heat show a discrepancy of up to 20%. The reason for this is not clear. Furthermore, the results show that thermal dispersivity can best be approximated with a square dependency on flow velocity. This agrees with earlier experiments in ideal materials by Green et al. [1964] as well as theoretical derivations [Kaviany, 1995]. However, this is in contrast to the linear dispersion model, which has been adapted from solute transport and is commonly used in groundwater studies. The experimental results can be visualized in a conceptual plot devised by Bear [1972] for solute dispersion data (Figure 1). From this it becomes clear that the heat and solute transport Peclet numbers differ by several orders of magnitude for the same flow velocity and material, because diffusion of heat is much faster than solute diffusion. As a result, the same Darcy flow range covers a different Peclet number range in heat transport and solute transport. This explains the controversy in the hydrologic community regarding the use of thermal dispersivity in transport models. In summary, for this experiment thermal dispersivity can be neglected when thermal Pe < 0.5, but should be considered for Pe > 0.5 with a square dependency on velocity.
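To make the heat-versus-solute Peclet contrast concrete, here is a minimal sketch; the grain size and diffusivities are assumed order-of-magnitude values, not the parameters of the experiment described above.

```python
# Order-of-magnitude comparison of thermal vs. solute Peclet numbers for the
# same Darcy flux. All parameter values are illustrative assumptions.

def peclet(velocity_m_per_d, grain_size_m, diffusivity_m2_per_s):
    """Pe = v * d / D, with the velocity converted from m/d to m/s."""
    v = velocity_m_per_d / 86400.0
    return v * grain_size_m / diffusivity_m2_per_s

d = 5e-4           # characteristic grain size of a medium sand (m), assumed
D_thermal = 1e-6   # bulk thermal diffusivity of saturated sand (m^2/s), typical value
D_solute = 1e-9    # molecular diffusion coefficient of a solute (m^2/s), typical value

for q in (1.0, 10.0, 100.0):   # Darcy fluxes in m/d, spanning the experimental range
    print(f"q = {q:5.1f} m/d   Pe_heat = {peclet(q, d, D_thermal):7.3f}   "
          f"Pe_solute = {peclet(q, d, D_solute):9.1f}")
```

For the same flux, the thermal Peclet number comes out roughly three orders of magnitude smaller than the solute one, which is the gap the abstract attributes the controversy to.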
Terrain shape estimation from optical flow, using Kalman filtering
NASA Astrophysics Data System (ADS)
Hoff, William A.; Sklair, Cheryl W.
1990-01-01
As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration -the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
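A minimal scalar Kalman-filter sketch of the incremental range refinement described above; the one-dimensional, static-range simplification and all numbers are assumptions for illustration, not the paper's full extended Kalman filter with camera-motion error modelling.

```python
# Each new frame yields a noisy range measurement for a tracked edge point;
# the filter incrementally refines the range estimate and its uncertainty.
import random

def kalman_update(x, P, z, R):
    """One measurement update for a static state (range to a fixed point)."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected range estimate
    P = (1.0 - K) * P        # reduced uncertainty
    return x, P

true_range = 120.0           # metres, hypothetical terrain point
x, P = 100.0, 50.0**2        # poor initial guess with large variance
R = 5.0**2                   # per-frame measurement noise variance (assumed)

random.seed(0)
for frame in range(10):
    z = true_range + random.gauss(0.0, 5.0)   # noisy range from optical flow
    x, P = kalman_update(x, P, z, R)
    print(f"frame {frame}: range = {x:7.2f} m, sigma = {P**0.5:5.2f} m")
```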
NASA Astrophysics Data System (ADS)
Glibitskiy, Dmitriy M.; Gorobchenko, Olga A.; Nikolov, Oleg T.; Cheipesh, Tatiana A.; Roshal, Alexander D.; Zibarov, Artem M.; Shestopalova, Anna V.; Semenov, Mikhail A.; Glibitskiy, Gennadiy M.
2018-03-01
Formation of patterns on the surface of dried films of saline biopolymer solutions is influenced by many factors, including particle size and structure. Proteins may be modified under the influence of ionizing radiation. By irradiating protein solutions with gamma rays, it is possible to affect the formation of zigzag (Z) structures on the film surface. In our study, the films were obtained by desiccation of bovine serum albumin (BSA) solutions, which were irradiated by a 60Co gamma-source at doses ranging from 1 Gy to 12 kGy. The analysis of the resulting textures on the surface of the films was carried out by calculating the specific length of Z-structures. The results are compared against the absorption and fluorescence spectroscopy and dynamic light scattering (DLS) data. Gamma-irradiation of BSA solutions in the 1-200 Gy range practically does not influence the amount of Z-structures on the film surface. The decrease in fluorescence intensity and increase in absorbance intensity point to the destruction of BSA structure at 2 and 12 kGy, and DLS shows a more than 160% increase in particle size as a result of BSA aggregation at 2 kGy. This prevents the formation of Z-structures, which is reflected in the decrease of their specific length.
Exact Holography of Massive M2-brane Theories and Entanglement Entropy
NASA Astrophysics Data System (ADS)
Jang, Dongmin; Kim, Yoonbai; Kwon, O.-Kab; Tolla, D. D.
2018-01-01
We test the gauge/gravity duality between the N = 6 mass-deformed ABJM theory with U_k(N) × U_{-k}(N) gauge symmetry and the 11-dimensional supergravity on LLM geometries with SO(4)/ℤ_k × SO(4)/ℤ_k isometry. Our analysis is based on the evaluation of vacuum expectation values of chiral primary operators from the supersymmetric vacua of mass-deformed ABJM theory and from the implementation of Kaluza-Klein (KK) holography to the LLM geometries. We focus on the chiral primary operator (CPO) with conformal dimension Δ = 1. The non-vanishing vacuum expectation value (vev) implies the breaking of conformal symmetry. In that case, we show that the variation of the holographic entanglement entropy (HEE) from its value in the CFT is related to the non-vanishing one-point function due to the relevant deformation as well as the source field. Applying the Ryu-Takayanagi HEE conjecture to the 4-dimensional gravity solutions, which are obtained from the KK reduction of the 11-dimensional LLM solutions, we calculate the variation of the HEE. We show how the vev and the value of the source field determine the HEE.
Acoustic 3D modeling by the method of integral equations
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method could accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations, and also for parallelizing across multiple sources. The practical examples and efficiency tests are presented as well.
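The FFT-based matrix-vector product is the step that keeps the dense IE matrix tractable. The sketch below shows the underlying idea for a 1-D translation-invariant (Toeplitz) kernel, with a made-up decaying kernel standing in for the free-space Green's function; in 2-D/3-D the same zero-padded FFT convolution is applied to the block-Toeplitz structure of free-space and layered-media kernels.

```python
# For a translation-invariant kernel the dense matrix is Toeplitz, so A @ x
# can be applied as a zero-padded circular convolution in O(N log N).
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    """Apply the Toeplitz matrix defined by its first column/row to x via FFT."""
    n = len(x)
    # Embed the Toeplitz matrix in a circulant matrix of size 2n - 1.
    c = np.concatenate([first_col, first_row[:0:-1]])
    X = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(X)).real
    return y[:n]

n = 256
dist = np.arange(n, dtype=float)
kernel = np.exp(-0.05 * dist) / (1.0 + dist)   # assumed decaying interaction kernel
x = np.random.default_rng(0).standard_normal(n)

fast = toeplitz_matvec_fft(kernel, kernel, x)  # symmetric Toeplitz in this example
dense = np.array([[kernel[abs(i - j)] for j in range(n)] for i in range(n)]) @ x
print("max abs error vs dense product:", np.max(np.abs(fast - dense)))
```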
A near-Infrared SETI Experiment: Alignment and Astrometric precision
NASA Astrophysics Data System (ADS)
Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan
2016-06-01
Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument, aiming to search for fast nanosecond laser pulses, has been commissioned on the Nickel 1m-telescope at Lick Observatory. The NIROSETI instrument makes use of an optical guide camera, a SONY ICX694 CCD from PointGrey, to align our selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APDs) with a field-of-view of 2.5"x2.5" each. These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, static optical distortion solution, and relative orientation with respect to the APD detectors. We determined the guide camera plate scale as 55.9 ± 2.7 milliarcseconds/pixel and a magnitude limit of 18.15 mag (+1.07/-0.58) in V-band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.
Hybrid diffusion-P3 equation in N-layered turbid media: steady-state domain.
Shi, Zhenzhi; Zhao, Huijuan; Xu, Kexin
2011-10-01
This paper discusses light propagation in N-layered turbid media. The hybrid diffusion-P3 equation is solved for an N-layered finite or infinite turbid medium in the steady-state domain for one point source using the extrapolated boundary condition. The Fourier transform formalism is applied to derive the analytical solutions of the fluence rate in Fourier space. Two inverse Fourier transform methods are developed to calculate the fluence rate in real space. In addition, the solutions of the hybrid diffusion-P3 equation are compared to the solutions of the diffusion equation and the Monte Carlo simulation. For the case of small absorption coefficients, the solutions of the N-layered diffusion equation and hybrid diffusion-P3 equation are almost equivalent and are in agreement with the Monte Carlo simulation. For the case of large absorption coefficients, the model of the hybrid diffusion-P3 equation is more precise than that of the diffusion equation. In conclusion, the model of the hybrid diffusion-P3 equation can replace the diffusion equation for modeling light propagation in the N-layered turbid media for a wide range of absorption coefficients.
An approach for the regularization of a power flow solution around the maximum loading point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kataoka, Y.
1992-08-01
In the conventional power flow solution, the boundary conditions are directly specified by active power and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for achieving wide-range regularity by modifying the conventional power flow solution method, thereby eliminating the singular point or shifting it to the region with a voltage lower than that of the maximum loading point. Then, the continuous computation of V-P curves including the maximum loading point is realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent. Complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies the contributions of the point sources in the shadow region, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
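A small sketch of the FWHM measurement itself, using linear interpolation at the half-maximum crossings of a 1-D profile; the Gaussian profile is synthetic, whereas in practice the profile would be taken through the reconstructed point source (with background) along a principal direction.

```python
# Measure FWHM of a single-peaked, background-free 1-D profile by linear
# interpolation at the half-maximum crossings on each side of the peak.
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a single-peaked profile."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i_left, i_right = above[0], above[-1]
    xl = np.interp(half, [profile[i_left - 1], profile[i_left]],
                   [x[i_left - 1], x[i_left]])
    xr = np.interp(half, [profile[i_right + 1], profile[i_right]],
                   [x[i_right + 1], x[i_right]])
    return xr - xl

x = np.arange(0.0, 20.0, 0.1)                  # mm, assumed pixel grid
sigma = 0.8                                    # mm, assumed point-spread width
profile = np.exp(-0.5 * ((x - 10.0) / sigma) ** 2)
print("measured FWHM: %.3f mm (Gaussian theory %.3f mm)" % (fwhm(x, profile), 2.355 * sigma))
```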
Propagation of sound waves through a linear shear layer: A closed form solution
NASA Technical Reports Server (NTRS)
Scott, J. N.
1978-01-01
Closed form solutions are presented for sound propagation from a line source in or near a shear layer. The analysis was exact for all frequencies and was developed assuming a linear velocity profile in the shear layer. This assumption allowed the solution to be expressed in terms of parabolic cylinder functions. The solution is presented for a line monopole source first embedded in the uniform flow and then in the shear layer. Solutions are also discussed for certain types of dipole and quadrupole sources. Asymptotic expansions of the exact solutions for small and large values of Strouhal number gave expressions which correspond to solutions previously obtained for these limiting cases.
NASA Technical Reports Server (NTRS)
Fowell, Richard A.
1989-01-01
Most simulation plots are heavily oversampled. Ignoring unnecessary data points dramatically reduces plot time with imperceptible effect on quality. The technique is suited to most plot devices. The department's laser printer speed was tripled for large simulation plots by data thinning. This reduced printer delays without the expense of a faster laser printer. Surprisingly, it saved computer time as well. All plot data are now thinned, including PostScript and terminal plots. The problem, solution, and conclusions are described. The thinning algorithm is described and performance studies are presented. To obtain FORTRAN 77 or C source listings, mail a SASE to the author.
1988-01-01
for hydrazine, MMH and UDMH are 4.78 x 10-6, 10.2 x 10Ś, and 3.19 x 10-6 sec-1, respectively. Plots of the log(area) versus time were linear and...followed first-order kinetics except for hydrazine, for which a non-linear portion was observed in the first 6 to 8 hours. This portion of the decay...As a result, the prototype flow reactor can be represented to good approximation by a linear combination of point source solutions (Reference 19). The
Sauer, Michael; Marx, Hans; Mattanovich, Diethard
2012-09-10
The rumen is one of the most complicated and most fascinating microbial ecosystems in nature. A wide variety of microbial species, including bacteria, fungi and protozoa, act together to bioconvert (ligno)cellulosic plant material into compounds, which can be taken up and metabolized by the ruminant. Thus, the rumen perfectly resembles a solution to a current industrial problem: the biorefinery, which aims at the bioconversion of lignocellulosic material into fuels and chemicals. We suggest intensifying the study of the ruminal microbial ecosystem from an industrial microbiologist's point of view in order to make use of this rich source of organisms and enzymes.
A diagonal implicit scheme for computing flows with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Imlay, Scott
1990-01-01
A new algorithm for solving steady, finite-rate chemistry, flow problems is presented. The new scheme eliminates the expense of inverting large block matrices that arise when species conservation equations are introduced. The source Jacobian matrix is replaced by a diagonal matrix which is tailored to account for the fastest reactions in the chemical system. A point-implicit procedure is discussed and then the algorithm is included into the LU-SGS scheme. Solutions are presented for hypervelocity reentry and Hydrogen-Oxygen combustion. For the LU-SGS scheme a CFL number in excess of 10,000 has been achieved.
HST WFC3/IR Calibration Updates
NASA Astrophysics Data System (ADS)
Durbin, Meredith; Brammer, Gabriel; Long, Knox S.; Pirzkal, Norbert; Ryan, Russell E.; McCullough, Peter R.; Baggett, Sylvia M.; Gosmeyer, Catherine; Bourque, Matthew; HST WFC3 Team
2016-01-01
We report on several improvements to the characterization, monitoring, and calibration of the HST WFC3/IR detector. The detector performance has remained overall stable since its installation during HST Servicing Mission 4 in 2009. We present an updated persistence model that takes into account effects of exposure time and spatial variations in persistence across the detector, new grism wavelength solutions and master sky images, and a new SPARS sample sequence. We also discuss the stability of the IR gain, the time evolution and photometric properties of IR "snowballs," and the effect of IR "blobs" on point-source photometry.
NASA Astrophysics Data System (ADS)
Tsurumi, Makoto; Takahashi, Akira; Ichikuni, Masami
An iterative least-squares method with a receptor model was applied to the analytical data of the precipitation samples collected at 23 points in the suburban area of Tokyo, and the number and composition of the source materials were determined. Thirty-nine monthly bulk precipitation samples were collected in the spring and summer of 1987 from the hilly and mountainous area of Tokyo and analyzed for Na+, K+, NH4+, Mg2+, Ca2+, F-, Cl-, Br-, NO3- and SO42- by atomic absorption spectrometry and ion chromatography. The pH of the samples was also measured. A multivariate ion balance approach (Tsurumi, 1982, Anal. Chim. Acta 138, 177-182) showed that the solutes in the precipitation were derived from just three major sources: sea salt, an acid substance (a mixture of 53% HNO3, 39% H2SO4 and 8% HCl in equivalents) and CaSO4. The contributions of each source to the precipitation were calculated for every sampling site. Variations of the contributions with the distance from the coast were also discussed.
Jiang, Jheng Jie; Lee, Chon Lin; Fang, Meng Der; Boyd, Kenneth G.; Gibb, Stuart W.
2015-01-01
This paper presents a methodology based on multivariate data analysis for characterizing potential source contributions of emerging contaminants (ECs) detected in 26 river water samples across multi-scape regions during dry and wet seasons. Based on this methodology, we unveil an approach toward potential source contributions of ECs, a concept we refer to as the “Pharmaco-signature.” Exploratory analysis of data points has been carried out by unsupervised pattern recognition (hierarchical cluster analysis, HCA) and receptor model (principal component analysis-multiple linear regression, PCA-MLR) in an attempt to demonstrate significant source contributions of ECs in different land-use zone. Robust cluster solutions grouped the database according to different EC profiles. PCA-MLR identified that 58.9% of the mean summed ECs were contributed by domestic impact, 9.7% by antibiotics application, and 31.4% by drug abuse. Diclofenac, ibuprofen, codeine, ampicillin, tetracycline, and erythromycin-H2O have significant pollution risk quotients (RQ>1), indicating potentially high risk to aquatic organisms in Taiwan. PMID:25874375
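A hedged sketch of the PCA-MLR chain used for such source apportionment: principal components are extracted from the standardized concentration matrix, then the summed EC concentration is regressed on the factor scores. The random matrix below only stands in for the real 26-sample data set, and the rescaling to absolute principal component scores that precedes the percentage-contribution step is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.lognormal(size=(26, 10))            # samples x compounds (synthetic stand-in)
total = X.sum(axis=1)                       # summed EC concentration per sample

# 1) PCA on the standardized concentration matrix (via SVD).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                        # retained factors (assumed)
scores = U[:, :k] * s[:k]                    # sample scores on each factor

# 2) MLR: regress the summed ECs on the factor scores.
A = np.column_stack([np.ones(len(total)), scores])
coef, *_ = np.linalg.lstsq(A, total, rcond=None)
print("intercept and factor coefficients:", np.round(coef, 3))

# 3) In the full PCA-MLR (APCS) procedure the scores are first rescaled to
#    absolute principal component scores, so that each factor's mean predicted
#    share of `total` can be reported as a percentage contribution.
```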
NASA Astrophysics Data System (ADS)
Agrawal, Arun; Koff, David; Bak, Peter; Bender, Duane; Castelli, Jane
2015-03-01
The deployment of regional and national Electronic Health Record solutions has been a focus of many countries throughout the past decade. A major challenge for these deployments has been support for ubiquitous image viewing. More specifically, these deployments require an imaging solution that can work over the Internet, leverage any point-of-service device (desktop, tablet, phone) and access imaging data from any source seamlessly. Whereas standards exist to enable ubiquitous image viewing, few if any solutions exist that leverage these standards and meet the challenge. Rather, most of the currently available web-based DI viewing solutions are either proprietary solutions or require special plugins. We developed a true zero-footprint, browser-based DI viewing solution based on the Web Access to DICOM Objects (WADO) and Cross-enterprise Document Sharing for Imaging (XDS-I.b) standards to a) demonstrate that a truly ubiquitous image viewer can be deployed; b) identify the gaps in the current standards and the design challenges for developing such a solution. The objective was to develop a viewer which works on all modern browsers on both desktop and mobile devices. The implementation allows basic viewing functionalities of scroll, zoom, pan and window leveling (limited). The major gaps identified in the current DICOM WADO standards are a lack of ability to allow any kind of 3D reconstruction or MPR views. Other design challenges explored include considerations related to optimization of the solution for response time and a low memory footprint.
1982-09-01
wall, and exit points are known collectively as boundary points. In the following discussion, the numerical treatment used for each type of mesh point...and frozen solutions and that it matches the ODK solution 6 [Reference (10)] quite well. Also note that in this case, there is only a small departure...shows the results of the H-F system analysis. The mass-averaged temperature profile falls between the equilibrium and frozen solutions and matches the ODK
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
NASA Astrophysics Data System (ADS)
Chiarucci, Simone; Wijnholds, Stefan J.
2018-02-01
Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.
Kinetic titration with differential thermometric determination of the end-point.
Sajó, I
1968-06-01
A method has been described for the determination of concentrations below 10⁻⁴M by applying catalytic reactions and using thermometric end-point determination. A reference solution, identical with the sample solution except for catalyst, is titrated with catalyst solution until the rates of reaction become the same, as shown by a null deflection on a galvanometer connected via bridge circuits to two opposed thermistors placed in the solutions.
Restoration of the ASCA Source Position Accuracy
NASA Astrophysics Data System (ADS)
Gotthelf, E. V.; Ueda, Y.; Fujimoto, R.; Kii, T.; Yamaoka, K.
2000-11-01
We present a calibration of the absolute pointing accuracy of the Advanced Satellite for Cosmology and Astrophysics (ASCA) which allows us to compensate for a large error (up to 1') in the derived source coordinates. We parameterize a temperature-dependent deviation of the attitude solution which is responsible for this error. By analyzing ASCA coordinates of 100 bright active galactic nuclei, we show that it is possible to reduce the uncertainty in the sky position for any given observation by a factor of 4. The revised 90% error circle radius is then 12", consistent with preflight specifications, effectively restoring the full ASCA pointing accuracy. Herein, we derive an algorithm which compensates for this attitude error and present an internet-based table to be used to correct post facto the coordinates of all ASCA observations. While the above error circle is strictly applicable to data taken with the on-board Solid-state Imaging Spectrometers (SISs), similar coordinate corrections are derived for data obtained with the Gas Imaging Spectrometers (GISs), which, however, have additional instrumental uncertainties. The 90% error circle radius for the central 20' diameter of the GIS is 24". The large reduction in the error circle area for the two instruments offers the opportunity to greatly enhance the search for X-ray counterparts at other wavelengths. This has important implications for current and future ASCA source catalogs and surveys.
Household water insecurity and its cultural dimensions: preliminary results from Newtok, Alaska.
Eichelberger, Laura
2017-06-21
Using a relational approach, I examine several cultural dimensions involved in household water access and use in Newtok, Alaska. I describe the patterns that emerge around domestic water access and use, as well as the subjective lived experiences of water insecurity including risk perceptions, and the daily work and hydro-social relationships involved in accessing water from various sources. I found that Newtok residents haul water in limited amounts from a multitude of sources, both treated and untreated, throughout the year. Household water access is tied to hydro-social relationships predicated on sharing and reciprocity, particularly when the primary treated water access point is unavailable. Older boys and young men are primarily responsible for hauling water, and this role appears to be important to male Yupik identity. Many interviewees described preferring to drink untreated water, a practice that appears related to cultural constructions of natural water sources as pure and self-purifying, as well as concerns about the safety of treated water. Concerns related to the health consequences of low water access appear to differ by gender and age, with women and elders expressing greater concern than men. These preliminary results point to the importance of understanding the cultural dimensions involved in household water access and use. I argue that institutional responses to water insecurity need to incorporate such cultural dimensions into solutions aimed at increasing household access to and use of water.
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
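The core of the multi-objective step is keeping only non-dominated parameter sets. Below is a plain-Python sketch of that Pareto-front extraction with hypothetical objective pairs; it illustrates the concept only and is not the routine's actual modified NSGA-II implementation.

```python
# Given candidate SWMM parameter sets scored on two error metrics (both to be
# minimized), keep only the non-dominated ones.
import random

def pareto_front(points):
    """Return the non-dominated points (minimization in every objective)."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

random.seed(42)
# Hypothetical (peak-flow error, runoff-volume error) pairs for 50 candidates.
candidates = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(50)]
for objectives in sorted(pareto_front(candidates)):
    print("non-dominated candidate:", objectives)
```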
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayce, D.; Khayat, R.E.; Derdouri, A.
The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.
Radiation absorbed dose to bladder walls from positron emitters in the bladder content.
Powell, G F; Chen, C T
1987-01-01
A method to calculate absorbed doses at depths in the walls of a static spherical bladder from a positron emitter in the bladder content has been developed. The beta ray dose component is calculated for a spherical model by employing the solutions to the integration of Loevinger and Bochkarev point source functions over line segments and a line segment source array technique. The gamma ray dose is determined using the specific gamma ray constant. As an example, absorbed radiation doses to the bladder walls from F-18 in the bladder content are presented for static spherical bladder models having radii of 2.0 and 3.5 cm, respectively. Experiments with ultra-thin thermoluminescent dosimeters (TLD's) were performed to verify the results of the calculations. Good agreement between TLD measurements and calculations was obtained.
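A sketch of the line-segment integration idea: the dose at a wall point is accumulated by integrating a radial point-source kernel along each source segment. The exponential/inverse-square kernel and the geometry below are assumptions for illustration only, not the Loevinger or Bochkarev point-source functions or the spherical bladder geometry used in the paper.

```python
import numpy as np

def point_kernel(r_cm):
    """Assumed radial dose kernel (arbitrary units); steep fall-off with distance."""
    return np.exp(-3.0 * r_cm) / np.maximum(r_cm, 1e-6) ** 2

def dose_from_segment(p, a, b, n=2000):
    """Midpoint-rule integral of the kernel over segment a-b (uniform activity per length)."""
    t = (np.arange(n) + 0.5) / n
    pts = a[None, :] + t[:, None] * (b - a)[None, :]
    r = np.linalg.norm(pts - p[None, :], axis=1)
    seg_len = np.linalg.norm(b - a)
    return point_kernel(r).sum() * seg_len / n

p = np.array([0.0, 0.0, 2.0])        # wall point, 2 cm from the centre (assumed)
a = np.array([-1.0, 0.0, 0.0])       # one source line segment inside the bladder content
b = np.array([1.0, 0.0, 0.0])
print("dose at wall point (arbitrary units):", dose_from_segment(p, a, b))
```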
A higher order panel method for linearized supersonic flow
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.
1979-01-01
The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linear varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero thickness cambered wings. For three dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations lead to a stable method providing good agreement experiment. A panel system with all edges contiguous resulted from dividing the basic four point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on s simple nacelle and on an airplane model having engine inlets, with excellent results.
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Jianguo; Greenhalgh, Stewart
2018-04-01
We present methods for obtaining numerical and analytic solutions of the complex eikonal equation in inhomogeneous acoustic VTI media (transversely isotropic media with a vertical symmetry axis). The key and novel point of the method for obtaining numerical solutions is to transform the problem of solving the highly nonlinear acoustic VTI eikonal equation into one of solving the relatively simple eikonal equation for the background (isotropic) medium and a system of linear partial differential equations. Specifically, to obtain the real and imaginary parts of the complex traveltime in inhomogeneous acoustic VTI media, we generalize a perturbation theory, which was developed earlier for solving the conventional real eikonal equation in inhomogeneous anisotropic media, to the complex eikonal equation in such media. After the perturbation analysis, we obtain two types of equations. One is the complex eikonal equation for the background medium and the other is a system of linearized partial differential equations for the coefficients of the corresponding complex traveltime formulas. To solve the complex eikonal equation for the background medium, we employ an optimization scheme that we developed for solving the complex eikonal equation in isotropic media. Then, to solve the system of linearized partial differential equations for the coefficients of the complex traveltime formulas, we use the finite difference method based on the fast marching strategy. Furthermore, by applying the complex source point method and the paraxial approximation, we develop the analytic solutions of the complex eikonal equation in acoustic VTI media, both for the isotropic and elliptical anisotropic background medium. Our numerical results demonstrate the effectiveness of our derivations and illustrate the influence of the beam widths and the anisotropic parameters on the complex traveltimes.
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
The ionization length in plasmas with finite temperature ion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jelic, N.; Kos, L.; Duhovnik, J.
2009-12-15
The ionization length is an important quantity which up to now has been precisely determined only in plasmas which assume that the ions are born at rest, i.e., in discharges known as 'cold ion-source' plasmas. Presented here are the results of our calculations of the ionization lengths in plasmas with an arbitrary ion source temperature. Harrison and Thompson (H and T) [Proc. Phys. Soc. 74, 145 (1959)] found the values of this quantity for the cases of several ion strength potential profiles in the well-known Tonks-Langmuir [Phys. Rev. 34, 876 (1929)] discharge, which is characterized by 'cold' ion temperature. This scenario is also known as the 'singular' ion-source discharge. The H and T analytic result covers cases of ion sources proportional to exp(βΦ), with Φ the normalized plasma potential and β = 0, 1, 2, values which correspond to particular physical scenarios. Many years following H and T's work, Bissell and Johnson (B and J) [Phys. Fluids 30, 779 (1987)] developed a model with the so-called 'warm' ion-source temperature, i.e., 'regular' ion source, under B and J's particular assumption that the ionization strength is proportional to the local electron density. However, it appears that B and J were not interested in determining the ionization length at all. The importance of this quantity to theoretical modeling was recognized by Riemann, who recently answered all the questions of the most advanced up-to-date plasma-sheath boundary theory with cold ions [K.-U. Riemann, Phys. Plasmas 13, 063508 (2006)] but still without the stiff warm ion-source case solution, which is highly resistant to solution via any available analytic method. The present article is an extension of H and T's results, obtained for a single point only with ion source temperature T_n = 0, to arbitrary finite ion source temperatures. The approach applied in this work is based on the method recently developed by Kos et al. [Phys. Plasmas 16, 093503 (2009)].
Improved method for retinotopy constrained source estimation of visual evoked responses
Hagler, Donald J.; Dale, Anders M.
2011-01-01
Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas
NASA Astrophysics Data System (ADS)
Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing
2017-12-01
Earthquake source characterization has been significantly speeded up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
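At its core, a point-source (moment tensor) inversion of this kind reduces to linear least squares once the Green's function kernels are computed. The sketch below uses synthetic placeholders for the kernel matrix and the data, so it only illustrates the algebra, not the W-phase time-window extraction, filtering, or actual kernels.

```python
# Observed long-period waveforms d are modelled as a linear combination of
# precomputed Green's function kernels G for the six moment-tensor components;
# m is recovered by least squares. G, m_true and the noise are placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_mt = 5000, 6                 # concatenated waveform samples, MT components
G = rng.standard_normal((n_samples, n_mt))              # placeholder kernels
m_true = np.array([1.2, -0.8, -0.4, 0.3, 0.1, -0.2])    # "true" moment tensor (scaled)
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # noisy "observed" waveforms

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("recovered moment-tensor components:", np.round(m_est, 3))

# Rebuild the symmetric 3x3 tensor and compute the scalar moment M0; with M0
# expressed in N*m, the magnitude would follow as Mw = (2/3) * (log10(M0) - 9.1).
M = np.array([[m_est[0], m_est[3], m_est[4]],
              [m_est[3], m_est[1], m_est[5]],
              [m_est[4], m_est[5], m_est[2]]])
M0 = np.sqrt((M * M).sum() / 2.0)
print("scalar moment (same arbitrary units as m):", round(M0, 3))
```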
NASA Technical Reports Server (NTRS)
Dauser, T.; Garcia, J.; Wilms, J.; Boeck, M.; Brenneman, L. W.; Falanga, M.; Fukumura, Keigo; Reynolds, C. S.
2013-01-01
X-ray irradiation of the accretion disc leads to strong reflection features, which are then broadened and distorted by relativistic effects. We present a detailed, general relativistic approach to model this irradiation for different geometries of the primary X-ray source. These geometries include the standard point source on the rotational axis as well as more jet-like sources, which are radially elongated and accelerating. Incorporating this code in the RELLINE model for relativistic line emission, the line shape for any configuration can be predicted. We study how different irradiation geometries affect the determination of the spin of the black hole. Broad emission lines are produced only for compact irradiating sources situated close to the black hole. This is the only case where the black hole spin can be unambiguously determined. In all other cases the line shape is narrower, which could either be explained by a low spin or an elongated source. We conclude that for those cases and independent of the quality of the data, no unique solution for the spin exists and therefore only a lower limit of the spin value can be given
Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...
2018-01-01
This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
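A toy, single-phase version of the fixed-point (Z-bus) iteration that the paper generalizes to unbalanced multiphase feeders; the three-bus radial network, impedances and loads below are made up for illustration. With constant-power loads S drawn at the non-slack buses, the update is V <- w - Z conj(S / V), where w is the no-load voltage profile and Z the bus impedance matrix reduced to the non-slack buses.

```python
import numpy as np

y = 1.0 / (0.01 + 0.03j)                    # line admittance (p.u.), assumed
Y = np.array([[2 * y, -y],                  # admittance block of buses 1 and 2
              [-y,     y]])
Y_L0 = np.array([-y, 0.0])                  # coupling of buses 1, 2 to the slack bus
V_slack = 1.0 + 0.0j
S_load = np.array([0.5 + 0.2j, 0.3 + 0.1j]) # constant-power loads (p.u.), assumed

Z = np.linalg.inv(Y)
w = -Z @ (Y_L0 * V_slack)                   # no-load voltages (equal to V_slack here)

V = w.copy()
for iteration in range(50):
    V_next = w - Z @ np.conj(S_load / V)    # fixed-point update
    if np.max(np.abs(V_next - V)) < 1e-10:
        V = V_next
        break
    V = V_next
print("converged bus voltages (p.u.):", np.round(V, 5))
```

Under light loading and small line impedances the map is a contraction, which is the intuition behind the existence/uniqueness and convergence conditions discussed in the abstract.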
NASA Astrophysics Data System (ADS)
Granade, Christopher; Combes, Joshua; Cory, D. G.
2016-03-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
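A generic sequential-Monte-Carlo sketch of the kind of Bayesian update that makes such estimation tractable, here for a single unknown outcome probability; it illustrates the particle reweighting and resampling idea only and is not the authors' released code.

```python
# A cloud of weighted "particles" over an unknown outcome probability p is
# reweighted by each measurement's likelihood and resampled (with a small
# jitter) when the effective sample size degenerates.
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.73                                  # hidden parameter (assumed)
particles = rng.uniform(0.0, 1.0, 2000)        # prior samples over p
weights = np.full(particles.size, 1.0 / particles.size)

for shot in range(200):
    outcome = rng.random() < p_true            # simulated single-shot measurement
    likelihood = particles if outcome else (1.0 - particles)
    weights *= likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < particles.size / 2:   # effective sample size low
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = np.clip(particles[idx] + rng.normal(0.0, 0.01, particles.size), 0.0, 1.0)
        weights = np.full(particles.size, 1.0 / particles.size)

mean = np.sum(weights * particles)
std = np.sqrt(np.sum(weights * (particles - mean) ** 2))
print(f"posterior estimate: p = {mean:.3f} +/- {std:.3f} (true {p_true})")
```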
NASA Technical Reports Server (NTRS)
Acuna, M. H.
1974-01-01
The solution to the steady state magnetohydrodynamic equations governing the supersonic expansion of the solar corona into interplanetary space is obtained for various assumptions regarding the form in which proton thermal energy is carried away from the sun. The one-fluid, inviscid, formulation of the MHD equations is considered assuming that thermal energy is carried away by conduction from a heat source located at the base of the corona. Angular motion of the solar wind led to the existence of three critical points through which the numerical solutions must pass to extend from the sun's surface to large heliocentric distances. The results show that the amount of magnetic field energy converted into kinetic energy in the solar wind is only a small fraction of the total expansion energy flux and has little effect upon the final radial expansion velocity.
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
Technologies for autonomous integrated lab-on-chip systems for space missions
NASA Astrophysics Data System (ADS)
Nascetti, A.; Caputo, D.; Scipinotti, R.; de Cesare, G.
2016-11-01
Lab-on-chip devices are ideal candidates for use in space missions where experiment automation, system compactness, limited weight and low sample and reagent consumption are required. Currently, however, most microfluidic systems require external desktop instrumentation to operate and interrogate the chip, thus strongly limiting their use as stand-alone systems. In order to overcome the above-mentioned limitations our research group is currently working on the design and fabrication of "true" lab-on-chip systems that integrate in a single device all the analytical steps from the sample preparation to the detection without the need for bulky external components such as pumps, syringes, radiation sources or optical detection systems. Three critical points can be identified to achieve 'true' lab-on-chip devices: sample handling, analytical detection and signal transduction. For each critical point, feasible solutions are presented and evaluated. Proposed microfluidic actuation and control is based on electrowetting on dielectrics, autonomous capillary networks and active valves. Analytical detection based on highly specific chemiluminescent reactions is used to avoid external radiation sources. Finally, the integration on the same chip of thin film sensors based on hydrogenated amorphous silicon is discussed showing practical results achieved in different sensing tasks.
Borovikov, V. A.; Kalinin, S. V.; Khavin, Yu.; ...
2015-08-19
We derive the Green's functions for a three-dimensional semi-infinite fully anisotropic piezoelectric material using the plane wave theory method. The solution gives the complete set of electromechanical fields due to an arbitrarily oriented point force and a point electric charge applied to the boundary of the half-space. Moreover, the solution constitutes a generalization of Boussinesq's and Cerruti's problems of elastic isotropy to anisotropic piezoelectric materials. Using the example of the piezoceramic PZT-6B, the present results are compared with the previously obtained solution for the special case of a transversely isotropic piezoelectric solid subjected to the same boundary condition.
PCC Framework for Program-Generators
NASA Technical Reports Server (NTRS)
Kong, Soonho; Choi, Wontae; Yi, Kwangkeun
2009-01-01
In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached to the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
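The certify/validate split can be illustrated with any monotone analysis: the producer iterates to a fixed point, while the consumer re-applies the analysis function once and checks containment. The toy reachability analysis below is only a stand-in for abstract parsing.

```python
def F(reachable, edges, start):
    """One step of the analysis: the start node plus successors of known nodes."""
    return {start} | {dst for src, dst in edges if src in reachable}

edges = {("a", "b"), ("b", "c"), ("c", "b")}
claimed_fixed_point = {"a", "b", "c"}          # certificate shipped with the program

# Producer side (for reference): iterate F to the least fixed point.
producer = set()
while F(producer, edges, "a") != producer:
    producer = F(producer, edges, "a")
assert producer == claimed_fixed_point

# Consumer side: a single application of F suffices to validate the claim.
valid = F(claimed_fixed_point, edges, "a") <= claimed_fixed_point
print("certificate accepted:", valid)
```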
NASA Astrophysics Data System (ADS)
Voss, Anja; Bärlund, Ilona; Punzet, Manuel; Williams, Richard; Teichert, Ellen; Malve, Olli; Voß, Frank
2010-05-01
Although catchment scale modelling of water and solute transport and transformations is a widely used technique to study pollution pathways and the effects of natural changes, policies and mitigation measures, there are only a few examples of global water quality modelling. This work provides a description of the new continental-scale water quality model WorldQual and an analysis of model simulations under changed climatic and anthropogenic conditions, with respect to changes in diffuse and point loading as well as surface water quality. BOD is used as an indicator of the level of organic pollution and its oxygen-depleting potential, and for the overall health of aquatic ecosystems. The first application of this new water quality model is to river systems of Europe. The model itself is being developed as part of the EU-funded SCENES Project, which has the principal goal of developing new scenarios of the future of freshwater resources in Europe. The aim of the model is to determine chemical fluxes in different pathways, combining analysis of water quantity with water quality. Simple equations, consistent with the availability of data on the continental scale, are used to simulate the response of in-stream BOD concentrations to diffuse and anthropogenic point loadings as well as flow dilution. Point sources are divided into manufacturing, domestic and urban loadings, whereas diffuse loadings come from scattered settlements, agricultural input (for instance livestock farming), and also from natural background sources. The model is tested against measured longitudinal gradients and time series data at specific river locations with different loading characteristics, such as the Thames, which is dominated by domestic loading, and the Ebro, which has a relatively high share of diffuse loading. Scenario studies are used to investigate the influence of climatic and anthropogenic changes on European water resources, addressing the following questions: 1. What percentage of river systems will have degraded water quality due to different driving forces? 2. How will climate change and changes in wastewater discharges affect water quality? The analysis includes the following scenario aspects: 1. climate, with changed runoff (affecting diffuse pollution and loading from sealed areas), river discharge (causing dilution or concentration of point source pollution) and water temperature (affecting BOD degradation); 2. point sources, with changed population (affecting domestic pollution) and connectivity to treatment plants (influencing domestic and manufacturing pollution as well as input from sealed areas and scattered settlements).
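A hedged sketch of the per-reach loading/dilution/decay balance that a model of this kind applies; the decay rate, temperature correction, loads and flows below are illustrative assumptions, not WorldQual's actual formulation or SCENES inputs.

```python
# Point and diffuse BOD loads are mixed into the reach discharge and degraded
# first-order over the travel time, with a standard temperature correction.
import math

def reach_bod(c_up, q_up, point_load, diffuse_load, q_local, k20, temp_c, travel_days):
    """In-stream BOD (mg/L) at the reach outlet."""
    q_out = q_up + q_local                                      # m3/s after local inflow
    load_in = c_up * q_up * 86.4 + point_load + diffuse_load    # kg/day (1 mg/L * 1 m3/s = 86.4 kg/d)
    c_mixed = load_in / (q_out * 86.4)                          # fully mixed concentration
    k = k20 * 1.047 ** (temp_c - 20.0)                          # temperature-adjusted decay rate (1/day)
    return c_mixed * math.exp(-k * travel_days)

print("BOD at reach outlet: %.2f mg/L" %
      reach_bod(c_up=2.0, q_up=50.0, point_load=8000.0, diffuse_load=2500.0,
                q_local=5.0, k20=0.23, temp_c=18.0, travel_days=1.5))
```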
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.
2018-07-01
Quantifying the uncertainty in solute mass discharge at an environmentally sensitive location is key to assessing the risks due to groundwater contamination. Solute mass fluxes are strongly affected by the spatial variability of hydrogeological properties as well as by release conditions at the source zone. This paper provides a methodological framework to investigate how the interplay between the ubiquitous heterogeneity of the hydraulic conductivity and the mass release rate at the source zone affects the uncertainty of mass discharge. Through the use of perturbation theory, we derive analytical and semi-analytical expressions for the statistics of the solute mass discharge at a control plane in a three-dimensional aquifer while accounting for the solute mass release rates at the source. The derived solutions are limited to aquifers displaying low-to-mild heterogeneity. Results illustrate the significance of the source zone mass release rate in controlling the mass discharge uncertainty. The relative importance of the mass release rate on the mean solute discharge depends on the distance between the source and the control plane. On the other hand, we find that the solute release rate at the source zone has a strong impact on the variance of the mass discharge. Within a risk context, we also compute the peak mean discharge as a function of the parameters governing the spatial heterogeneity of the hydraulic conductivity field and the mass release rates at the source zone. The proposed physically based framework is application-oriented, computationally efficient, and capable of propagating uncertainty from different parameters onto risk metrics. Furthermore, it can be used for preliminary screening purposes to guide site managers to perform system-level sensitivity analysis and better allocate resources.
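A hedged Monte Carlo sketch of the interplay the paper treats analytically: an exponentially depleting source release combined with randomized advective travel times (a crude stand-in for hydraulic-conductivity heterogeneity) already shows how the release rate and the heterogeneity jointly shape the mean, variance and peak timing of mass discharge at a control plane. All parameter values are illustrative assumptions, not the paper's perturbation solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 601)                 # years
k_src = 0.2                                     # assumed source depletion rate [1/yr]
tau = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=2000)   # advective travel times [yr]

# Each realization: the source release, delayed by that realization's travel time
lagged = t[None, :] - tau[:, None]
discharge = np.where(lagged >= 0.0, k_src * np.exp(-k_src * lagged), 0.0)  # per unit source mass
mean_q, var_q = discharge.mean(axis=0), discharge.var(axis=0)
print(t[np.argmax(mean_q)], var_q.max())        # timing of the peak mean discharge, peak variance
```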
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is therefore taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads of the watershed are discharged stably, whereas the monthly non-point source pollution loads change greatly, and the non-point source share of the total COD load decreases in the normal, rainy and wet periods in turn.
Calculating the NH3-N pollution load of the Wei River Watershed above Huaxian Section using the CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research object in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads of the watershed are discharged stably, while the monthly non-point source pollution loads change greatly. The non-point source share of the total NH3-N load decreases in the normal, rainy and wet periods in turn.
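The CSLD method itself is only named, not specified, in these abstracts; the sketch below shows only the generic section-load bookkeeping behind such estimates: monthly load equals concentration times discharge at the section, a stable baseline is attributed to point sources, and the remainder is attributed to non-point sources. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical monthly discharge [m3/s] and NH3-N concentration [mg/L] at a section
q = np.array([55., 60., 80., 120., 180., 260., 310., 290., 240., 150., 90., 65.])
c = np.array([3.2, 3.0, 2.8, 2.5, 2.2, 2.0, 1.9, 2.0, 2.1, 2.4, 2.8, 3.1])

seconds_per_month = 30.4 * 86400.0
total_load_t = q * c * seconds_per_month / 1e6      # g/s -> tonnes per month
point_load_t = total_load_t.min() * np.ones(12)     # crude "stable discharge" baseline for point sources
nonpoint_load_t = total_load_t - point_load_t       # remainder attributed to non-point sources
print(nonpoint_load_t.round(1))
```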
Solute source depletion control of forward and back diffusion through low-permeability zones
NASA Astrophysics Data System (ADS)
Yang, Minjune; Annable, Michael D.; Jawitz, James W.
2016-10-01
Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
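A hedged numerical counterpart to the experiments (not the paper's analytical solutions): an explicit finite-difference model of one-dimensional diffusion into an aquitard whose aquifer-side boundary concentration decays exponentially reproduces the forward-then-back diffusion behavior. The diffusion coefficient, depletion rate and geometry are assumed values.

```python
import numpy as np

D = 5e-10                 # effective diffusion coefficient [m^2/s] (assumed)
k = 1e-7                  # exponential source depletion rate [1/s] (assumed)
L, nz = 0.2, 101          # aquitard thickness [m], grid points
dz = L / (nz - 1)
dt = 0.4 * dz**2 / D      # stable explicit time step
c = np.zeros(nz)          # aquitard initially clean
flux_sign = []

t = 0.0
for _ in range(20000):
    c[0] = np.exp(-k * t)                       # depleting aquifer concentration at the interface
    c[1:-1] += D * dt / dz**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[-1] = c[-2]                               # no-flux condition at the far boundary
    flux_sign.append(np.sign(c[0] - c[1]))      # >0: forward diffusion, <0: back diffusion
    t += dt

print("back diffusion has begun:", flux_sign[-1] < 0)
```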
Efficient Jacobian inversion for the control of simple robot manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1988-01-01
Symbolic inversion of the Jacobian matrix for spherical-wrist arms is investigated. It is shown that, by taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for the PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points, which provides better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computational cost of the solution is of the same order as that of the forward kinematic solution, and it is significantly reduced if the forward kinematic solution has already been obtained. A comparison with previous methods shows that this method is the most efficient to date.
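The paper's contribution is the symbolic closed-form inverse; purely as a numerical illustration of the task-to-joint-space mapping it refers to, Q = J⁻¹X can be evaluated as below for a generic (hypothetical) Jacobian, with the determinant flagging proximity to a singular configuration.

```python
import numpy as np

# Numeric illustration only: stand-in 6x6 Jacobian at some configuration and a
# desired task-space twist; the paper instead derives the inverse symbolically.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))
xdot = np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.02])

qdot = np.linalg.solve(J, xdot)      # preferred over forming J^-1 explicitly
print(qdot)
# Near a singular configuration det(J) -> 0 and the joint rates blow up, which is
# why the closed-form solutions track the singular points explicitly.
print(np.linalg.det(J))
```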
NASA Astrophysics Data System (ADS)
Řidký, V.; Šidlof, P.; Vlček, V.
2013-04-01
The work is devoted to comparing measured data with the results of numerical simulations. The mathematical model used was an incompressible flow model without turbulence modelling. In the experiment, the behavior of the designed NACA0015 airfoil in the airflow was observed. The numerical solution was obtained with the OpenFOAM computational package, an open-source software package based on the finite volume method. In the numerical solution, the displacement of the airfoil is prescribed so that it corresponds to the experiment. The velocity at a point close to the airfoil surface is compared with the experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedron elements. The time step is limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period and the average over all periods are then compared with the experiment.
NASA Astrophysics Data System (ADS)
Ohminato, T.; Kobayashi, T.; Ida, Y.; Fujita, E.
2006-12-01
During the 2000 Miyake-jima volcanic activity, which started on 26 June 2000, an intense earthquake swarm occurred initially beneath the southwest flank near the summit and gradually migrated west of the island. Volcanic earthquake activity on the island was reactivated beneath the summit, leading to a summit eruption with significant summit subsidence on 8 July. We detected numerous small long-period (LP) seismic signals during these activities. Most of them include both 0.2 and 0.4 Hz components, suggesting the existence of a harmonic oscillator. Some of them have a dominant frequency peak at 0.2 Hz (LP1), while others have one at 0.4 Hz (LP2). At the beginning of each waveform of both LP1 and LP2, an impulsive signal with a pulse width of about 2 s is clearly identified. The major axis of the particle motion for the initial impulsive signal is almost horizontal, suggesting a shallow source beneath the summit, while the inclined particle motion for the later phase suggests a deeper source beneath the island. For both LP1 and LP2, we can identify a clear positive correlation between the amplitude of the initial pulse and that of the later phase. We conducted waveform inversions for the LP events assuming a point source and determined the locations and mechanisms simultaneously. We assumed three types of source mechanisms: three single forces, six moment tensor components, and a combination of moment tensor and single forces. We used the AIC to select the optimal solutions. First, we applied the method to the entire waveform, including both the initial pulse and the later phase. The source type with a combination of moment tensor and single force components yields the minimum AIC values for both LP events. However, the spatial distribution of the residual errors tends to have two local minima. Considering the error distribution and the characteristic particle motions, it is likely that the source of the LP events consists of two different parts. We thus divided the LP events into two parts, the initial and the later phases, and applied the same waveform inversion procedure separately to each part of the waveform. The inversion results show that the initial impulsive phase and the later oscillatory phase are well explained by a nearly horizontal single force and a moment solution, respectively. The single force solutions for the initial pulse are located at a depth of about 2 km beneath the summit. The single force is initially oriented to the north, and then to the south. On the other hand, the sources of the moment solutions are significantly deeper than the single force solutions. The hypocenter of the later phase of LP1 is located at a depth of 5.5 km in the southern region of the island, while that of LP2 is at 5.1 km beneath the summit. Horizontal oscillations are relatively dominant for both the LP1 and LP2 events. Although the two sources are separated from each other by several kilometers, the positive correlation between the amplitudes of the initial pulse and the later phase strongly suggests that the shallow sources trigger the deeper sources. The source time histories of the six moment tensor components of the later portion of LP1 and LP2 are not in phase. This makes it difficult to extract information on the source geometry using the amplitude ratio among moment tensor components in the traditional manner. It may suggest that the source is composed of two independent sources whose oscillations are out of phase.
Illumination system development using design and analysis of computer experiments
NASA Astrophysics Data System (ADS)
Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter
2015-09-01
Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often converge to a local minimum, so the resulting solution depends on the starting point, especially for high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are used as input to a Gaussian process (GP), which provides a preliminary model of the expected merit space. Global optimization is then performed on the GP, which is more likely to yield the optimal parameters. Next, the light-positioning case study is further investigated by varying the radius of the arc and by adding two sources symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE reached the same uniformity of 97% about 6 times faster than the standard simplex method. The results were successfully validated experimentally using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources, with a relative error of 10%.
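A minimal surrogate-modeling sketch in the spirit of DACE, using scikit-learn: sample a (hypothetical, one-parameter) non-uniformity merit at a few design points, fit a Gaussian process to those samples, and search the cheap surrogate for the optimum. The merit function and all settings are stand-ins for the ray-traced merit described in the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def merit(theta):
    # hypothetical non-uniformity merit to minimize (stands in for a ray-tracing run)
    return np.sin(3 * theta) + 0.5 * np.cos(5 * theta)

X = rng.uniform(0.0, np.pi / 2, 12)[:, None]      # space-filling-ish design points (angles)
y = merit(X.ravel())
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True).fit(X, y)

grid = np.linspace(0.0, np.pi / 2, 500)[:, None]
mu, sd = gp.predict(grid, return_std=True)
best = grid[np.argmin(mu)]                        # candidate optimum of the surrogate
print(float(best))
```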
Nitrogen release from rock and soil under simulated field conditions
Holloway, J.M.; Dahlgren, R.A.; Casey, W.H.
2001-01-01
A laboratory study was performed to simulate field weathering and nitrogen release from bedrock in a setting where geologic nitrogen has been suspected to be a large local source of nitrate. Two rock types containing nitrogen, slate (1370 mg N kg^-1) and greenstone (480 mg N kg^-1), were used along with saprolite and BC-horizon sand from soils derived from these rock types. The fresh rock and weathered material were used in batch reactors that were leached every 30 days over 6 months to simulate a single wet season. Nitrogen was released from rock and soil materials at rates between 10^-20 and 10^-19 mol N cm^-2 s^-1. Results from the laboratory dissolution experiments were compared to in situ soil solutions and available mineral nitrogen pools from the BC horizon of both soils. Concentrations of mineral nitrogen (NO3- + NH4+) in soil solutions reached the highest levels at the beginning of the rainy season and progressively decreased with increased leaching. This seasonal pattern was repeated for the available mineral nitrogen pool that was extracted using a KCl solution. Estimates based on these laboratory release rates bracket stream water NO3-N fluxes and changes in the available mineral nitrogen pool over the active leaching period. These results confirm that geologic nitrogen, when present, may be a large and reactive pool that may contribute as a non-point source of nitrate contamination to surface and ground waters.
Finite element solution of lubrication problems
NASA Technical Reports Server (NTRS)
Reddi, M. M.
1971-01-01
A variational formulation of the transient lubrication problem is presented and the corresponding finite element equations are derived for three- and six-point triangles and four- and eight-point quadrilaterals. Test solutions for a one-dimensional slider bearing, used in validating the computer program, are given. The utility of the method is demonstrated by a solution of the shrouded step bearing.
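For orientation, here is a finite-difference sketch (not the paper's finite-element formulation) of the steady one-dimensional Reynolds equation for a linear slider bearing, d/dx(h^3 dp/dx) = 6 mu U dh/dx with ambient pressure at both ends; the geometry and fluid properties are assumed values.

```python
import numpy as np

L, mu, U = 0.1, 0.01, 1.0                # bearing length [m], viscosity [Pa s], sliding speed [m/s]
h1, h2, n = 2e-4, 1e-4, 201              # inlet/outlet film thickness [m], number of grid points
x = np.linspace(0.0, L, n)
h = h1 + (h2 - h1) * x / L               # linearly tapering film
dx = x[1] - x[0]

A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                # ambient (zero gauge) pressure at both ends
for i in range(1, n - 1):
    he = (0.5 * (h[i] + h[i + 1])) ** 3  # h^3 at the east/west half nodes
    hw = (0.5 * (h[i] + h[i - 1])) ** 3
    A[i, i - 1], A[i, i], A[i, i + 1] = hw, -(he + hw), he
    b[i] = 3.0 * mu * U * (h[i + 1] - h[i - 1]) * dx   # right-hand side times dx^2

p = np.linalg.solve(A, b)
print(p.max())                           # peak film pressure [Pa]
```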
Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan
NASA Astrophysics Data System (ADS)
Miyoshi, T.
2017-12-01
A three-dimensional seismic wave speed model of the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting with a model based on previous travel-time tomographic results, we inverted the waveforms recorded at broadband seismic stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g., Komatitsch and Tromp 1999; Peter et al. 2011), in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with the initial model. For the selected earthquakes, the synthetic waveforms obtained using the new model fit the observed waveforms better than those of the initial model in different period ranges within 5-30 s. In the present study, to avoid excessive computational cost, all centroid times of the source solutions were determined before the structural inversion using time shifts based on cross correlation. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g., Liu et al. 2004). As a preliminary result, the new solutions were basically the same as their initial solutions, which may indicate that the 3D structure has little influence on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.
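The cross-correlation time shifts mentioned for the centroid times can be illustrated with a small sketch: the lag that maximizes the cross-correlation between an observed and a synthetic trace is taken as the time correction. The traces and sampling below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def cc_time_shift(obs, syn, dt):
    # lag (in seconds) that maximizes the cross-correlation of observed and synthetic traces
    cc = np.correlate(obs, syn, mode="full")
    lag = np.argmax(cc) - (len(syn) - 1)
    return lag * dt

dt = 0.05
t = np.arange(0, 60, dt)
syn = np.exp(-((t - 20.0) / 2.0) ** 2)      # synthetic pulse arriving at 20 s
obs = np.exp(-((t - 21.5) / 2.0) ** 2)      # observed pulse arriving 1.5 s later
print(cc_time_shift(obs, syn, dt))          # ~ +1.5 s time shift
```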
Solution-grown crystals for neutron radiation detectors, and methods of solution growth
Zaitseva, Natalia P; Hull, Giulia; Cherepy, Nerine J; Payne, Stephen A; Stoeffl, Wolfgang
2012-06-26
A method according to one embodiment includes growing an organic crystal from solution, the organic crystal exhibiting a signal response signature for neutrons from a radioactive source. A system according to one embodiment includes an organic crystal having physical characteristics of formation from solution, the organic crystal exhibiting a signal response signature for neutrons from a radioactive source; and a photodetector for detecting the signal response of the organic crystal. A method according to another embodiment includes growing an organic crystal from solution, the organic crystal being large enough to exhibit a detectable signal response signature for neutrons from a radioactive source. An organic crystal according to another embodiment includes an organic crystal having physical characteristics of formation from solution, the organic crystal exhibiting a signal response signature for neutrons from a radioactive source, wherein the organic crystal has a length of greater than about 1 mm in one dimension.
Anderson, Daniel M; Benson, James D; Kearsley, Anthony J
2014-12-01
Mathematical modeling plays an enormously important role in understanding the behavior of cells, tissues, and organs undergoing cryopreservation. Uses of these models range from the explanation of phenomena and the exploration of potential theories of damage or success to the development of equipment and the refinement of optimal cryopreservation/cryoablation strategies. Over the last half century there has been a considerable amount of work in bio-heat and mass transport, and these models and theories have been readily and repeatedly applied to cryobiology with much success. However, there are significant gaps between experimental and theoretical results that suggest missing links in the models. One source of these potential gaps is that cryobiology sits at the intersection of several very challenging aspects of transport theory: it couples multi-component, moving-boundary, multiphase solutions that interact through a semipermeable elastic membrane with multi-component solutions in a second time-varying domain, during a two-hundred-Kelvin temperature change, with multi-molar concentration gradients and multi-atmosphere pressure changes. In order to better identify potential sources of error, and to point to future directions in modeling and experimental research, we present a three-part series that builds from first principles a theory of coupled heat and mass transport in cryobiological systems accounting for all of these effects. The hope of this series is that by presenting and justifying all steps, conclusions may be made about the importance of key assumptions, perhaps pointing to areas of future research or model development, but importantly, lending weight to standard simplification arguments that are often made in heat and mass transport. In this first part, we review concentration variable relationships, their impact on choices for Gibbs energy models, and their impact on chemical potentials.
Morphological control in polymer solar cells using low-boiling-point solvent additives
NASA Astrophysics Data System (ADS)
Mahadevapuram, Rakesh C.
In the global search for clean, renewable energy sources, organic photovoltaics (OPVs) have recently been given much attention. Popular modern-day OPVs are made from solution-processible, carbon-based polymers (e.g., the model polymer poly(3-hexylthiophene), P3HT) that are intimately blended with fullerene derivatives (e.g., [6,6]-phenyl-C71-butyric acid methyl ester, PCBM) to form what is known as the dispersed bulk heterojunction (BHJ). This BHJ architecture has produced some of the most efficient OPVs to date, with reports closing in on 10% power conversion efficiency. To push efficiencies further into double digits, many groups have identified the BHJ nanomorphology (that is, the phase separation and grain sizes within the polymer:fullerene composite) as a key aspect in need of control and improvement. As a result, many methods, including thermal annealing, slow-drying (solvent) annealing, vapor annealing, and solvent additives, have been developed and studied to promote BHJ self-organization. Processing OPV blend solutions with high-boiling-point solvent additives has recently been used for morphological control in BHJ OPV cells. Here we show that even low-boiling-point solvents can be effective additives. When P3HT:PCBM OPV cells were processed with the low-boiling-point solvent tetrahydrofuran as an additive in the parent solvent o-dichlorobenzene, charge extraction increased, leading to fill factors as high as 69.5%, without low-work-function cathodes, electrode buffer layers, or thermal treatment. This was attributed to PCBM demixing from P3HT domains and better vertical phase separation, as indicated by photoluminescence lifetimes, hole mobilities, and shunt leakage currents. The dependence on solvent parameters and the applicability beyond the P3HT system were also investigated.
NASA Astrophysics Data System (ADS)
Bižić, Milan B.; Petrović, Dragan Z.; Tomić, Miloš C.; Djinović, Zoran V.
2017-07-01
This paper presents the development of a unique method for experimental determination of wheel-rail contact forces and contact point position by using an instrumented wheelset (IWS). Solutions to key problems in the development of the IWS are proposed, such as the determination of the optimal locations, layout, number and way of connecting the strain gauges, as well as the development of an inverse identification algorithm (IIA). The basis for the solution of these problems is the wheel model and the results of FEM calculations, while the IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and high accuracy was obtained (deviations between the parameters identified by the IIA and the parameters actually applied in the model are less than 2%). In the second phase, experimental tests on the real object, the IWS, were carried out. The signal-to-noise ratio was identified as the main parameter influencing the measurement accuracy. The obtained results have shown that the developed method enables measurement of the vertical and lateral wheel-rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of the contact point position is less than 15%. At flange contact and higher values of the ratio Y/Q or the Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied to the problem of high-accuracy measurement of wheel-rail contact forces and contact point position using an IWS.
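The inverse identification algorithm is described as blind source separation via independent component analysis; the snippet below is only a generic ICA demonstration with scikit-learn's FastICA on two synthetic, mixed signals, not the paper's strain-gauge processing.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0.0, 1.0, 2000)
# two hypothetical underlying signals (stand-ins for force-related components)
s = np.c_[np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
A = np.array([[1.0, 0.6], [0.4, 1.2]])     # unknown mixing (stand-in for the strain response)
x = s @ A.T                                 # simulated strain-gauge channels

ica = FastICA(n_components=2, random_state=0)
s_est = ica.fit_transform(x)                # recovered sources, up to scale and ordering
print(s_est.shape)
```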
Active solution of homography for pavement crack recovery with four laser lines.
Xu, Guan; Chen, Fang; Wu, Guangwei; Li, Xiaotao
2018-05-08
An active solution method for the homography derived from four laser lines is proposed to map the pavement cracks captured by the camera to their real dimensions in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projected laser points on a 2D reference placed in different positions. The projected laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized from the projected laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the laser projection points of the laser lines on the pavement. The image-to-pavement homography is actively determined from the solutions for the perpendicular feet of the projection laser points. The pavement cracks are recovered by the active solution of the homography in the experiments. The recovery accuracy of the active solution method is verified with a 2D reference of known dimensions. The test case with a measurement distance of 700 mm and a relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the application potential in vision-based pavement inspection.
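Once the image-to-pavement homography has been determined, crack recovery reduces to applying the 3x3 matrix to pixel coordinates and performing the perspective divide. The matrix values below are made up for illustration; in the paper the homography is obtained from the four laser lines.

```python
import numpy as np

# Hypothetical image-to-pavement homography (in practice estimated as in the paper)
H = np.array([[0.02,  0.001, -5.0],
              [0.0,   0.025, -3.0],
              [0.0,   0.0,    1.0]])

def to_pavement(pts_px):
    """Map Nx2 pixel coordinates of crack points to pavement-plane coordinates."""
    pts_h = np.c_[pts_px, np.ones(len(pts_px))]   # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]         # perspective divide

crack_px = np.array([[320.0, 240.0], [340.0, 251.0]])
crack_plane = to_pavement(crack_px)
print(np.linalg.norm(crack_plane[1] - crack_plane[0]))   # crack segment length in plane units
```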
Hygroscopic salts and the potential for life on Mars.
Davila, Alfonso F; Duport, Luis Gago; Melchiorri, Riccardo; Jänchen, Jochen; Valea, Sergio; de Los Rios, Asunción; Fairén, Alberto G; Möhlmann, Diedrich; McKay, Christopher P; Ascaso, Carmen; Wierzchos, Jacek
2010-01-01
Hygroscopic salts have been detected in soils in the northern latitudes of Mars, and widespread chloride-bearing evaporitic deposits have been detected in the southern highlands. The deliquescence of hygroscopic minerals such as chloride salts could provide a local and transient source of liquid water that would be available for microorganisms on the surface. This is known to occur in the Atacama Desert, where massive halite evaporites have become a habitat for photosynthetic and heterotrophic microorganisms that take advantage of the deliquescence of the salt at certain relative humidity (RH) levels. We modeled the climate conditions (RH and temperature) in a region on Mars with chloride-bearing evaporites, and modeled the evolution of the water activity (a(w)) of the deliquescence solutions of three possible chloride salts (sodium chloride, calcium chloride, and magnesium chloride) as a function of temperature. We also studied the water absorption properties of the same salts as a function of RH. Our climate model results show that the RH in the region with chloride-bearing deposits on Mars often reaches the deliquescence points of all three salts, and the temperature reaches levels above their eutectic points seasonally, in the course of a martian year. The a(w) of the deliquescence solutions increases with decreasing temperature due mainly to the precipitation of unstable phases, which removes ions from the solution. The deliquescence of sodium chloride results in transient solutions with a(w) compatible with growth of terrestrial microorganisms down to 252 K, whereas for calcium chloride and magnesium chloride it results in solutions with a(w) below the known limits for growth at all temperatures. However, taking the limits of a(w) used to define special regions on Mars, the deliquescence of calcium chloride deposits would allow for the propagation of terrestrial microorganisms at temperatures between 265 and 253 K, and for metabolic activity (no growth) at temperatures between 253 and 233 K.
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Liu, Chen-Wuing; Liang, Ching-Ping; Lai, Keng-Hsin
2012-08-01
Multi-species advective-dispersive transport equations sequentially coupled with first-order decay reactions are widely used to describe the transport and fate of decay-chain contaminants such as radionuclides, chlorinated solvents, and nitrogen. Although researchers have presented various methods for analytically solving this transport equation system, the currently available solutions are mostly limited to an infinite or a semi-infinite domain. A generalized analytical solution for the coupled multi-species transport problem in a finite domain associated with an arbitrary time-dependent source boundary is not available in the published literature. In this study, we first derive generalized analytical solutions for this transport problem in a finite domain involving an arbitrary number of species subject to an arbitrary time-dependent source boundary. Subsequently, we adopt these generalized analytical solutions to obtain explicit analytical solutions for a special-case transport scenario involving an exponentially decaying Bateman-type time-dependent source boundary. We test the derived special-case solutions against the previously published coupled 4-species transport solution and against the corresponding numerical solution for coupled 10-species transport to verify the solutions. Finally, we compare the new analytical solutions derived for a finite domain against the published analytical solutions derived for a semi-infinite domain to illustrate the effect of the exit boundary condition on coupled multi-species transport with an exponentially decaying source boundary. The results show noticeable discrepancies between the breakthrough curves of all the species in the immediate vicinity of the exit boundary obtained from the analytical solutions for a finite domain and for a semi-infinite domain under the dispersion-dominated condition.
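The sequential first-order coupling at the heart of these equations can be previewed in isolation (no advection or dispersion) with a short numerical sketch of a three-member decay chain; the rates and chain length are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sequentially coupled first-order decay only: each daughter species is produced by
# the decay of its parent. Decay constants are arbitrary illustrative values [1/day].
lam = np.array([0.05, 0.02, 0.01])

def rhs(t, c):
    dc = -lam * c
    dc[1:] += lam[:-1] * c[:-1]
    return dc

sol = solve_ivp(rhs, (0.0, 200.0), y0=[1.0, 0.0, 0.0], dense_output=True)
print(sol.sol(100.0))    # concentrations of the three species at t = 100 days
```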
Determining osmotic pressure of drug solutions by air humidity in equilibrium method.
Zhan, Xiancheng; Li, Hui; Yu, Lan; Wei, Guocui; Li, Chengrong
2014-06-01
The aim was to establish a new osmotic pressure measuring method with a wide measuring range. The osmotic pressure of drug solutions is determined by measuring the relative air humidity in equilibrium with the solution, with freezing point osmometry used as a control. The data obtained by the proposed method are comparable to those obtained by the control method, and the measuring range of the proposed method is significantly wider. The proposed method is performed in an isothermal, equilibrium state, so it overcomes the defects of freezing point and dew point osmometry that result from the heterothermal processes involved in those measurements, and it is therefore not limited to dilute solutions.
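The thermodynamic link such humidity-based methods rely on can be sketched as follows: at equilibrium the water activity of the solution equals the relative humidity of the head space, and the osmotic pressure follows from pi = -(R T / V_w) ln(a_w). The gas constant, temperature and molar volume of water below are standard values, not the paper's calibration.

```python
import math

R, T, Vw = 8.314, 298.15, 1.805e-5      # J/(mol K), K, m^3/mol (molar volume of water)

def osmotic_pressure_from_rh(rh_percent):
    a_w = rh_percent / 100.0            # equilibrium assumption: water activity = relative humidity
    return -(R * T / Vw) * math.log(a_w)   # osmotic pressure [Pa]

print(osmotic_pressure_from_rh(99.0) / 1e3, "kPa")   # roughly 1.4e3 kPa for 99% RH
```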
Blow-up solutions for L^2 supercritical gKdV equations with exactly k blow-up points
NASA Astrophysics Data System (ADS)
Lan, Yang
2017-08-01
In this paper we consider the slightly $L^2$-supercritical gKdV equations $\partial_t u + (u_{xx} + u|u|^{p-1})_x = 0$, with nonlinearity $5 < p < 5+\varepsilon$ and $0 < \varepsilon \ll 1$. In previous work of the author, it is known that there exists a stable self-similar blow-up dynamics for slightly $L^2$-supercritical gKdV equations. Such solutions can be viewed as solutions with a single blow-up point. In this paper we prove the existence of solutions with multiple blow-up points and give a description of the formation of the singularity near the blow-up time.
NASA Astrophysics Data System (ADS)
Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben
2005-09-01
An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
Organic contaminant transport and fate in the subsurface: Evolution of knowledge and understanding
NASA Astrophysics Data System (ADS)
Essaid, Hedeff I.; Bekins, Barbara A.; Cozzarelli, Isabelle M.
2015-07-01
Toxic organic contaminants may enter the subsurface as slightly soluble and volatile nonaqueous phase liquids (NAPLs) or as dissolved solutes resulting in contaminant plumes emanating from the source zone. A large body of research published in Water Resources Research has been devoted to characterizing and understanding processes controlling the transport and fate of these organic contaminants and the effectiveness of natural attenuation, bioremediation, and other remedial technologies. These contributions include studies of NAPL flow, entrapment, and interphase mass transfer that have advanced from the analysis of simple systems with uniform properties and equilibrium contaminant phase partitioning to complex systems with pore-scale and macroscale heterogeneity and rate-limited interphase mass transfer. Understanding of the fate of dissolved organic plumes has advanced from when biodegradation was thought to require oxygen to recognition of the importance of anaerobic biodegradation, multiple redox zones, microbial enzyme kinetics, and mixing of organic contaminants and electron acceptors at plume fringes. Challenges remain in understanding the impacts of physical, chemical, biological, and hydrogeological heterogeneity, pore-scale interactions, and mixing on the fate of organic contaminants. Further effort is needed to successfully incorporate these processes into field-scale predictions of transport and fate. Regulations have greatly reduced the frequency of new point-source contamination problems; however, remediation at many legacy plumes remains challenging. A number of fields of current relevance are benefiting from research advances from point-source contaminant research. These include geologic carbon sequestration, nonpoint-source contamination, aquifer storage and recovery, the fate of contaminants from oil and gas development, and enhanced bioremediation.
Evaluation of a Proposed Biodegradable 188Re Source for Brachytherapy Application
Khorshidi, Abdollah; Ahmadinejad, Marjan; Hamed Hosseini, S.
2015-01-01
This study aimed to evaluate dosimetric characteristics based on Monte Carlo (MC) simulations for a proposed beta-emitting bioglass 188Re seed for internal radiotherapy applications. The bioactive glass seed was developed using the sol-gel technique. The simulations were performed for the seed using an MC radiation transport code to investigate the dosimetric factors recommended by the AAPM Task Group 60 (TG-60). Dose distributions due to the beta and photon radiation were predicted at different radial distances surrounding the source. The dose rate in water at the reference point was calculated to be 7.43 ± 0.5 cGy/h/μCi. The dosimetric factors consisting of the reference point dose rate, D(r0,θ0), the radial dose function, g(r), the two-dimensional anisotropy function, F(r,θ), the one-dimensional anisotropy function, φan(r), and the R90 quantity were estimated and compared with several available beta-emitting sources. The element 188Re incorporated in bioactive glasses produced by the sol-gel technique provides a suitable solution for producing new seed-implant materials for brachytherapy of prostate and liver cancers. The dose distribution of the 188Re seed was more isotropic than that of other commercially available encapsulated seeds, since it has no end weld to attenuate radiation. The beta-emitting 188Re source provides high doses of local radiation to the tumor tissue, and the short range of the beta particles limits damage to the adjacent normal tissue.
Freezing Point of Milk: A Natural Way to Understand Colligative Properties
ERIC Educational Resources Information Center
Novo, Mercedes; Reija, Belen; Al-Soufi, Wajih
2007-01-01
A laboratory experiment is presented in which the freezing point depression is analyzed using milk as the solution. The nature of milk as a mixture of different solutes makes it a suitable probe for learning about colligative properties. The first part of the experiment illustrates the analytical use of freezing point measurements to control milk quality,…
The pulsar planet production process
NASA Technical Reports Server (NTRS)
Phinney, E. S.; Hansen, B. M. S.
1993-01-01
Most plausible scenarios for the formation of planets around pulsars end with a disk of gas around the pulsar. The supplicant author then points to the solar system to bolster faith in the miraculous transfiguration of gas into planets. We here investigate this process of transfiguration. We derive analytic sequences of quasi-static disks which give good approximations to exact solutions of the disk diffusion equation with realistic opacity tables. These allow quick and efficient surveys of parameter space. We discuss the outward transfer of mass in accretion disks and the resulting timescale constraints, the effects of illumination by the central source on the disk and dust within it, and the effects of the widely different elemental compositions of the disks in the various scenarios, and their extensions to globular clusters. We point out where significant uncertainties exist in the appropriate grain opacities, and in the effect of illumination and winds from the neutron star.
Design and Synthesis of Multigraft Copolymer Thermoplastic Elastomers: Superelastomers
Wang, Huiqun; Lu, Wei; Wang, Weiyu; ...
2017-09-28
Thermoplastic elastomers (TPEs) have been widely studied because of their recyclability, good processibility, low production cost, and unique performance. The building of graft-type architectures can greatly improve the mechanical properties of TPEs. This review focuses on advances in different approaches to synthesize multigraft copolymer TPEs. Anionic polymerization techniques allow for the synthesis of well-defined macromolecular structures and compositions, with great control over the molecular weight, polydispersity, branch spacing, number of branch points, and branch point functionality. Progress in emulsion polymerization offers potential approaches to commercialize these types of materials with low production cost via simple operations. Moreover, the use of multigraft architectures provides a solution to the limited elongational properties of all-acrylic TPEs, which can greatly expand their potential application range. The combination of different polymerization techniques, the introduction of new chemical compositions, and the incorporation of sustainable sources are expected to be further investigated in this area in the coming years.
Not just a drop in the bucket: expanding access to point-of-use water treatment systems.
Mintz, E; Bartram, J; Lochery, P; Wegelin, M
2001-10-01
Since 1990, the number of people without access to safe water sources has remained constant at approximately 1.1 billion, of whom approximately 2.2 million die of waterborne disease each year. In developing countries, population growth and migrations strain existing water and sanitary infrastructure and complicate planning and construction of new infrastructure. Providing safe water for all is a long-term goal; however, relying only on time- and resource-intensive centralized solutions such as piped, treated water will leave hundreds of millions of people without safe water far into the future. Self-sustaining, decentralized approaches to making drinking water safe, including point-of-use chemical and solar disinfection, safe water storage, and behavioral change, have been widely field-tested. These options target the most affected, enhance health, contribute to development and productivity, and merit far greater priority for rapid implementation.
LAMMPS strong scaling performance optimization on Blue Gene/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffman, Paul; Jiang, Wei; Romero, Nichols A.
2014-11-12
LAMMPS ("Large-scale Atomic/Molecular Massively Parallel Simulator") is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point communication. Performance testing was done using an 8.4-million-atom simulation scaling up to 16 racks on the Mira system at the Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.
An In-depth Examination of Farmers' Perceptions of Targeting Conservation Practices
NASA Astrophysics Data System (ADS)
Kalcic, Margaret; Prokopy, Linda; Frankenberger, Jane; Chaubey, Indrajeet
2014-10-01
Watershed managers have largely embraced targeting of agricultural conservation as a way to manage strategically non-point source pollution from agricultural lands. However, while targeting of particular watersheds is not uncommon, targeting farms and fields within a specific watershed has lagged. In this work, we employed a qualitative approach, using farmer interviews in west-central Indiana to better understand their views on targeting. Interviews focused on adoption of conservation practices on farmers' lands and identified their views on targeting, disproportionality, and monetary incentives. Results show consistent support for the targeting approach, despite dramatic differences in farmers' views of land stewardship, in their views about disproportionality of water quality impacts, and in their trust in conservation programming. While the theoretical concept of targeting was palatable to all participants, many raised concerns about its practical implementation, pointing to the need for flexibility when applying targeting solutions and revealing misgivings about the government agencies that perform targeting.
NASA Astrophysics Data System (ADS)
Jew, A. D.; Dustin, M. K.; Harrison, A. L.; Joe-Wong, C. M.; Thomas, D.; Maher, K.; Brown, G. E.; Bargar, J.
2016-12-01
Due to the rapid growth of hydraulic fracturing in the United States, understanding the cause of the rapid production drop-off of new wells over the initial months of production is paramount. One possibility for the production decrease is pore occlusion caused by the oxidation of Fe(II)-bearing phases, resulting in Fe(III) precipitates. To understand the release and fate of Fe in shale systems, we reacted synthesized fracture fluid at 80 °C with shale from four different geological localities (Marcellus Fm., Barnett Fm., Eagle Ford Fm., and Green River Fm.). A variety of wet chemical and synchrotron-based techniques (XRF mapping and X-ray absorption spectroscopy) were used to understand Fe release and solid-phase Fe speciation. Solution pH was found to be the greatest factor controlling Fe release. The carbonate-poor Barnett and Marcellus shales showed rapid Fe release into solution followed by a plateau or a significant drop in Fe concentrations, indicating mineral precipitation. Conversely, in the high-carbonate shales, Eagle Ford and Green River, no Fe was detected in solution, indicating fast Fe oxidation and precipitation. For all shale samples, bulk Fe EXAFS data show that a significant amount of Fe in the shales is bound directly to organic carbon. Throughout the course of the experiments, inorganic Fe(II) phases (primarily pyrite) reacted, while Fe(II) bound to C showed no indication of reaction. On the micron scale, XRF mapping coupled with μ-XANES spectroscopy showed that at pH < 4.0, Fe(III)-bearing phases precipitated as diffuse surface precipitates of ferrihydrite, goethite, and magnetite away from Fe(II) point sources. In near circum-neutral pH systems, Fe(III)-bearing phases (goethite and hematite) form large particles tens of μm in diameter near Fe(II) point sources. Idealized systems containing synthesized fracturing fluid, dissolved ferrous chloride, and bitumen showed that bitumen released during reaction with fracturing fluids is capable of oxidizing Fe(II) to Fe(III) at pH 2.0 and 7.0. This indicates that bitumen can play a large role in Fe oxidation and speciation in the subsurface. This work shows that shale mineralogy has a significant impact on the morphology and phases of Fe(III) precipitates in the subsurface, which in turn can significantly impact subsurface solution flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez-Monroy, J.A., E-mail: antosan@gmail.com; Quimbay, C.J., E-mail: cjquimbayh@unal.edu.co; Centro Internacional de Fisica, Bogota D.C.
In the context of a semiclassical approach where vectorial gauge fields can be considered as classical fields, we obtain exact static solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time, for the cases n=1,2,3. As an application of the results obtained for the case n=3, we consider the solutions for the anti-de Sitter and Schwarzschild metrics. We show that these solutions have a confining behavior and can be considered as a first step in the study of the corrections to the spectra of quarkonia in a curved background. Since the solutions that we find in this work are also valid for the group U(1), the case n=2 is a description of (2+1)-dimensional electrodynamics in the presence of a point charge. For this case, the solution has a confining behavior and can be considered as an application of planar electrodynamics in a curved space-time. Finally, we find that the solution for the case n=1 is invariant under a parity transformation and has the form of a linear confining solution. Highlights: We study exact static confining solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time. The solutions found are a first step in the study of the corrections to the spectra of quarkonia in a curved background. An expression for the confinement potential in low dimensionality is found.
Numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity
NASA Astrophysics Data System (ADS)
Korepanov, V. V.; Matveenko, V. P.; Fedorov, A. Yu.; Shardakov, I. N.
2013-07-01
An algorithm for the numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity is considered. The algorithm is based on separation of a power-law dependence from the finite-element solution in a neighborhood of singular points in the domain under study, where singular solutions are possible. The obtained power-law dependencies allow one to conclude whether the stresses have singularities and what the character of these singularities is. The algorithm was tested for problems of classical elasticity by comparing the stress singularity exponents obtained by the proposed method and from known analytic solutions. Problems with various cases of singular points, namely, body surface points at which either the smoothness of the surface is violated, or the type of boundary conditions is changed, or distinct materials are in contact, are considered as applications. The stress singularity exponents obtained by using the models of classical and asymmetric elasticity are compared. It is shown that, in the case of cracks, the stress singularity exponents are the same for the elasticity models under study, but for other cases of singular points, the stress singularity exponents obtained on the basis of asymmetric elasticity have insignificant quantitative distinctions from the solutions of the classical elasticity.
Modifying PASVART to solve singular nonlinear 2-point boundary problems
NASA Technical Reports Server (NTRS)
Fulton, James P.
1988-01-01
To study the buckling and post-buckling behavior of shells and various other structures, one must solve a nonlinear two-point boundary problem. Since closed-form analytic solutions for such problems are virtually nonexistent, numerical approximations are inevitable. This makes the availability of accurate and reliable software indispensable. In a series of papers, Lentini and Pereyra, expanding on the work of Keller, developed PASVART, an adaptive finite difference solver for nonlinear two-point boundary problems. While the program does produce extremely accurate solutions with great efficiency, it is hindered by a major limitation: PASVART will only locate isolated solutions of the problem. In buckling problems, the solution set is not unique; it will contain singular or bifurcation points, where different branches of the solution set may intersect. Thus, PASVART is useless precisely when the problem becomes interesting. To resolve this deficiency we propose a modification of PASVART that will enable the user to perform a more complete bifurcation analysis: PASVART is combined with the Thurston bifurcation solution, an adaptation of Newton's method motivated by the work of Koiter and reinterpreted by Thurston as an iterative computational method.
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.11)
NASA Astrophysics Data System (ADS)
Long, A. J.
2014-09-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, springflow, groundwater level, solute transport, or cave drip for a measurement point in response to a system input of precipitation, recharge, or solute injection. The RRAWFLOW open-source code is written in the R language and is included in the Supplement to this article along with an example model of springflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Other options include the use of user-defined IRFs and different methods to simulate time-variant systems. For many applications, lumped models simulate the system response with accuracy equal to that of distributed models; moreover, the ease of model construction and calibration makes lumped models a good choice for many applications. RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
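A minimal Python sketch of the convolution (unit hydrograph) idea with a gamma-shaped IRF, as described above; the parameter values and the recharge series are hypothetical and this is not the RRAWFLOW R code.

import numpy as np
from scipy.stats import gamma

# Hypothetical daily recharge series (e.g., mm/day).
recharge = np.zeros(365)
recharge[[10, 50, 120]] = [20.0, 35.0, 15.0]

# Gamma-shaped impulse-response function (IRF); shape/scale are illustrative.
t = np.arange(120)
irf = gamma.pdf(t, a=3.0, scale=8.0)
irf /= irf.sum()                      # unit-volume IRF

# Simulated response by convolution (the unit hydrograph approach).
response = np.convolve(recharge, irf)[:recharge.size]
print(response.max())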
Boundary-integral modeling of cochlear hydrodynamics
NASA Astrophysics Data System (ADS)
Pozrikidis, C.
2008-04-01
A two-dimensional model that captures the essential features of the vibration of the basilar membrane of the cochlea is proposed. The flow due to the vibration of the stapes footplate and round window is modeled by a point source and a point sink, and the cochlear pressure is computed simultaneously with the oscillations of the basilar membrane. The mathematical formulation relies on the boundary-integral representation of the potential flow established far from the basilar membrane and cochlea side walls, neglecting the thin Stokes boundary layer lining these surfaces. The boundary-integral approach furnishes integral equations for the membrane vibration amplitude and pressure distribution on the upper or lower side of the membrane. Several approaches are discussed, and numerical solutions in the frequency domain are presented for a rectangular cochlea model using different membrane response functions. The numerical results reproduce and extend the theoretical predictions of previous authors and delineate the effect of physical and geometrical parameters. It is found that the membrane vibration depends weakly on the position of the membrane between the upper and lower wall of the cochlear channel and on the precise location of the oval and round windows. Solutions of the initial-value problem with a single-period sinusoidal impulse reveal the formation of a traveling wave packet that eventually disappears at the helicotrema.
SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES
Wan, Xiaohai; Li, Zhilin
2012-01-01
Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size. PMID:22701346
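For an interior grid point away from the boundary or interface, the standard five-point discretization mentioned above is (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j})/h^2 + lambda*u_{i,j} = f_{i,j}. The sketch below assembles and solves this operator on a unit square with homogeneous Dirichlet boundaries; it is only an illustration of the regular stencil and does not reproduce the augmented IIM or the interface modifications that are the subject of the paper.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def helmholtz_5pt(n, lam, f):
    """Assemble and solve (Delta + lam) u = f on the unit square with
    n x n interior points and homogeneous Dirichlet boundary conditions."""
    h = 1.0 / (n + 1)
    main = (-4.0 / h**2 + lam) * np.ones(n * n)
    off = np.ones(n * n - 1) / h**2
    off[np.arange(1, n * n) % n == 0] = 0.0       # no coupling across grid rows
    far = np.ones(n * (n - 1)) / h**2
    A = sp.diags([main, off, off, far, far], [0, 1, -1, n, -n], format="csc")
    return spsolve(A, f.ravel()).reshape(n, n)

n = 50
x = np.linspace(0, 1, n + 2)[1:-1]
X, Y = np.meshgrid(x, x, indexing="ij")
u = helmholtz_5pt(n, lam=-10.0, f=np.sin(np.pi * X) * np.sin(np.pi * Y))
print(u.shape)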
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for different test pieces before TWI measurements. For the purpose of forming a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array adapted to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be switched on and off independently. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
Application of the sine-Poisson equation in solar magnetostatics
NASA Technical Reports Server (NTRS)
Webb, G. M.; Zank, G. P.
1990-01-01
Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base, with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or no X-points.
Tang, Liyang
2013-04-04
The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system that could efficiently collect, from various experts, the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively translate various experts' views into optimal solutions to this problem with the support of this pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted through a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented health care provider approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers. The data collected through the pilot health care provider research system in the 2009 to 2010 national expert survey could thus help the decision maker effectively translate various experts' views into optimal solutions to China's institutional problem of health care provider selection.
On The Computation Of The Best-fit Okada-type Tsunami Source
NASA Astrophysics Data System (ADS)
Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.
2017-12-01
The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half-space). This approach is highly effective, in particular in far-field conditions. With this assumption, and given a set of tsunami waveforms recorded by deep-sea pressure sensors and (or) coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a space of possible tsunami sources (the solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth and angles (strike, rake, and dip). To constrain the number of possible solutions we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Empirical Green Functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013 (Instituto Dom Luiz).
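A hedged sketch of the selection step described above: every candidate parameter combination in the solution space is mapped to predicted waveforms (for example, through a precomputed Green-function database) and the candidate with the smallest least-squares misfit to the observed records is retained. The function and data structures below are illustrative assumptions, not the authors' code.

import numpy as np

def best_okada_source(candidates, predict_waveforms, observed):
    """Pick the candidate source whose predicted waveforms best match the
    observations (least-squares misfit summed over all stations).

    candidates        : list of parameter dicts (length, width, slip, depth,
                        strike, dip, rake) spanning the solution space
    predict_waveforms : callable returning a dict {station: waveform array},
                        e.g. built from a precomputed Green-function database
    observed          : dict {station: recorded waveform array}
    """
    best, best_misfit = None, np.inf
    for params in candidates:
        predicted = predict_waveforms(params)
        misfit = sum(np.sum((predicted[s] - observed[s])**2) for s in observed)
        if misfit < best_misfit:
            best, best_misfit = params, misfit
    return best, best_misfit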
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
The TongGuan Section of the Weihe River Watershed is a provincial boundary section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above the TongGuan Section is taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a characteristic section load (CSLD) method is suggested, and the point and non-point source pollution loads of the Weihe River Watershed above the TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point source pollution loads discharge stably, that the monthly non-point source pollution loads change greatly, and that the non-point source share of the total COD pollution load decreases in turn over the rainy, normal and dry periods.
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs parallel to the main shield or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited to the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer
NASA Astrophysics Data System (ADS)
Gupta, A.; Sharan, M.
2017-12-01
The severe impact of harmful air pollutants has always been a cause of concern for a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation were the first and remain the most convenient way to model air pollutant dispersion, as it is easy to handle the dispersion parameters and the related physics in them. A mathematical model describing the crosswind-integrated concentration is presented. The analytical solution to the resulting advection-diffusion equation is limited to constant and simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, a method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first-order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system can in general be expressed in terms of a Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix becomes non-commutative (Martin et al., 1967). An approach based on a Taylor series expansion is introduced to find the numerical solution of the first-order system. The method is applied to various profiles of wind speed and eddy diffusivities. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated with the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by assuming a Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
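A commonly used form of the governing equation consistent with the description above, for the crosswind-integrated concentration C_y(x,z) from a continuous point source of strength Q at height z_s in a boundary layer of height h, is (in LaTeX):

u(z)\,\frac{\partial C_y}{\partial x}
  = \frac{\partial}{\partial z}\!\left( K_z(x,z)\,\frac{\partial C_y}{\partial z} \right),
\qquad
u(z)\,C_y(0,z) = Q\,\delta(z - z_s),
\qquad
\left. K_z\,\frac{\partial C_y}{\partial z} \right|_{z=0,\,z=h} = 0 .

An eigenfunction expansion of C_y in z then reduces this partial differential equation to the x-dependent first-order system mentioned in the abstract; the specific wind and diffusivity profiles are those chosen in the paper, not fixed by this generic form.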
NASA Astrophysics Data System (ADS)
Gallezot, M.; Treyssède, F.; Laguerre, L.
2018-03-01
This paper investigates the computation of the forced response of elastic open waveguides with a numerical modal approach based on perfectly matched layers (PML). With a PML of infinite thickness, the solution can theoretically be expanded as a discrete sum of trapped modes, a discrete sum of leaky modes and a continuous sum of radiation modes related to the PML branch cuts. Yet with numerical methods (e.g. finite elements), the waveguide cross-section is discretized and the PML must be truncated to a finite thickness. This truncation transforms the continuous sum into a discrete set of PML modes. To guarantee the uniqueness of the numerical solution of the forced response problem, an orthogonality relationship is proposed. This relationship is applicable to any type of modes (trapped, leaky and PML modes) and hence allows the numerical solution to be expanded on a discrete sum in a convenient manner. This also leads to an expression for the modal excitability valid for leaky modes. The physical relevance of each type of mode for the solution is clarified through two numerical test cases, a homogeneous medium and a circular bar waveguide example, excited by a point source. The former is favourably compared to a transient analytical solution, showing that PML modes reassemble the bulk wave contribution in a homogeneous medium. The latter shows that the PML mode contribution yields the long-term diffraction phenomenon whereas the leaky mode contribution prevails closer to the source. The leaky mode contribution is shown to remain accurate even with a relatively small PML thickness, hence reducing the computational cost. This is of particular interest for solving three-dimensional waveguide problems, involving two-dimensional cross-sections of arbitrary shapes. Such a problem is handled in a third numerical example by considering a buried square bar.
A two-dimensional solution of the FW-H equation for rectilinear motion of sources
NASA Astrophysics Data System (ADS)
Bozorgi, Alireza; Siozos-Rousoulis, Leonidas; Nourbakhsh, Seyyed Ahmad; Ghorbaniasl, Ghader
2017-02-01
In this paper, a subsonic solution of the two-dimensional Ffowcs Williams and Hawkings (FW-H) equation is presented for the calculation of noise generated by sources moving with constant velocity in a medium at rest or in a moving medium. The solution is represented in the frequency domain and is valid for observers located far from the noise sources. In order to verify the validity of the derived formula, three test cases are considered, namely a monopole, a dipole, and a quadrupole source in a medium at rest or in motion. The calculated results coincide well with the analytical solutions, validating the applicability of the formula to rectilinear subsonic motion problems.
A computer simulation study of the temperature dependence of the hydrophobic hydration
NASA Astrophysics Data System (ADS)
Guillot, B.; Guissani, Y.
1993-11-01
The test particle method is used in molecular dynamics calculations to evaluate the solubility of rare gases and of methane in water between the freezing point and the critical point. A quantitative agreement is obtained between solubility data and simulation results when the simulated water is modeled by the extended simple point charge model (SPCE). From a thermodynamical point of view, it is shown that the hierarchy of rare gas solubilities in water is governed by the solute-water interaction energy, while an entropic term of cavity formation is found to be responsible for the peculiar temperature dependence of the solubility along the coexistence curve, and more precisely, for the solubility minimum exhibited by all the investigated solutes. Near the water critical point, the asymptotic behaviors of the Henry's constant and of the vapor-liquid partition coefficient, as deduced from the simulation data, follow with good accuracy the critical laws recently proposed in the literature for these quantities. Moreover, the calculated partial molar volume of the solute shows a steep increase above 473 K and becomes proportional to the isothermal compressibility of the pure solvent in the vicinity of the critical point, as is observed experimentally. From a microscopic point of view, the evaluation of the solute-solvent pair distribution functions makes it possible to establish a relationship between the increase of the solubility with decreasing temperature in cold water on the one hand, and the formation of clathrate-type cages around the solute on the other hand. Nevertheless, as soon as the boiling point of water is reached, the computer simulation shows that the water molecules of the first hydration shell are no longer oriented tangentially to the solute and tend to reorient towards the bulk. At higher temperatures a deficit of water molecules progressively appears around the solute, a deficit which is directly associated with an increase of the partial molar volume. Although this phenomenon could be related to what is observed in supercritical mixtures, it is emphasized that no long-range critical fluctuation is present in the simulated sample.
Flow regimes for fluid injection into a confined porous medium
Zheng, Zhong; Guo, Bo; Christov, Ivan C.; ...
2015-02-24
We report theoretical and numerical studies of the flow behaviour when a fluid is injected into a confined porous medium saturated with another fluid of different density and viscosity. For a two-dimensional configuration with point source injection, a nonlinear convection–diffusion equation is derived to describe the time evolution of the fluid–fluid interface. In the early time period, the fluid motion is mainly driven by the buoyancy force and the governing equation is reduced to a nonlinear diffusion equation with a well-known self-similar solution. In the late time period, the fluid flow is mainly driven by the injection, and the governing equation is approximated by a nonlinear hyperbolic equation that determines the global spreading rate; a shock solution is obtained when the injected fluid is more viscous than the displaced fluid, whereas a rarefaction wave solution is found when the injected fluid is less viscous. In the late time period, we also obtain analytical solutions including the diffusive term associated with the buoyancy effects (for an injected fluid with a viscosity higher than or equal to that of the displaced fluid), which provide the structure of the moving front. Numerical simulations of the convection–diffusion equation are performed; the various analytical solutions are verified as appropriate asymptotic limits, and the transition processes between the individual limits are demonstrated.
Peak-locking centroid bias in Shack-Hartmann wavefront sensing
NASA Astrophysics Data System (ADS)
Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.
2018-05-01
Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ˜7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
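As a point of reference for the peak-finding algorithms listed above, a plain centre-of-gravity centroid can be sketched as follows; it exhibits exactly the kind of sub-pixel bias the paper analyses, and no correction is applied here. The spot parameters are hypothetical.

import numpy as np

def cog_centroid(img, threshold=0.0):
    """Centre-of-gravity spot centroid of a sub-aperture image, in pixels.

    Thresholding reduces the influence of background noise; the peak-locking
    bias discussed in the paper is not corrected here.
    """
    w = np.clip(img - threshold, 0.0, None)
    total = w.sum()
    ys, xs = np.indices(img.shape)
    return (xs * w).sum() / total, (ys * w).sum() / total

# Hypothetical Gaussian spot with its true centre off the pixel grid.
y, x = np.indices((16, 16))
spot = np.exp(-((x - 7.3)**2 + (y - 8.6)**2) / (2 * 1.5**2))
print(cog_centroid(spot))   # close to (7.3, 8.6)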
Containerless synthesis of amorphous and nanophase organic materials
Benmore, Chris J.; Weber, Johann R.
2016-05-03
The invention provides a method for producing a mixture of amorphous compounds, the method comprising supplying a solution containing the compounds; and allowing at least a portion of the solvent of the solution to evaporate while preventing the solute of the solution from contacting a nucleation point. Also provided is a method for transforming solids to amorphous material, the method comprising heating the solids in an environment to form a melt, wherein the environment contains no nucleation points; and cooling the melt in the environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hep, J.; Konecna, A.; Krysl, V.
2011-07-01
This paper describes the application of the effective source method in forward calculations and of the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in forward calculations thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were obtained from a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up neutron transport calculation was performed in multigroup form using the neutron transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. The three-dimensional (3-D) adjoint function for each required computational outcome has been calculated using the deterministic code TORT and the cross section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Using the acoustic emission locating technology to monitor the health of the structure is important for ensuring the continuous and healthy operation of the complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish the sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, time difference extraction algorithm based on the phase transform (PHAT) weighted generalized cross correlation provides the necessary conditions for the accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested in the marble plate surface, and the results show that the absolute locating error is within the range of 10 mm, which proves the accuracy of this locating method.
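A minimal sketch of the PHAT-weighted generalized cross-correlation step described above: the cross-spectrum of two sensor signals is whitened by its magnitude before the inverse transform, and the time difference of arrival is read off the correlation peak. The sampling rate and the synthetic burst are illustrative and not taken from the paper.

import numpy as np

def gcc_phat(sig, ref, fs):
    """Time difference of arrival (seconds) between sig and ref via GCC-PHAT."""
    n = sig.size + ref.size
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    S /= np.abs(S) + 1e-12                      # PHAT weighting (whitening)
    cc = np.fft.irfft(S, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
    delay = np.argmax(np.abs(cc)) - n // 2
    return delay / fs

fs = 1_000_000                                   # 1 MHz sampling (illustrative)
t = np.arange(2048) / fs
burst = np.sin(2 * np.pi * 150e3 * t) * np.exp(-t * 2e4)
ref = burst
sig = np.roll(burst, 37)                         # 37-sample delay
print(gcc_phat(sig, ref, fs) * fs)               # ~ 37 samples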
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, G.R.; Watson, J.T.
1993-05-01
One of the Tennessee Valley Authority's (TVA's) major goals is cleanup and protection of the waters of the Tennessee River system. Although great strides have been made, point source and nonpoint source pollution still affect the surface water and groundwater quality in the Tennessee Valley and nationally. Causes of this pollution are poorly operating wastewater treatment systems or the lack of them. Practical solutions are needed, and there is great interest and desire to abate water pollution with effective, simple, reliable and affordable wastewater treatment processes. In recognition of this need, TVA began demonstration of the constructed wetlands technology in 1986 as an alternative to conventional, mechanical processes, especially for small communities. Constructed wetlands can be downsized from municipal systems to small systems, such as for schools, camps and even individual homes.
Variations in Gas and Water Pulses at an Arctic Seep: Fluid Sources and Methane Transport
NASA Astrophysics Data System (ADS)
Hong, W.-L.; Torres, M. E.; Portnov, A.; Waage, M.; Haley, B.; Lepland, A.
2018-05-01
Methane fluxes into the oceans are largely dependent on the methane phase as it migrates upward through the sediments. Here we document decoupled methane transport by gaseous and aqueous phases in Storfjordrenna (offshore Svalbard) and propose a three-stage evolution model for active seepage in the region where gas hydrates are present in the shallow subsurface. In a preactive seepage stage, solute diffusion is the primary transport mechanism for methane in the dissolved phase. Fluids containing dissolved methane have high 87Sr/86Sr ratios due to silicate weathering in the microbial methanogenesis zone. During the active seepage stage, migration of gaseous methane results in near-seafloor gas hydrate formation and vigorous seafloor gas discharge with a thermogenic fingerprint. In the postactive seepage stage, the high concentration of dissolved lithium points to the contribution of a deeper-sourced aqueous fluid, which we postulate advects upward following cessation of gas discharge.
Intrawellbore kinematic and frictional losses in a horizontal well in a bounded confined aquifer
NASA Astrophysics Data System (ADS)
Wang, Quanrong; Zhan, Hongbin
2017-01-01
Horizontal drilling has become an appealing technology for water resource exploration or aquifer remediation in recent decades, due to decreasing operational cost and many technical advantages over vertical wells. However, many previous studies on flow into horizontal wells were based on the Uniform Flux Boundary Condition (UFBC), which does not reflect the physical processes of flow inside the well accurately. In this study, we investigated transient flow into a horizontal well in an anisotropic confined aquifer laterally bounded by two constant-head boundaries. Three types of boundary conditions were employed to treat the horizontal well, including UFBC, Uniform-Head Boundary Condition (UHBC), and Mixed-Type Boundary Condition (MTBC). The MTBC model considered both kinematic and frictional effects inside the horizontal well, in which the kinematic effect referred to the accelerational and fluid-inflow effects. A new solution of UFBC was derived by superimposing the point sink/source solutions along the axis of a horizontal well with a uniform flux distribution. New solutions of UHBC and MTBC were obtained by a hybrid analytical-numerical method, and an iterative method was proposed to determine the well discretization required for achieving sufficiently accurate results. This study showed that the differences among the UFBC, UHBC, and MTBC solutions were obvious near the well screen, decreased with distance from the well, and became negligible near the constant-head boundary. The relationship between the flow rate and the drawdown was nonlinear for the MTBC solution, while it was linear for the UFBC and UHBC solutions.
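The UFBC construction described above superimposes point sink solutions with uniform flux along the well axis. The sketch below is a much simplified instance of that superposition idea, assuming an infinite, isotropic aquifer and the classical continuous point-sink solution rather than the bounded anisotropic aquifer and hybrid analytical-numerical scheme of the paper; all names and parameter values are illustrative.

import numpy as np
from scipy.special import erfc

def uniform_flux_drawdown(x, y, z, t, Q, L, K, Ss, n_seg=200):
    """Drawdown at (x, y, z) and time t from a horizontal well along the x-axis
    (from 0 to L) with total rate Q distributed uniformly over the screen
    (the UFBC idea), in an infinite isotropic aquifer.
    K = hydraulic conductivity, Ss = specific storage."""
    alpha = K / Ss                              # hydraulic diffusivity
    xs = (np.arange(n_seg) + 0.5) * L / n_seg   # segment midpoints
    q = Q / n_seg                               # flux assigned to each segment
    r = np.sqrt((x - xs)**2 + y**2 + z**2)
    return np.sum(q / (4 * np.pi * K * r) * erfc(r / (2 * np.sqrt(alpha * t))))

# Illustrative numbers: 100 m well, 500 m3/day, observation point 10 m off axis.
print(uniform_flux_drawdown(x=50.0, y=10.0, z=0.0, t=1.0,
                            Q=500.0, L=100.0, K=10.0, Ss=1e-4))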
High gain antenna pointing on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Vanelli, C. Anthony; Ali, Khaled S.
2005-01-01
This paper describes the algorithm used to point the high gain antennae on NASA/JPL's Mars Exploration Rovers. The gimballed antennae must track the Earth as it moves across the Martian sky during communication sessions. The algorithm accounts for (1) gimbal range limitations, (2) obstructions both on the rover and in the surrounding environment, (3) kinematic singularities in the gimbal design, and (4) up to two joint-space solutions for a given pointing direction. The algorithm computes the intercept times for each of the occlusions and chooses the joint-space solution that provides the longest track time before encountering an occlusion. Upon encountering an occlusion, the pointing algorithm automatically switches to the other joint-space solution if it is not also occluded. The algorithm has successfully provided flop-free pointing for both rovers throughout the mission.
Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo
2014-01-01
Online knowledge resources such as Medline can address most clinicians' patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective: Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods: The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results: Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion: Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677
Reference analysis of the signal + background model in counting experiments
NASA Astrophysics Data System (ADS)
Casadei, D.
2012-01-01
The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. Under the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
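The counting model in question can be written compactly as below (in LaTeX); the second line is simply Bayes' theorem with a reference prior \pi_R(s) for the signal, whose explicit form is the subject of the paper, and a proper prior \pi(b) encoding the background knowledge.

P(n \mid s, b) = \frac{(s+b)^{\,n}}{n!}\, e^{-(s+b)},
\qquad
p(s \mid n) \propto \pi_R(s) \int_0^{\infty} \frac{(s+b)^{\,n}}{n!}\, e^{-(s+b)}\, \pi(b)\, \mathrm{d}b .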
NASA Technical Reports Server (NTRS)
Das, S.
1979-01-01
A method to determine the displacement and the stress on the crack plane for a three-dimensional shear crack of arbitrary shape propagating in an infinite, homogeneous medium which is linearly elastic everywhere off the crack plane is presented. The main idea of the method is to use a representation theorem in which the displacement at any given point on the crack plane is written as an integral of the traction over the whole crack plane. As a test of the accuracy of the numerical technique, the results are compared with known solutions for two simple cases.
Wennberg, Richard; Cheyne, Douglas
2014-05-01
To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2016-04-01
Rapid estimation of the spatial and temporal rupture characteristics of large megathrust earthquakes by finite fault inversion is important for disaster mitigation. For example, estimates of the spatio-temporal evolution of rupture can be used to evaluate population exposure to tsunami waves and ground shaking soon after the event by providing more accurate predictions than are possible with point source approximations. In addition, rapid inversion results can reveal seismic source complexity to guide additional, more detailed subsequent studies. This work develops a method to rapidly estimate the slip distribution of megathrust events while reducing subjective parameter choices by automation. The method is simple yet robust, and we show that it provides excellent preliminary rupture models as soon as 30 minutes after the event for three great earthquakes in the South American subduction zone. This may change slightly for other regions depending on seismic station coverage, but the method can be applied to any subduction region. The inversion is based on W-phase data, since these are rapidly and widely available and of low amplitude, which avoids clipping at close stations for large events. In addition, prior knowledge of the slab geometry (e.g. SLAB 1.0) is applied, and rapid W-phase point source information (time delay and centroid location) is used to constrain the fault geometry and extent. Since the linearization by the multiple time window (MTW) parametrization requires regularization, objective smoothing is achieved by the discrepancy principle in two fully automated steps. First, the residuals are estimated assuming unknown noise levels, and second, a subsequent solution is sought that fits the data to the noise level. The MTW scheme is applied with positivity constraints and a solution is obtained by an efficient non-negative least squares solver. Systematic application of the algorithm to the Maule (2010), Iquique (2014) and Illapel (2015) events illustrates that rapid finite fault inversion with teleseismic data is feasible and provides meaningful results. The results for the three events show excellent data fits and are consistent with other solutions, showing most of the slip occurring close to the trench for the Maule and Illapel events and some deeper slip for the Iquique event. Importantly, the Illapel source model predicts tsunami waveforms in close agreement with observed waveforms. Finally, we develop a new Bayesian approach to approximate uncertainties as part of the rapid inversion scheme with positivity constraints. Uncertainties are estimated by approximating the posterior distribution as a multivariate log-normal distribution. While solving for the posterior adds some additional computational cost, we illustrate that uncertainty estimation is important for meaningful interpretation of finite fault models.
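A minimal sketch of the positivity-constrained least-squares step described above: given a matrix G of Green's functions for the multiple-time-window subfault parametrization and the data vector d, the slip vector is obtained with a non-negative least-squares solver, and stacking rows lam*L (a smoothing operator scaled by lam) under G implements the regularization whose strength the discrepancy principle would select. Names, dimensions and the damping operator are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.optimize import nnls

def regularized_nnls(G, d, L, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 subject to m >= 0
    by stacking the scaled smoothing operator under the data equations."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, _ = nnls(A, b)
    return m

# Hypothetical sizes: 300 data samples, 50 subfault/time-window unknowns.
rng = np.random.default_rng(0)
G = rng.standard_normal((300, 50))
m_true = np.clip(rng.standard_normal(50), 0, None)
d = G @ m_true + 0.05 * rng.standard_normal(300)
L = np.eye(50)                       # simplest smoothing (damping) operator
print(regularized_nnls(G, d, L, lam=1.0).shape)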
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
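Gauss-Seidel relaxation, used above both as the stand-alone iterative solver and as the multigrid smoother, sweeps through the unknowns and updates each one in place from its latest neighbours. A generic sketch for a five-point Poisson operator is given below; it illustrates the relaxation idea only and is not the cylindrical FVE stencil analyzed in the work.

import numpy as np

def gauss_seidel(u, f, h, sweeps=1):
    """In-place Gauss-Seidel sweeps for the 5-point operator -Delta u = f,
    with Dirichlet values stored on the boundary entries of u."""
    for _ in range(sweeps):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
    return u

n = 33
u = np.zeros((n, n))                 # zero Dirichlet boundary values
f = np.ones((n, n))
gauss_seidel(u, f, h=1.0 / (n - 1), sweeps=200)
print(u[n // 2, n // 2])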
MinFinder v2.0: An improved version of MinFinder
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2008-10-01
A new version of the "MinFinder" program is presented that offers an augmented linking procedure for Fortran-77 subprograms, two additional stopping rules and a new start-point rejection mechanism that saves a significant portion of gradient and function evaluations. The method is applied on a set of standard test functions and the results are reported. New version program summaryProgram title: MinFinder v2.0 Catalogue identifier: ADWU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC Licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 14 150 No. of bytes in distributed program, including test data, etc.: 218 144 Distribution format: tar.gz Programming language used: GNU C++, GNU FORTRAN, GNU C Computer: The program is designed to be portable in all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200 000 bytes Classification: 4.9 Catalogue identifier of previous version: ADWU_v1_0 Journal reference of previous version: Computer Physics Communications 174 (2006) 166-179 Does the new version supersede the previous version?: Yes Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero. Solution method: Using a uniform pdf, points are sampled from a rectangular domain. A clustering technique, based on a typical distance and a gradient criterion, is used to decide from which points a local search should be started. Further searching is terminated when all the local minima inside the search domain are thought to be found. This is accomplished via three stopping rules: the "double-box" stopping rule, the "observables" stopping rule and the "expected minimizers" stopping rule. Reasons for the new version: The link procedure for source code in Fortran 77 is enhanced, two additional stopping rules are implemented and a new criterion for accepting-start points, that economizes on function and gradient calls, is introduced. Summary of revisions:Addition of command line parameters to the utility program make_program. Augmentation of the link process for Fortran 77 subprograms, by linking the final executable with the g2c library. Addition of two probabilistic stopping rules. Introduction of a rejection mechanism to the Checking step of the original method, that reduces the number of gradient evaluations. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the objective function.
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight order of magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time required by the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
Shortest path problem on a grid network with unordered intermediate points
NASA Astrophysics Data System (ADS)
Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen
2017-10-01
We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute force show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
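The abstract does not spell out the two stages, so the sketch below is only one plausible instance of a two-stage heuristic on a 4-connected grid: the unordered intermediate points are first ordered greedily (nearest remaining point next), and consecutive points are then joined by breadth-first-search shortest paths. The grid, start cell and intermediate points are hypothetical.

from collections import deque

def bfs_path_length(grid, start, goal):
    """Length of a shortest 4-connected path on a grid of 0 (free) / 1 (blocked)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return None

def two_stage_route(grid, start, points):
    """Stage 1: order the unordered intermediate points greedily (nearest first).
    Stage 2: concatenate grid shortest paths between consecutive points."""
    order, current, remaining, total = [], start, list(points), 0
    while remaining:
        best = min(remaining, key=lambda p: bfs_path_length(grid, current, p))
        total += bfs_path_length(grid, current, best)
        order.append(best)
        remaining.remove(best)
        current = best
    return order, total

grid = [[0] * 8 for _ in range(8)]
print(two_stage_route(grid, (0, 0), [(7, 7), (3, 1), (5, 6)]))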
Fissile solution measurement apparatus
Crane, T.W.; Collinsworth, P.R.
1984-06-11
An apparatus for determining the content of a fissile material within a solution by detecting delayed fission neutrons emitted by the fissile material after it is temporarily irradiated by a neutron source. The apparatus comprises a container holding the solution and having a portion defining a neutron source cavity centrally disposed within the container. The neutron source cavity temporarily receives the neutron source. The container has portions defining a plurality of neutron detector ports that form an annular pattern and surround the neutron source cavity. A plurality of neutron detectors count delayed fission neutrons emitted by the fissile material. Each neutron detector is located in a separate one of the neutron detector ports.
Wilson, P W; Heneghan, A F; Haymet, A D J
2003-02-01
In biological systems, nucleation of ice from a supercooled aqueous solution is a stochastic process and is always heterogeneous. The average time any solution may remain supercooled is determined only by the degree of supercooling and the heterogeneous nucleation sites it encounters. Here we summarize the many and varied definitions of the so-called "supercooling point," also called the "temperature of crystallization" and the "nucleation temperature," and exhibit the natural, inherent width associated with this quantity. We describe a new method for accurate determination of the supercooling point, which takes into account the inherent statistical fluctuations of the value. We show further that many measurements on a single unchanging sample are required to make a statistically valid measure of the supercooling point. This raises an interesting difficulty in circumstances where such repeat measurements are inconvenient or impossible, for example in live-organism experiments. We also discuss the effect of solutes on this temperature of nucleation. Existing data appear to show that various solute species decrease the nucleation temperature somewhat more than the equivalent melting point depression. For non-ionic solutes the species appears not to be a significant factor, whereas for ions the species does affect the extent of the decrease of the nucleation temperature.
NASA Astrophysics Data System (ADS)
Whalen, Daniel; Norman, Michael L.
2006-02-01
Radiation hydrodynamical transport of ionization fronts (I-fronts) in the next generation of cosmological reionization simulations holds the promise of predicting UV escape fractions from first principles as well as investigating the role of photoionization in feedback processes and structure formation. We present a multistep integration scheme for radiative transfer and hydrodynamics for accurate propagation of I-fronts and ionized flows from a point source in cosmological simulations. The algorithm is a photon-conserving method that correctly tracks the position of I-fronts at much lower resolutions than nonconservative techniques. The method applies direct hierarchical updates to the ionic species, bypassing the need for the costly matrix solutions required by implicit methods while retaining sufficient accuracy to capture the true evolution of the fronts. We review the physics of ionization fronts in power-law density gradients, whose analytical solutions provide excellent validation tests for radiation coupling schemes. The advantages and potential drawbacks of direct and implicit schemes are also considered, with particular focus on problem time-stepping, which if not properly implemented can lead to morphologically plausible I-front behavior that nonetheless departs from theory. We also examine the effect of radiation pressure from very luminous central sources on the evolution of I-fronts and flows.
Warming Early Mars by Impact Degassing of Reduced Greenhouse Gases
NASA Technical Reports Server (NTRS)
Haberle, R. M.; Zahnle, K.; Barlow, N. G.
2018-01-01
Reducing greenhouse gases are once again the latest trend in finding solutions to the early Mars climate dilemma. In its current form, collision-induced absorptions (CIA) involving H2 and/or CH4 provide enough extra greenhouse power in a predominately CO2 atmosphere to raise global mean surface temperatures to the melting point of water, provided the atmosphere is thick enough and the reduced gases are abundant enough. Surface pressures must be at least 500 mb and H2 and/or CH4 concentrations must be at or above the several percent level for CIA to be effective. Atmospheres with 1-2 bars of CO2 and 2-10% H2 can sustain surface environments favorable for liquid water. Smaller concentrations of H2 are sufficient if CH4 is also present. If thick CO2 atmospheres with percent-level concentrations of reduced gases are the solution to the faint young Sun paradox for Mars, then plausible mechanisms must be found to generate and sustain the gases. Possible sources of reducing gases include volcanic outgassing, serpentinization, and impact delivery; sinks include photolysis, oxidation, and escape to space. The viability of the reduced greenhouse hypothesis depends, therefore, on the strength of these sources and sinks. In this paper we focus on impact-delivered reduced gases.
Supporting Indonesia's National Forest Monitoring System with LiDAR Observations
NASA Astrophysics Data System (ADS)
Hagen, S. C.
2015-12-01
Scientists at Applied GeoSolutions, Jet Propulsion Laboratory, Winrock International, and the University of New Hampshire are working with the government of Indonesia to enhance the National Forest Monitoring System in Kalimantan, Indonesia. The establishment of a reliable, transparent, and comprehensive NFMS has been limited by a dearth of relevant data that are accurate, low-cost, and spatially resolved at subnational scales. In this NASA-funded project, we are developing, evaluating, and validating several critical components of a NFMS in Kalimantan, Indonesia, focusing on the use of LiDAR and radar imagery for improved carbon stock and forest degradation information. Applied GeoSolutions and the University of New Hampshire have developed an open source software package to process large amounts of LiDAR data quickly, easily, and accurately. The open source project is called lidar2dems and includes the classification of raw LAS point clouds and the creation of Digital Terrain Models (DTMs), Digital Surface Models (DSMs), and Canopy Height Models (CHMs). Preliminary estimates of forest structure and forest damage from logging derived from these data sets support the idea that comprehensive, well documented, freely available software for processing LiDAR data can enable countries such as Indonesia to cost-effectively monitor their forests with high precision.
An architecture for genomics analysis in a clinical setting using Galaxy and Docker
Digan, W; Countouris, H; Barritault, M; Baudoin, D; Laurent-Puig, P; Blons, H; Burgun, A
2017-01-01
Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from generation to analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular open-source bioinformatics analysis support software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Alongside the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures data traceability by recording analytical actions and by associating the inputs and outputs of the tools with the EDAM ontology through ReGaTe. The source code is freely available on GitHub at https://github.com/CARPEM/GalaxyDocker. PMID:29048555
An architecture for genomics analysis in a clinical setting using Galaxy and Docker.
Digan, W; Countouris, H; Barritault, M; Baudoin, D; Laurent-Puig, P; Blons, H; Burgun, A; Rance, B
2017-11-01
Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from generation to analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular open-source bioinformatics analysis support software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Alongside the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures data traceability by recording analytical actions and by associating the inputs and outputs of the tools with the EDAM ontology through ReGaTe. The source code is freely available on GitHub at https://github.com/CARPEM/GalaxyDocker. © The Author 2017. Published by Oxford University Press.
Computations of steady-state and transient premixed turbulent flames using pdf methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulek, T.; Lindstedt, R.P.
1996-03-01
Premixed propagating turbulent flames are modeled using a one-point, single-time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model of Valino and Dopazo for scalar mixing, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.
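The particle Monte Carlo update underlying such pdf methods can be sketched in a few lines. The example below substitutes Pope's simplified Langevin model for the HP/LSSG velocity closures and an IEM relaxation for the binomial Langevin scalar model, with an arbitrary one-step source term; all constants and time scales are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
C0, C_phi = 2.1, 2.0        # Langevin and mixing constants
k, eps = 1.0, 1.0           # turbulent kinetic energy and dissipation
dt, n_steps, n_p = 1e-3, 2000, 20000

u = rng.normal(0.0, np.sqrt(2.0 * k / 3.0), n_p)    # one velocity component per particle
c = rng.uniform(0.0, 1.0, n_p)                      # reaction progress variable

def reaction_rate(c):
    """Placeholder one-step source term; the paper extracts rates from laminar flame computations."""
    return 5.0 * c * (1.0 - c)

for _ in range(n_steps):
    U_mean, c_mean = u.mean(), c.mean()
    # Simplified Langevin model: drift towards the mean plus an isotropic diffusion term
    u += -(0.5 + 0.75 * C0) * (eps / k) * (u - U_mean) * dt \
         + np.sqrt(C0 * eps * dt) * rng.standard_normal(n_p)
    # IEM mixing (stand-in for the binomial Langevin scalar model) plus chemical source
    c += -0.5 * C_phi * (eps / k) * (c - c_mean) * dt + reaction_rate(c) * dt
    c = np.clip(c, 0.0, 1.0)

print(u.std(), c.mean())
```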
Higher order approximation to the Hill problem dynamics about the libration points
NASA Astrophysics Data System (ADS)
Lara, Martin; Pérez, Iván L.; López, Rosario
2018-06-01
An analytical solution to the Hill problem Hamiltonian expanded about the libration points has been obtained by means of perturbation techniques. In order to compute the higher orders of the perturbation solution that are needed to capture, within a reasonable accuracy, all the relevant periodic orbits originating from the libration points, the normalization is approached in complex variables. The validity of the solution extends to energy values considerably far from that of the libration points and can therefore be used in the computation of Halo orbits as an alternative to the classical Lindstedt-Poincaré approach. Furthermore, the theory correctly predicts the existence of the two-lane bridge of periodic orbits linking the families of planar and vertical Lyapunov orbits.
Approximate analytical solutions in the analysis of thin elastic plates
NASA Astrophysics Data System (ADS)
Goloskokov, Dmitriy P.; Matrosov, Alexander V.
2018-05-01
Two approaches to the construction of approximate analytical solutions for the bending of a rectangular thin plate are presented: a superposition method based on the method of initial functions (MIF) and one built using the Green's function in the form of orthogonal series. A comparison of the two approaches is carried out by analyzing a square plate clamped along its contour. The behavior of the bending moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except in the neighborhoods of the corner points, where differences appear in the values of the bending moments and generalized shearing forces.
W-phase estimation of first-order rupture distribution for megathrust earthquakes
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates, with uncertainties, of spatial slip distributions. The algorithm uses W-phase waveforms and a linear, automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and the regularization parameters are chosen according to the discrepancy principle by grid search. Noise in the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could deliver solutions in less than one hour after the origin time.
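The regularization step described here (Tikhonov regularization with the parameter chosen by the discrepancy principle via grid search) can be illustrated independently of the W-phase Green's functions. In the sketch below, G, d and the noise level are synthetic stand-ins for the real design matrix and waveform data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the linear over-parametrized problem d = G m + noise
n_data, n_params = 400, 120
G = rng.normal(size=(n_data, n_params))
m_true = np.zeros(n_params)
m_true[30:50] = 1.0                        # a compact "slip patch"
sigma = 0.1
d = G @ m_true + sigma * rng.standard_normal(n_data)

def tikhonov(G, d, lam):
    """Solve (G^T G + lam^2 I) m = G^T d."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ d)

# Discrepancy principle: pick the lambda whose residual norm matches the expected noise level
target = sigma * np.sqrt(n_data)
lams = np.logspace(-3, 2, 60)
residuals = np.array([np.linalg.norm(d - G @ tikhonov(G, d, lam)) for lam in lams])
lam_star = lams[np.argmin(np.abs(residuals - target))]
m_hat = tikhonov(G, d, lam_star)
print(f"lambda* = {lam_star:.3g}, model misfit = {np.linalg.norm(m_hat - m_true):.3f}")
```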
Impact of selected troposphere models on Precise Point Positioning convergence
NASA Astrophysics Data System (ADS)
Kalita, Jakub; Rzepecka, Zofia
2016-04-01
The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the VMF1, GPT2w, MOPS and ZERO-WET models. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to allow comparison of the impact of the applied nominal values. The worst case initializes the zenith wet delay with a zero value (ZERO-WET). The impact of all possible models for the tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most cases all solutions converge to the same values during the first hour of processing. Finally, the results were compared against results obtained during calm tropospheric conditions.
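For readers unfamiliar with the nominal values being compared, the hydrostatic part of the zenith delay is commonly computed from surface pressure with the Saastamoinen formula, while the wet part is what remains for the filter to estimate. The sketch below uses the commonly quoted coefficients and a made-up station; it only illustrates the kind of nominal value such models supply, not the VMF1/GPT2w/MOPS implementations.

```python
import numpy as np

def saastamoinen_zhd(pressure_hpa: float, lat_rad: float, height_m: float) -> float:
    """Zenith hydrostatic delay [m] from surface pressure, latitude and height
    (commonly quoted Saastamoinen coefficients; treat as an illustrative approximation)."""
    return 0.0022768 * pressure_hpa / (1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 2.8e-7 * height_m)

# Hypothetical mid-latitude European station
zhd = saastamoinen_zhd(pressure_hpa=1013.25, lat_rad=np.radians(52.0), height_m=100.0)
print(f"nominal ZHD ~ {zhd:.3f} m")   # around 2.3 m; the wet delay is left to the estimation
```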
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
Generalized cable equation model for myelinated nerve fiber.
Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph
2005-10-01
Herein, the well-known cable equation for the nonmyelinated axon model is extended analytically to a myelinated axon formulation. The myelinated membrane conductivity is represented via a Fourier series expansion. The classical cable equation is thereby modified into a linear second-order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, uniformly converges provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of a piecewise constant membrane conductivity profile, resulting in an explicit closed-form expression for the transmembrane potential in terms of trigonometric functions. The Floquet modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the point-wise passivity constraint on the periodic membrane is properly modified. Indeed, the modified condition, enforcing the passivity constraint on the average conductivity only, leads for the first time to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber is verified herein through a rigorous Green's function formulation and numerical simulations of the transmembrane potential induced in a three-dimensional myelinated cylindrical cell. It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
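The Floquet (activation) modes invoked above can be computed numerically for a Hill's equation with a piecewise-constant periodic coefficient by integrating two independent solutions over one period and examining the monodromy matrix. The coefficient profile in the sketch below is an arbitrary two-segment illustration, not the paper's conductivity data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hill's equation y'' + q(x) y = 0 with a piecewise-constant periodic coefficient q
PERIOD = 1.0

def q(x):
    """Two-segment profile per period (an arbitrary stand-in for node/internode conductivity)."""
    return 8.0 if (x % PERIOD) < 0.2 else -1.5

def rhs(x, y):
    return [y[1], -q(x) * y[0]]

def propagate_one_period(y0):
    sol = solve_ivp(rhs, (0.0, PERIOD), y0, rtol=1e-10, atol=1e-12, max_step=1e-3)
    return sol.y[:, -1]

# Monodromy matrix from the two canonical initial conditions
M = np.column_stack([propagate_one_period([1.0, 0.0]),
                     propagate_one_period([0.0, 1.0])])
multipliers = np.linalg.eigvals(M)        # Floquet multipliers
print("trace(M) =", np.trace(M))          # |trace| <= 2 -> bounded Floquet modes
print("Floquet multipliers:", multipliers)
```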
Real-time source deformation modeling through GNSS permanent stations at Merapi volcano (Indonesia)
NASA Astrophysics Data System (ADS)
Beauducel, F.; Nurnaning, A.; Iguchi, M.; Fahmi, A. A.; Nandaka, M. A.; Sumarti, S.; Subandriyo, S.; Metaxian, J. P.
2014-12-01
Mt. Merapi (Java, Indonesia) is one of the most active and dangerous volcanoes in the world. A first GPS repetition network was set up and periodically measured from 1993 onward, allowing the detection of a deep magma reservoir, quantification of the magma flux in the conduit and identification of shallow discontinuities around the former crater (Beauducel and Cornet, 1999; Beauducel et al., 2000, 2006). After the 2010 centennial eruption, when this network was almost completely destroyed, Indonesian and Japanese teams installed a new continuous GPS network for monitoring purposes (Iguchi et al., 2011), consisting of 3 stations located on the volcano flanks, plus a reference station at the Yogyakarta Observatory (BPPTKG). In the framework of the DOMERAPI project (2013-2016) we have completed this network with 5 additional stations located in the summit area and around the volcano. The new stations are 1-Hz GNSS (GPS + GLONASS) receivers with near real-time data streaming to the Observatory. An automatic processing chain has been developed and included in the WEBOBS system (Beauducel et al., 2010), based on the GIPSY software, computing precise daily moving solutions every hour, together with time series and velocity vectors over different time scales (2 months, 1 year and 5 years). A real-time source modeling estimation has also been implemented. It uses the depth-varying point source solution (Mogi, 1958; Williams and Wadge, 1998) in a systematic inverse-problem model exploration that yields the source location, volume variation and a 3-D probability map. The operational system should be able to better detect and estimate the location and volume variations of possible magma sources, and to follow magma transfer towards the surface. This should improve monitoring and contribute to decision making during future unrest or eruptions.
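The point source referenced here goes back to the Mogi (1958) elastic half-space solution, whose forward model is compact enough to sketch; a grid search over such forward models is one way to build the kind of probability map described. The source depth and volume change below are illustrative values, not Merapi results.

```python
import numpy as np

def mogi_surface_displacement(x, y, xs, ys, depth, dV, nu=0.25):
    """Surface displacements (ur, uz) of a Mogi point source of volume change dV [m^3]
    at (xs, ys, depth) in an elastic half-space with Poisson ratio nu (flat-Earth sketch)."""
    r = np.hypot(x - xs, y - ys)
    R3 = (r**2 + depth**2) ** 1.5
    coeff = (1.0 - nu) / np.pi * dV
    ur = coeff * r / R3          # radial (horizontal) displacement
    uz = coeff * depth / R3      # uplift
    return ur, uz

# Illustrative source 5 km beneath the summit with +2e6 m^3 of inflation
x = np.linspace(-10e3, 10e3, 5)
ur, uz = mogi_surface_displacement(x, 0.0, 0.0, 0.0, depth=5e3, dV=2e6)
print(np.round(uz * 1e3, 2))     # uplift in mm along a profile through the source
```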
Islam, Salwa; Fitzgerald, Lisa
2016-01-01
High rates of obesity are a significant issue amongst Indigenous populations in many countries around the world. Media framing of issues can play a critical role in shaping public opinion and government policy. A broad range of media analyses have been conducted on various aspects of obesity; however, media representation of Indigenous obesity remains unexplored. In this study we investigate how obesity in Australia's Indigenous population is represented in newsprint media coverage. Media articles published between 2007 and 2014 were analysed for the distribution and extent of coverage over time and across Indigenous and mainstream media sources using quantitative content analysis. Representation of the causes of and solutions to Indigenous obesity, and framing in text and image content, was examined using qualitative framing analysis. Media coverage of Indigenous obesity was very limited, with no clear trends in reporting over time or across sources. The single Indigenous media source was the second largest contributor to the media discourse on this issue. Structural causes/origins were most often cited, and individual solutions were comparatively overrepresented. A range of frames were employed across the media sources. All images reinforced textual framing except for one article where the image depicted individual factors whereas the text referred to structural determinants. This study provides a starting point for an important area of research that needs further investigation. The findings highlight the importance of alternative news media outlets, such as The Koori Mail, and suggest that these should be developed to enhance the quality and diversity of media coverage. Media organisations can actively contribute to improving Indigenous health through raising awareness, evidence-based balanced reporting, and the development of closer ties with Indigenous health workers.
Elastic parabolic equation solutions for underwater acoustic problems using seismic sources.
Frank, Scott D; Odom, Robert I; Collis, Jon M
2013-03-01
Several problems of current interest involve elastic-bottom, range-dependent ocean environments with buried or earthquake-type sources, specifically oceanic T-wave propagation studies and interface-wave related analyses. Additionally, observed deep shadow-zone arrivals are not predicted by ray-theoretic methods, and attempts to model them with fluid-bottom parabolic equation solutions suggest that it may be necessary to account for elastic bottom interactions. In order to study energy conversion between elastic and acoustic waves, current elastic parabolic equation solutions must be modified to allow for seismic starting fields in underwater acoustic propagation environments. Two types of elastic self-starter are presented. An explosive-type source is implemented using a compressional self-starter, and the resulting acoustic field is consistent with benchmark solutions. A shear wave self-starter is implemented and shown to generate transmission loss levels consistent with the explosive source. Source fields can be combined to generate starting fields for source types such as explosions, earthquakes, or pile driving. Examples demonstrate the use of source fields for shallow sources or deep ocean-bottom earthquake sources, where downslope conversion, a known T-wave generation mechanism, is modeled. Self-starters are interpreted in the context of the seismic moment tensor.
Estimation of bipolar jets from accretion discs around Kerr black holes
NASA Astrophysics Data System (ADS)
Kumar, Rajiv; Chattopadhyay, Indranil
2017-08-01
We analyse flows around a rotating black hole and obtain self-consistent accretion-ejection solutions in a fully general relativistic prescription. The entire energy-angular momentum parameter space is investigated in the advective regime to obtain shocked and shock-free accretion solutions. The jet equations of motion are solved along the von Zeipel surfaces computed from the post-shock disc, simultaneously with the equations of the accretion disc along the equatorial plane. For a given spin parameter, the mass outflow rate increases as the shock moves closer to the black hole, but eventually decreases, maximizing at some intermediate shock location. Interestingly, we obtain all possible types of jet solution, for example, a steady shock solution with multiple critical points, a bound solution with two critical points and a smooth solution with a single critical point. Multiple critical points may exist in the jet solution for spin parameters as ≥ 0.5. The jet terminal speed generally increases if the accretion shock forms closer to the horizon and is higher for a corotating black hole than for a counter-rotating or non-rotating one. Quantitatively speaking, shocks in the jet may form for as > 0.6 and range between 6 rg and 130 rg above the equatorial plane, while the jet terminal speed vj∞ > 0.35c if the Bernoulli parameter E ≥ 1.01 for as > 0.99.
Uncertainty Analyses for Back Projection Methods
NASA Astrophysics Data System (ADS)
Zeng, H.; Wei, S.; Wu, W.
2017-12-01
So far, few comprehensive error analyses for back projection methods have been conducted, although it is evident that high-frequency seismic waves can be easily affected by earthquake depth, focal mechanisms and the Earth's 3D structure. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths, focal mechanisms, and 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme combining the Direct Solution Method and the Spectral Element Method. We then back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth; e.g., the error in horizontal location can exceed 20 km for a depth of 40 km. If the array is located around the nodal direction of the direct P-waves, the teleseismic P-waves are dominated by the depth phases. Back projections are then actually imaging the reflection points of the depth phases rather than the rupture front. Besides depth phases, the strong and long-lasting coda waves caused by 3D effects near the trench can lead to additional complexities, which we also test here. The strength contrast of different frequency contents in the rupture models also introduces some variation into the back projection results. In the synthetic tests, MUSIC and CS derive consistent results. While MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer earthquake rupture physics.
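Stripped of the MUSIC and CS enhancements tested here, the core back projection operation is a delay-and-sum stack of array waveforms over a grid of trial source points. The toy sketch below uses a line of stations, a constant wave speed and a synthetic impulsive source; real applications use teleseismic travel-time tables and the 3D synthetics described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy geometry: stations along a line, one impulsive source, constant wave speed
n_sta, speed, fs, t_max = 12, 6.0, 50.0, 60.0        # stations, km/s, Hz, s
stations = rng.uniform(50.0, 200.0, n_sta)           # station positions along the line [km]
true_src = 0.0                                       # true source coordinate [km]
t = np.arange(0.0, t_max, 1.0 / fs)

def ricker(t, t0, f0=1.0):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Synthetic records: direct arrival at distance/speed plus white noise
records = np.array([ricker(t, abs(s - true_src) / speed) for s in stations])
records += 0.1 * rng.standard_normal(records.shape)

# Delay-and-sum back projection over a grid of trial source positions
grid = np.linspace(-100.0, 100.0, 201)
stack_power = np.empty_like(grid)
for i, xg in enumerate(grid):
    shifted = [np.interp(t, t - abs(s - xg) / speed, rec)    # undo the predicted delay
               for s, rec in zip(stations, records)]
    stack_power[i] = np.max(np.sum(shifted, axis=0) ** 2)

print("best trial source:", grid[np.argmax(stack_power)], "km  (true: 0 km)")
```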
NASA Astrophysics Data System (ADS)
Bakar, Shahirah Abu; Arifin, Norihan Md; Ali, Fadzilah Md; Bachok, Norfifah; Nazar, Roslinda
2017-08-01
The stagnation-point flow over a shrinking sheet in a Darcy-Forchheimer porous medium is studied numerically. The governing partial differential equations are transformed into ordinary differential equations using a similarity transformation and then solved numerically by a shooting technique implemented in Maple. Dual solutions are observed in a certain range of the shrinking parameter. Based on the numerical solutions, we perform a stability analysis with the bvp4c solver in Matlab to identify which of the non-unique solutions is stable. We further obtain numerical results for each solution, which enable us to discuss the features of the respective solutions.
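A shooting solution of this kind can be sketched for the classical stagnation-point similarity equation f''' + f f'' + 1 - f'^2 = 0 with a stretching/shrinking wall condition f'(0) = λ. The Darcy-Forchheimer terms of the paper are omitted, SciPy replaces Maple/Matlab, the shrinking parameter is an arbitrary illustrative value, and the bracket for the wall-shear guess was chosen by trial.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Classical stagnation-point similarity equation (Darcy-Forchheimer terms omitted):
#   f''' + f f'' + 1 - f'^2 = 0,  f(0) = 0,  f'(0) = lam (shrinking if lam < 0),  f'(inf) -> 1

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -(f * fpp + 1.0 - fp**2)]

def shoot(fpp0, lam, eta_inf=10.0):
    """Integrate from the wall with a guessed f''(0); return f'(eta_inf) - 1."""
    runaway = lambda eta, y: abs(y[1]) - 10.0   # stop early if f' blows up off-target
    runaway.terminal = True
    sol = solve_ivp(rhs, (0.0, eta_inf), [0.0, lam, fpp0],
                    events=runaway, rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

lam = -0.5                                        # illustrative shrinking parameter
fpp0 = brentq(lambda s: shoot(s, lam), 1.0, 2.0)  # bracket found by trial
print(f"lambda = {lam}: f''(0) = {fpp0:.5f}")     # roughly 1.5 expected for the classical problem
```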
Comparison of actual vs synthesized ternary phase diagrams for solutes of cryobiological interest
Kleinhans, F.W.; Mazur, Peter
2009-01-01
Phase diagrams are of great utility in cryobiology, especially those consisting of a cryoprotective agent (CPA) dissolved in a physiological salt solution. These ternary phase diagrams consist of plots of the freezing points of increasing concentrations of solutions of cryoprotective agents (CPA) plus NaCl. Because they are time-consuming to generate, ternary diagrams are only available for a small number of CPAs. We wanted to determine whether accurate ternary phase diagrams could be synthesized by adding together the freezing point depressions of binary solutions of CPA/water and NaCl/water which match the corresponding solute molality concentrations in the ternary solution. We begin with a low concentration of a solution of CPA + salt of given R (CPA/salt) weight ratio. Ice formation in that solution is mimicked by withdrawing water from it, which increases the concentrations of both the CPA and the NaCl. We compute the individual solute concentrations, determine their freezing points from published binary phase diagrams, and sum the freezing points. These yield the synthesized ternary phase diagram for a solution of given R. They were compared with published experimental ternary phase diagrams for glycerol, dimethyl sulfoxide (DMSO), sucrose, and ethylene glycol (EG) plus NaCl in water. For the first three, the synthesized and experimental phase diagrams agreed closely, with some divergence occurring as wt % concentrations exceeded 30% for DMSO and 55% for glycerol and sucrose. However, in the case of EG there were substantial differences over nearly the entire range of concentrations, which we attribute to systematic errors in the experimental EG data. New experimental EG work will be required to resolve this issue. PMID:17350609
Comparison of actual vs. synthesized ternary phase diagrams for solutes of cryobiological interest.
Kleinhans, F W; Mazur, Peter
2007-04-01
Phase diagrams are of great utility in cryobiology, especially those consisting of a cryoprotective agent (CPA) dissolved in a physiological salt solution. These ternary phase diagrams consist of plots of the freezing points of increasing concentrations of solutions of cryoprotective agents (CPA) plus NaCl. Because they are time-consuming to generate, ternary diagrams are only available for a small number of CPAs. We wanted to determine whether accurate ternary phase diagrams could be synthesized by adding together the freezing point depressions of binary solutions of CPA/water and NaCl/water which match the corresponding solute molality concentrations in the ternary solution. We begin with a low concentration of a solution of CPA + salt of given R (CPA/salt) weight ratio. Ice formation in that solution is mimicked by withdrawing water from it, which increases the concentrations of both the CPA and the NaCl. We compute the individual solute concentrations, determine their freezing points from published binary phase diagrams, and sum the freezing points. These yield the synthesized ternary phase diagram for a solution of given R. They were compared with published experimental ternary phase diagrams for glycerol, dimethyl sulfoxide (DMSO), sucrose, and ethylene glycol (EG) plus NaCl in water. For the first three, the synthesized and experimental phase diagrams agreed closely, with some divergence occurring as wt% concentrations exceeded 30% for DMSO and 55% for glycerol and sucrose. However, in the case of EG there were substantial differences over nearly the entire range of concentrations, which we attribute to systematic errors in the experimental EG data. New experimental EG work will be required to resolve this issue.
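The synthesis procedure described (withdraw water, compute each solute's molality, look up the two binary freezing point depressions and sum them) is easy to prototype. In the sketch below the binary freezing-point functions are crude placeholders with near-linear slopes, standing in for the published binary phase diagrams that the paper interpolates; the CPA/salt ratio and solute loads are illustrative.

```python
M_NACL, M_GLYCEROL = 58.44, 92.09      # molar masses [g/mol]

def fpd_nacl(molality):
    """Placeholder binary freezing point depression for NaCl [deg C]; the paper
    interpolates published binary phase diagrams instead of using a formula."""
    return -3.35 * molality             # rough near-linear slope, illustration only

def fpd_cpa(molality):
    """Placeholder binary freezing point depression for the CPA (glycerol here) [deg C]."""
    return -1.86 * molality             # ideal colligative slope, illustration only

def synthesized_ternary_fp(R, solute_g_per_g_water):
    """Freezing point of a CPA + NaCl solution with CPA/NaCl weight ratio R, synthesized
    by computing each solute's molality and summing the two binary depressions."""
    grams_cpa = solute_g_per_g_water * R / (1.0 + R)
    grams_nacl = solute_g_per_g_water / (1.0 + R)
    m_cpa = grams_cpa / M_GLYCEROL * 1000.0        # mol per kg of water
    m_nacl = grams_nacl / M_NACL * 1000.0
    return fpd_cpa(m_cpa) + fpd_nacl(m_nacl)

# Mimic freeze concentration: withdrawing water raises the solute load per gram of water
for load in (0.05, 0.10, 0.20, 0.40):
    print(f"R = 10, {load:.2f} g solute per g water -> Tf ~ {synthesized_ternary_fp(10.0, load):6.2f} C")
```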
Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao
2014-10-06
Nonlinear optical cavities are crucial in both classical and quantum optics; in particular, optical parametric oscillators are nowadays one of the most versatile and tunable sources of coherent light, as well as the sources of the highest-quality quantum-correlated light in the continuous-variable regime. Being nonlinear systems, they can be driven through critical points at which one solution ceases to exist in favour of a new one, and it is close to these points that quantum correlations are strongest. The simplest description of such systems consists in writing the quantum fields as a classical part plus quantum fluctuations and then linearizing the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it yields unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated, especially regarding the evaluation of two-time correlators, which are of major importance for computing observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection with the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems that have access to non-Gaussian states.
The Influences of Lamination Angles on the Interior Noise Levels of an Aircraft
NASA Technical Reports Server (NTRS)
Fernholz, Christian M.; Robinson, Jay H.
1996-01-01
The feasibility of reducing the interior noise levels of an aircraft passenger cabin through optimization of the composite lay-up of the fuselage is investigated. MSC/NASTRAN, a commercially available finite element code, is used to perform the dynamic analysis and subsequent optimization of the fuselage. The numerical calculation of the sensitivity of acoustic pressure to lamination angle is verified using a simple thin, cylindrical shell with point force excitations as noise sources. The thin shell used represents a geometry similar to the fuselage, and analytic solutions are available for the cylindrical thin shell equations of motion. Optimization of the lamination angle for the reduction of interior noise is performed using a finite element model of an actual aircraft fuselage. The aircraft modeled for this study is the Beech Starship. Point forces simulate the structure-borne noise produced by the engines and are applied to the fuselage at the wing mounting locations. These forces are the noise source for the optimization problem. The acoustic pressure response is reduced at a number of points in the fuselage and over a number of frequencies. The objective function is minimized with the constraint that it be larger than the maximum sound pressure level at the response points in the passenger cabin for all excitation frequencies in the range of interest. Results from the study of the fuselage model indicate that a reduction in interior noise levels is possible over a finite frequency range through optimal configuration of the lamination angles in the fuselage. Noise reductions of roughly 4 dB were attained. For frequencies outside the optimization range, the acoustic pressure response may increase after optimization. The effects of changing lamination angle on the overall structural integrity of the airframe are not considered in this study.
Applications of isotopes to tracing sources of solutes and water in shallow systems
Kendall, Carol; Krabbenhoft, David P.
1995-01-01
New awareness of the potential danger to water supplies posed by the use of agricultural chemicals has focused attention on the nature of groundwater recharge and the mobility of various solutes, especially nitrate and pesticides, in shallow systems. A better understanding of hydrologic flowpaths and solute sources is required to determine the potential impact of sources of contamination on water supplies, to develop management practices for preserving water quality, and to develop remediation plans for sites that are already contaminated. In many cases, environmental isotopes can be employed as 'surgical tools' for answering very specific questions about water and solute sources. Isotopic data can often provide more accurate information about the system than hydrologic measurements or complicated hydrologic models. This note focuses on practical and cost-effective examples of how naturally-occurring isotopes can be used to track water and solutes as they move through shallow systems.
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow-independent point source emissions (mainly of domestic and industrial origin) and flow-dependent diffuse source emissions (mainly of agricultural origin). Hence, rivers dominated by point sources will exhibit the highest P concentrations during low flow, when the flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit the highest P concentrations during high flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how the point source contribution may have been overestimated in previous studies, because a biogeochemical process mimics a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data for SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns, opposite to that of NO3. We hypothesise that reductive dissolution of Fe oxyhydroxides might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when the eutrophication risk is maximal.
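A two-term load apportionment model of the kind referred to here expresses concentration as a flow-diluted (point-like) term plus a flow-mobilised (diffuse-like) term, concentration = A·Q^(B-1) + C·Q^(D-1) with B ≤ 1 ≤ D, fitted to paired flow-concentration data. The sketch below fits such a model to synthetic data; all values are illustrative and none come from the catchments studied.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def lam_concentration(Q, A, B, C, D):
    """Two-term load apportionment model in concentration form: a point-source-like
    term A*Q**(B-1) (diluted as flow rises, B <= 1) plus a diffuse-like term
    C*Q**(D-1) (mobilised by flow, D > 1)."""
    return A * Q ** (B - 1.0) + C * Q ** (D - 1.0)

# Synthetic flow/concentration record (illustrative values, not the catchments' data)
Q = np.sort(rng.lognormal(mean=0.0, sigma=0.8, size=300))           # discharge [m3/s]
conc = lam_concentration(Q, 0.02, 1.0, 0.01, 1.8)
conc *= 1.0 + 0.15 * rng.standard_normal(Q.size)                    # multiplicative noise

p0 = [0.01, 0.95, 0.01, 1.5]
bounds = ([0.0, 0.5, 0.0, 1.0], [1.0, 1.0, 1.0, 3.0])               # enforce B <= 1 <= D
(A, B, C, D), _ = curve_fit(lam_concentration, Q, conc, p0=p0, bounds=bounds)

point_load = A * Q ** B
diffuse_load = C * Q ** D
share = point_load / (point_load + diffuse_load)                    # apparent point-source share
print(f"A={A:.3f} B={B:.2f} C={C:.3f} D={D:.2f}  mean apparent point share={share.mean():.2f}")
```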