A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late nineties. For linear elliptic PDEs, this method can be considered the analogue of the Green's function approach, but formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples, we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
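To make the two practical choices above concrete, the sketch below sets up such a collocation system for Laplace's equation on a square: given Dirichlet data, the unknown Neumann values are expanded in Legendre polynomials on each side, and the global relation is enforced at complex collocation points k chosen on rays tied to the side directions. This is a minimal illustration under assumptions of our own (the ray choice, scalings, and row normalization are heuristic), not the authors' specific rules.

```python
# Minimal collocation sketch of the unified transform for Laplace's equation
# on the square with corners +-1, +-i.  For u harmonic, f = u_x - i*u_y is
# analytic and f dz = du + i*u_N ds along the boundary, so the global relation
#     sum_j  int_{-1}^{1} e^{-i k z_j(t)} [ du_j/dt + i |h_j| uN_j(t) ] dt = 0
# holds for every complex k.  Dirichlet data are given; the Neumann values
# are the unknowns.  The ray choice and scalings below are heuristic.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

corners = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # counterclockwise
mids = 0.5 * (corners + np.roll(corners, -1))
halfs = 0.5 * (np.roll(corners, -1) - corners)           # side half-vectors h_j

grad_u = lambda z: (2 * z.real, -2 * z.imag)             # test solution u = x^2 - y^2

N = 8                                                    # Legendre modes per side
tq, wq = leggauss(24)                                    # quadrature on [-1, 1]
ks = [r * np.conj(h) / abs(h)                            # rays tied to the sides
      for h in halfs for r in np.linspace(0.5, 4.0, 2 * N)]

rows, rhs = [], []
for k in ks:
    row, b = np.zeros(4 * N, dtype=complex), 0.0
    for j in range(4):
        z = mids[j] + halfs[j] * tq
        E = np.exp(-1j * k * z) * wq
        gx, gy = grad_u(z)                               # known tangential part du/dt
        b -= np.sum(E * (gx * halfs[j].real + gy * halfs[j].imag))
        for l in range(N):                               # unknown Neumann modes
            row[j * N + l] = 1j * abs(halfs[j]) * np.sum(E * legval(tq, [0] * l + [1]))
    rows.append(row); rhs.append(b)

A, b = np.array(rows), np.array(rhs)
nrm = np.linalg.norm(A, axis=1)                          # simple row scaling keeps
A, b = A / nrm[:, None], b / nrm                         # the condition number tame
AR = np.vstack([A.real, A.imag])
bR = np.concatenate([b.real, b.imag])
coef, *_ = np.linalg.lstsq(AR, bR, rcond=None)

# side 0 is the top edge y = 1 (outward normal +y), so u_N = -2y = -2 there
print(legval(0.0, coef[:N]), "vs exact", -2.0)
```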
NASA Astrophysics Data System (ADS)
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumference. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation, by integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (a circular disc), on which a null normal derivative of the potential is imposed and the dipole distribution is expressed as a Fourier-Bessel expansion. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.
Inverse scattering for an exterior Dirichlet problem
NASA Technical Reports Server (NTRS)
Hariharan, S. I.
1981-01-01
Scattering due to a metallic cylinder which is in the field of a wire carrying a periodic current is considered. The location and shape of the cylinder are obtained from a far field measurement between the wire and the cylinder. The same analysis is applicable in acoustics in the situation where the cylinder is a soft wall body and the wire is a line source. The associated direct problem in this situation is an exterior Dirichlet problem for the Helmholtz equation in two dimensions. An improved low frequency estimate for the solution of this problem using integral equation methods is presented. The far field measurements are related to the solutions of boundary integral equations in the low frequency situation. These solutions are expressed in terms of a mapping function which maps the exterior of the unknown curve onto the exterior of a unit disk. The coefficients of the Laurent expansion of the conformal transformation are related to the far field coefficients. The first far field coefficient leads to the calculation of the distance between the source and the cylinder.
NASA Astrophysics Data System (ADS)
Ben Amara, Jamel; Bouzidi, Hedi
2018-01-01
In this paper, we consider a linear hybrid system composed of two non-homogeneous rods connected by a point mass, with a Dirichlet boundary condition at the left end and a boundary control acting at the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive without further conditions on the coefficients other than regularity.
Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps
NASA Astrophysics Data System (ADS)
Yi, Taishan; Chen, Yuming
2017-12-01
In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is NOT invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under the travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These enable us to develop a unified method to obtain results on heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problems of monostable reaction-diffusion equations on R+ as well as for monostable/bistable reaction-diffusion equations on R.
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of Mean Value coordinates (MVCs), inheriting their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
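For the disk, the formula that Poisson coordinates generalize is the classical Poisson integral. A small sketch (uniform-grid quadrature, illustrative values) of estimating a harmonic function from Dirichlet boundary data:

```python
# Poisson integral on the unit disk: u(r,t) = (1/2*pi) * int P_r(t - s) f(s) ds
# with kernel P_r(x) = (1 - r^2) / (1 - 2 r cos x + r^2).  Boundary data
# f = cos(2s) has the harmonic extension r^2 cos(2t), which the quadrature
# reproduces to high accuracy on a uniform periodic grid.
import numpy as np

def poisson_disk(f_vals, s, r, t):
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(t - s) + r**2)
    return np.mean(kernel * f_vals)     # (1/2*pi) * integral on a uniform grid

s = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
f = np.cos(2 * s)
print(poisson_disk(f, s, 0.5, 0.3))     # ~ 0.25 * cos(0.6) ~ 0.2063
print(0.25 * np.cos(0.6))
```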
New solutions to the constant-head test performed at a partially penetrating well
NASA Astrophysics Data System (ADS)
Chang, Y. C.; Yeh, H. D.
2009-05-01
The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique. In addition, the Dirichlet-type condition should be chosen as the boundary condition along the rim of the wellbore for such a test well. However, the boundary condition for a test well with partial penetration must be considered as a mixed-type condition. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The model for such a mixed boundary problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the dual series equations and perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful for practical applications from the computational point of view.
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation between its solution and the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over the time interval based on a single measurement, which is more likely to be practical.
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
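As a point of reference for the Dirichlet case that CBVP generalizes, here is a minimal 1-D sketch of classical Brinkman-type penalization (all values illustrative): inside the masked obstacle the heat equation gains a stiff relaxation term that drives the solution toward the wall value.

```python
# 1-D heat equation u_t = u_xx with a Brinkman-style Dirichlet penalization
# term -(chi/eta)*(u - u_wall) active inside the obstacle region x > 0.7.
# The CBVP method of the paper instead adds hyperbolic terms to obtain
# Neumann/Robin conditions; only the standard Dirichlet case is shown here.
import numpy as np

nx, eta, dt, nt = 200, 1e-4, 1e-5, 100_000
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
chi = (x > 0.7).astype(float)                 # obstacle mask
u = np.zeros(nx)

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (lap - chi / eta * (u - 1.0))   # relax toward u_wall = 1
    u[0], u[-1] = 0.0, 1.0                    # physical boundaries

# the steady state in the fluid region is linear between u(0) = 0 and
# u ~ 1 at x ~ 0.7, up to an O(sqrt(eta)) penalization error
print(u[int(0.35 / dx)])                      # ~ 0.49
```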
Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan
2018-01-01
In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary, even though cell sizes are allowed to grow toward the boundaries due to the diffusive nature of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferable, as the modelling area of interest can be restricted to the target region, and only a few surrounding absorbing layers effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling areas for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML also shows good accuracy compared to the Dirichlet boundary. Furthermore, the modelling algorithm using the CFS-PML offers savings in computational time and memory compared with the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
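The essential ingredient of the CFS-PML is the complex coordinate-stretching factor. A small sketch with common textbook profile choices (the exponents and maxima are illustrative assumptions, not the paper's values):

```python
# Illustrative complex frequency-shifted PML stretching factor
#   s(x) = kappa(x) + sigma(x) / (alpha(x) + i*omega),
# used to damp fields inside the absorbing layers; in the frequency domain
# the derivative d/dx is replaced by (1/s) d/dx inside the layer.
import numpy as np

def cfs_pml_stretch(d, d_pml, omega, kappa_max=5.0, sigma_max=10.0,
                    alpha_max=0.1, m=3):
    """d: depth into the PML (0 at the interface); d_pml: layer thickness."""
    g = np.clip(d / d_pml, 0.0, 1.0)
    kappa = 1.0 + (kappa_max - 1.0) * g**m     # real stretching
    sigma = sigma_max * g**m                   # absorption profile
    alpha = alpha_max * (1.0 - g)              # frequency shift, tapers to 0
    return kappa + sigma / (alpha + 1j * omega)

omega = 2 * np.pi * 0.25                       # 0.25 Hz, a typical CSEM frequency
print(cfs_pml_stretch(np.array([0.0, 0.5, 1.0]), 1.0, omega))
```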
A generalized Poisson solver for first-principles device simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
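A tiny dense sketch of the saddle-point structure described above, with Dirichlet values at interior "electrode" nodes enforced as Lagrange-multiplier constraints on a 1-D Poisson operator (sizes and values illustrative; the paper works with plane waves and a preconditioned iterative solver rather than a dense solve):

```python
# Enforcing Dirichlet values at interior nodes of a Poisson solve as linear
# constraints with Lagrange multipliers gives the saddle-point system
#   [A  B^T] [u  ]   [f]
#   [B   0 ] [lam] = [g]
# 1-D, constant dielectric, homogeneous Neumann ends.
import numpy as np

n, h = 101, 1.0 / 100
A = np.zeros((n, n))                       # -d2/dx2 with Neumann ends
for i in range(1, n - 1):
    A[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]
A[0, :2] = [1.0, -1.0]                     # one-sided zero-flux rows
A[-1, -2:] = [-1.0, 1.0]
A /= h**2

f = np.full(n, 1.0)                        # uniform source term
f[0] = f[-1] = 0.0                         # Neumann rows carry zero flux

gates = [30, 70]                           # constrained nodes ("electrodes")
g = np.array([0.0, 1.0])                   # imposed potentials
B = np.zeros((len(gates), n))
B[range(len(gates)), gates] = 1.0

K = np.block([[A, B.T], [B, np.zeros((len(gates), len(gates)))]])
u = np.linalg.solve(K, np.concatenate([f, g]))[:n]
print(u[30], u[70])                        # reproduces the imposed 0 and 1
```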
Two-point correlation function for Dirichlet L-functions
NASA Astrophysics Data System (ADS)
Bogomolny, E.; Keating, J. P.
2013-03-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
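The random-matrix limit referred to here is the standard GUE two-point correlation; in LaTeX, for zeros rescaled to unit mean spacing:

```latex
\[
  R_2(x) \;=\; 1 - \left( \frac{\sin \pi x}{\pi x} \right)^{2},
\]
```

with the paper's finite-E corrections entering as lower-order terms that, unlike the Riemann zeta case, carry products over the primes dividing the modulus of the character.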
Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths
NASA Astrophysics Data System (ADS)
Chu, Jixun; Coron, Jean-Michel; Shang, Peipei
2015-10-01
We study an initial-boundary-value problem for a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The system has a Dirichlet boundary condition at the left end-point, and both homogeneous Dirichlet and Neumann boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is for example the case if k = 1.
NASA Astrophysics Data System (ADS)
Chang, Ya-Chi; Yeh, Hund-Der
2010-06-01
Constant-head pumping tests are usually employed to determine aquifer parameters, and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration should be considered as a mixed-type condition. This mixed boundary value problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in a finite thickness aquifer. The semi-analytical solutions are particularly useful for practical applications from the computational point of view.
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives.
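A minimal stick-breaking draw from a Dirichlet process prior of the kind placed on the latent abundance distribution (the base distribution and concentration below are illustrative):

```python
# Stick-breaking construction of a (truncated) Dirichlet process draw:
# weights w_k = v_k * prod_{j<k} (1 - v_j) with v_k ~ Beta(1, alpha), and
# atoms sampled from a base distribution G0 (here a lognormal over mean
# abundance).  Ties among sites give the data-adaptive clustering.
import numpy as np

rng = np.random.default_rng(1)
alpha, K = 2.0, 50                                   # concentration; truncation

v = rng.beta(1.0, alpha, size=K)
w = v * np.cumprod(np.concatenate([[1.0], 1.0 - v[:-1]]))
atoms = rng.lognormal(mean=1.0, sigma=0.5, size=K)   # draws from G0

# latent abundance for each of 20 sites: pick an atom with probability w
sites = rng.choice(atoms, size=20, p=w / w.sum())
print(np.unique(sites).size, "distinct abundance clusters among 20 sites")
```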
Mathematical and computational aspects of nonuniform frictional slip modeling
NASA Astrophysics Data System (ADS)
Gorbatikh, Larissa
2004-07-01
A mechanics-based model of non-uniform frictional sliding is studied from the mathematical/computational point of view. This problem is of key importance for a number of applications (particularly geomechanical ones), where material interfaces undergo partial frictional sliding under compression and shear. We show that the problem reduces to Dirichlet's problem for monotonic loading and to Riemann's problem for cyclic loading. The problem may look like a traditional crack interaction problem; however, it is complicated by the fact that the locations of the n sliding intervals are not known. They are to be determined from the condition on the stress intensity factors, KII = 0, at the ends of the sliding zones. Computationally, this reduces to solving a system of 2n coupled non-linear algebraic equations involving singular integrals with unknown limits of integration.
NASA Astrophysics Data System (ADS)
Diestra Cruz, Heberth Alexander
The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw-tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw-tooth heat generation source is modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows one to obtain the temperature distribution in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere the temperature distribution is three-dimensional, and in the cases of the semi-infinite solid, infinite quadrant, and circular plate the distribution is two-dimensional. The method used in this work obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and greater accuracy than fully numerical methods.
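The image construction for the semi-infinite solid, for instance, pairs each source with a negative mirror image so that the Green's function vanishes on the Dirichlet plane z = 0:

```latex
% Dirichlet Green's function for the half-space z > 0 by the method of images:
% a negative image source at the mirror point makes G vanish on z = 0.
\[
  G(\mathbf{x},\mathbf{x}') =
  \frac{1}{4\pi\,\lvert \mathbf{x}-\mathbf{x}' \rvert}
  - \frac{1}{4\pi\,\lvert \mathbf{x}-\mathbf{x}'' \rvert},
  \qquad
  \mathbf{x}'' = (x',\, y',\, -z'),
\]
```

For steady conduction with constant conductivity k, the temperature then follows as T(x) = (1/k) ∫ G(x, x') q(x') dV' for the saw-tooth source density q.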
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manjunath, Naren; Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in
Recently, the nodal domain counts of planar, integrable billiards with Dirichlet boundary conditions were shown to satisfy certain difference equations in Samajdar and Jain (2014). The exact solutions of these equations give the number of domains explicitly. For complete generality, we demonstrate this novel formulation for three additional separable systems and thus extend the statement to all integrable billiards.
Reuter, Martin; Wolter, Franz-Erich; Shenton, Martha; Niethammer, Marc
2009-01-01
This paper proposes the use of the surface based Laplace-Beltrami and the volumetric Laplace eigenvalues and -functions as shape descriptors for the comparison and analysis of shapes. These spectral measures are isometry invariant and therefore allow for shape comparisons with minimal shape pre-processing. In particular, no registration, mapping, or remeshing is necessary. The discriminatory power of the 2D surface and 3D solid methods is demonstrated on a population of female caudate nuclei (a subcortical gray matter structure of the brain, involved in memory function, emotion processing, and learning) of normal control subjects and of subjects with schizotypal personality disorder. The behavior and properties of the Laplace-Beltrami eigenvalues and -functions are discussed extensively for both the Dirichlet and Neumann boundary conditions, showing advantages of the Neumann vs. the Dirichlet spectra in 3D. Furthermore, topological analyses employing the Morse-Smale complex (on the surfaces) and the Reeb graph (in the solids) are performed on selected eigenfunctions, yielding shape descriptors that are capable of localizing geometric properties and detecting shape differences by indirectly registering topological features such as critical points, level sets and integral lines of the gradient field across subjects. The use of these topological features of the Laplace-Beltrami eigenfunctions in 2D and 3D for statistical shape analysis is novel.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for a comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and at junctions, and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta condition is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...
NASA Technical Reports Server (NTRS)
Johnson, F. T.
1980-01-01
A method for solving the linear integral equations of incompressible potential flow in three dimensions is presented. Both analysis (Neumann) and design (Dirichlet) boundary conditions are treated in a unified approach to the general flow problem. The method is an influence coefficient scheme which employs source and doublet panels as boundary surfaces. Curved panels possessing singularity strengths that vary as polynomials are used, and all influence coefficients are derived in closed form. These and other features combine to produce an efficient scheme which is not only versatile but eminently suited to the practical realities of a user-oriented environment. A wide variety of numerical results demonstrating the method is presented.
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ˜10^4-10^5 Mpc^3, corresponding to ˜10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio ϱ_net, scaling roughly as ϱ_net^{-6}. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
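As an illustration of the model class (not the authors' code), scikit-learn's variational truncated Dirichlet process Gaussian mixture can be fit to posterior samples and queried for the density; the stand-in samples below are illustrative:

```python
# Density estimation with a (truncated) Dirichlet process Gaussian mixture.
# The synthetic samples stand in for posterior samples (e.g. RA, dec,
# distance) of a single gravitational-wave event.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
samples = np.vstack([
    rng.normal([1.0, -0.5, 100.0], [0.05, 0.05, 10.0], (3000, 3)),
    rng.normal([1.2, -0.4, 140.0], [0.05, 0.05, 10.0], (1000, 3)),
])

dpgmm = BayesianGaussianMixture(
    n_components=20,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full", max_iter=500,
).fit(samples)

print("effective components:", np.sum(dpgmm.weights_ > 0.01))
print("log-density at a trial point:", dpgmm.score_samples([[1.0, -0.5, 100.0]])[0])
```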
Generalised solutions for fully nonlinear PDE systems and existence-uniqueness theorems
NASA Astrophysics Data System (ADS)
Katzourakis, Nikos
2017-07-01
We introduce a new theory of generalised solutions which applies to fully nonlinear PDE systems of any order and allows for merely measurable maps as solutions. This approach bypasses the standard problems arising from the application of Distributions to PDEs and is based neither on integration by parts nor on the maximum principle. Instead, our starting point builds on the probabilistic representation of derivatives via limits of difference quotients in the Young measures over a toric compactification of the space of jets. After developing some basic theory, as a first application we consider the Dirichlet problem and we prove existence-uniqueness-partial regularity of solutions to fully nonlinear degenerate elliptic 2nd order systems and also existence of solutions to the ∞-Laplace system of vectorial Calculus of Variations in L∞.
Meta-analysis using Dirichlet process.
Muthukumarana, Saman; Tiwari, Ram C
2016-02-01
This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity or variation in the underlying effects across studies while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effects parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset on the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods.
Evaluation of the path integral for flow through random porous media
NASA Astrophysics Data System (ADS)
Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.
2018-04-01
We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
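In 1-D, the statistics targeted by the path integral can be checked by brute force, since Darcy's equation d/dx(K dp/dx) = 0 with p(0) = 1, p(1) = 0 has the closed-form solution p(x) = ∫_x^1 K^{-1} dx' / ∫_0^1 K^{-1} dx'. A Monte Carlo sketch over correlated lognormal permeability fields (correlation length and variance are illustrative):

```python
# Monte Carlo statistics of the Darcy pressure with correlated lognormal
# permeability: sample log K as a Gaussian field with exponential covariance,
# then evaluate the exact 1-D solution by quadrature.
import numpy as np

rng = np.random.default_rng(42)
n, lam, sig, n_mc = 200, 0.1, 0.5, 5000
x = np.linspace(0.0, 1.0, n)
C = sig**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)   # cov of log K
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))                # field factor

p_mid = np.empty(n_mc)
for i in range(n_mc):
    K = np.exp(Lc @ rng.standard_normal(n))                   # lognormal field
    invK = 1.0 / K
    cum = np.concatenate([[0.0],
          np.cumsum(0.5 * (invK[1:] + invK[:-1]) * np.diff(x))])
    p = 1.0 - cum / cum[-1]                                   # p(0)=1, p(1)=0
    p_mid[i] = p[n // 2]

print("mean and std of p(1/2):", p_mid.mean(), p_mid.std())
```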
A stochastic diffusion process for Lochner's generalized Dirichlet distribution
Bakosi, J.; Ristorcelli, J. R.
2013-10-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.
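The unit-sum constraint that the diffusion process preserves is built into the standard beta-variable construction of Lochner's generalized Dirichlet distribution, sketched below (parameter values are illustrative):

```python
# Sampling Lochner's generalized Dirichlet distribution: with independent
# Z_i ~ Beta(a_i, b_i),
#   X_1 = Z_1,   X_i = Z_i * (1 - X_1 - ... - X_{i-1}),
# so every sample satisfies X_1 + ... + X_N + X_{N+1} = 1 exactly.
import numpy as np

rng = np.random.default_rng(7)
a = np.array([2.0, 3.0, 4.0])
b = np.array([5.0, 4.0, 3.0])

def gen_dirichlet(a, b, size):
    Z = rng.beta(a, b, size=(size, len(a)))
    X = np.empty_like(Z)
    rem = np.ones(size)                   # remaining stick length
    for i in range(len(a)):
        X[:, i] = Z[:, i] * rem
        rem = rem * (1.0 - Z[:, i])
    return np.column_stack([X, rem])      # last column closes the unit sum

S = gen_dirichlet(a, b, 10000)
print(S.mean(axis=0), S.sum(axis=1).max())   # component means; sums are 1
```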
Quantum "violation" of Dirichlet boundary condition
NASA Astrophysics Data System (ADS)
Park, I. Y.
2017-02-01
Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity, simply because a boundary configuration can be varying and dynamic instead of dying out as the conditions require. In this work we report what should be a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one may expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the 'violation' of the Dirichlet boundary conditions to a certain mechanism of information storage on the boundary.
USING DIRICHLET TESSELLATION TO HELP ESTIMATE MICROBIAL BIOMASS CONCENTRATIONS
Dirichlet tessellation was applied to estimate microbial concentrations from microscope well slides. The use of microscopy/Dirichlet tessellation to quantify biomass was illustrated with two species of morphologically distinct cyanobacteria, and validated empirically by compariso...
Active electromagnetic invisibility cloaking and radiation force cancellation
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2018-03-01
This investigation shows that an active emitting electromagnetic (EM) Dirichlet source (i.e., with axial polarization of the electric field) in a homogeneous non-dissipative/non-absorptive medium placed near a perfectly conducting boundary can render total invisibility (i.e. zero extinction cross-section or efficiency) in addition to a radiation force cancellation on its surface. Based upon the Poynting theorem, the mathematical expression for the extinction, radiation and amplification cross-sections (or efficiencies) are derived using the partial-wave series expansion method in cylindrical coordinates. Moreover, the analysis is extended to compute the self-induced EM radiation force on the active source, resulting from the waves reflected by the boundary. The numerical results predict the generation of a zero extinction efficiency, achieving total invisibility, in addition to a radiation force cancellation which depend on the source size, the distance from the boundary and the associated EM mode order of the active source. Furthermore, an attractive EM pushing force on the active source directed toward the boundary or a repulsive pulling one pointing away from it can arise accordingly. The numerical predictions and computational results find potential applications in the design and development of EM cloaking devices, invisibility and stealth technologies.
NASA Astrophysics Data System (ADS)
Nicholls, David P.
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
Feature extraction for document text using Latent Dirichlet Allocation
NASA Astrophysics Data System (ADS)
Prihatini, P. M.; Suryawan, I. K.; Mandia, IN
2018-01-01
Feature extraction is one of the stages in an information retrieval system, used to extract the unique feature values of a text document. Feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, in this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values of Latent Dirichlet Allocation against Term Frequency Inverse Document Frequency K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency Inverse Document Frequency K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features from and cluster Indonesian text better than the Term Frequency Inverse Document Frequency K-Means method.
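A compact sketch of LDA-based feature extraction of the kind evaluated in the paper, using scikit-learn; the toy Indonesian corpus stands in for the paper's data set:

```python
# Document-topic proportions from Latent Dirichlet Allocation serve as the
# extracted feature vectors; each row of `features` sums to 1 and can be fed
# to a clustering or retrieval step.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["harga pasar naik", "pasar saham turun",
        "tim sepak bola menang", "pertandingan bola seri"]
X = CountVectorizer().fit_transform(docs)          # bag-of-words counts

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
features = lda.transform(X)                        # per-document topic mix
print(features.round(2))
```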
NASA Astrophysics Data System (ADS)
Hill, Peter; Shanahan, Brendan; Dudson, Ben
2017-04-01
We present a technique for handling Dirichlet boundary conditions with the Flux Coordinate Independent (FCI) parallel derivative operator with arbitrary-shaped material geometry in general 3D magnetic fields. The FCI method constructs a finite difference scheme for ∇∥ by following field lines between poloidal planes and interpolating within planes. Doing so removes the need for field-aligned coordinate systems that suffer from singularities in the metric tensor at null points in the magnetic field (or equivalently, when q → ∞). One cost of this method is that as the field lines are not on the mesh, they may leave the domain at any point between neighbouring planes, complicating the application of boundary conditions. The Leg Value Fill (LVF) boundary condition scheme presented here involves an extrapolation/interpolation of the boundary value onto the field line end point. The usual finite difference scheme can then be used unmodified. We implement the LVF scheme in BOUT++ and use the Method of Manufactured Solutions to verify the implementation in a rectangular domain, and show that it does not modify the error scaling of the finite difference scheme. The use of LVF for arbitrary wall geometry is outlined. We also demonstrate the feasibility of using the FCI approach in non-axisymmetric configurations for a simple diffusion model in a "straight stellarator" magnetic field. A Gaussian blob diffuses along the field lines, tracing out flux surfaces. Dirichlet boundary conditions impose a last closed flux surface (LCFS) that confines the density. Including a poloidal limiter moves the LCFS to a smaller radius. The expected scaling of the numerical perpendicular diffusion, which is a consequence of the FCI method, in stellarator-like geometry is recovered. A novel technique for increasing the parallel resolution during post-processing, in order to reduce artefacts in visualisations, is described.
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with "patchy surface"-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
Spectral multigrid methods for elliptic equations 2
NASA Technical Reports Server (NTRS)
Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.
1983-01-01
A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, G.T.
1987-08-01
The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired, consider both distributed and point sources/sinks that are spatially and temporally dependent, accept the prescribed initial conditions or obtain them by simulating a steady state version of the system under consideration, deal with a transient head distributed over the Dirichlet boundary, handle time-dependent fluxes due to pressure gradient varying along the Neumann boundary, treat time-dependent total fluxes distributed over the Cauchy boundary, automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface, include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor, give three options for estimating the nonlinear matrix, include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations, automatically reset the time step size when boundary conditions or sources/sinks change abruptly, and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate the drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution for the drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined based on the aquifer boundary conditions and the continuity requirements. The estimated aquifer drawdown from the present approach agrees well with a finite element solution developed based on the Mathematica function NDSolve. As compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and therefore takes much less computing time to obtain the required results in engineering applications.
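The building block of the zonal solution is the Theis equation. A short sketch of superposing a pumping well and a few boundary source points with known rates (all parameter values illustrative) shows how each source contributes one Theis term to the drawdown:

```python
# Theis drawdown  s = (Q / 4 pi T) * W(r^2 S / (4 T t)),  with W the
# exponential integral; superposing point sources with given rates mimics
# how the unknown boundary-source rates enter the linear system.
import numpy as np
from scipy.special import exp1

def theis(Q, r, t, T, S):
    return Q / (4 * np.pi * T) * exp1(r**2 * S / (4 * T * t))

T, S, t = 1e-3, 1e-4, 3600.0                    # transmissivity, storativity, time
well = np.array([0.0, 0.0])
sources = np.array([[50.0, 0.0], [0.0, 60.0]])  # boundary source points
rates = np.array([-2e-4, 5e-4])                 # their volumetric rates, m^3/s

obs = np.array([10.0, 10.0])                    # observation point
s = theis(1e-3, np.linalg.norm(obs - well), t, T, S)   # pumping well, Q = 1e-3
s += sum(theis(q, np.linalg.norm(obs - p), t, T, S)
         for q, p in zip(rates, sources))
print("drawdown at observation point:", s, "m")
```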
Study of a mixed dispersal population dynamics model
Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu-Yen; ...
2016-08-27
In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.
Quantum Gravitational Effects on the Boundary
NASA Astrophysics Data System (ADS)
James, F.; Park, I. Y.
2018-04-01
Quantum gravitational effects might hold the key to some of the outstanding problems in theoretical physics. We analyze the perturbative quantum effects on the boundary of a gravitational system and the Dirichlet boundary condition imposed at the classical level. Our analysis reveals that for a black hole solution there is a contradiction between the quantum effects and the Dirichlet boundary condition: the black hole solution of the one-particle-irreducible action no longer satisfies the Dirichlet boundary condition, contrary to what one would naively expect. The analysis also suggests that the tension between the Dirichlet boundary condition and loop effects is connected with a certain mechanism of information storage on the boundary.
NASA Astrophysics Data System (ADS)
Moreto, Jose; Liu, Xiaofeng
2017-11-01
The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. Dirichlet condition at one boundary point and Neumann condition at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with a homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function Rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions achieved by using different random number seeds are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
Parametric embedding for class visualization.
Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B
2007-09-01
We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
Casimir interaction between spheres in ( D + 1)-dimensional Minkowski spacetime
NASA Astrophysics Data System (ADS)
Teo, L. P.
2014-05-01
We consider the Casimir interaction between two spheres in (D + 1)-dimensional Minkowski spacetime due to the vacuum fluctuations of scalar fields. We consider combinations of Dirichlet and Neumann boundary conditions. The TGTG formula for the Casimir interaction energy is derived. The computations of the T matrices of the two spheres are straightforward. To compute the two G matrices, known as translation matrices, which relate the hyper-spherical waves in two spherical coordinate frames differing by a translation, we generalize the operator approach employed in [39]. The result is expressed in terms of an integral over Gegenbauer polynomials. In contrast to the D = 3 case, we do not re-express the integral in terms of 3j-symbols and hyper-spherical waves, which in principle can be done but does not simplify the formula. Using our expression for the Casimir interaction energy, we derive the large separation and small separation asymptotic expansions of the Casimir interaction energy. In the large separation regime, we find that the Casimir interaction energy is of order L^{-2D+3}, L^{-2D+1} and L^{-2D-1}, respectively, for Dirichlet-Dirichlet, Dirichlet-Neumann and Neumann-Neumann boundary conditions, where L is the center-to-center distance of the two spheres. In the small separation regime, we confirm that the leading term of the Casimir interaction agrees with the proximity force approximation, where d is the distance between the two spheres. Another main result of this work is the analytic computation of the next-to-leading order term in the small separation asymptotic expansion. This term is computed using careful order analysis as well as a perturbation method. In the case where the radius of one of the spheres goes to infinity, we find that the results agree with those we derive for the sphere-plate configuration. When D = 3, we also recover previously known results. We find that when D is large, the ratio of the next-to-leading order term to the leading order term is linear in D, indicating a larger correction at higher dimensions. The methodologies employed in this work and the results obtained can be used to study the one-loop effective action of the system of two spherical objects in the universe.
Better Assessment Science Integrating Point and Non-point Sources (BASINS)
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.
Using phrases and document metadata to improve topic modeling of clinical reports.
Speier, William; Ong, Michael K; Arnold, Corey W
2016-06-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text, and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata in a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data, and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports.
Scalar Casimir densities and forces for parallel plates in cosmic string spacetime
NASA Astrophysics Data System (ADS)
Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.
2018-04-01
We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher-dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys the Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted in the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single-plate and two-plate geometries. The VEV of the energy-momentum tensor, in addition to the diagonal components, contains an off-diagonal component corresponding to the shear stress. The latter vanishes on the plates in the special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off-diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have a nonzero component parallel to the boundary (shear force). Unlike the problem on the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For the Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions. Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mejri, Youssef, E-mail: josef-bizert@hotmail.fr; Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna; Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis
In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide from knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map by means of the geometrical optics solutions of the magnetic Schrödinger equation.
Constructing Weyl group multiple Dirichlet series
NASA Astrophysics Data System (ADS)
Chinta, Gautam; Gunnells, Paul E.
2010-01-01
Let Φ be a reduced root system of rank r. A Weyl group multiple Dirichlet series for Φ is a Dirichlet series in r complex variables s_1, …, s_r, initially converging for Re(s_i) sufficiently large, that has meromorphic continuation to C^r and satisfies functional equations under the transformations of C^r corresponding to the Weyl group of Φ. A heuristic definition of such a series was given by Brubaker, Bump, Chinta, Friedberg, and Hoffstein, and such series have been investigated in certain special cases by others. In this paper we generalize results of Chinta and Gunnells to construct Weyl group multiple Dirichlet series by a uniform method and show in all cases that they have the expected properties.
On the existence of mosaic-skeleton approximations for discrete analogues of integral operators
NASA Astrophysics Data System (ADS)
Kashirin, A. A.; Taltykina, M. Yu.
2017-09-01
Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs), are constructed. The existence of mosaic-skeleton approximations for the matrices of these systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.
Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling
NASA Astrophysics Data System (ADS)
Li, Gang; Han, Bo
2017-09-01
In the traditional framework of EM modeling algorithms, a Dirichlet boundary condition is usually used, which assumes that the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can become larger toward the boundaries because the electromagnetic field propagates diffusively, a large modeling area may still be necessary to mitigate boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without loss of numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using a staggered finite-difference discretization. The modeling algorithm with the CFS-PML is highly accurate and shows advantages in computation time and memory use over the Dirichlet-boundary approach. For 3D problems, these savings in computation time and memory should be even more significant.
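As an illustration of the coordinate-stretching idea behind the CFS-PML, the following hedged Python sketch evaluates the standard stretch factor s(x) = κ(x) + σ(x)/(α(x) + iωε₀) on a graded profile and shows the resulting exponential damping of an outgoing wave; the profile shapes, constants and wavenumber are illustrative, not those of the cited 2.5D CSEM code.

```python
# A minimal sketch of the CFS-PML stretch factor
# s(x) = kappa(x) + sigma(x) / (alpha(x) + i*omega*eps0), applied here to
# damp a 1D outgoing wave analytically. Profiles and constants are
# illustrative assumptions.
import numpy as np

eps0 = 8.854e-12
omega = 2 * np.pi * 0.25          # 0.25 Hz, a typical CSEM frequency
L = 2000.0                        # PML thickness in metres
x = np.linspace(0.0, L, 201)      # coordinate inside the absorbing layer

# Polynomial grading of the PML parameters (order-2 profiles).
sigma = 1e-9 * (x / L) ** 2
kappa = 1.0 + 4.0 * (x / L) ** 2
alpha = 1e-10 * (1.0 - x / L)

s = kappa + sigma / (alpha + 1j * omega * eps0)

# Complex coordinate: x_tilde(x) = integral of s; an outgoing field
# exp(-i k x_tilde) then decays exponentially inside the layer.
x_tilde = np.cumsum(s) * (x[1] - x[0])
k = 1e-3                          # illustrative wavenumber
attenuation = np.abs(np.exp(-1j * k * x_tilde))
print(attenuation[::50])          # monotone decay toward the outer boundary
```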
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1, 2, 4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given that the total walk length equals 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q=1, this means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection onto the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks with a uniform distribution of the endpoint in the inside of a ball are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks whose step lengths are gamma distributed are more particularly considered.
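The walk described here is easy to simulate, which gives a quick check of the stated uniformity results. The following Python sketch draws Dirichlet(q, …, q) step lengths and uniform orientations, and compares the empirical endpoint radius of the three-step planar walk with q = 1 against the uniform-in-disc prediction; parameter choices are illustrative.

```python
# A minimal simulation sketch of the Pearson-Dirichlet random walk: n steps
# of uniformly random orientation in R^d whose lengths, conditioned on unit
# total length, follow a symmetric Dirichlet(q, ..., q) distribution.
import numpy as np

def pearson_dirichlet_endpoints(n_steps, d, q, n_walks, seed=0):
    rng = np.random.default_rng(seed)
    # Step lengths: one Dirichlet(q,...,q) draw per walk (unit total length).
    lengths = rng.dirichlet(np.full(n_steps, q), size=n_walks)
    # Step directions: uniform on the unit sphere in R^d.
    dirs = rng.standard_normal((n_walks, n_steps, d))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
    return np.einsum('wn,wnd->wd', lengths, dirs)

# For a three-step planar walk with q = 1 the endpoint should be uniform
# in the unit disc, so P(R <= r) = r^2; compare the empirical median.
end = pearson_dirichlet_endpoints(n_steps=3, d=2, q=1.0, n_walks=200_000)
r = np.linalg.norm(end, axis=1)
print(np.median(r), np.sqrt(0.5))   # both should be close to 0.7071
```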
NASA Astrophysics Data System (ADS)
Ciarlet, P.
1994-09-01
Hereafter, we describe and analyze, from both a theoretical and a numerical point of view, an iterative method for efficiently solving symmetric elliptic problems with possibly discontinuous coefficients. In the following, we use the Preconditioned Conjugate Gradient method to solve the symmetric positive definite linear systems which arise from the finite element discretization of the problems. We focus our interest on sparse and efficient preconditioners. In order to define the preconditioners, we perform two steps: first we reorder the unknowns and then we carry out a (modified) incomplete factorization of the original matrix. We study numerically and theoretically two preconditioners, the second preconditioner corresponding to the one investigated by Brand and Heinemann [2]. We prove convergence results about the Poisson equation with either Dirichlet or periodic boundary conditions. For a mesh size h, Brand proved that the condition number of the preconditioned system is bounded by O(h^{-1/2}) for Dirichlet boundary conditions. By slightly modifying the preconditioning process, we prove that the condition number is bounded by O(h^{-1/3}).
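The solver pattern analysed in this entry, conjugate gradients preconditioned by an incomplete factorization of a Poisson-type matrix, can be sketched in a few lines. The fragment below uses SciPy's generic ILU as a stand-in for the reordered (modified) incomplete factorizations studied in the paper, so it only illustrates the pattern, not the paper's preconditioners.

```python
# A minimal sketch: a 2D Dirichlet-Laplacian SPD system, an incomplete LU
# preconditioner, and conjugate gradients. SciPy's spilu stands in for the
# paper's reordered (modified) incomplete factorizations.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                        # grid points per direction
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # 2D Dirichlet Laplacian
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5)
M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner M ~ A^{-1}

it = 0
def count(xk):
    global it
    it += 1

x, info = spla.cg(A, b, M=M, callback=count)  # default tolerance
print(info, it, np.linalg.norm(A @ x - b))    # info == 0 means converged
```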
Multispike solutions for the Brezis-Nirenberg problem in dimension three
NASA Astrophysics Data System (ADS)
Musso, Monica; Salazar, Dora
2018-06-01
We consider the problem Δu + λu + u^5 = 0, u > 0, in a smooth bounded domain Ω in ℝ^3, under zero Dirichlet boundary conditions. We obtain solutions to this problem exhibiting multiple bubbling behavior at k different points of the domain as λ tends to a special positive value λ_0, which we characterize in terms of the Green function of −Δ − λ.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Dr. Li; Cui, Xiaohui; Cemerlic, Alma
Ad hoc networks are very helpful in situations where no fixed network infrastructure is available, such as natural disasters and military conflicts. In such a network, all wireless nodes are equal peers simultaneously serving as both senders and routers for other nodes. Therefore, how to route packets through reliable paths becomes a fundamental problem when the behavior of certain nodes deviates from the wireless ad hoc routing protocol. We propose a novel Dirichlet reputation model based on Bayesian inference theory which evaluates the reliability of each node in terms of packet delivery. Our system offers a way to predict and select a reliable path through a combination of first-hand observations and second-hand reputation reports. We also propose a moving-window mechanism which helps to adjust the responsiveness of our system to changes in node behavior. We integrated the Dirichlet reputation model into the routing protocol of wireless ad hoc networks. Our extensive simulations indicate that the proposed reputation system can improve the good throughput of the network and reduce the negative impact caused by misbehaving nodes.
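A minimal sketch of a Dirichlet reputation update with a moving observation window, in the spirit of the model described above, might look as follows; the behaviour categories, window length and prior counts are illustrative assumptions.

```python
# A minimal sketch of a Dirichlet reputation model with a moving window
# (constants illustrative). Each node keeps per-neighbour counts over
# behaviour categories, e.g. (forwarded, dropped, garbled); the reputation
# is the Dirichlet posterior mean.
import numpy as np
from collections import deque

class DirichletReputation:
    def __init__(self, n_categories=3, window=50, prior=1.0):
        self.prior = np.full(n_categories, prior)   # Dirichlet prior counts
        self.window = deque(maxlen=window)          # moving observation window

    def observe(self, category):
        """Record one first-hand observation (index of behaviour category)."""
        self.window.append(category)

    def reputation(self):
        counts = self.prior.copy()
        for c in self.window:
            counts[c] += 1
        return counts / counts.sum()                # posterior mean

rep = DirichletReputation()
for c in [0] * 40 + [1] * 10:   # mostly forwards, then starts dropping
    rep.observe(c)
print(rep.reputation())         # P(forward) still high but declining
```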
Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana
2018-05-03
We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources).
Bounded solutions in a T-shaped waveguide and the spectral properties of the Dirichlet ladder
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2014-08-01
The Dirichlet problem is considered on the junction of thin quantum waveguides (of thickness h ≪ 1) in the shape of an infinite two-dimensional ladder. Passage to the limit as h → +0 is discussed. It is shown that the asymptotically correct transmission conditions at nodes of the corresponding one-dimensional quantum graph are Dirichlet conditions rather than the conventional Kirchhoff transmission conditions. The result is obtained by analyzing bounded solutions of a problem in the T-shaped waveguide that describes the boundary layer phenomenon.
General stability of memory-type thermoelastic Timoshenko beam acting on shear force
NASA Astrophysics Data System (ADS)
Apalara, Tijani A.
2018-03-01
In this paper, we consider a linear thermoelastic Timoshenko system with memory effects where the thermoelastic coupling acts on the shear force, under Neumann-Dirichlet-Dirichlet boundary conditions. The same system with fully Dirichlet boundary conditions was considered by Messaoudi and Fareh (Nonlinear Anal TMA 74(18):6895-6906, 2011; Acta Math Sci 33(1):23-40, 2013), but they obtained a general stability result which depends on the speeds of wave propagation. In our case, we obtain a general stability result irrespective of the wave speeds of the system.
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative, with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
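For the univariate case mentioned above, whose invariant is the beta distribution, a Wright-Fisher-like diffusion can be simulated directly. The drift and diffusion coefficients below are written from the abstract's description and standard Wright-Fisher theory, and should be checked against the paper before serious use.

```python
# A minimal Euler-Maruyama sketch of a bounded diffusion
# dY = (b/2)(S - Y) dt + sqrt(kappa * Y * (1 - Y)) dW, whose invariant
# distribution is Beta(b*S/kappa, b*(1-S)/kappa). Coefficients are an
# assumption based on the abstract, not taken verbatim from the paper.
import numpy as np

rng = np.random.default_rng(1)
b, S, kappa = 2.0, 0.3, 0.5          # illustrative parameters
dt, n_steps, n_paths = 2e-3, 10_000, 2_000

Y = np.full(n_paths, 0.5)
for _ in range(n_steps):
    drift = 0.5 * b * (S - Y)
    diff = np.sqrt(np.clip(kappa * Y * (1.0 - Y), 0.0, None))
    Y += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(n_paths)
    Y = np.clip(Y, 0.0, 1.0)          # keep samples in the unit interval

# Invariant Beta(a1, a2) has mean a1/(a1+a2) = S; compare the moments.
a1, a2 = b * S / kappa, b * (1 - S) / kappa
print(Y.mean(), S)
print(Y.var(), a1 * a2 / ((a1 + a2) ** 2 * (a1 + a2 + 1)))
```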
Boundary conditions and formation of pure spin currents in magnetic field
NASA Astrophysics Data System (ADS)
Eliashvili, Merab; Tsitsishvili, George
2017-09-01
The Schrödinger equation for an electron confined to a two-dimensional strip is considered in the presence of a homogeneous orthogonal magnetic field. Since the system has edges, the eigenvalue problem is supplemented by boundary conditions (BC) aimed at preventing the leakage of matter across the edges. In the case of spinless electrons the Dirichlet and Neumann BC are considered. The Dirichlet BC result in the existence of charge-carrying edge states. For the Neumann BC each separate edge comprises two counterflow sub-currents which precisely cancel each other out provided the system is populated by electrons up to a certain Fermi level. Cancellation of the electric current is a good starting point for developing spin effects. In this scope we reconsider the problem for a spinning electron with Rashba coupling. The Neumann BC are replaced by Robin BC. Again, the two counterflow electric sub-currents cancel each other out for a separate edge, while the spin current survives, thus modeling what is known as a pure spin current - spin flow without charge flow.
A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.
Gao, Xiang; Lin, Huaiying; Dong, Qunfeng
2017-01-01
Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
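The classification rule described here, Dirichlet-multinomial likelihoods per class combined with a class prior via Bayes' theorem, can be sketched compactly. The fragment below is an illustration of that rule, not the DMBC package; the parameter values are toy numbers standing in for maximum-likelihood estimates from training data.

```python
# A minimal sketch of a Dirichlet-multinomial Bayes classifier: per-class
# DM likelihoods plus class priors, combined via Bayes' theorem.
import numpy as np
from scipy.special import gammaln

def dm_loglik(x, alpha):
    """Dirichlet-multinomial log-likelihood of count vector x given alpha,
    omitting the multinomial coefficient (constant across classes)."""
    n, A = x.sum(), alpha.sum()
    return (gammaln(A) - gammaln(n + A)
            + np.sum(gammaln(x + alpha) - gammaln(alpha)))

def posterior(x, alphas, priors):
    """alphas: dict class -> DM parameters; priors: dict class -> prior."""
    logp = {c: np.log(priors[c]) + dm_loglik(x, a) for c, a in alphas.items()}
    m = max(logp.values())
    w = {c: np.exp(v - m) for c, v in logp.items()}
    z = sum(w.values())
    return {c: v / z for c, v in w.items()}

# Toy example with 4 taxa; parameters would be fit from training data.
alphas = {"healthy": np.array([8.0, 4.0, 2.0, 1.0]),
          "disease": np.array([1.0, 2.0, 4.0, 8.0])}
sample = np.array([3, 5, 20, 40])       # taxon counts for a new sample
print(posterior(sample, alphas, {"healthy": 0.9, "disease": 0.1}))
```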
Lifshits Tails for Randomly Twisted Quantum Waveguides
NASA Astrophysics Data System (ADS)
Kirsch, Werner; Krejčiřík, David; Raikov, Georgi
2018-03-01
We consider the Dirichlet Laplacian H_γ on a 3D twisted waveguide with random Anderson-type twisting γ. We introduce the integrated density of states N_γ for the operator H_γ, and investigate the Lifshits tails of N_γ, i.e. the asymptotic behavior of N_γ(E) as E ↓ inf supp dN_γ. In particular, we study the dependence of the Lifshits exponent on the decay rate of the single-site twisting at infinity.
Hypergeometric Forms for Ising-Class Integrals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, David; Borwein, Jonathan M.
2006-07-01
We apply experimental-mathematical principles to analyze certain integrals relevant to the Ising theory of solid-state physics. We find representations of these integrals in terms of Meijer G-functions and nested Barnes integrals. Our investigations began by computing 500-digit numerical values of C_{n,k}, namely a 2-D array of Ising integrals for all integers n, k where n is in [2,12] and k is in [0,25]. We found that some C_{n,k} enjoy exact evaluations involving Dirichlet L-functions or the Riemann zeta function. In the process of analyzing hypergeometric representations, we found -- experimentally and strikingly -- that the C_{n,k} almost certainly satisfy certain inter-indicial relations, including discrete k-recursions. Using generating functions, differential theory, complex analysis, and Wilf-Zeilberger algorithms we are able to prove some central cases of these relations.
Chen, Yun; Yang, Hui
2016-12-14
In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions about the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
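The first stage of the proposed pipeline, a pairwise mutual-information matrix among variables, is easy to illustrate. The sketch below uses a simple histogram plug-in estimator and, purely for illustration, replaces the Dirichlet-process clustering stage with off-the-shelf hierarchical clustering of the MI-based distances.

```python
# A minimal sketch of MI-based variable clustering: histogram plug-in
# mutual information between variable pairs, then hierarchical clustering
# of the induced distances (standing in for the paper's DP models).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mutual_info(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(2)
t = rng.uniform(-2, 2, size=2000)
X = np.column_stack([t, np.sin(3 * t),            # nonlinearly dependent pair
                     rng.standard_normal(2000),   # independent noise
                     t ** 2])                     # another function of t

p = X.shape[1]
MI = np.array([[mutual_info(X[:, i], X[:, j]) for j in range(p)]
               for i in range(p)])
D = MI.max() - MI                                 # similarity -> distance
labels = fcluster(linkage(D[np.triu_indices(p, 1)]), t=2, criterion='maxclust')
print(labels)   # the three functions of t should share a cluster
```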
Prior Design for Dependent Dirichlet Processes: An Application to Marathon Modeling
F. Pradier, Melanie; J. R. Ruiz, Francisco; Perez-Cruz, Fernando
2016-01-01
This paper presents a novel application of Bayesian nonparametrics (BNP) for marathon data modeling. We make use of two well-known BNP priors, the single-p dependent Dirichlet process and the hierarchical Dirichlet process, in order to address two different problems. First, we study the impact of age, gender and environment on the runners’ performance. We derive a fair grading method that allows direct comparison of runners regardless of their age and gender. Unlike current grading systems, our approach is based not only on top world records, but on the performances of all runners. The presented methodology for comparison of densities can be adopted in many other applications straightforwardly, providing an interesting perspective to build dependent Dirichlet processes. Second, we analyze the running patterns of the marathoners in time, obtaining information that can be valuable for training purposes. We also show that these running patterns can be used to predict finishing time given intermediate interval measurements. We apply our models to New York City, Boston and London marathons.
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
Better Assessment Science Integrating Point and Nonpoint Sources
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is not a model per se, but is a multipurpose environmental decision support system for use by regional, state, and local agencies in performing watershed- and water-quality-based studies. BASI...
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-02-01
In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method, in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set, we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
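The equivalent-point-source idea is captured by the following hedged sketch: the vertical gravity effect of a rectangular prism is evaluated as a Gauss-Legendre weighted sum over point masses at the quadrature nodes. Geometry, density contrast and quadrature order are illustrative, and the sign convention assumes z positive downward.

```python
# A minimal sketch of Gauss-Legendre quadrature gravity modeling: the
# prism's vertical attraction is a weighted sum over equivalent point
# masses at the quadrature nodes (z positive downward; SI units).
import numpy as np

G = 6.674e-11                  # gravitational constant

def gz_prism_gl(obs, bounds, rho, order=8):
    """Vertical gravity of a prism [x1,x2]x[y1,y2]x[z1,z2] at point obs."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    (x1, x2), (y1, y2), (z1, z2) = bounds
    # Map nodes from [-1, 1] to each coordinate interval.
    xs = 0.5 * (x2 - x1) * nodes + 0.5 * (x2 + x1)
    ys = 0.5 * (y2 - y1) * nodes + 0.5 * (y2 + y1)
    zs = 0.5 * (z2 - z1) * nodes + 0.5 * (z2 + z1)
    jac = 0.125 * (x2 - x1) * (y2 - y1) * (z2 - z1)
    gz = 0.0
    for wx, x in zip(weights, xs):
        for wy, y in zip(weights, ys):
            for wz, z in zip(weights, zs):
                dx, dy, dz = obs[0] - x, obs[1] - y, obs[2] - z
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                gz += wx * wy * wz * (-dz) / r3   # attraction toward the mass
    return G * rho * jac * gz

# 1 km cube of density contrast 300 kg/m^3, observed 500 m above its top.
print(gz_prism_gl(obs=(0.0, 0.0, -500.0),
                  bounds=((-500, 500), (-500, 500), (0, 1000)),
                  rho=300.0) * 1e5, "mGal")
```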
On the Dirichlet's Box Principle
ERIC Educational Resources Information Center
Poon, Kin-Keung; Shiu, Wai-Chee
2008-01-01
In this note, we focus on several applications of Dirichlet's box principle in Discrete Mathematics and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…
Improved definition of crustal magnetic anomalies for MAGSAT data
NASA Technical Reports Server (NTRS)
Brown, R. D.; Frawley, J. F.; Davis, W. M.; Ray, R. D.; Didwall, E.; Regan, R. D. (Principal Investigator)
1982-01-01
The routine correction of MAGSAT vector magnetometer data for external field effects, such as the ring current and the daily variation, by filtering long-wavelength harmonics from the data is described. Separation of fields due to low-altitude sources from those caused by high-altitude sources is effected by means of dual harmonic expansions in the solution of Dirichlet's problem. This regression/harmonic filter procedure is applied on an orbit-by-orbit basis, and initial tests on MAGSAT data from orbit 1176 show a reduction in external field residuals of 24.33 nT RMS in the horizontal component and 10.95 nT RMS in the radial component.
Repulsive Casimir effect from extra dimensions and Robin boundary conditions: From branes to pistons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizalde, E.; Odintsov, S. D.; Institucio Catalana de Recerca i Estudis Avanccats
2009-03-15
We evaluate the Casimir energy and force for a massive scalar field with general curvature coupling parameter, subject to Robin boundary conditions on two codimension-one parallel plates, located on a (D+1)-dimensional background spacetime with an arbitrary internal space. The most general case of different Robin coefficients on the two separate plates is considered. With independence of the geometry of the internal space, the Casimir forces are seen to be attractive for the special cases of Dirichlet or Neumann boundary conditions on both plates and repulsive for Dirichlet boundary conditions on one plate and Neumann boundary conditions on the other. For Robin boundary conditions, the Casimir forces can be either attractive or repulsive, depending on the Robin coefficients and the separation between the plates, which is actually remarkable and useful. Indeed, we demonstrate the existence of an equilibrium point for the interplate distance, which is stabilized due to the Casimir force, and show that stability is enhanced by the presence of the extra dimensions. Applications of these properties in braneworld models are discussed. Finally, the corresponding results are generalized to the geometry of a piston of arbitrary cross section.
The spectra of rectangular lattices of quantum waveguides
NASA Astrophysics Data System (ADS)
Nazarov, S. A.
2017-02-01
We obtain asymptotic formulae for the spectral segments of a thin (h ≪ 1) rectangular lattice of quantum waveguides described by a Dirichlet problem for the Laplacian. We establish that the structure of the spectrum of the lattice is incorrectly described by the commonly accepted quantum graph model with the traditional Kirchhoff conditions at the vertices. It turns out that the lengths of the spectral segments are infinitesimals of order O(e^{-δ/h}), δ > 0, and O(h) as h → +0, and gaps of width O(h^{-2}) and O(1) arise between them in the low-frequency and middle-frequency spectral ranges, respectively. The first spectral segment is generated by the (unique) eigenvalue in the discrete spectrum of an infinite cross-shaped waveguide Θ. The absence of bounded solutions of the problem in Θ at the threshold frequency means that the correct model of the lattice is a graph with Dirichlet conditions at the vertices which splits into two infinite subsets of identical edges (intervals). By using perturbations of finitely many joints, we construct any given number of discrete spectrum points of the lattice below the essential spectrum as well as inside the gaps.
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Uniform gradient estimates on manifolds with a boundary and applications
NASA Astrophysics Data System (ADS)
Cheng, Li-Juan; Thalmaier, Anton; Thompson, James
2018-04-01
We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.
Burton-Miller-type singular boundary method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Fu, Zhuo-Jia; Chen, Wen; Gu, Yan
2014-08-01
This paper proposes the singular boundary method (SBM) in conjunction with Burton and Miller's formulation for acoustic radiation and scattering. The SBM is a strong-form collocation boundary discretization technique using the singular fundamental solutions, which is mathematically simple, easy to program, and meshless, and which introduces the concept of source intensity factors (SIFs) to eliminate the singularities of the fundamental solutions. Therefore, it avoids singular numerical integrals in the boundary element method (BEM) and circumvents the troublesome placement of the fictitious boundary in the method of fundamental solutions (MFS). In the present method, we derive the SIFs of the exterior Helmholtz equation by means of the SIFs of the exterior Laplace equation owing to the same order of singularities between the Laplace and Helmholtz fundamental solutions. In conjunction with the Burton-Miller formulation, the SBM enhances the quality of the solution, particularly in the vicinity of the corresponding interior eigenfrequencies. Numerical illustrations demonstrate the efficiency and accuracy of the present scheme on some benchmark examples in 2D and 3D unbounded domains in comparison with the analytical solutions, the boundary element solutions and Dirichlet-to-Neumann finite element solutions.
Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields
NASA Astrophysics Data System (ADS)
Díaz-Marín, Homero G.
We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is to construct a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. We thus prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...
Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories
NASA Astrophysics Data System (ADS)
Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis
2018-04-01
We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four-dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near-horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ∼ T corresponds to z = 4/3.
Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen
2017-09-25
In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.
The MUSIC algorithm for impedance tomography of small inclusions from discrete data
NASA Astrophysics Data System (ADS)
Lechleiter, A.
2015-09-01
We consider a point-electrode model for electrical impedance tomography and show that current-to-voltage measurements from finitely many electrodes are sufficient to characterize the positions of a finite number of point-like inclusions. More precisely, we consider an asymptotic expansion with respect to the size of the small inclusions of the relative Neumann-to-Dirichlet operator in the framework of the point electrode model. This operator is naturally finite-dimensional and models difference measurements by finitely many small electrodes of the electric potential with and without the small inclusions. Moreover, its leading-order term explicitly characterizes the centers of the small inclusions if the (finite) number of point electrodes is large enough. This characterization is based on finite-dimensional test vectors and leads naturally to a MUSIC algorithm for imaging the inclusion centers. We show both the feasibility and limitations of this imaging technique via two-dimensional numerical experiments, considering in particular the influence of the number of point electrodes on the algorithm’s images.
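The MUSIC characterization described here reduces, in linear-algebra terms, to testing whether a point-dependent test vector lies in the range of the low-rank measured matrix. The sketch below demonstrates this with a synthetic rank-2 surrogate matrix rather than an EIT forward model; the logarithmic test vector and electrode layout are illustrative.

```python
# A minimal sketch of the MUSIC idea: the measured relative data matrix
# has low rank, and a test vector g(z) lies in its range exactly when z is
# an inclusion centre, so the pseudo-spectrum 1/||P_noise g(z)|| peaks
# there. The "data" below are a rank-2 surrogate, not an EIT forward model.
import numpy as np

rng = np.random.default_rng(3)
n_electrodes = 32
angles = np.linspace(0, 2 * np.pi, n_electrodes, endpoint=False)
electrodes = np.column_stack([np.cos(angles), np.sin(angles)])

def test_vector(z):
    """Illustrative point-source test vector sampled at the electrodes."""
    d = np.linalg.norm(electrodes - z, axis=1)
    g = np.log(d)            # 2D Laplace singularity
    g -= g.mean()            # zero-mean, as for difference measurements
    return g

centres = [np.array([0.3, 0.1]), np.array([-0.4, -0.2])]   # true inclusions
M = sum(np.outer(test_vector(c), test_vector(c)) for c in centres)

U, s, _ = np.linalg.svd(M)
signal_dim = int(np.sum(s > 1e-10 * s[0]))          # estimated rank (here 2)
P_noise = np.eye(n_electrodes) - U[:, :signal_dim] @ U[:, :signal_dim].T

for z in [centres[0], centres[1], np.array([0.0, 0.5])]:
    val = 1.0 / np.linalg.norm(P_noise @ test_vector(z))
    print(z, val)    # large at the true centres, moderate elsewhere
```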
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
Generalized Riemann hypothesis and stochastic time series
NASA Astrophysics Data System (ADS)
Mussardo, Giuseppe; LeClair, André
2018-06-01
Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues of consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series, and its control requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. Those intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events which would have been responsible for a different scaling behavior than the universal law of random walks.
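The diffusive behavior of the series C_N is easy to observe numerically. The following sketch sums the real non-principal character mod 3 over consecutive primes and checks that the partial sums grow like the square root of the number of steps, as for a random walk; the choice of character and cutoff are illustrative.

```python
# A minimal numerical sketch: partial sums C_N = sum_{n<=N} chi(p_n) of a
# non-principal Dirichlet character over consecutive primes grow like
# sqrt(N), as for a random walk. Here chi is the real non-principal
# character mod 3.
from sympy import primerange
import numpy as np

def chi_mod3(n):
    r = n % 3
    return 1 if r == 1 else (-1 if r == 2 else 0)

primes = list(primerange(5, 2_000_000))     # skip p = 2, 3 for simplicity
steps = np.array([chi_mod3(p) for p in primes])
C = np.cumsum(steps)

for N in [10**3, 10**4, 10**5, len(primes)]:
    print(N, C[N - 1], C[N - 1] / np.sqrt(N))   # the ratio stays O(1)
```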
Regularity results for the minimum time function with Hörmander vector fields
NASA Astrophysics Data System (ADS)
Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa
2018-03-01
In a bounded domain of ℝ^n with boundary given by a smooth (n − 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X_1, …, X_N} subject to Hörmander's bracket-generating condition. We investigate the regularity of the viscosity solution T of this problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses pointwise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X_1, …, X_N} is a symplectic manifold. We apply our results to several examples.
Bifurcation of solutions to Hamiltonian boundary value problems
NASA Astrophysics Data System (ADS)
McLachlan, R. I.; Offen, C.
2018-06-01
A bifurcation is a qualitative change in a family of solutions to an equation produced by varying parameters. In contrast to the local bifurcations of dynamical systems that are often related to a change in the number or stability of equilibria, bifurcations of boundary value problems are global in nature and may not be related to any obvious change in dynamical behaviour. Catastrophe theory is a well-developed framework which studies the bifurcations of critical points of functions. In this paper we study the bifurcations of solutions of boundary-value problems for symplectic maps, using the language of (finite-dimensional) singularity theory. We associate certain such problems with a geometric picture involving the intersection of Lagrangian submanifolds, and hence with the critical points of a suitable generating function. Within this framework, we then study the effect of three special cases: (i) some common boundary conditions, such as Dirichlet boundary conditions for second-order systems, restrict the possible types of bifurcations (for example, in generic planar systems only the A-series beginning with folds and cusps can occur); (ii) integrable systems, such as planar Hamiltonian systems, can exhibit a novel periodic pitchfork bifurcation; and (iii) systems with Hamiltonian symmetries or reversing symmetries can exhibit restricted bifurcations associated with the symmetry. This approach offers an alternative to the analysis of critical points in function spaces, typically used in the study of bifurcation of variational problems, and opens the way to the detection of more exotic bifurcations than the simple folds and cusps that are often found in examples.
Mappings of Least Dirichlet Energy and their Hopf Differentials
NASA Astrophysics Data System (ADS)
Iwaniec, Tadeusz; Onninen, Jani
2013-08-01
The paper is concerned with mappings h : X → Y between planar domains having least Dirichlet energy. The existence and uniqueness (up to a conformal change of variables in X) of the energy-minimal mappings is established within the class H̄_2(X, Y) of strong limits of homeomorphisms in the Sobolev space W^{1,2}(X, Y), a result of considerable interest in the mathematical models of nonlinear elasticity. The inner variation of the independent variable in X leads to the Hopf differential h_z \overline{h_{\bar z}} dz ⊗ dz and its trajectories. For a pair of doubly connected domains, in which X has finite conformal modulus, we establish the following principle: A mapping h ∈ H̄_2(X, Y) is energy-minimal if and only if its Hopf differential is analytic in X and real along ∂X. In general, the energy-minimal mappings may not be injective, in which case one observes the occurrence of slits in X (cognate with cracks). Slits are triggered by points of concavity of Y. They originate from ∂X and advance along vertical trajectories of the Hopf differential into X where they eventually terminate, so no crosscuts are created.
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs parallel to the main shield, or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination
NASA Astrophysics Data System (ADS)
Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.
2004-05-01
According to the Stokes-Helmert method for geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet boundary value problem, is realized by solving the Poisson integral equation. The above-mentioned "classical" approach can be modified so that the inverse Dirichlet boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique was introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of the topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., the transformation from the NT-space to the H-space, is realized by adding the gravitational attraction of the condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of the atmosphere in the standard Stokes-Helmert method were intensively investigated by Sjöberg (1998, 1999) and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and numerical results over Canada are shown. Key words: Atmosphere - Geoid - Gravity
Yu, Weiyu; Wardrop, Nicola A; Bain, Robert; Wright, Jim A
2017-07-01
Sustainable Development Goal (SDG) 6 has expanded the Millennium Development Goals' focus from improved drinking-water to safely managed water services. This expanded focus, covering issues such as water quality, requires richer monitoring data and potentially the integration of datasets from different sources. Relevant datasets include water point mapping (WPM), the survey of boreholes, wells and other water points, together with census and household survey data. This study examined inconsistencies between population census and WPM datasets for Cambodia, Liberia and Tanzania, and identified potential barriers to integrating the two datasets to meet monitoring needs. Published figures on the numbers of people served per water point were used to convert WPM data to the population served by water source type per area, which was then compared with census reports. For Cambodia and Tanzania, discrepancies with census data suggested incomplete WPM coverage. In Liberia, where the datasets were consistent, WPM-derived data on the functionality, quantity and quality of drinking water were further combined with census area statistics to generate an enhanced drinking-water access measure for protected wells and springs. The process revealed barriers to integrating census and WPM data, including the exclusion of water points not used for drinking by households; the matching of census and WPM source types; temporal mismatches between data sources; data quality issues such as missing or implausible data values; and underlying assumptions about the population served by different water point technologies. However, integration of these two datasets could be used to identify and rectify gaps in WPM coverage. If WPM databases become more complete and the above barriers are addressed, such integration could also be used to develop more realistic measures of household drinking-water access for monitoring.
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
NASA Astrophysics Data System (ADS)
Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre
2017-02-01
Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
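The abstract does not spell out the MDE formula, but a common way to attribute Dirichlet energy to a module of a weighted graph is to restrict the usual quadratic form to edges touching that module; the sketch below assumes that reading and is purely illustrative, not the paper's exact definition:

```python
import numpy as np

def dirichlet_energy(W, x, nodes=None):
    """Dirichlet energy of a signal x on a weighted graph W.
    If `nodes` is given, only edge terms incident to that module are summed,
    giving a per-module ('modular') energy."""
    n = W.shape[0]
    idx = range(n) if nodes is None else nodes
    return 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2 for i in idx for j in range(n))

# Toy example: 4-node chain with a signal varying across it
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
x = np.array([0.0, 0.1, 0.9, 1.0])
print(dirichlet_energy(W, x))                 # global energy
print(dirichlet_energy(W, x, nodes=[0, 1]))   # energy attributed to one module
```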
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong forms of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
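As a rough illustration of the Taylor-series/least-squares construction (without the paper's renormalisation correction), one can fit a local quadratic at each point and read the Laplacian off the second-order coefficients. A minimal numpy sketch, with an assumed neighbourhood radius:

```python
import numpy as np

def mls_laplacian(points, values, i, radius):
    """Approximate the Laplacian at point i by a least-squares fit of a
    local quadratic Taylor expansion over neighbours within `radius`.
    A generic moving-least-squares construction, not the paper's exact
    renormalised operator."""
    p0, u0 = points[i], values[i]
    d = np.linalg.norm(points - p0, axis=1)
    nbr = np.where((d > 0) & (d < radius))[0]
    dx = points[nbr] - p0                      # (m, 2) offsets
    du = values[nbr] - u0
    # Unknowns: [ux, uy, uxx/2, uyy/2, uxy]
    A = np.column_stack([dx[:, 0], dx[:, 1],
                         dx[:, 0] ** 2, dx[:, 1] ** 2,
                         dx[:, 0] * dx[:, 1]])
    coef, *_ = np.linalg.lstsq(A, du, rcond=None)
    return 2.0 * (coef[2] + coef[3])           # uxx + uyy

rng = np.random.default_rng(0)
pts = rng.random((400, 2))
u = pts[:, 0] ** 2 + pts[:, 1] ** 2            # exact Laplacian = 4
print(mls_laplacian(pts, u, 200, radius=0.15)) # recovers ~4
```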
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
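A quantity that makes the sensitivity to the prior on α concrete is the prior expected number of clusters in n observations, E[K | α, n] = Σᵢ α/(α + i − 1). The sketch below moment-matches α to a target cluster count; this is an illustrative device, not the prior-construction method of the paper:

```python
import numpy as np

def expected_clusters(alpha, n):
    """Prior expected number of clusters under a Dirichlet process:
    E[K | alpha, n] = sum_{i=1..n} alpha / (alpha + i - 1)."""
    i = np.arange(1, n + 1)
    return np.sum(alpha / (alpha + i - 1))

# Pick alpha so that the prior expected number of clusters matches a guess.
n, target_K = 50, 5
grid = np.linspace(0.01, 20, 2000)
alpha_star = grid[np.argmin(np.abs([expected_clusters(a, n) - target_K for a in grid]))]
print(alpha_star, expected_clusters(alpha_star, n))
```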
Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems
NASA Astrophysics Data System (ADS)
Cianchi, Andrea; Maz'ya, Vladimir G.
2018-05-01
Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L² coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
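Once the importance function is tabulated, the dose evaluation described above reduces to a weighted double quadrature over energy and polar-angle cosine. A minimal sketch, with toy stand-ins (`src`, `imp`) for the tabulated source and importance data:

```python
import numpy as np

def dose_equivalent(source, importance, energies, cosines):
    """Dose = double integral over energy and polar-angle cosine of the
    source strength weighted by the importance function, evaluated here
    with the trapezoidal rule. `source` and `importance` are hypothetical
    callables standing in for tabulated data."""
    E, mu = np.meshgrid(energies, cosines, indexing="ij")
    integrand = source(E, mu) * importance(E, mu)
    inner = np.trapz(integrand, cosines, axis=1)   # integrate over cosine
    return np.trapz(inner, energies)               # then over energy

# Toy source and importance shapes (not physical data):
src = lambda E, mu: np.exp(-E / 100.0) * (0.5 + 0.5 * mu)
imp = lambda E, mu: 1e-12 * E * (1.0 + mu)
E_grid = np.linspace(1.0, 400.0, 200)   # MeV
mu_grid = np.linspace(0.0, 1.0, 50)     # upper hemisphere
print(dose_equivalent(src, imp, E_grid, mu_grid))
```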
Processing UAV and Lidar Point Clouds in GRASS GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
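As one illustration of the decimation step, a grid-based thinning that keeps a single representative point per cell can be written in a few lines of numpy. This is a generic sketch of one such strategy, not necessarily how the GRASS tools implement it:

```python
import numpy as np

def grid_decimate(points, cell):
    """Keep one representative point per grid cell -- a common decimation
    strategy for dense SfM point clouds (one of several possible techniques)."""
    keys = np.floor(points / cell).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

rng = np.random.default_rng(1)
cloud = rng.random((100_000, 3)) * [100.0, 100.0, 10.0]   # dense synthetic cloud
thinned = grid_decimate(cloud, cell=1.0)
print(len(cloud), "->", len(thinned))
```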
NASA Astrophysics Data System (ADS)
Li, Dong; Guo, Shangjiang
Chemotaxis is an observed phenomenon in which a biological individual moves preferentially toward a relatively high concentration, which is contrary to the process of natural diffusion. In this paper, we study a reaction-diffusion model with chemotaxis and nonlocal delay effect under Dirichlet boundary condition by using Lyapunov-Schmidt reduction and the implicit function theorem. The existence, multiplicity, stability and Hopf bifurcation of spatially nonhomogeneous steady state solutions are investigated. Moreover, our results are illustrated by an application to the model with a logistic source, homogeneous kernel and one-dimensional spatial domain.
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina
2017-12-01
We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ ℝⁿ when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plotnikov, Mikhail G
2011-02-11
Multiple Walsh series (S) on the group G^m are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallée Poussin theorem are established for series (S). Bibliography: 28 titles.
Sedghi, Aliasghar; Rezaei, Behrooz
2016-11-20
Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.
Acoustic response of a rectangular levitator with orifices
NASA Technical Reports Server (NTRS)
El-Raheb, Michael; Wagner, Paul
1990-01-01
The acoustic response of a rectangular cavity to speaker-generated excitation through waveguides terminating at orifices in the cavity walls is analyzed. To find the effects of orifices, acoustic pressure is expressed by eigenfunctions satisfying Neumann boundary conditions as well as by those satisfying Dirichlet ones. Some of the excess unknowns can be eliminated by point constraints set over the boundary, by appeal to Lagrange undetermined multipliers. The resulting transfer matrix must be further reduced by partial condensation to the order of a matrix describing unmixed boundary conditions. If the cavity is subjected to an axial temperature dependence, the transfer matrix is determined numerically.
NASA Astrophysics Data System (ADS)
Lin, C. W.; Wu, T. R.; Chuang, M. H.; Tsai, Y. L.
2015-12-01
The wind in the Taiwan Strait is strong and stable, which offers an opportunity to build offshore wind farms. However, frequent typhoons and strong ocean currents demand close attention to the wave force and local scour around the foundations of the turbine piles. In this paper, we introduce an in-house, multi-phase CFD model, Splash3D, for solving the flow field with breaking waves, strong turbulence, and scour phenomena. Splash3D solves the Navier-Stokes equations with Large-Eddy Simulation (LES) for the fluid domain, and uses the volume of fluid (VOF) method with piecewise linear interface reconstruction (PLIC) to describe the breaking free surface. The waves were generated inside the computational domain by an internal wave maker with a mass-source function. This function is designed to simulate the wave conditions under observed extreme events based on the JONSWAP spectrum and the dispersion relationship. A Dirichlet velocity boundary condition is assigned at the upstream boundary to induce the ocean current. At the downstream face, the sponge-layer method combined with a pressure Dirichlet boundary condition is specified for dissipating waves and conducting the current out of the domain. Numerical pressure gauges are uniformly set on the structure surface to obtain the force distribution on the structure. As for the local scour around the foundation, we developed a Discontinuous Bi-viscous Model (DBM) for the development of the scour hole. Model validations are presented as well. The force distribution under the observed irregular wave condition was extracted by the irregular-surface force extraction (ISFE) method, which provides a fast and elegant way to integrate the force acting on the surface of an irregular structure. From the simulation results, we found that the total force is mainly induced by the impinging waves, and the force from the ocean current is about two orders of magnitude smaller than the wave force. We also found that the dynamic pressure, the wave height, and the projection area of the structure are the main factors governing the total force. Detailed results and discussion are presented as well.
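For reference, the wave-maker source is driven by the JONSWAP spectrum mentioned above. A sketch of a standard engineering form of that spectrum; the normalisation to a target significant wave height and the example sea-state parameters are illustrative assumptions, not the paper's inputs:

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """One-sided JONSWAP spectrum S(f) [m^2 s] for significant wave height
    hs [m] and peak period tp [s]; the amplitude is scaled so the spectral
    moment m0 reproduces hs^2/16. A standard engineering form."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    s = f ** -5 * np.exp(-1.25 * (fp / f) ** 4) * gamma**r
    m0 = np.trapz(s, f)
    return s * (hs**2 / 16.0) / m0        # normalise to the target variance

f = np.linspace(0.03, 1.0, 500)           # Hz
S = jonswap(f, hs=8.0, tp=12.0)           # hypothetical typhoon sea state
print(4.0 * np.sqrt(np.trapz(S, f)))      # recovers hs = 8.0
```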
NASA Astrophysics Data System (ADS)
Petr, Rodney; Bykanov, Alexander; Freshman, Jay; Reilly, Dennis; Mangano, Joseph; Roche, Maureen; Dickenson, Jason; Burte, Mitchell; Heaton, John
2004-08-01
A high average power dense plasma focus (DPF) x-ray point source has been used to produce ~70 nm line features in AlGaAs-based monolithic millimeter-wave integrated circuits (MMICs). The DPF source has produced up to 12 J per pulse of x-ray energy into 4π steradians at an effective photon energy of ~1 keV in ~2 Torr neon at pulse repetition rates up to 60 Hz, with an effective x-ray yield efficiency of ~0.8%. The plasma temperature and electron concentration are estimated from the x-ray spectrum to be ~170 eV and ~5×10¹⁹ cm⁻³, respectively. The x-ray point source utilizes solid-state pulse power technology to extend the operating lifetime of electrodes and insulators in the DPF discharge. By eliminating current reversals in the DPF head, an anode electrode has demonstrated a lifetime of more than 5 million shots. The x-ray point source has also been operated continuously for 8 h run times at 27 Hz average pulse repetition frequency. Measurements of shock waves produced by the plasma discharge indicate that overpressure pulses must be attenuated before a collimator can be integrated with the DPF point source.
Generalized species sampling priors with latent Beta reinforcements
Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele
2014-01-01
Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, unlike existing work. We then propose the use of such a process as the prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet process mixtures and hidden Markov models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions
NASA Astrophysics Data System (ADS)
Ablowitz, Mark J.
2009-09-01
Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.
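For concreteness, the two-dimensional leading-order profile mentioned above is the classical KdV soliton. A short sketch under the standard normalisation u_t + 6uu_x + u_xxx = 0 (conventions vary between papers):

```python
import numpy as np

def kdv_soliton(x, t, c):
    """The sech^2 soliton of the KdV equation u_t + 6 u u_x + u_xxx = 0:
    u(x, t) = (c/2) sech^2( sqrt(c)/2 * (x - c t) ),
    the leading term of the 2-D gravity-capillary solitary-wave expansion."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

x = np.linspace(-20.0, 20.0, 801)
u = kdv_soliton(x, t=0.0, c=1.0)
print(u.max())   # amplitude c/2 = 0.5 at the crest
```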
Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope
NASA Technical Reports Server (NTRS)
Zissa, D. E.
1984-01-01
Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per square arc-second extended source. A 23rd visual magnitude per square arc-second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.
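The underlying estimate is the standard CCD signal-to-noise budget. A minimal sketch with hypothetical count rates, not the actual camera model parameters:

```python
import numpy as np

def ccd_snr(signal_rate, background_rate, dark_rate, read_noise, n_pix, t):
    """Standard CCD point-source signal-to-noise estimate:
    SNR = S t / sqrt(S t + n_pix (B t + D t + R^2)).
    All rates in electrons/s; read_noise in electrons rms."""
    S = signal_rate * t
    noise = np.sqrt(S + n_pix * (background_rate * t + dark_rate * t + read_noise**2))
    return S / noise

t = np.linspace(60.0, 4 * 3600.0, 200)          # seconds
snr = ccd_snr(0.05, 0.02, 0.01, 13.0, 4, t)     # illustrative rates only
print(f"SNR after 4 h: {snr[-1]:.1f}")          # of order 10, as in the abstract
```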
NASA Astrophysics Data System (ADS)
Yaparova, N.
2017-10-01
We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material such as specific heat, thermal conductivity and material density depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface inside the cylinder, but it is impossible to measure the temperature on axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and technical diagnostics of operating equipment. The mathematical model of heating is represented as nonlinear parabolic PDE with the unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose the numerical method based on using of finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and error level of the measurement results. To obtain the experimental temperature error estimates, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
An integrated approach to assess heavy metal source apportionment in peri-urban agricultural soils.
Huang, Ying; Li, Tingqiang; Wu, Chengxian; He, Zhenli; Japenga, Jan; Deng, Meihua; Yang, Xiaoe
2015-12-15
Three techniques (isotope ratio analysis, GIS mapping, and multivariate statistical analysis) were integrated to assess heavy metal pollution and source apportionment in peri-urban agricultural soils. The soils in the study area were moderately polluted with cadmium (Cd) and mercury (Hg), and lightly polluted with lead (Pb) and chromium (Cr). GIS mapping suggested that the Cd pollution originates from point sources, whereas Hg, Pb, and Cr could be traced back to both point and non-point sources. Principal component analysis (PCA) indicated that aluminum (Al), manganese (Mn), and nickel (Ni) were mainly inherited from natural sources, while Hg, Pb, and Cd were associated with two different kinds of anthropogenic sources. Cluster analysis (CA) further identified fertilizers, waste water, industrial solid wastes, road dust, and atmospheric deposition as potential sources. Based on isotope ratio analysis (IRA), organic fertilizers and road dusts accounted for 74-100% and 0-24% of the total Hg input, while road dusts and solid wastes contributed 0-80% and 19-100% of the Pb input. This study provides a reliable approach for heavy metal source apportionment in this particular peri-urban area, with a clear potential for future application in other regions. Copyright © 2015 Elsevier B.V. All rights reserved.
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d₀ and any n ≥ 2 when q is either q = d/2 − 1 (d₀ = 3) or q = d − 1 (d₀ = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + ⌊n/2⌋ endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + ⌊n/2⌋ densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n and Bessel numbers independent of d.
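The constrained walks are easy to sample directly, which is useful for checking closed-form endpoint densities such as those above. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def pearson_dirichlet_endpoint(n, d, q, samples=100_000):
    """Monte Carlo sample of endpoints of n-step Pearson-Dirichlet walks in
    R^d: step lengths ~ Dirichlet(q, ..., q) (total length 1), directions
    uniform on the unit sphere."""
    lengths = rng.dirichlet([q] * n, size=samples)          # (samples, n)
    dirs = rng.normal(size=(samples, n, d))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)     # uniform directions
    return np.einsum("sn,snd->sd", lengths, dirs)           # endpoint positions

end = pearson_dirichlet_endpoint(n=3, d=3, q=3.0)           # a q = d > 1 case
r = np.linalg.norm(end, axis=1)
print(r.mean(), r.max())                                    # support is r <= 1
```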
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthias C. M. Troffaes; Gero Walter; Dana Kelly
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors but was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting the epistemic uncertainty of the analyst on all levels of the common-cause failure model.
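The core update of Walley's imprecise Dirichlet model is compact enough to state in code: with learning parameter s, the posterior expectation of each alpha-factor ranges between n_k/(N+s) and (n_k+s)/(N+s). The failure counts below are hypothetical:

```python
def idm_bounds(counts, s):
    """Walley's imprecise Dirichlet model: posterior lower/upper expectations
    of each alpha-factor given observed counts and learning parameter s.
    Lower = n_k / (N + s), upper = (n_k + s) / (N + s)."""
    N = sum(counts)
    return [(n / (N + s), (n + s) / (N + s)) for n in counts]

# Hypothetical common-cause failure counts by multiplicity (alpha-1..alpha-3):
counts = [40, 3, 1]
for s in (1, 10):                      # the range the paper finds reasonable
    print(s, [(round(lo, 3), round(hi, 3)) for lo, hi in idm_bounds(counts, s)])
```

Note how the gap between the lower and upper expectations widens with s, which is exactly the sense in which larger learning parameters make the model more cautious about sparse data.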
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Kinetic and dynamic Delaunay tetrahedralizations in three dimensions
NASA Astrophysics Data System (ADS)
Schaller, Gernot; Meyer-Hermann, Michael
2004-09-01
We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is not only governed by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms, a stochastic visibility walk algorithm for point location and in addition, we propose a new simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection etc.
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
40 CFR 428.96 - Pretreatment standards for new sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion, and Mechanical... and attributable to pan, dry digestion, and mechanical reclaimed rubber processes which are integrated...
NASA Technical Reports Server (NTRS)
Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.
2016-01-01
Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage in that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using points-per-wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW²).
Low frequency acoustic and electromagnetic scattering
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Maccamy, R. C.
1986-01-01
This paper deals with two classes of problems arising from acoustics and electromagnetics scattering in low frequency situations. The first class of problem is solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two-dimensional body, while the second one is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low frequency analysis shows that there are two intermediate problems which solve the above problems accurate to O(k² log k), where k is the frequency. These solutions differ greatly from the zero frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.
The first eigenvalue of the p-Laplacian on quantum graphs
NASA Astrophysics Data System (ADS)
Del Pezzo, Leandro M.; Rossi, Julio D.
2016-12-01
We study the first eigenvalue of the p-Laplacian (with 1 < p < ∞) on quantum graphs.
Detecting Anisotropic Inclusions Through EIT
NASA Astrophysics Data System (ADS)
Cristina, Jan; Päivärinta, Lassi
2017-12-01
We study the evolution equation ∂ₜu = −Λₜu, where Λₜ is the Dirichlet-Neumann operator of a decreasing family of Riemannian manifolds with boundary Σₜ. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on M = Σ₀ to the boundaries ∂Σₜ. Consequently we are able to derive a lower bound for the difference of the Dirichlet-Neumann maps in terms of the difference between a background metric g and an inclusion metric g + χ_Σ(h − g) on a manifold M.
40 CFR 428.95 - Standards of performance for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion..., dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion...
40 CFR 428.95 - Standards of performance for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion..., dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion...
40 CFR 428.95 - Standards of performance for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion..., dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion...
Characterisation of a resolution enhancing image inversion interferometer.
Wicker, Kai; Sindbert, Simon; Heintzmann, Rainer
2009-08-31
Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.
A Model for Selection of Eyespots on Butterfly Wings.
Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida
2015-01-01
The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating the focus point distributions observed in nature. We therefore conclude that changes in the proximal boundary conditions are sufficient to explain the empirically observed distribution of eyespot focus points on the entire wing surface. The model predicts, subject to experimental verification, that the source strength of the activator at the proximal boundary should be lower in wing cells in which focus points form than in those that lack focus points. The model suggests that the number and locations of eyespot foci on the wing disc could be largely controlled by two kinds of gradients along two different directions: the first is the gradient in spatially varying parameters, such as the reaction rate, along the anterior-posterior direction on the proximal boundary of the wing cells, and the second is the gradient in source values of the activator along the veins in the proximal-distal direction of the wing cell.
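To make the role of the non-homogeneous Dirichlet condition concrete, consider a deliberately reduced one-species stand-in for the two-stage model: steady diffusion-decay across the wing cell with the proximal vein concentration imposed as a boundary value. The threshold, parameters, and one-dimensional reduction are illustrative assumptions, not the paper's finite element system:

```python
import numpy as np

def morphogen_profile(c_proximal, n=101, D=1.0, k=5.0, L=1.0):
    """Steady 1-D diffusion-decay D u'' = k u across a wing cell, with the
    non-homogeneous Dirichlet value u(0) = c_proximal on the proximal vein
    and u(L) = 0 distally. Exact solution: c * sinh(lam (L - x)) / sinh(lam L),
    lam = sqrt(k / D)."""
    x = np.linspace(0.0, L, n)
    lam = np.sqrt(k / D)
    return x, c_proximal * np.sinh(lam * (L - x)) / np.sinh(lam * L)

# Two otherwise identical wing cells differing only in the proximal value;
# a threshold on the midline concentration then decides focus formation
# (the sign of the effect depends on the full two-component model).
for c in (0.2, 1.0):
    x, u = morphogen_profile(c)
    mid = u[len(u) // 2]
    print(c, "midline concentration:", round(mid, 4),
          "-> focus" if mid > 0.1 else "-> no focus")
```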
40 CFR 419.57 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 30 2013-07-01 2012-07-01 true Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS PETROLEUM REFINING POINT SOURCE CATEGORY Integrated Subcategory § 419.57 Pretreatment standards for new sources (PSNS). Except as provided in 40 CFR 403.7, any new source subject to...
SKYDOSE: A code for gamma skyshine calculations using the integral line-beam method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Brockhoff, R.C.
1994-07-01
SKYDOSE evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated by three simple geometries: (1) a source in a silo; (2) a source behind an infinitely long, vertical, black wall; and (3) a source in a rectangular building. In all three geometries, an optional overhead shield may be specified. The source energy must be between 0.02 and 100 MeV (10 MeV for sources with an overhead shield). This is a user's manual. Other references give more detail on the integral line-beam method used by SKYDOSE.
Magner, J A; Brooks, K N
2008-03-01
Section 303(d) of the Clean Water Act requires States and Tribes to list waters not meeting water quality standards. A total maximum daily load must be prepared for waters identified as impaired with respect to water quality standards. Historically, the management of pollution in Minnesota has been focused on point-source regulation. Regulatory effort in Minnesota has improved water quality over the last three decades. Non-point source pollution has become the largest driver of conventional 303(d) listings in the 21st century. Conventional pollutants, i.e., organic, sediment and nutrient imbalances can be identified with poor land use management practices. However, the cause and effect relationship can be elusive because of natural watershed-system influences that vary with scale. Elucidation is complex because the current water quality standards in Minnesota were designed to work best with water quality permits to control point sources of pollution. This paper presents a sentinel watershed-systems approach (SWSA) to the monitoring and assessment of Minnesota waterbodies. SWSA integrates physical, chemical, and biological data over space and time using advanced technologies at selected small watersheds across Minnesota to potentially improve understanding of natural and anthropogenic watershed processes and the management of point and non-point sources of pollution. Long-term, state-of-the-art monitoring and assessment is needed to advance and improve water quality standards. Advanced water quality or ecologically-based standards that integrate physical, chemical, and biological numeric criteria offer the potential to better understand, manage, protect, and restore Minnesota's waterbodies.
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%
Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.
2016-01-01
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
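The 14C-based attribution rests on a simple mass balance: fossil-fuel CO2 is free of 14C, so the observed depletion of Δ14C relative to background fixes the recently added CO2ff. A sketch of the commonly used simplified form (the study's full treatment includes additional correction terms):

```python
def co2ff(co2_obs, d14c_obs, d14c_bg, d14c_ff=-1000.0):
    """Mass-balance estimate of the recently added fossil-fuel CO2 mole
    fraction from Delta14C (per mil):
        CO2ff = CO2_obs * (D_bg - D_obs) / (D_bg - D_ff),
    with D_ff = -1000 per mil since fossil carbon contains no 14C."""
    return co2_obs * (d14c_bg - d14c_obs) / (d14c_bg - d14c_ff)

# Hypothetical time-integrated downwind sample vs background:
print(co2ff(co2_obs=410.0, d14c_obs=25.0, d14c_bg=35.0))  # ~4 ppm of CO2ff
```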
40 CFR 430.75 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp Subcategory § 430.75 New source performance standards (NSPS). (a) The following applies to mechanical pulp...-mechanical process; mechanical pulp facilities where the integrated production of pulp and coarse paper...
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
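The check against the Weyl formula can be reproduced for any domain with a known spectrum. A sketch using the exact Dirichlet spectrum of the unit square and the two-term Weyl estimate:

```python
import numpy as np

def weyl_2d(E, area, perimeter, dirichlet=True):
    """Two-term Weyl estimate of the cumulative eigenvalue count of the
    Laplacian on a 2-D domain: N(E) ~ (A/4pi) E -+ (P/4pi) sqrt(E)
    (minus sign for Dirichlet, plus for Neumann)."""
    sign = -1.0 if dirichlet else 1.0
    return area * E / (4 * np.pi) + sign * perimeter * np.sqrt(E) / (4 * np.pi)

# Exact Dirichlet spectrum of the unit square: (m^2 + n^2) * pi^2
m, n = np.meshgrid(np.arange(1, 200), np.arange(1, 200))
eigs = np.sort(((m**2 + n**2) * np.pi**2).ravel())
E = 5000.0
print(np.searchsorted(eigs, E), weyl_2d(E, area=1.0, perimeter=4.0))  # ~375 both
```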
On the exterior Dirichlet problem for Hessian quotient equations
NASA Astrophysics Data System (ADS)
Li, Dongsheng; Li, Zhisu
2018-06-01
In this paper, we establish the existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Ampère equations and on the Hessian equations, and rearranges them in a systematic way. Based on the Perron's method, the main ingredient of this paper is to construct some appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities about the elementary symmetric polynomials and using them to analyze the corresponding ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.
A three dimensional Dirichlet-to-Neumann map for surface waves over topography
NASA Astrophysics Data System (ADS)
Nachbin, Andre; Andrade, David
2016-11-01
We consider three dimensional surface water waves in the potential theory regime. The bottom topography can have a quite general profile. In the case of linear waves the Dirichlet-to-Neumann operator is formulated in a matrix decomposition form. Computational simulations illustrate the performance of the method. Two dimensional periodic bottom variations are considered in both the Bragg resonance regime as well as the rapidly varying (homogenized) regime. In the three-dimensional case we use the Luneburg lens-shaped submerged mound, which promotes the focusing of the underlying rays. FAPERJ Cientistas do Nosso Estado Grant 102917/2011 and ANP/PRH-32.
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Sheng, Yin; Zhang, Hao; Zeng, Zhigang
2017-10-01
This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.
40 CFR 430.127 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non.... Subpart L [PSNS for non-integrated mills where filter and non-woven papers are produced from purchased...
40 CFR 430.127 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2011 CFR
2011-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non.... Subpart L [PSNS for non-integrated mills where filter and non-woven papers are produced from purchased...
Integration of Geodata in Documenting Castle Ruins
NASA Astrophysics Data System (ADS)
Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.
2016-06-01
Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are the digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows the modelling and visualization of 3D models of historical structures. An additional benefit of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery, together with an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired from a Leica ScanStation2 and digital imagery taken using a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.
A nodal domain theorem for integrable billiards in two dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in
Eigenfunctions of integrable planar billiards are studied — in particular, the number of nodal domains, ν, of the eigenfunctions with Dirichlet boundary conditions is considered. The billiards for which the time-independent Schrödinger equation (Helmholtz equation) is separable admit trivial expressions for the number of domains. Here, we discover that for all separable and non-separable integrable billiards, ν satisfies certain difference equations. This has been possible because the eigenfunctions can be classified in families labelled by the same value of m mod kn, given a particular k, for a set of quantum numbers m, n. Further, we observe that the patterns in a family are similar, and the algebraic representation of the geometrical nodal patterns is found. Instances of this representation are explained in detail to understand the beauty of the patterns. This paper therefore presents a mathematical connection between integrable systems and difference equations. - Highlights: • We find that the numbers of nodal domains of eigenfunctions of integrable, planar billiards satisfy a class of difference equations. • The eigenfunctions labelled by quantum numbers (m,n) can be classified in terms of m mod kn. • A theorem is presented, realising algebraic representations of geometrical patterns exhibited by the domains. • This work presents a connection between integrable systems and difference equations.
Towards an expansive hybrid psychology: integrating theories of the mediated mind.
Brinkmann, Svend
2011-03-01
This article develops an integrative theory of the mind by examining how the mind, understood as a set of skills and dispositions, depends upon four sources of mediators. Harré's hybrid psychology is taken as a meta-theoretical starting point, but is expanded significantly by including the four sources of mediators that are the brain, the body, social practices and technological artefacts. It is argued that the mind is normative in the sense that mental processes do not simply happen, but can be done more or less well, and thus are subject to normative appraisal. The expanded hybrid psychology is meant to assist in integrating theoretical perspectives and research interests that are often thought of as incompatible, among them neuroscience, phenomenology of the body, social practice theory and technology studies. A main point of the article is that these perspectives each are necessary for an integrative approach to the human mind.
40 CFR 430.77 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp Subcategory § 430.77 Pretreatment standards for new sources (PSNS). (a) The following applies to mechanical... thermo-mechanical process; mechanical pulp facilities where the integrated production of pulp and coarse...
Numerical modeling of subsurface communication
NASA Astrophysics Data System (ADS)
Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.
1985-02-01
Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is the evaluation of the field at a surface or airborne station due to an antenna buried in the Earth. Equations are given for the field of a point source in a homogeneous or stratified earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes can also address propagation problems at higher frequencies.
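A static prototype shows the flavour of these wave-number integrals and how a numerical evaluation can be checked against a closed form; the actual ELF-VLF kernels add stratified-earth reflection coefficients, but the treatment of the semi-infinite oscillatory integrand is along the same lines. A sketch using scipy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Static prototype of a Sommerfeld integral: for a source at height h above
# the interface, int_0^inf exp(-lam*(z+h)) * J0(lam*rho) dlam has the closed
# form 1/sqrt(rho^2 + (z+h)^2), which lets us verify the quadrature.
def sommerfeld(rho, z, h):
    f = lambda lam: np.exp(-lam * (z + h)) * j0(lam * rho)
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return val

rho, z, h = 30.0, 5.0, 15.0
print(sommerfeld(rho, z, h), 1.0 / np.hypot(rho, z + h))  # should agree
```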
Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho
2004-12-01
A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.
Integration by parts and Pohozaev identities for space-dependent fractional-order operators
NASA Astrophysics Data System (ADS)
Grubb, Gerd
2016-08-01
Consider a classical elliptic pseudodifferential operator P on ℝⁿ of order 2a (0 < a < 1) with even symbol. For example, P = A(x, D)^a where A(x, D) is a second-order strongly elliptic differential operator; the fractional Laplacian (−Δ)^a is a particular case. For solutions u of the Dirichlet problem on a bounded smooth subset Ω ⊂ ℝⁿ, we show an integration-by-parts formula with a boundary integral involving (d^{−a}u)|_{∂Ω}, where d(x) = dist(x, ∂Ω). This extends recent results of Ros-Oton, Serra and Valdinoci, to operators that are x-dependent, nonsymmetric, and have lower-order parts. We also generalize their formula of Pohozaev-type, which can be used to prove unique continuation properties, and nonexistence of nontrivial solutions of semilinear problems. An illustration is given with P = (−Δ + m²)^a. The basic step in our analysis is a factorization of P, P ∼ P⁻P⁺, where we set up a calculus for the generalized pseudodifferential operators P± that come out of the construction.
On singular and highly oscillatory properties of the Green function for ship motions
NASA Astrophysics Data System (ADS)
Chen, Xiao-Bo; Xiong Wu, Guo
2001-10-01
The Green function used for analysing ship motions in waves is the velocity potential due to a point source pulsating and advancing at a uniform forward speed. The behaviour of this function is investigated, in particular for the case when the source is located at or close to the free surface. In the far field, the Green function is represented by a single integral along one closed dispersion curve and two open dispersion curves. The single integral along the open dispersion curves is analysed based on the asymptotic expansion of a complex error function. The singular and highly oscillatory behaviour of the Green function is captured, which shows that the Green function oscillates with indefinitely increasing amplitude and indefinitely decreasing wavelength, when a field point approaches the track of the source point at the free surface. This sheds some light on the nature of the difficulties in the numerical methods used for predicting the motion of a ship advancing in waves.
Air source integrated heat pump simulation model for EnergyPlus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; New, Joshua; Baxter, Van
An Air Source Integrated Heat Pump (AS-IHP) is an air source, multi-functional space conditioning unit with a water heating function (WH), which can lead to great energy savings by recovering the condensing waste heat for domestic water heating. This paper summarizes development of the EnergyPlus AS-IHP model, introducing the physics, sub-models, working modes, and control logic. Based on the model, building energy simulations were conducted to demonstrate greater than 50% annual energy savings, in comparison to a baseline heat pump with electric water heater, over 10 US cities, using the EnergyPlus quick-service restaurant template building. We assessed water heating energy saving potentials using AS-IHP versus both gas and electric baseline systems, and pointed out climate zones where AS-IHPs are promising. In addition, a grid integration strategy was investigated to reveal further energy saving and electricity cost reduction potentials, via increasing the water heating set point temperature during off-peak hours and using larger water tanks.
McCarthy, Kathleen A.; Alvarez, David A.
2014-01-01
The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that includes a combination of conventional point-in-time discrete water sampling and time‑integrated passive sampling with a combination of chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive‑sampling deployments at six sites in the basin, including the intake and outflow from the EWEB drinking‑water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicate that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.
NASA Astrophysics Data System (ADS)
Barkeshli, Sina
A relatively simple and efficient closed form asymptotic representation of the microstrip dyadic surface Green's function is developed. The large parameter in this asymptotic development is proportional to the lateral separation between the source and field points along the planar microstrip configuration. Surprisingly, this asymptotic solution remains accurate even for very small (almost two tenths of a wavelength) lateral separation of the source and field points. The present asymptotic Green's function will thus allow a very efficient calculation of the currents excited on microstrip antenna patches/feed lines and monolithic millimeter and microwave integrated circuit (MIMIC) elements based on a moment method (MM) solution of an integral equation for these currents. The kernel of the latter integral equation is the present asymptotic form of the microstrip Green's function. It is noted that the conventional Sommerfeld integral representation of the microstrip surface Green's function is very poorly convergent when used in this MM formulation. In addition, an efficient exact steepest descent path integral form employing a radially propagating representation of the microstrip dyadic Green's function is also derived which exhibits a relatively faster convergence when compared to the conventional Sommerfeld integral representation. The same steepest descent form could also be obtained by deforming the integration contour of the conventional Sommerfeld representation; however, the radially propagating integral representation exhibits better convergence properties for laterally separated source and field points even before the steepest descent path of integration is used. Numerical results based on the efficient closed form asymptotic solution for the microstrip surface Green's function developed in this work are presented for the mutual coupling between a pair of dipoles on a single layer grounded dielectric slab. The accuracy of the latter calculations is confirmed by comparison with results based on an exact integral representation for that Green's function.
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and... of 5.0 to 9.0 at all times. Subpart L [NSPS for non-integrated mills where filter and non-woven...
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and... of 5.0 to 9.0 at all times. Subpart L [NSPS for non-integrated mills where filter and non-woven...
Numerical modeling of subsurface communication, revision 1
NASA Astrophysics Data System (ADS)
Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.
1985-08-01
Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is evaluation of the field at a surface or airborne station due to an antenna buried in the earth. Equations are given for the field of a point source in a homogeneous or stratified Earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes also can address propagation problems at higher frequencies.
AN INTEGRATED FRAMEWORK FOR WATERSHED ASSESSMENT AND MANAGEMENT
Watershed approaches to water quality management have become popular, because they can address multiple point and non-point sources and the influences of land use. Developing technically-sound watershed management strategies can be challenging due to the need to 1) account for mu...
NASA Astrophysics Data System (ADS)
Patel, Utkarsh R.; Triverio, Piero
2016-09-01
Accurate modeling of the skin effect inside conductors is of central importance in solving transmission line and scattering problems. This paper presents a surface-based formulation to model skin effect in conductors of arbitrary cross section and to compute the per-unit-length impedance of a multiconductor transmission line. The proposed formulation is based on the Dirichlet-Neumann operator that relates the longitudinal electric field to the tangential magnetic field on the boundary of a conductor. We demonstrate how the surface operator can be obtained through the contour integral method for conductors of arbitrary shape. The proposed algorithm is simple to implement, efficient, and can handle arbitrary cross sections, a main advantage over the existing eigenfunction-based approach, which is available only for canonical conductor shapes. The versatility of the method is illustrated through a diverse set of examples, which includes transmission lines with trapezoidal, curved, and V-shaped conductors. Numerical results demonstrate the accuracy, versatility, and efficiency of the proposed technique.
Hierarchical brain mapping via a generalized Dirichlet solution for mapping brain manifolds
NASA Astrophysics Data System (ADS)
Joshi, Sarang C.; Miller, Michael I.; Christensen, Gary E.; Banerjee, Ayan; Coogan, Tom; Grenander, Ulf
1995-08-01
In this paper we present a coarse-to-fine approach for the transformation of digital anatomical textbooks from the ideal to the individual that unifies the work on landmark deformations and volume-based transformation. The hierarchical approach is linked to the biological problem itself, arising from the various kinds of information provided by anatomists. This information is in the form of points, lines, surfaces and sub-volumes corresponding to 0-, 1-, 2- and 3-dimensional sub-manifolds, respectively. The algorithm is driven by these sub-manifolds. We follow the approach that the highest-dimensional transformation results from the solution of a sequence of lower-dimensional problems driven by successive refinements or partitions of the images into various biologically meaningful sub-structures.
Vacuum stress energy density and its gravitational implications
NASA Astrophysics Data System (ADS)
Estrada, Ricardo; Fulling, Stephen A.; Kaplan, Lev; Kirsten, Klaus; Liu, Zhonghai; Milton, Kimball A.
2008-04-01
In nongravitational physics the local density of energy is often regarded as merely a bookkeeping device; only total energy has an experimental meaning—and it is only modulo a constant term. But in general relativity the local stress-energy tensor is the source term in Einstein's equation. In closed universes, and those with Kaluza-Klein dimensions, theoretical consistency demands that quantum vacuum energy should exist and have gravitational effects, although there are no boundary materials giving rise to that energy by van der Waals interactions. In the lab there are boundaries, and in general the energy density has a nonintegrable singularity as a boundary is approached (for idealized boundary conditions). As pointed out long ago by Candelas and Deutsch, in this situation there is doubt about the viability of the semiclassical Einstein equation. Our goal is to show that the divergences in the linearized Einstein equation can be renormalized to yield a plausible approximation to the finite theory that presumably exists for realistic boundary conditions. For a scalar field with Dirichlet or Neumann boundary conditions inside a rectangular parallelepiped, we have calculated by the method of images all components of the stress tensor, for all values of the conformal coupling parameter and an exponential ultraviolet cutoff parameter. The qualitative features of contributions from various classes of closed classical paths are noted. Then the Estrada-Kanwal distributional theory of asymptotics, particularly the moment expansion, is used to show that the linearized Einstein equation with the stress-energy near a plane boundary as source converges to a consistent theory when the cutoff is removed. This paper reports work in progress on a project combining researchers in Texas, Louisiana and Oklahoma. It is supported by NSF Grants PHY-0554849 and PHY-0554926.
A Meinardus Theorem with Multiple Singularities
NASA Astrophysics Data System (ADS)
Granovsky, Boris L.; Stark, Dudley
2012-09-01
Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.
NASA Astrophysics Data System (ADS)
Ding, Xiao-Li; Nieto, Juan J.
2017-11-01
In this paper, we consider the analytical solutions of coupling fractional partial differential equations (FPDEs) with Dirichlet boundary conditions on a finite domain. Firstly, the method of successive approximations is used to obtain the analytical solutions of coupling multi-term time fractional ordinary differential equations. Then, the technique of spectral representation of the fractional Laplacian operator is used to convert the coupling FPDEs to the coupling multi-term time fractional ordinary differential equations. By applying the obtained analytical solutions to the resulting multi-term time fractional ordinary differential equations, the desired analytical solutions of the coupling FPDEs are given. Our results are applied to derive the analytical solutions of some special cases to demonstrate their applicability.
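The spectral representation invoked above is easy to state in one dimension. The sketch below (an illustration, not the authors' code) expands u in the Dirichlet eigenfunctions of the Laplacian on (0, 1), φ_n(x) = √2 sin(nπx) with eigenvalues λ_n = (nπ)², and applies the fractional power to the eigenvalues.

```python
import numpy as np

def dirichlet_fractional_laplacian(u_vals, x, s, L=1.0, n_modes=64):
    dx = x[1] - x[0]
    out = np.zeros_like(u_vals)
    for n in range(1, n_modes + 1):
        phi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)  # eigenfunction
        c_n = np.sum(u_vals * phi) * dx                     # <u, phi_n> by quadrature
        lam = (n * np.pi / L) ** 2                          # eigenvalue
        out += lam ** s * c_n * phi                         # apply lambda_n^s
    return out

x = np.linspace(0.0, 1.0, 401)
u = np.sin(2 * np.pi * x)                    # itself an eigenfunction (n = 2)
out = dirichlet_fractional_laplacian(u, x, s=0.5)
print(np.allclose(out, (2 * np.pi) * u, atol=1e-3))   # lambda_2^0.5 = 2*pi
```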
40 CFR 428.96 - Pretreatment standards for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion, and... this section and attributable to pan, dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion reclaimed rubber process, which may be discharged to a publicly...
40 CFR 428.96 - Pretreatment standards for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion, and... this section and attributable to pan, dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion reclaimed rubber process, which may be discharged to a publicly...
40 CFR 428.96 - Pretreatment standards for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS (CONTINUED) RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion, and... this section and attributable to pan, dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion reclaimed rubber process, which may be discharged to a publicly...
Discontinuous Galerkin Methods for Turbulence Simulation
NASA Technical Reports Server (NTRS)
Collis, S. Scott
2002-01-01
A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
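To make the weak-enforcement idea concrete, here is a minimal sketch in the simplest possible setting, a first-order upwind scheme for 1-D advection standing in for the DG flux: the Dirichlet inflow value enters only through the boundary numerical flux rather than by overwriting the boundary unknown. This is an illustration of the principle, not the paper's solver.

```python
import numpy as np

def advect_weak_bc(u, c, dx, dt, g):
    # Upwind fluxes for u_t + c u_x = 0 with c > 0. The Dirichlet
    # inflow value g is NOT written into u[0]; it enters only through
    # the boundary numerical flux, i.e. it is enforced weakly.
    flux = np.empty(u.size + 1)
    flux[0] = c * g
    flux[1:] = c * u
    return u - dt / dx * (flux[1:] - flux[:-1])

dx, dt = 0.01, 0.005
x = np.linspace(0.0, 1.0, 101)[:-1] + dx / 2      # cell centres
u = np.zeros_like(x)
for _ in range(150):                               # advect g = 1 into the domain
    u = advect_weak_bc(u, c=1.0, dx=dx, dt=dt, g=1.0)
print(u[:5])   # ~1: the boundary state has entered through the flux
```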
Exact Closed-form Solutions for Lamb's Problem
NASA Astrophysics Data System (ADS)
Feng, Xi; Zhang, Haiming
2018-04-01
In this article, we report on an exact closed-form solution for the displacement at the surface of an elastic half-space elicited by a buried point source that acts at some point underneath that surface. This is commonly referred to as the 3-D Lamb's problem, for which previous solutions were restricted to sources and receivers placed at the free surface. By means of the reciprocity theorem, our solution should also be valid as a means to obtain the displacements at interior points when the source is placed at the free surface. We manage to obtain explicit results by expressing the solution in terms of elementary algebraic expression as well as elliptic integrals. We anchor our developments on Poisson's ratio 0.25 starting from Johnson's (1974) integral solutions which must be computed numerically. In the end, our closed-form results agree perfectly with the numerical results of Johnson (1974), which strongly confirms the correctness of our explicit formulas. It is hoped that in due time, these formulas may constitute a valuable canonical solution that will serve as a yardstick against which other numerical solutions can be compared and measured.
Exact closed-form solutions for Lamb's problem
NASA Astrophysics Data System (ADS)
Feng, Xi; Zhang, Haiming
2018-07-01
In this paper, we report on an exact closed-form solution for the displacement at the surface of an elastic half-space elicited by a buried point source that acts at some point underneath that surface. This is commonly referred to as the 3-D Lamb's problem for which previous solutions were restricted to sources and receivers placed at the free surface. By means of the reciprocity theorem, our solution should also be valid as a means to obtain the displacements at interior points when the source is placed at the free surface. We manage to obtain explicit results by expressing the solution in terms of elementary algebraic expression as well as elliptic integrals. We anchor our developments on Poisson's ratio 0.25 starting from Johnson's integral solutions which must be computed numerically. In the end, our closed-form results agree perfectly with the numerical results of Johnson, which strongly confirms the correctness of our explicit formulae. It is hoped that in due time, these formulae may constitute a valuable canonical solution that will serve as a yardstick against which other numerical solutions can be compared and measured.
NASA Technical Reports Server (NTRS)
Fink, P. W.; Khayat, M. A.; Wilton, D. R.
2005-01-01
It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation u(x′) = sinh⁻¹(x′/√(y′² + z²)) for integrating functions of the form I = ∫ λ(r′) (e^{−jkR}/(4πR)) dD, where λ(r′) is a vector or scalar basis function and R = √(x′² + y′² + z²) is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
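A 1-D model problem shows why the sinh substitution works: with x = d sinh(u), the Jacobian d cosh(u) equals R = √(x² + d²), so dx/R = du and the near-singularity disappears from the transformed integrand. The sketch below (illustrative, not the authors' code) verifies this against the closed form for f ≡ 1.

```python
import numpy as np

def nearly_singular(f, a, d, n=32):
    # I = integral of f(x)/R over [-a, a], R = sqrt(x^2 + d^2).
    # Substituting x = d sinh(u) gives dx = d cosh(u) du = R du,
    # so dx/R = du and the transformed integrand is smooth as d -> 0.
    u_max = np.arcsinh(a / d)
    nodes, weights = np.polynomial.legendre.leggauss(n)
    u = u_max * nodes                    # map Gauss nodes to [-u_max, u_max]
    return u_max * np.sum(weights * f(d * np.sinh(u)))

a, d = 1.0, 1e-6                         # observation point almost on the source
print(nearly_singular(lambda x: np.ones_like(x), a, d))
print(2 * np.arcsinh(a / d))             # closed form for f = 1
```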
40 CFR 430.76 - Pretreatment standards for existing sources (PSES).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp... mechanical pulp facilities where pulp and paper at groundwood mills are produced through the application of the thermo-mechanical process; mechanical pulp facilities where the integrated production of pulp and...
On the Aharonov-Bohm Operators with Varying Poles: The Boundary Behavior of Eigenvalues
NASA Astrophysics Data System (ADS)
Noris, Benedetta; Nys, Manon; Terracini, Susanna
2015-11-01
We consider a magnetic Schrödinger operator with magnetic field concentrated at one point (the pole) of a domain and half integer circulation, and we focus on the behavior of Dirichlet eigenvalues as functions of the pole. Although the magnetic field vanishes almost everywhere, it is well known that it affects the operator at the spectral level (the Aharonov-Bohm effect, Phys Rev (2) 115:485-491, 1959). Moreover, the numerical computations performed in (Bonnaillie-Noël et al., Anal PDE 7(6):1365-1395, 2014; Noris and Terracini, Indiana Univ Math J 59(4):1361-1403, 2010) show a rather complex behavior of the eigenvalues as the pole varies in a planar domain. In this paper, in continuation of the analysis started in (Bonnaillie-Noël et al., Anal PDE 7(6):1365-1395, 2014; Noris and Terracini, Indiana Univ Math J 59(4):1361-1403, 2010), we analyze the relation between the variation of the eigenvalue and the nodal structure of the associated eigenfunctions. We deal with planar domains with Dirichlet boundary conditions and we focus on the case when the singular pole approaches the boundary of the domain: then, the operator loses its singular character and the k-th magnetic eigenvalue converges to that of the standard Laplacian. We can predict both the rate of convergence and whether the convergence happens from above or from below, in relation with the number of nodal lines of the k-th eigenfunction of the Laplacian. The proof relies on the variational characterization of eigenvalues, together with a detailed asymptotic analysis of the eigenfunctions, based on an Almgren-type frequency formula for magnetic eigenfunctions and on the blow-up technique.
The Casimir effect for parallel plates revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawakami, N. A.; Nemes, M. C.; Wreszinski, Walter F.
2007-10-15
The Casimir effect for a massless scalar field with Dirichlet and periodic boundary conditions (bc's) on infinite parallel plates is revisited in the local quantum field theory (lqft) framework introduced by Kay [Phys. Rev. D 20, 3052 (1979)]. The model displays a number of more realistic features than the ones he treated. In addition to local observables, such as the energy density, we propose to consider intensive variables, such as the energy per unit area ε, as fundamental observables. Adopting this view, lqft rejects Dirichlet (the same result may be proved for Neumann or mixed) bc, and accepts periodic bc: in the former case ε diverges, in the latter it is finite, as is shown by an expression for the local energy density obtained from lqft through the use of the Poisson summation formula. Another way to see this uses methods from the Euler summation formula: in the proof of regularization independence of the energy per unit area, a regularization-dependent surface term arises upon use of Dirichlet bc, but not periodic bc. For the conformally invariant scalar quantum field, this surface term is absent due to the condition of zero trace of the energy momentum tensor, as remarked by De Witt [Phys. Rep. 19, 295 (1975)]. The latter property does not hold in the application to the dark energy problem in cosmology, in which we argue that periodic bc might play a distinguished role.
INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE
INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
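For orientation, a generic Gaussian puff kernel of the kind the guide describes looks as follows; this is a schematic, not the actual INPUFF source. Each puff of mass Q contributes a 3-D Gaussian centred on its current position, with an image term for ground reflection, and the receptor concentration is the sum over all puffs.

```python
import numpy as np

def puff_concentration(Q, dx, dy, z, H, sx, sy, sz):
    # Contribution of one puff of mass Q to a receptor offset (dx, dy)
    # horizontally from the puff centre, at height z; H is the puff
    # centre height and (sx, sy, sz) the dispersion parameters.
    horiz = np.exp(-0.5 * (dx / sx) ** 2 - 0.5 * (dy / sy) ** 2)
    vert = (np.exp(-0.5 * ((z - H) / sz) ** 2) +
            np.exp(-0.5 * ((z + H) / sz) ** 2))      # image term: ground reflection
    return Q * horiz * vert / ((2 * np.pi) ** 1.5 * sx * sy * sz)

# Ground-level receptor 100 m downwind of a puff centred at H = 10 m.
print(puff_concentration(Q=1.0, dx=100.0, dy=0.0, z=0.0, H=10.0,
                         sx=30.0, sy=30.0, sz=15.0))
```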
USDA-ARS's Scientific Manuscript database
Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied manure on undevelope...
Effect of an overhead shield on gamma-ray skyshine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stedry, M.H.; Shultis, J.K.; Faw, R.E.
1996-06-01
A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.
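In the flat-bottom limit the DtN operator reduces to a Fourier multiplier with the known symbol |k| tanh(|k|h), which is the baseline that the paper's topographic decomposition corrects. A minimal 1-D sketch of that baseline (not the authors' code):

```python
import numpy as np

def dtn_flat_bottom(phi, h, L):
    # DtN map for depth h over a flat bottom: multiply each Fourier
    # mode of the surface potential by |k| tanh(|k| h).
    n = phi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    symbol = np.abs(k) * np.tanh(np.abs(k) * h)
    return np.fft.ifft(symbol * np.fft.fft(phi)).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
phi = np.cos(4 * x)
out = dtn_flat_bottom(phi, h=1.0, L=2 * np.pi)
print(np.allclose(out, 4 * np.tanh(4.0) * phi))   # eigenfunction check
```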
Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen
2005-11-01
Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method by neglecting the Neumann and Dirichlet boundary conditions. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.
Black branes in a box: hydrodynamics, stability, and criticality
NASA Astrophysics Data System (ADS)
Emparan, Roberto; Martínez, Marina
2012-07-01
We study the effective hydrodynamics of neutral black branes enclosed in a finite cylindrical cavity with Dirichlet boundary conditions. We focus on how the Gregory-Laflamme instability changes as we vary the cavity radius R. Fixing the metric at the cavity wall increases the rigidity of the black brane by hindering gradients of the redshift on the wall. In the effective fluid, this is reflected in the growth of the squared speed of sound. As a consequence, when the cavity is smaller than a critical radius the black brane becomes dynamically stable. The correlation with the change in thermodynamic stability is transparent in our approach. We compute the bulk and shear viscosities of the black brane and find that they do not run with R. We find mean-field theory critical exponents near the critical point.
On the connection between multigrid and cyclic reduction
NASA Technical Reports Server (NTRS)
Merriam, M. L.
1984-01-01
A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in 1, 2, or 3 dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation with certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
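The stencil-preservation property that links cyclic reduction to multigrid is easy to verify for the 1-D Poisson matrix tridiag(−1, 2, −1): adding half of each kept equation's two neighbours to it eliminates the other unknowns and reproduces the same stencil on the half grid. A sketch of the resulting exact solver, assuming n = 2^k − 1 unknowns and zero Dirichlet boundaries:

```python
import numpy as np

def cyclic_reduction_poisson(f):
    # Solve -x_{i-1} + 2 x_i - x_{i+1} = f_i with x_0 = x_{n+1} = 0.
    # Eliminating every other unknown yields the SAME tridiagonal
    # stencil on the half grid: the reduction property the abstract
    # relates to multigrid.
    n = f.size
    if n == 1:
        return f / 2.0
    f_coarse = 2.0 * f[1::2] + f[0:-1:2] + f[2::2]
    x = np.zeros(n)
    x[1::2] = cyclic_reduction_poisson(f_coarse)    # recurse on half grid
    padded = np.concatenate(([0.0], x, [0.0]))
    x[0::2] = 0.5 * (f[0::2] + padded[0:-1:2] + padded[2::2])  # back-substitute
    return x

rng = np.random.default_rng(0)
f = rng.standard_normal(2 ** 5 - 1)
x = cyclic_reduction_poisson(f)
A = 2 * np.eye(f.size) - np.eye(f.size, k=1) - np.eye(f.size, k=-1)
print(np.allclose(A @ x, f))    # True: the reduction is exact in 1-D
```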
NASA Astrophysics Data System (ADS)
Sun, Hao; Wang, Cheng; Wang, Boliang
2011-02-01
We present a hybrid generative-discriminative learning method for human action recognition from video sequences. Our model combines a bag-of-words component with supervised latent topic models. A video sequence is represented as a collection of spatiotemporal words by extracting space-time interest points and describing these points using both shape and motion cues. The supervised latent Dirichlet allocation (sLDA) topic model, which employs discriminative learning using labeled data under a generative framework, is introduced to discover the latent topic structure that is most relevant to action categorization. The proposed algorithm retains most of the desirable properties of generative learning while increasing the classification performance through a discriminative setting. It has also been extended to exploit both labeled data and unlabeled data to learn human actions under a unified framework. We test our algorithm on three challenging data sets: the KTH human motion data set, the Weizmann human action data set, and a ballet data set. Our results are either comparable to or significantly better than previously published results on these data sets and reflect the promise of hybrid generative-discriminative learning approaches.
Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noises.
NASA Astrophysics Data System (ADS)
Wong, Raymond Lee Man
This analytical and experimental study explores concepts for jet noise shielding. Model experiments centre on solid planar shields, simulating engine-over-wing installations, and 'sugar scoop' shields. The tradeoff on effective shielding length is set by interference 'edge noise' as the shield trailing edge approaches the spreading jet. Edge noise is minimized by (i) hyperbolic cutouts which trim off the portions of most intense interference between the jet flow and the barrier and (ii) hybrid shields--a thermal refractive extension (a flame); for (ii) the tradeoff is combustion noise. In general, shielding attenuation increases steadily with frequency, following low frequency enhancement by edge noise. Although broadband attenuation is typically only several dB, the reduction of the subjectively weighted perceived noise levels is higher. In addition, calculated ground contours of peak PN dB show a substantial contraction due to shielding: this reaches 66% for one of the 'sugar scoop' shields for the 90 PN dB contour. The experiments are complemented by analytical predictions. These are divided into an engineering scheme for jet noise shielding and a more rigorous analysis for point source shielding. The former approach combines point source shielding with a suitable jet source distribution. The results are synthesized into a predictive algorithm for jet noise shielding: the jet is modelled as a line distribution of incoherent sources with narrow-band frequency proportional to (axial distance)^-1. The predictive version agrees well with experiment (1 to 1.5 dB) up to moderate frequencies. The insertion loss deduced from the point source measurements for semi-infinite as well as finite rectangular shields agrees rather well with theoretical calculation based on the exact half-plane solution and the superposition of asymptotic closed-form solutions. An approximate theory, the Maggi-Rubinowicz line integral, is found to yield reasonable predictions for thin barriers including cutouts if a certain correction is applied. The more exact integral equation approach (solved numerically) is applied to a more demanding geometry: a half-round sugar scoop shield. It is found that the solutions of the integral equation derived from the Helmholtz formula in normal derivative form show satisfactory agreement with measurements.
NASA Astrophysics Data System (ADS)
Jones, K. R.; Arrowsmith, S.; Whitaker, R. W.
2012-12-01
The overall mission of the National Center for Nuclear Security (NCNS) Source Physics Experiment at the National Nuclear Security Site (SPE-N) near Las Vegas, Nevada is to improve upon and develop new physics based models for underground nuclear explosions using scaled, underground chemical explosions as proxies. To this end, we use the Rayleigh integral as an approximation to the Helmholtz-Kirchhoff integral [Whitaker, 2007 and Arrowsmith et al., 2011], to model infrasound generation in the far-field. Infrasound generated by single-point explosive sources above ground can typically be treated as monopole point-sources. While the source is relatively simple, the research needed to model above ground point-sources is complicated by path effects related to the propagation of the acoustic signal and outside the scope of this study. In contrast, for explosions that occur below ground, including the SPE explosions, the source region is more complicated but the observation distances are much closer (< 5 km), thus greatly reducing the complication of path effects. In this case, elastic energy from the explosions radiates upward and spreads out, depending on depth, to a more distributed region at the surface. Due to this broad surface perturbation of the atmosphere we cannot model the source as a simple monopole point-source. Instead, we use the analogy of a piston mounted in a rigid, infinite baffle, where the surface area that moves as a result of the explosion is the piston and the surrounding region is the baffle. The area of the "piston" is determined by the depth and explosive yield of the event. In this study we look at data from SPE-N-2 and SPE-N-3. Both shots had an explosive yield of 1 ton at a depth of 45 m. We collected infrasound data with up to eight stations and 32 sensors within a 5 km radius of ground zero. To determine the area of the surface acceleration, we used data from twelve surface accelerometers installed within 100 m radially about ground zero. With the accelerometer data defining the vertical motion of the surface, we use the Rayleigh Integral Method [Whitaker, 2007 and Arrowsmith et al., 2011] to generate a synthetic infrasound pulse to compare to the observed data. Because the phase across the "piston" is not necessarily uniform, constructive and destructive interference will change the shape of the acoustic pulse if observed directly above the source (on-axis) or perpendicular to the source (off-axis). Comparing the observed data to the synthetic data we note that the overall structure of the pulse agrees well and that the differences can be attributed to a number of possibilities, including the sensors used, topography, meteorological conditions, etc. One other potential source of error between the observed and calculated data is that we use a flat, symmetric source region for the "piston" where in reality the source region is not flat and not perfectly symmetric. A primary goal of this work is to better understand and model the relationships between surface area, depth, and yield of underground explosions.
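A minimal sketch of the Rayleigh-integral synthesis described above; the pulse shape, station geometry, and surface sampling here are invented for illustration. The far-field pressure is the retarded-time sum of patch accelerations over the "piston", p(t) = (ρ₀/2π) Σᵢ a(t − Rᵢ/c) dSᵢ/Rᵢ.

```python
import numpy as np

def rayleigh_pressure(t_obs, accel, t_grid, patches, obs, rho0=1.2, c=340.0):
    # Sum retarded patch contributions: a(t - R/c) * dS / R per patch.
    p = 0.0
    for (x, y, dS) in patches:
        R = np.linalg.norm(obs - np.array([x, y, 0.0]))
        a_ret = np.interp(t_obs - R / c, t_grid, accel, left=0.0, right=0.0)
        p += rho0 / (2.0 * np.pi) * a_ret * dS / R
    return p

t_grid = np.linspace(0.0, 1.0, 2000)
accel = np.exp(-0.5 * ((t_grid - 0.1) / 0.01) ** 2)   # Gaussian acceleration pulse
radii = np.linspace(5.0, 95.0, 10)                    # ring radii on a 100 m "piston"
patches = [(r, 0.0, 2.0 * np.pi * r * 10.0) for r in radii]  # ring areas (exact on axis)
print(rayleigh_pressure(0.5, accel, t_grid, patches,
                        obs=np.array([0.0, 0.0, 1000.0])))
```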
Measuring x-ray spectra of flash radiographic sources [PowerPoint]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehring, Amanda Elizabeth; Espy, Michelle A.; Haines, Todd Joseph
2015-11-02
The x-ray spectra of flash radiographic sources are difficult to measure. The sources measured were Radiographic Integrated Test Stand-6 (370 rad at 1 m; 50 ns pulse) and Dual Axis Radiographic Hydrodynamic Test Facility (DARHT) (550 rad at 1 m; 50 ns pulse). Features of the Compton spectrometer are described, and spectra are shown. Additional slides present data on instrumental calibration.
Historically, water quality assessments in the United States primarily focused on water chemistry assays at or near discharge sources. As it has become clear that waters also can be highly impaired from dispersed (i.e., non-point source) chemicals and non-chemical impacts, direc...
A conceptual ground-water-quality monitoring network for San Fernando Valley, California
Setmire, J.G.
1985-01-01
A conceptual groundwater-quality monitoring network was developed for San Fernando Valley to provide the California State Water Resources Control Board with an integrated, basinwide control system to monitor the quality of groundwater. The geology, occurrence and movement of groundwater, land use, background water quality, and potential sources of pollution were described and then considered in designing the conceptual monitoring network. The network was designed to monitor major known and potential point and nonpoint sources of groundwater contamination over time. The network is composed of 291 sites where wells are needed to define the groundwater quality. The ideal network includes four specific-purpose networks to monitor (1) ambient water quality, (2) nonpoint sources of pollution, (3) point sources of pollution, and (4) line sources of pollution. (USGS)
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.
Hierarchical Dirichlet process model for gene expression clustering
2013-01-01
Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet processes (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as the gene express data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments. PMID:23587447
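The Chinese restaurant metaphor mentioned above reduces, at a single level, to the following assignment rule; this is a sketch of the building block only, not the full HDP sampler, which also shares components across groups.

```python
import numpy as np

def crp_assignments(n_items, alpha, seed=1):
    # Chinese restaurant process: item i joins an existing cluster k
    # with probability proportional to its size n_k, or opens a new
    # cluster with probability proportional to alpha.
    rng = np.random.default_rng(seed)
    assignments = [0]                          # first item opens a cluster
    for _ in range(1, n_items):
        counts = np.bincount(assignments)      # current cluster sizes
        probs = np.append(counts, alpha).astype(float)
        probs /= probs.sum()
        assignments.append(int(rng.choice(probs.size, p=probs)))
    return assignments

print(crp_assignments(20, alpha=1.0))
```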
Sound-turbulence interaction in transonic boundary layers
NASA Astrophysics Data System (ADS)
Lelostec, Ludovic; Scalo, Carlo; Lele, Sanjiva
2014-11-01
Acoustic wave scattering in a transonic boundary layer is investigated through a novel approach. Instead of simulating directly the interaction of an incoming oblique acoustic wave with a turbulent boundary layer, suitable Dirichlet conditions are imposed at the wall to reproduce only the reflected wave resulting from the interaction of the incident wave with the boundary layer. The method is first validated using the laminar boundary layer profiles in a parallel flow approximation. For this scattering problem an exact inviscid solution can be found in the frequency domain which requires numerical solution of an ODE. The Dirichlet conditions are imposed in a high-fidelity unstructured compressible flow solver for Large Eddy Simulation (LES), CharLESx. The acoustic field of the reflected wave is then solved and the interaction between the boundary layer and sound scattering can be studied.
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary condition under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct Green's function for elliptic mixed problem in such a domain.
Saint-Hilary, Gaelle; Cadour, Stephanie; Robert, Veronique; Gasparini, Mauro
2017-05-01
Quantitative methodologies have been proposed to support decision making in drug development and monitoring. In particular, multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) are useful tools to assess the benefit-risk ratio of medicines according to the performances of the treatments on several criteria, accounting for the preferences of the decision makers regarding the relative importance of these criteria. However, even in its probabilistic form, MCDA requires the exact elicitation of the weights of the criteria by the decision makers, which may be difficult to achieve in practice. SMAA allows for more flexibility and can be used with unknown or partially known preferences, but it is less popular due to its increased complexity and the high degree of uncertainty in its results. In this paper, we propose a simple model as a generalization of MCDA and SMAA, by applying a Dirichlet distribution to the weights of the criteria and by making its parameters vary. This single model can fit both MCDA and SMAA, and allows for a more extended exploration of the benefit-risk assessment of treatments. The precision of its results depends on the precision parameter of the Dirichlet distribution, which can be naturally interpreted as the strength of confidence of the decision makers in their elicitation of preferences.
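A minimal sketch of the central idea (the treatment scores and criterion weights below are invented numbers): weights are drawn from Dirichlet(κ·w₀), where w₀ holds the elicited mean weights and the precision κ plays the role of the decision makers' strength of confidence. Large κ recovers fixed-weight MCDA behaviour; small κ approaches SMAA-like vagueness.

```python
import numpy as np

rng = np.random.default_rng(42)
w0 = np.array([0.5, 0.3, 0.2])     # elicited mean weights (illustrative)
scores = {"drug A": np.array([0.8, 0.6, 0.7]),    # hypothetical performances
          "drug B": np.array([0.7, 0.9, 0.6])}    # on the three criteria

for kappa in (1000.0, 3.0):         # high vs low confidence in the elicitation
    w = rng.dirichlet(kappa * w0, size=10000)     # weight samples on the simplex
    util_a = w @ scores["drug A"]
    util_b = w @ scores["drug B"]
    print(f"kappa={kappa}: P(A beats B) =", np.mean(util_a > util_b))
```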
Topic Modeling of NASA Space System Problem Reports: Research in Practice
NASA Technical Reports Server (NTRS)
Layman, Lucas; Nikora, Allen P.; Meek, Joshua; Menzies, Tim
2016-01-01
Problem reports at NASA are similar to bug reports: they capture defects found during test, post-launch operational anomalies, and document the investigation and corrective action of the issue. These artifacts are a rich source of lessons learned for NASA, but are expensive to analyze since problem reports are comprised primarily of natural language text. We apply topic modeling to a corpus of NASA problem reports to extract trends in testing and operational failures. We collected 16,669 problem reports from six NASA space flight missions and applied Latent Dirichlet Allocation topic modeling to the document corpus. We analyze the most popular topics within and across missions, and how popular topics changed over the lifetime of a mission. We find that hardware material and flight software issues are common during the integration and testing phase, while ground station software and equipment issues are more common during the operations phase. We identify a number of challenges in topic modeling for trend analysis: 1) the process of selecting the topic modeling parameters lacks definitive guidance, 2) defining semantically-meaningful topic labels requires nontrivial effort and domain expertise, 3) topic models derived from the combined corpus of the six missions were biased toward the larger missions, and 4) topics must be semantically distinct as well as cohesive to be useful. Nonetheless, topic modeling can identify problem themes within missions and across mission lifetimes, providing useful feedback to engineers and project managers.
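A hedged sketch of the general pipeline; the real corpus and parameter choices are internal to the study, and the toy reports below are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = ["flight software anomaly during integration test",
           "ground station equipment failure in operations",
           "hardware material crack found during vibration test"]

vec = CountVectorizer(stop_words="english")      # bag-of-words counts
X = vec.fit_transform(reports)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):      # top words per topic
    top = [words[i] for i in topic.argsort()[-4:]]
    print(f"topic {k}:", top)
```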
40 CFR 411.10 - Applicability; description of the nonleaching subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Nonleaching... used in the manufacturing of cement and in which kiln dust is not contacted with water as an integral...
40 CFR 411.10 - Applicability; description of the nonleaching subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Nonleaching... used in the manufacturing of cement and in which kiln dust is not contacted with water as an integral...
40 CFR 411.10 - Applicability; description of the nonleaching subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Nonleaching... used in the manufacturing of cement and in which kiln dust is not contacted with water as an integral...
40 CFR 411.10 - Applicability; description of the nonleaching subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Nonleaching... used in the manufacturing of cement and in which kiln dust is not contacted with water as an integral...
40 CFR 411.10 - Applicability; description of the nonleaching subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Nonleaching... used in the manufacturing of cement and in which kiln dust is not contacted with water as an integral...
Liu, Feng; Zhang, Shunan; Luo, Pei; Zhuang, Xuliang; Chen, Xiang; Wu, Jinshui
2018-01-01
In this review, the applications of Myriophyllum-based integrative biotechnology to remove common non-point source (NPS) pollutants, such as nitrogen, phosphorus, heavy metals, and organic pollutants (e.g., pesticides and antibiotics) are summarized. The removal of these pollutants via various mechanisms, including uptake by plant and microbial communities in macrophyte-based treatment systems are discussed. This review highlights the potential use of Myriophyllum biomass to produce animal feed, fertilizer, and other valuable by-products, which can yield cost-effective returns and attract more attention to the regulation and recycling of NPS pollutants. In addition, it demonstrates that utilization of Myriophyllum species is a promising and reliable strategy for wastewater treatment. The future development of sustainable Myriophyllum-based treatment systems is discussed from various perspectives.
Multiple Positive Solutions in the Second Order Autonomous Nonlinear Boundary Value Problems
NASA Astrophysics Data System (ADS)
Atslega, Svetlana; Sadyrbaev, Felix
2009-09-01
We construct the second order autonomous equations with arbitrarily large number of positive solutions satisfying homogeneous Dirichlet boundary conditions. Phase plane approach and bifurcation of solutions are the main tools.
Variational Problems with Long-Range Interaction
NASA Astrophysics Data System (ADS)
Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro
2018-06-01
We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional and the Rayleigh functional D(u) = \sum_{i=1}^k \int_{Ω} |∇u_i|^2.
Application of a water quality model in the White Cart water catchment, Glasgow, UK.
Liu, S; Tucker, P; Mansell, M; Hursthouse, A
2003-03-01
Water quality models of urban systems have previously focused on point source (sewerage system) inputs. Little attention has been given to diffuse inputs, and research into diffuse pollution has been largely confined to agricultural sources. This paper reports on new research that is aimed at integrating diffuse inputs into an urban water quality model. An integrated model is introduced that is made up of four modules: hydrology, contaminant point sources, nutrient cycling and leaching. The hydrology module, T&T, consists of TOPMODEL (a TOPography-based hydrological MODEL), which simulates runoff from pervious areas, and a two-tank model, which simulates runoff from impervious urban areas. Linked into the two-tank model, the contaminant point source module simulates the overflow from the sewerage system in heavy rain. The widely known SOILN (SOIL Nitrate model) is the basis of the nitrogen cycle module. Finally, the leaching module consists of two functions: the production function and the transfer function. The production function is based on SLIM (Solute Leaching Intermediate Model) while the transfer function is based on the 'flushing hypothesis', which postulates a relationship between contaminant concentrations in the receiving water course and the extent to which the catchment is saturated. This paper outlines the modelling methodology and the model structures that have been developed. An application of this model in the White Cart catchment (Glasgow) is also included.
Triangulation in aetiological epidemiology
Lawlor, Debbie A; Tilling, Kate; Davey Smith, George
2016-01-01
Triangulation is the practice of obtaining more reliable answers to research questions through integrating results from several different approaches, where each approach has different key sources of potential bias that are unrelated to each other. With respect to causal questions in aetiological epidemiology, if the results of different approaches all point to the same conclusion, this strengthens confidence in the finding. This is particularly the case when the key sources of bias of some of the approaches would predict that findings would point in opposite directions if they were due to such biases. Where there are inconsistencies, understanding the key sources of bias of each approach can help to identify what further research is required to address the causal question. The aim of this paper is to illustrate how triangulation might be used to improve causal inference in aetiological epidemiology. We propose a minimum set of criteria for use in triangulation in aetiological epidemiology, summarize the key sources of bias of several approaches and describe how these might be integrated within a triangulation framework. We emphasize the importance of being explicit about the expected direction of bias within each approach, whenever this is possible, and seeking to identify approaches that would be expected to bias the true causal effect in different directions. We also note the importance, when comparing results, of taking account of differences in the duration and timing of exposures. We provide three examples to illustrate these points. PMID:28108528
Superradiance in the BTZ black hole with Robin boundary conditions
NASA Astrophysics Data System (ADS)
Dappiaggi, Claudio; Ferreira, Hugo R. C.; Herdeiro, Carlos A. R.
2018-03-01
We show the existence of superradiant modes of massive scalar fields propagating in BTZ black holes when certain Robin boundary conditions, which never include the commonly considered Dirichlet boundary conditions, are imposed at spatial infinity. These superradiant modes are defined as those solutions whose energy flux across the horizon is towards the exterior region. Differently from rotating, asymptotically flat black holes, we find that not all modes which grow exponentially in time are superradiant; for some of these, the growth is sourced by a bulk instability of AdS3, triggered by the scalar field with Robin boundary conditions, rather than by energy extraction from the BTZ black hole. Thus, this setup provides an example wherein bosonic modes with low frequency are pumping energy into, rather than extracting energy from, a rotating black hole.
Greenwood, Daniel; Davids, Keith; Renshaw, Ian
2014-01-01
Coordination of dynamic interceptive movements is predicated on cyclical relations between an individual's actions and information sources from the performance environment. To identify dynamic informational constraints, which are interwoven with individual and task constraints, coaches' experiential knowledge provides a complementary source to support empirical understanding of performance in sport. In this study, 15 expert coaches from 3 sports (track and field, gymnastics and cricket) participated in a semi-structured interview process to identify potential informational constraints which they perceived to regulate action during run-up performance. Expert coaches' experiential knowledge revealed multiple information sources which may constrain performance adaptations in such locomotor pointing tasks. In addition to the locomotor pointing target, coaches' knowledge highlighted two other key informational constraints: vertical reference points located near the locomotor pointing target and a check mark located prior to the locomotor pointing target. This study highlights opportunities for broadening the understanding of perception and action coupling processes, and the identified information sources warrant further empirical investigation as potential constraints on athletic performance. Integration of experiential knowledge of expert coaches with theoretically driven empirical knowledge represents a promising avenue to drive future applied science research and pedagogical practice.
Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi
2017-01-01
Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys, and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data, and are usually not representative of the geographic range of a species. In contrast, PB data is plentiful, covers a larger area, but is less reliable due to the lack of information on species absences, and is usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types. We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an Integrated SO-PB Model which uses repeated survey occupancy data and also incorporates detection probability. The Integrated Model's performance was evaluated using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions, even when SO data is sparse and collected in a limited area. The Integrated Model was also found effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior. PB models are known to produce biased estimates of species occupancy or abundance. The small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. Unlike conventional SDMs which have restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial scale while still maintaining the underlying relationship between abundance and area.
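A minimal sketch of the joint-likelihood idea described above, assuming a log-linear intensity shared between an inhomogeneous Poisson process term for the PB data and a detection-adjusted occupancy term for the SO data; all data shapes, covariates and the constant detection probability are illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
X_pres = rng.normal(size=(60, 2))    # covariates at PB presence points
X_quad = rng.normal(size=(500, 2))   # background quadrature points
w_quad = np.full(500, 100.0 / 500)   # quadrature weights (region area / n)
X_site = rng.normal(size=(30, 2))    # covariates at SO survey sites
area_site = 1.0                      # area represented by each site
y_det = rng.integers(0, 2, size=(30, 4))  # detection histories, 4 visits

def nll(params):
    beta, alpha = params[:2], params[2]       # intensity and detection pars
    # IPP log-likelihood (Berman-Turner style quadrature approximation)
    log_lam_pres = (X_pres @ beta).sum()      # sum of log-intensities
    ll_pb = log_lam_pres - (w_quad * np.exp(X_quad @ beta)).sum()
    # SO likelihood: occupancy probability from the same intensity surface
    psi = 1.0 - np.exp(-np.exp(X_site @ beta) * area_site)
    p = expit(alpha)                          # constant detection probability
    det, visits = y_det.sum(1), y_det.shape[1]
    lik_occ = psi * p**det * (1-p)**(visits-det) + (det == 0) * (1 - psi)
    return -(ll_pb + np.log(lik_occ).sum())

fit = minimize(nll, x0=np.zeros(3), method="Nelder-Mead")
print("shared intensity coefficients:", fit.x[:2])
```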
Bayesian correlated clustering to integrate multiple datasets
Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.
2012-01-01
Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating into the body surface an auxiliary interior surface that satisfies a certain boundary condition. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
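The simple source formulation lends itself to a compact sketch. Below is a minimal, illustrative monopole-collocation example for a hard circular cylinder (a method-of-fundamental-solutions flavour of the idea, not the paper's exact modified equations); the interior source radius, wavenumber and discretization are assumptions.

```python
import numpy as np
from scipy.special import hankel1

k, a = 2.0, 1.0                              # wavenumber, cylinder radius
n = 64
th = 2*np.pi*np.arange(n)/n
xb = a*np.c_[np.cos(th), np.sin(th)]         # collocation points on the body
xs = 0.5*a*np.c_[np.cos(th), np.sin(th)]     # interior monopole positions
nrm = xb/a                                   # outward normals (circle)

d = xb[:, None, :] - xs[None, :, :]
r = np.linalg.norm(d, axis=2)
# normal derivative of G = (i/4) H0(kr):  dG/dn = -(i k/4) H1(kr) (d.n)/r
A = -0.25j*k*hankel1(1, k*r)*np.einsum('ijk,ik->ij', d, nrm)/r
# hard cylinder: total normal velocity vanishes  ->  A q = -d(u_inc)/dn
uinc_n = 1j*k*nrm[:, 0]*np.exp(1j*k*xb[:, 0])    # plane wave exp(i k x)
q = np.linalg.solve(A, -uinc_n)

# scattered field at an exterior observation point
xo = np.array([3.0, 0.0])
ro = np.linalg.norm(xo - xs, axis=1)
print("u_scat at (3,0):", (0.25j*hankel1(0, k*ro)*q).sum())
```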
Safety Benefits of Access Spacing
DOT National Transportation Integrated Search
1997-01-01
The spacing of driveways and streets is an important element in roadway planning, design, and operation. Access points are the main source of accidents and congestion. Their location and spacing affects the safety and functional integrity of streets ...
Code of Federal Regulations, 2010 CFR
2010-10-01
... between two or more ports and/or points in the United States. (l) Eligible Vessel, means a vessel that... sources with a minimum speed of 12 knots. (2) Dry Cargo—All dry cargo ships, including integrated tug...
40 CFR 411.20 - Applicability; description of the leaching subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Leaching Subcategory... manufacturing of cement and in which kiln dust is contacted with water as an integral part of the process or...
2014-09-01
These renewable energy sources can include solar, wind, geothermal, biomass, hydroelectric, and nuclear. Of these sources, photovoltaic (PV) arrays ... renewable energy source [1].
The Hurwitz Enumeration Problem of Branched Covers and Hodge Integrals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Yun S.
We use algebraic methods to compute the simple Hurwitz numbers for arbitrary source and target Riemann surfaces. For an elliptic curve target, we reproduce the results previously obtained by string theorists. Motivated by the Gromov-Witten potentials, we find a general generating function for the simple Hurwitz numbers in terms of the representation theory of the symmetric group S{sub n}. We also find a generating function for Hodge integrals on the moduli space {bar M}{sub g,2} of Riemann surfaces with two marked points, similar to that found by Faber and Pandharipande for the case of one marked point.
Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection
NASA Astrophysics Data System (ADS)
Safi’ie, M. A.; Utami, E.; Fatta, H. A.
2018-03-01
Universitas Sebelas Maret has a teaching staff of more than 1500 people, and one of its tasks is to carry out research. On the other hand, funding support for research and community service is limited, so proposals need to be evaluated in order to determine which research and community service (P2M) submissions are selected. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text mining technology. This technology is applied to gain knowledge from the documents by automating the information extraction. In this article we apply Latent Dirichlet Allocation (LDA) to the documents as a model in the feature extraction process, to obtain terms that represent the documents. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
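A minimal sketch of the described pipeline using scikit-learn as a stand-in for the authors' implementation; the two toy documents, the label meanings, the topic count and k are all illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

docs = ["proposal on renewable energy for rural areas",
        "text mining of student satisfaction surveys"]   # proposal texts
labels = [1, 0]                                          # 1 = selected

vec = CountVectorizer()
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=10, random_state=0)
features = lda.fit_transform(counts)         # per-document topic proportions

knn = KNeighborsClassifier(n_neighbors=1).fit(features, labels)  # k is tunable
new = lda.transform(vec.transform(["a new proposal about energy"]))
print(knn.predict(new))                      # predicted selection outcome
```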
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Yu, Kezi; Quirk, J Gerald; Djurić, Petar M
2017-01-01
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
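A minimal sketch of the two-model classification scheme, using scikit-learn's truncated Dirichlet-process Gaussian mixture as a simplified stand-in for the HDP models; the simulated feature vectors, the component cap and the equal class priors are assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(200, 4))   # features, healthy fetuses
adverse = rng.normal(1.5, 1.2, size=(200, 4))   # features, adverse outcomes

dp = dict(weight_concentration_prior_type="dirichlet_process",
          n_components=10, random_state=0)
m_h = BayesianGaussianMixture(**dp).fit(healthy)
m_a = BayesianGaussianMixture(**dp).fit(adverse)

x_new = rng.normal(0.2, 1.0, size=(5, 4))       # new recordings
log_ratio = m_h.score_samples(x_new) - m_a.score_samples(x_new)
p_healthy = 1.0 / (1.0 + np.exp(-log_ratio))    # assuming equal class priors
print(p_healthy)
```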
NASA Astrophysics Data System (ADS)
Cardone, G.; Durante, T.; Nazarov, S. A.
2017-07-01
We consider the spectral Dirichlet problem for the Laplace operator in the plane Ω∘ with a double-periodic perforation, as well as in the domain Ω• with a semi-infinite foreign inclusion, so that the Floquet-Bloch technique and the Gelfand transform do not apply directly. We describe waves which are localized near the inclusion and propagate along it. We give a formulation of the problem with radiation conditions that provides a Fredholm operator of index zero. The main conclusion concerns the spectra σ∘ and σ• of the problems in Ω∘ and Ω•; namely, we present a concrete geometry which supports the relation σ∘ ⫋ σ•, due to a new non-empty spectral band caused by the semi-infinite inclusion, called an open waveguide in the double-periodic medium.
Dirichlet Component Regression and its Applications to Psychiatric Data.
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-08-15
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each, in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
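A minimal sketch of Dirichlet regression for component proportions, assuming a softmax link for the component means and a common precision; the simulated proportions and the single covariate are placeholders, not the PANSS data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)
n, K = 100, 3                       # subjects, score components
x = rng.normal(size=n)              # one baseline covariate
Y = rng.dirichlet(alpha=[2, 3, 4], size=n)   # observed component proportions

def nll(par):
    B = par[:2*K].reshape(K, 2)     # per-component intercept and slope
    phi = np.exp(par[-1])           # common precision > 0
    eta = B[:, 0] + np.outer(x, B[:, 1])         # n x K linear predictors
    mu = np.exp(eta) / np.exp(eta).sum(1, keepdims=True)   # softmax means
    a = phi * mu                    # Dirichlet parameters
    # Dirichlet log-density: logG(sum a) - sum logG(a_k) + sum (a_k-1) log y_k
    ll = gammaln(a.sum(1)) - gammaln(a).sum(1) + ((a - 1)*np.log(Y)).sum(1)
    return -ll.sum()

fit = minimize(nll, np.zeros(2*K + 1), method="BFGS")
print("slope per component:", fit.x[:2*K].reshape(K, 2)[:, 1])
```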
Unstable Mode Solutions to the Klein-Gordon Equation in Kerr-anti-de Sitter Spacetimes
NASA Astrophysics Data System (ADS)
Dold, Dominic
2017-03-01
For any cosmological constant Λ = -3/ℓ² < 0 and any α < 9/4, we find a Kerr-AdS spacetime (M, g_KAdS), in which the Klein-Gordon equation □_{g_KAdS}ψ + (α/ℓ²)ψ = 0 has an exponentially growing mode solution satisfying a Dirichlet boundary condition at infinity. The spacetime violates the Hawking-Reall bound r₊² > |a|ℓ. We obtain an analogous result for Neumann boundary conditions if 5/4 < α < 9/4. Moreover, in the Dirichlet case, one can prove that, for any Kerr-AdS spacetime violating the Hawking-Reall bound, there exists an open family of masses α such that the corresponding Klein-Gordon equation permits exponentially growing mode solutions. Our result adopts methods of Shlapentokh-Rothman developed in (Commun. Math. Phys. 329:859-891, 2014) and provides the first rigorous construction of a superradiant instability for negative cosmological constant.
NASA Astrophysics Data System (ADS)
Gross, Markus
2018-03-01
We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.
Stereochemistry of silicon in oxygen-containing compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serezhkin, V. N., E-mail: Serezhkin@samsu.ru; Urusov, V. S.
2017-01-15
Specific stereochemical features of silicon in oxygen-containing compounds, including hybrid silicates with all oxygen atoms of SiO{sub n} groups (n = 4, 5, or 6) entering into the composition of organic anions or molecules, are described by characteristics of Voronoi-Dirichlet polyhedra. It is found that in rutile-like stishovite and post-stishovite phases with structures similar to those of CaCl{sub 2}, α-PbO{sub 2}, or pyrite FeS{sub 2}, the volume of the Voronoi-Dirichlet polyhedra of silicon and oxygen atoms decreases linearly with pressure increasing to 268 GPa. Based on these results, the possibility of formation of new post-stishovite phases is shown, namely, the fluorite-like structure (transition predicted at ~400 GPa) and a body-centered cubic lattice with statistical arrangement of silicon and oxygen atoms (~900 GPa).
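A minimal sketch of computing Voronoi-Dirichlet polyhedron volumes for a point set with scipy, crudely imitating crystal periodicity by surrounding the cell with its 26 translated images; the positions are random placeholders rather than crystallographic data.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(3)
cell = rng.random((8, 3))                     # fractional atomic positions
images = [cell + np.array(s) for s in np.ndindex(3, 3, 3)]
pts = np.vstack(images) - 1.0                 # 3x3x3 block of periodic images
vor = Voronoi(pts)

central = range(13*8, 14*8)                   # atoms of the central cell
for i in central:
    region = vor.regions[vor.point_region[i]]
    if -1 not in region:                      # bounded polyhedron only
        vol = ConvexHull(vor.vertices[region]).volume
        print(f"atom {i % 8}: V(VDP) = {vol:.4f}")
```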
NASA Astrophysics Data System (ADS)
Farrokhabadi, A.; Mokhtari, J.; Koochi, A.; Abadyan, M.
2015-06-01
In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered, and an asymptotic solution, based on a path-integral approach, is applied to account for the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variational iteration method, the generalized differential quadrature method and a lumped parameter model. Various perspectives of the problem, including the comparison with the van der Waals force regime, the variation of instability parameters and the effects of geometry, are addressed in the present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of nanotweezers in the presence of the Casimir force.
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for multi-term time-space fractional differential equations with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and the left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problems. Furthermore, the error is estimated and the proposed method is shown to have reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm if we suitably choose the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
Wu, Hao; Zhang, Yan; Yu, Qi; Ma, Weichun
2018-04-01
In this study, the authors endeavored to develop an effective framework for improving local urban air quality on meso-micro scales in cities in China that are experiencing rapid urbanization. Within this framework, the integrated Weather Research and Forecasting (WRF)/CALPUFF modeling system was applied to simulate the concentration distributions of typical pollutants (particulate matter with an aerodynamic diameter <10 μm [PM10], sulfur dioxide [SO2], and nitrogen oxides [NOx]) in the urban area of Benxi. Statistical analyses were performed to verify the credibility of this simulation, including the meteorological fields and concentration fields. The sources were then categorized using two different classification methods (the district-based and type-based methods), and the contributions to the pollutant concentrations from each source category were computed to provide a basis for appropriate control measures. The statistical indexes showed that CALMET had sufficient ability to predict the meteorological conditions, such as the wind fields and temperatures, which provided meteorological data for the subsequent CALPUFF run. The simulated concentrations from CALPUFF showed considerable agreement with the observed values but were generally underestimated. The spatial-temporal concentration pattern revealed that the maximum concentrations tended to appear in the urban centers and during the winter. In terms of their contributions to pollutant concentrations, the districts of Xihu, Pingshan, and Mingshan all affected the urban air quality to different degrees. According to the type-based classification, which categorized the pollution sources as belonging to the Bengang Group, large point sources, small point sources, and area sources, the source apportionment showed that the Bengang Group, the large point sources, and the area sources had considerable impacts on urban air quality. Finally, combined with the industrial characteristics, detailed control measures were proposed with which local policy makers could improve the urban air quality in Benxi. In summary, the results of this study showed that this framework has credibility for effectively improving urban air quality, based on the source apportionment of atmospheric pollutants. The authors endeavored to build an effective framework based on the integrated WRF/CALPUFF system to improve air quality on meso-micro scales in many cities in China. Within this framework, the integrated modeling tool is used to accurately study the characteristics of meteorological fields, concentration fields, and source apportionments of pollutants in the target area. The impacts of the classified sources on air quality, together with the industrial characteristics, can inform more effective control measures for improving air quality. Through the case study, the technical framework developed in this study, particularly the source apportionment, could provide important data and technical support for policy makers to assess air pollution on the scale of a city in China or even the world.
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
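A minimal sketch of the hyperprior's clustering behaviour, assuming a truncated stick-breaking construction of the Dirichlet process and a gamma base distribution for the exponential rate parameters; the concentration, truncation level and node count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n_nodes = 1.0, 12                # DP concentration, calibrated nodes

v = rng.beta(1.0, alpha, size=50)       # stick-breaking proportions
w = v * np.cumprod(np.r_[1.0, 1.0 - v[:-1]])   # mixture weights w_k
atoms = rng.gamma(2.0, 0.05, size=50)   # candidate rates drawn from base G0

z = rng.choice(50, size=n_nodes, p=w/w.sum())  # cluster label per node
rates = atoms[z]                        # shared exponential rate parameters
print("distinct rate categories used:", np.unique(z).size)
print("rate parameter per calibrated node:", np.round(rates, 3))
```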
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
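A minimal sketch of the proposed splitting, assuming a single Dirichlet node for clarity: the snapshot matrix is projected onto the complement of the boundary vector before the SVD, and the boundary vector is kept as an explicit basis column so the reduced model satisfies the condition exactly. Shapes, the energy cutoff and the Dirichlet-node index are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
S = rng.random((200, 40))            # head snapshots: 200 nodes x 40 times
e_bc = np.zeros(200); e_bc[0] = 1.0  # unit vector of the Dirichlet node

# split: remove boundary content, then build the POD basis orthogonal to it
S_perp = S - np.outer(e_bc, e_bc @ S)
U, s, _ = np.linalg.svd(S_perp, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2)/np.sum(s**2), 0.9999) + 1
basis = np.column_stack([e_bc, U[:, :k]])   # BC vector + POD modes

# a projected head field now reproduces the Dirichlet value exactly
coeff = basis.T @ S[:, 0]
print("BC error of projected snapshot:", abs((basis @ coeff)[0] - S[0, 0]))
```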
Integrated numerical modeling of a laser gun injector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H.; Benson, S.; Bisognano, J.
1993-06-01
CEBAF is planning to incorporate a laser gun injector into the linac front end as a high-charge cw source for a high-power free electron laser and nuclear physics. This injector consists of a DC laser gun, a buncher, a cryounit and a chicane. The performance of the injector is predicted based on integrated numerical modeling using POISSON, SUPERFISH and PARMELA. The point-by-point method incorporated into PARMELA by McDonald is chosen for space charge treatment. The concept of "conditioning for final bunching" is employed to vary several crucial parameters of the system for achieving the highest peak current while maintaining low emittance and low energy spread. Extensive parameter variation studies show that the design will perform beyond the specifications for FEL operations aimed at industrial applications and fundamental scientific research. The calculation also shows that the injector will perform as an extremely bright cw electron source.
Industrial pollution and the management of river water quality: a model of Kelani River, Sri Lanka.
Gunawardena, Asha; Wijeratne, E M S; White, Ben; Hailu, Atakelty; Pandit, Ram
2017-08-19
Water quality of the Kelani River has become a critical issue in Sri Lanka due to the high cost of maintaining drinking water standards and the market and non-market costs of deteriorating river ecosystem services. By integrating a catchment model with a river model of water quality, we developed a method to estimate the effect of pollution sources on ambient water quality. Using integrated model simulations, we estimate (1) the relative contribution from point (industrial and domestic) and non-point sources (river catchment) to river water quality and (2) pollutant transfer coefficients for zones along the lower section of the river. Transfer coefficients provide the basis for policy analyses in relation to the location of new industries and the setting of priorities for industrial pollution control. They also offer valuable information to design socially optimal economic policy to manage industrialized river catchments.
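A minimal sketch of how zone-wise pollutant transfer coefficients, once extracted from an integrated catchment-river model, support screening analyses; all coefficient and load values are illustrative placeholders.

```python
import numpy as np

# a_j: change in checkpoint concentration (mg/L) per unit load (kg/day)
# discharged in zone j, ordered upstream -> downstream (placeholder values)
transfer = np.array([0.012, 0.008, 0.003, 0.001])
emissions = np.array([40.0, 120.0, 60.0, 20.0])     # current loads, kg/day
baseline = transfer @ emissions                     # ambient contribution

# screening a proposed new industry (10 kg/day) in each candidate zone
for j, a in enumerate(transfer):
    print(f"zone {j}: concentration rises by {a*10.0:.3f} mg/L "
          f"(baseline {baseline:.2f} mg/L)")
```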
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background: Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: firstly the right kernel is chosen for each data set; secondly the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
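A minimal sketch of the two-step kernel integration described above, using scikit-learn; the simulated data blocks, kernel choices, bandwidth and the perturbation size used to trace a variable's direction are all assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(6)
expr = rng.normal(size=(50, 100))      # source 1: e.g. expression profiles
clin = rng.normal(size=(50, 10))       # source 2: e.g. clinical covariates

# step 1: one kernel per source; step 2: combine (here, an unweighted sum)
K = rbf_kernel(expr, gamma=0.01) + linear_kernel(clin)
kpca = KernelPCA(n_components=2, kernel="precomputed").fit(K)
emb = kpca.transform(K)                # samples in the integrated space

# local direction of growth of input variable j: embed slightly perturbed
# copies of the samples and draw arrows from emb to the new embedding
j = 0
clin_p = clin.copy(); clin_p[:, j] += 0.5
K_p = rbf_kernel(expr, expr, gamma=0.01) + linear_kernel(clin_p, clin)
arrows = kpca.transform(K_p) - emb
```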
Global Solar Magnetology and Reference Points of the Solar Cycle
NASA Astrophysics Data System (ADS)
Obridko, V. N.; Shelting, B. D.
2003-11-01
The solar cycle can be described as a complex interaction of large-scale/global and local magnetic fields. In general, this approach agrees with the traditional dynamo scheme, although there are numerous discrepancies in the details. Integrated magnetic indices introduced earlier are studied over long time intervals, and the epochs of the main reference points of the solar cycles are refined. A hypothesis proposed earlier concerning global magnetometry and the natural scale of the cycles is verified. Variations of the heliospheric magnetic field are determined by both the integrated photospheric i(Br)ph and source surface i(Br)ss indices; however, their roles are different. Local fields contribute significantly to the photospheric index, determining the total increase in the heliospheric magnetic field. The i(Br)ss index (especially the partial index ZO, which is related to the quasi-dipolar field) determines narrow extrema. These integrated indices supply us with a "passport" for reference points, making it possible to identify them precisely. A prominent dip in the integrated indices is clearly visible at the cycle maximum, resulting in the typical double-peak form (the Gnevyshev dip), with the succeeding maximum always being higher than the preceding maximum. At the source surface, this secondary maximum significantly exceeds the primary maximum. Using these index data, we can estimate the progression expected for the 23rd cycle and predict the dates of the ends of the 23rd and 24th cycles (the middle of 2007 and December 2018, respectively).
THE RANGE OF VALUES OF λ2/λ1 AND λ3/λ1 FOR THE FIXED MEMBRANE PROBLEM
NASA Astrophysics Data System (ADS)
Ashbaugh, Mark S.; Benguria, Rafael D.
We investigate the region of the plane in which the point (λ2/λ1, λ3/λ1) can lie, where λ1, λ2, and λ3 are the first three eigenvalues of the Dirichlet Laplacian on an arbitrary bounded domain Ω ⊂ ℝ². In particular, by making use of a technique introduced by de Vries we obtain the best bounds to date for the quantities λ3/λ1 and (λ2 + λ3)/λ1. These bounds are λ3/λ1 ≤ 3.90514+ and (λ2 + λ3)/λ1 ≤ 5.52485+ and give small improvements over previous bounds of Marcellini. Where Marcellini used a bound due to Brands in his argument, we use a better version of this bound which we obtain by incorporating de Vries' idea. The other bounds that yield the greatest information about the region where points (λ2/λ1, λ3/λ1) can (possibly) lie are those due to Marcellini, Hile and Protter, and us (of which there are several, with two of them being new with this paper).
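As a quick numerical companion (not part of the paper's argument), the disk realizes eigenvalue ratios well inside these bounds; its Dirichlet eigenvalues are squares of Bessel-function zeros.

```python
from scipy.special import jn_zeros

l1 = jn_zeros(0, 1)[0]**2          # first eigenvalue: j_{0,1}^2
l2 = l3 = jn_zeros(1, 1)[0]**2     # doubly degenerate second eigenvalue
print("disk ratios:", l2/l1, l3/l1)     # ~ 2.5387, below 3.90514
print("(l2+l3)/l1 =", (l2 + l3)/l1)     # ~ 5.0775, below 5.52485
```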
NASA Technical Reports Server (NTRS)
Levine, H.
1982-01-01
The calculation of power output from a (finite) linear array of equidistant point sources is investigated with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a twin-parameter definite integral that involves the Fejér kernel functions, where N denotes a (positive) integer; these results also permit a quantitative accounting of energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength.
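A minimal sketch of the normalized power of such an array, assuming the classical result that the mutual power term between two free-space point monopoles separated by r varies as sin(kr)/(kr); the element count N, spacing parameter kd and phase shift phi are inputs, and the values printed are purely illustrative.

```python
import numpy as np

def array_power_ratio(N, kd, phi):
    # total power of N in-line monopoles, normalized by N single sources
    m = np.arange(1, N)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(m kd) needs the /pi rescaling
    mutual = (N - m) * np.cos(m*phi) * np.sinc(m*kd/np.pi)
    return (N + 2*mutual.sum()) / N

for kd in (0.1, 1.0, 10.0):          # small vs large source separation
    print(kd, array_power_ratio(8, kd, phi=0.0))
# kd -> 0 with phi = 0 approaches coherent addition (ratio -> N), while
# large kd approaches incoherent addition (ratio -> 1).
```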
Triangulation in aetiological epidemiology.
Lawlor, Debbie A; Tilling, Kate; Davey Smith, George
2016-12-01
Triangulation is the practice of obtaining more reliable answers to research questions through integrating results from several different approaches, where each approach has different key sources of potential bias that are unrelated to each other. With respect to causal questions in aetiological epidemiology, if the results of different approaches all point to the same conclusion, this strengthens confidence in the finding. This is particularly the case when the key sources of bias of some of the approaches would predict that findings would point in opposite directions if they were due to such biases. Where there are inconsistencies, understanding the key sources of bias of each approach can help to identify what further research is required to address the causal question. The aim of this paper is to illustrate how triangulation might be used to improve causal inference in aetiological epidemiology. We propose a minimum set of criteria for use in triangulation in aetiological epidemiology, summarize the key sources of bias of several approaches and describe how these might be integrated within a triangulation framework. We emphasize the importance of being explicit about the expected direction of bias within each approach, whenever this is possible, and seeking to identify approaches that would be expected to bias the true causal effect in different directions. We also note the importance, when comparing results, of taking account of differences in the duration and timing of exposures. We provide three examples to illustrate these points. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
Microgravity Experiments Safety and Integration Requirements Document Tree
NASA Technical Reports Server (NTRS)
Hogan, Jean M.
1995-01-01
This report is a document tree of the safety and integration documents required to develop a space experiment. Pertinent document information for each of the top level (tier one) safety and integration documents, and their applicable and reference (tier two) documents has been identified. This information includes: document title, revision level, configuration management, electronic availability, listed applicable and reference documents, source for obtaining the document, and document owner. One of the main conclusions of this report is that no single document tree exists for all safety and integration documents, regardless of the Shuttle carrier. This document also identifies the need for a single point of contact for customers wishing to access documents. The data in this report serves as a valuable information source for the NASA Lewis Research Center Project Documentation Center, as well as for all developers of space experiments.
Robust numerical electromagnetic eigenfunction expansion algorithms
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh
This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, allowing one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique leads to spurious wave scattering, however, whose induced computation accuracy degradation requires analysis. The mathematical exposition of this novel tilted-layer modeling formulation, together with an exhaustive simulation-based study and analysis of its limitations, is Chapter 7's main contribution.
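A minimal sketch of the Gauss-Laguerre tail evaluation named in contribution (3), assuming the contour deformation has already produced an integrand with a factored exponential decay of (assumed) rate a; the residual factor g is a placeholder, not a field expression from the thesis.

```python
import numpy as np

def tail_integral(g, a, n=32):
    # integral_0^inf g(t) exp(-a t) dt via the substitution x = a t,
    # with the weight exp(-x) absorbed by Gauss-Laguerre quadrature
    x, w = np.polynomial.laguerre.laggauss(n)
    return (w * g(x / a)).sum() / a

# example: smooth residual factor left after factoring out the decay
g = lambda t: np.cos(0.3 * t) / (1.0 + t)
print(tail_integral(g, a=1.0))
```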
Boundary conditions in Chebyshev and Legendre methods
NASA Technical Reports Server (NTRS)
Canuto, C.
1984-01-01
Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.
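A minimal sketch of one standard way to impose a non-Dirichlet condition in a Chebyshev collocation method: the boundary row of the second-derivative operator is replaced by the corresponding row of the first-derivative matrix. The test problem u'' = e^x with u'(1) = 0 and u(-1) = 0 is an illustrative assumption, not the paper's exact experiment.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and points (Trefethen's construction)
    x = np.cos(np.pi*np.arange(N+1)/N)
    c = np.r_[2.0, np.ones(N-1), 2.0]*(-1.0)**np.arange(N+1)
    X = np.tile(x, (N+1, 1)).T
    D = np.outer(c, 1/c)/(X - X.T + np.eye(N+1))
    return D - np.diag(D.sum(1)), x

N = 24
D, x = cheb(N)
A = D @ D                          # discrete d^2/dx^2
f = np.exp(x)
A[0, :] = D[0, :]; f[0] = 0.0      # row for x = +1: Neumann, u'(1) = 0
A[N, :] = 0.0; A[N, N] = 1.0; f[N] = 0.0   # row for x = -1: u(-1) = 0
u = np.linalg.solve(A, f)

exact = np.exp(x) - np.e*x - np.exp(-1) - np.e   # u = e^x - e x - (1/e + e)
print("max error:", np.abs(u - exact).max())     # spectral accuracy
```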
Acoustic power of a moving point source in a moving medium
NASA Technical Reports Server (NTRS)
Cole, J. E., III; Sarris, I. I.
1976-01-01
The acoustic power output of a moving point-mass source in an acoustic medium which is in uniform motion and infinite in extent is examined. The acoustic medium is considered to be a homogeneous fluid having both zero viscosity and zero thermal conductivity. Two expressions for the acoustic power output are obtained, each based on a different definition cited in the literature for the average energy-flux vector in an acoustic medium in uniform motion. The acoustic power output of the source is found by integrating the component of the acoustic intensity vector in the radial direction over the surface of an infinitely long cylinder which is within the medium and encloses the line of motion of the source. One of the power expressions is found to give unreasonable results even though the flow is uniform.
Recommender system based on scarce information mining.
Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang
2017-09-01
Guessing what a user may like is now a typical interface for video recommendation. Nowadays, highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomenon of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation, called the Dirichlet mixture probit model for information scarcity (DPIS), is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity, as well as diversity in coverage than traditional methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Synthesis and crystal structure analysis of uranyl triple acetates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klepov, Vladislav V., E-mail: vladislavklepov@gmail.com; Department of Chemistry, Samara National Research University, 443086 Samara; Serezhkina, Larisa B.
2016-12-15
Single crystals of triple acetates NaR[UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 3}·6H{sub 2}O (R=Mg, Co, Ni, Zn), well-known for their use as reagents for sodium determination, were grown from aqueous solutions and their structural and spectroscopic properties were studied. Crystal structures of the mentioned phases are based upon (Na[UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 3}){sup 2–} clusters and [R(H{sub 2}O){sub 6}]{sup 2+} aqua-complexes. The cooling of a single crystal of NaMg[UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 3}·6H{sub 2}O from 300 to 100 K leads to a phase transition from the trigonal to the monoclinic crystal system. Intermolecular interactions between the structural units and their mutual packing were studied and compared from the point of view of the stereoatomic model of crystal structures based on Voronoi-Dirichlet tessellation. Using this method, we compared the crystal structures of the triple acetates with Na[UO{sub 2}(CH{sub 3}COO){sub 3}] and [R(H{sub 2}O){sub 6}][UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 2} and proposed reasons for the stability of the triple acetates. Infrared and Raman spectra were collected and their bands were assigned. - Graphical abstract: Single crystals of uranium-based triple acetates, analytical reagents for sodium determination, were synthesized and structurally, spectroscopically and topologically characterized. The structures were compared with the structures of compounds from the preceding families [M(H{sub 2}O){sub 6}][UO{sub 2}(CH{sub 3}COO){sub 3}]{sub 2} (M = Mg, Co, Ni, Zn) and Na[UO{sub 2}(CH{sub 3}COO){sub 3}]. Analysis was performed with the method of molecular Voronoi-Dirichlet polyhedra to reveal a large contribution of the hydrogen bonds to intermolecular interactions, which can be a reason for the low solubility of the studied complexes.
Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J. Kenneth
2000-10-15
A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.
The excitation of long period seismic waves by a source spanning a structural discontinuity
NASA Astrophysics Data System (ADS)
Woodhouse, J. H.
Simple theoretical results are obtained for the excitation of seismic waves by an indigenous seismic source in the case that the source volume is intersected by a structural discontinuity. In the long-wavelength approximation the seismic radiation is identical to that of a point source placed on one side of the discontinuity or of a different point source placed on the other side. The moment tensors of these two equivalent sources are related by a specific linear transformation and may differ appreciably both in magnitude and geometry. Either of these sources could be obtained by linear inversion of seismic data, but the physical interpretation is more complicated than in the usual case. A source which involved no volume change would, for example, yield an isotropic component if, during inversion, it were assumed to lie on the wrong side of the discontinuity. The problem of determining the true moment tensor of the source is indeterminate unless further assumptions are made about the stress glut distribution; one way to resolve this indeterminacy is to assume proportionality between the integrated stress glut on each side of the discontinuity.
REMOTE SENSING APPLICATIONS FOR SUSTAINABLE WATERSHED MANAGEMENT AND FOOD SECURITY
The integration of IKONOS satellite data, airborne color infrared remote sensing, visualization, and decision support tools is discussed, within the contexts of management techniques for minimizing non-point source pollution in inland waterways, such as riparian buffer restoration...
On the Hilbert-Huang Transform Theoretical Developments
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Patrick, David; Hestnes, Phyllis
2005-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as linearity, stationarity, and satisfaction of the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectrum analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a near orthogonal adaptive basis, a basis that is derived from the data. The IMFs can be further analyzed for spectrum interpretation by the classical Hilbert Transform. A new engineering spectrum analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal being sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs near orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources, enhanced HHT synthesis, and a broadened scope of HHT applications for signal processing.
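A minimal sketch of a single EMD sifting pass, assuming cubic-spline envelopes and ignoring end effects and formal stopping criteria; it illustrates why the fastest oscillation is extracted first: subtracting the envelope mean removes the locally slower content.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return x                           # monotone residue: stop sifting
    upper = CubicSpline(t[imax], x[imax])(t)   # upper envelope
    lower = CubicSpline(t[imin], x[imin])(t)   # lower envelope
    return x - 0.5*(upper + lower)         # subtract the local mean

t = np.linspace(0, 1, 2000)
x = np.sin(2*np.pi*40*t) + 0.8*np.sin(2*np.pi*3*t)   # fast + slow components
h = x.copy()
for _ in range(10):                        # crude fixed number of passes
    h = sift_once(t, h)
# after sifting, h approximates the fastest oscillation (the first IMF)
print("correlation with 40 Hz tone:",
      np.corrcoef(h, np.sin(2*np.pi*40*t))[0, 1])
```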
On Certain Theoretical Developments Underlying the Hilbert-Huang Transform
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis
2006-01-01
One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as being linear and stationary, and satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, adaptive basis derived from the data. The IMFs can be further analyzed for spectrum content by using the classical Hilbert Transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal being sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources, enhanced HHT synthesis, and a broadened scope of HHT applications for signal processing.
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
SIBIS: a Bayesian model for inconsistent protein sequence estimation.
Khenoussi, Walyd; Vanhoutrève, Renaud; Poch, Olivier; Thompson, Julie D
2014-09-01
The prediction of protein coding genes is a major challenge that depends on the quality of genome sequencing, the accuracy of the model used to elucidate the exonic structure of the genes and the complexity of the gene splicing process leading to different protein variants. As a consequence, today's protein databases contain a huge amount of inconsistency, due to both natural variants and sequence prediction errors. We have developed a new method, called SIBIS, to detect such inconsistencies based on the evolutionary information in multiple sequence alignments. A Bayesian framework, combined with Dirichlet mixture models, is used to estimate the probability of observing specific amino acids and to detect inconsistent or erroneous sequence segments. We evaluated the performance of SIBIS on a reference set of protein sequences with experimentally validated errors and showed that the sensitivity is significantly higher than previous methods, with only a small loss of specificity. We also assessed a large set of human sequences from the UniProt database and found evidence of inconsistency in 48% of the previously uncharacterized sequences. We conclude that the integration of quality control methods like SIBIS in automatic analysis pipelines will be critical for the robust inference of structural, functional and phylogenetic information from these sequences. Source code, implemented in C on a linux system, and the datasets of protein sequences are freely available for download at http://www.lbgi.fr/∼julie/SIBIS. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations
NASA Technical Reports Server (NTRS)
Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.
2006-01-01
In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.
Catchment-wide impacts on water quality: the use of 'snapshot' sampling during stable flow
NASA Astrophysics Data System (ADS)
Grayson, R. B.; Gippel, C. J.; Finlayson, B. L.; Hart, B. T.
1997-12-01
Water quality is usually monitored on a regular basis at only a small number of locations in a catchment, generally focused at the catchment outlet. This integrates the effect of all the point and non-point source processes occurring throughout the catchment. However, effective catchment management requires data which identify major sources and processes. As part of a wider study aimed at providing technical information for the development of integrated catchment management plans for a 5000 km² catchment in south eastern Australia, a 'snapshot' of water quality was undertaken during stable summer flow conditions. These low flow conditions exist for long periods, so water quality at these flow levels is an important constraint on the health of in-stream biological communities. Over a 4 day period, a study of the low flow water quality characteristics throughout the Latrobe River catchment was undertaken. Sixty-four sites were chosen to enable a longitudinal profile of water quality to be established. All tributary junctions and sites along major tributaries, as well as all major industrial inputs, were included. Samples were analysed for a range of parameters including total suspended solids concentration, pH, dissolved oxygen, electrical conductivity, turbidity, flow rate and water temperature. Filtered and unfiltered samples were taken from 27 sites along the main stream and tributary confluences for analysis of total N, NH4, oxidised N, total P and dissolved reactive P concentrations. The data are used to illustrate the utility of this sampling methodology for establishing specific sources and estimating non-point source loads of phosphorus, total suspended solids and total dissolved solids. The methodology enabled several new insights into system behaviour, including quantification of unknown point discharges, identification of key in-stream sources of suspended material and the extent to which biological activity (phytoplankton growth) affects water quality. The costs and benefits of the sampling exercise are reviewed.
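A minimal sketch of the load-accounting step that makes such a snapshot useful, with entirely illustrative numbers: the instantaneous load at each site is concentration times discharge, and load gains between consecutive mainstream sites that are not matched by monitored inputs flag reaches with unquantified (e.g. non-point) sources.

```python
import numpy as np

# distance-ordered mainstream sites (all values are placeholders)
flow = np.array([1.2, 1.3, 1.5, 1.6])           # discharge, m^3/s
tp = np.array([30., 32., 55., 56.]) * 1e-3      # total P, g/m^3
trib_load = np.array([0.0, 0.004, 0.002, 0.0])  # monitored inputs, g/s

load = flow * tp                                # instantaneous load, g/s
unexplained = np.diff(load) - trib_load[1:]     # gain not accounted for
for i, u in enumerate(unexplained):
    print(f"reach {i}->{i+1}: unexplained load gain {u*1000:.1f} mg/s")
```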
Spheroidal Integral Equations for Geodetic Inversion of Geopotential Gradients
NASA Astrophysics Data System (ADS)
Novák, Pavel; Šprlák, Michal
2018-03-01
The static Earth's gravitational field has traditionally been described in geodesy and geophysics by the gravitational potential (geopotential for short), a scalar function of 3-D position. Although not directly observable, geopotential functionals such as its first- and second-order gradients are routinely measured by ground, airborne and/or satellite sensors. In geodesy, these observables are often used for recovery of the static geopotential at some simple reference surface approximating the actual Earth's surface. A generalized mathematical model is represented by a surface integral equation which originates in solving Dirichlet's boundary-value problem of the potential theory defined for the harmonic geopotential, spheroidal boundary and globally distributed gradient data. The mathematical model can be used for combining various geopotential gradients without the necessity of re-sampling or prior continuation in space. The model extends the apparatus of integral equations which results from solving boundary-value problems of the potential theory to all geopotential gradients observed by current ground, airborne and satellite sensors. Differences between spherical and spheroidal formulations of integral kernel functions of Green's kind are investigated. Estimated differences reach relative values at the level of 3%, which demonstrates the significance of the spheroidal approximation for flattened bodies such as the Earth. The observation model can be used for combined inversion of currently available geopotential gradients while exploring their spectral and stochastic characteristics. The model would be even more relevant to gravitational field modelling of other bodies in space with more pronounced spheroidal geometry than that of the Earth.
sourceR: Classification and source attribution of infectious agents among heterogeneous populations
French, Nigel
2017-01-01
Zoonotic diseases are a major cause of morbidity and productivity losses in both human and animal populations. Identifying the source of food-borne zoonoses (e.g. an animal reservoir or food product) is crucial for the identification and prioritisation of food safety interventions. For many zoonotic diseases it is difficult to attribute human cases to sources of infection because there is little epidemiological information on the cases. However, microbial strain typing allows zoonotic pathogens to be categorised, and the relative frequencies of the strain types among the sources and in human cases allow inference on the likely source of each infection. We introduce sourceR, an R package for quantitative source attribution, aimed at food-borne diseases. It implements a Bayesian model using strain-typed surveillance data from both human cases and source samples, capable of identifying important sources of infection. The model measures the force of infection from each source, allowing for varying survivability, pathogenicity and virulence of pathogen strains, and varying abilities of the sources to act as vehicles of infection. A Bayesian non-parametric (Dirichlet process) approach is used to cluster pathogen strain types by epidemiological behaviour, avoiding model overfitting and allowing detection of strain types associated with potentially high “virulence”. sourceR is demonstrated using Campylobacter jejuni isolate data collected in New Zealand between 2005 and 2008. Chicken from a particular poultry supplier was identified as the major source of campylobacteriosis, which is qualitatively similar to results of previous studies using the same dataset. Additionally, the software identifies a cluster of 9 multilocus sequence types with abnormally high ‘virulence’ in humans. sourceR enables straightforward attribution of cases of zoonotic infection to putative sources of infection. As sourceR develops, we intend it to become an important and flexible resource for food-borne disease attribution studies. PMID:28558033
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Skyshine line-beam response functions for 20- to 100-MeV photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.
1996-06-01
The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.
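As an illustration of the final fitting step, a sketch in Python with an assumed three-parameter form (the function f below and every number are stand-ins of ours, not the actual empirical formula of the integral line-beam method):

```python
import numpy as np
from scipy.optimize import curve_fit

def response(x, a, b, c):
    # assumed stand-in shape: power law times exponential air attenuation
    return a * (x / 100.0) ** b * np.exp(-x / c)

x = np.linspace(50.0, 1000.0, 20)          # source-to-detector distance (m)
rng = np.random.default_rng(0)
y = response(x, 2.0e-15, -1.3, 400.0) * (1.0 + 0.05 * rng.standard_normal(x.size))

params, _ = curve_fit(response, x, y, p0=(1e-15, -1.0, 300.0))
print(params)   # fitted (a, b, c) for one source energy and emission angle
```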
A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems ⋆
Ying, Wenjun; Henriquez, Craig S.
2013-01-01
This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
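The iteration pattern can be sketched as follows (a schematic, not the authors' code: the stand-in matrix K replaces the structured-grid FDM/FEM evaluation of the boundary integral that the KFBI method uses instead of an analytic Green's function):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200                                  # discrete boundary unknowns
K = np.random.default_rng(0).normal(size=(n, n)) / n   # stand-in compact kernel

def apply_operator(density):
    # Second-kind form: identity part plus the (grid-evaluated) integral part.
    # In KFBI the K @ density product would be an FFT/multigrid solve on a
    # regular grid with corrected right-hand sides at irregular nodes.
    return 0.5 * density + K @ density

A = LinearOperator((n, n), matvec=apply_operator)
rhs = np.ones(n)                         # discretized Dirichlet boundary data
density, info = gmres(A, rhs)
print(info)                              # 0 on convergence
```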
NASA Technical Reports Server (NTRS)
Reese, Erik D.; Mroczkowski, Tony; Menanteau, Felipe; Hilton, Matt; Sievers, Jonathan; Aguirre, Paula; Appel, John William; Baker, Andrew J.; Bond, J. Richard; Das, Sudeep;
2011-01-01
We present follow-up observations with the Sunyaev-Zel'dovich Array (SZA) of optically-confirmed galaxy clusters found in the equatorial survey region of the Atacama Cosmology Telescope (ACT): ACT-CL J0022-0036, ACT-CL J2051+0057, and ACT-CL J2337+0016. ACT-CL J0022-0036 is a newly-discovered, massive (~10^15 Msun), high-redshift (z = 0.81) cluster revealed by ACT through the Sunyaev-Zel'dovich effect (SZE). Deep, targeted observations with the SZA allow us to probe a broader range of cluster spatial scales, better disentangle cluster decrements from radio point source emission, and derive more robust integrated SZE flux and mass estimates than we can with ACT data alone. For the two clusters we detect with the SZA we compute integrated SZE signal and derive masses from the SZA data only. ACT-CL J2337+0016, also known as Abell 2631, has archival Chandra data that allow an additional X-ray-based mass estimate. Optical richness is also used to estimate cluster masses and shows good agreement with the SZE and X-ray-based estimates. Based on the point sources detected by the SZA in these three cluster fields and an extrapolation to ACT's frequency, we estimate that point sources could be contaminating the SZE decrement at the ≲20% level for some fraction of clusters.
NASA Astrophysics Data System (ADS)
Tryka, Stanislaw
2007-04-01
A general formula and some special integral formulas were presented for calculating radiative fluxes incident on a circular plane from a planar multiple point source within a coaxial cylindrical enclosure perpendicular to the source. These formulas were obtained for radiation propagating in a homogeneous isotropic medium, assuming that the lateral surface of the enclosure completely absorbs the incident radiation. Example results were computed numerically and illustrated with three-dimensional surface plots. The formulas presented are suitable for determining fluxes of radiation reaching planar circular detectors, collectors or other planar circular elements from systems of laser diodes, light emitting diodes and fiber lamps within cylindrical enclosures, as well as from small biological emitters (bacteria, fungi, yeast, etc.) distributed on planar bases of open nontransparent cylindrical containers.
DUTIR at TREC 2009: Chemical IR Track
2009-11-01
We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15 “Betaines for peripheral arterial disease” is...converted into the following Indri query: #combine( betaines for peripheral arterial disease ), which produces results rank-equivalent to a simple query
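For context, Dirichlet-prior smoothing with μ = 1,500 scores a document by p(q|d) = Π_w (tf(w,d) + μ·p(w|C)) / (|d| + μ); a toy sketch (the corpus and document here are invented):

```python
import math
from collections import Counter

MU = 1500.0
corpus = ("betaines were studied for peripheral arterial disease "
          "and other vascular disease").split()          # toy collection
doc = "betaines improved outcomes in peripheral arterial disease".split()

cf, clen = Counter(corpus), len(corpus)
tf, dlen = Counter(doc), len(doc)

def log_p(query):
    # Dirichlet-smoothed query log-likelihood of the document
    return sum(math.log((tf[w] + MU * cf[w] / clen) / (dlen + MU))
               for w in query.split())

print(log_p("betaines peripheral arterial disease"))
```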
Modifications to holographic entanglement entropy in warped CFT
NASA Astrophysics Data System (ADS)
Song, Wei; Wen, Qiang; Xu, Jianfei
2017-02-01
In [1] it was observed that asymptotic boundary conditions play an important role in the study of holographic entanglement beyond AdS/CFT. In particular, the Ryu-Takayanagi proposal must be modified for warped AdS3 (WAdS3) with Dirichlet boundary conditions. In this paper, we consider AdS3 and WAdS3 with Dirichlet-Neumann boundary conditions. The conjectured holographic duals are warped conformal field theories (WCFTs), featuring a Virasoro-Kac-Moody algebra. We provide a holographic calculation of the entanglement entropy and Rényi entropy using AdS3/WCFT and WAdS3/WCFT dualities. Our bulk results are consistent with the WCFT results derived by Castro-Hofman-Iqbal using the Rindler method. Comparing with [1], we explicitly show that the holographic entanglement entropy is indeed affected by boundary conditions. Both results differ from the Ryu-Takayanagi proposal, indicating new relations between spacetime geometry and quantum entanglement for holographic dualities beyond AdS/CFT.
Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.
Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A
2017-12-01
Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
Dirichlet Component Regression and its Applications to Psychiatric Data
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2011-01-01
Summary: We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms, which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each, in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook’s distance, and a local jackknife influence metric. PMID:22058582
Positivity and Almost Positivity of Biharmonic Green's Functions under Dirichlet Boundary Conditions
NASA Astrophysics Data System (ADS)
Grunau, Hans-Christoph; Robert, Frédéric
2010-03-01
In general, for higher order elliptic equations and boundary value problems like the biharmonic equation and the linear clamped plate boundary value problem, neither a maximum principle nor a comparison principle or—equivalently—a positivity preserving property is available. The problem is rather involved since the clamped boundary conditions prevent the boundary value problem from being reasonably written as a system of second order boundary value problems. It is shown that, on the other hand, for bounded smooth domains Ω ⊂ R^n, the negative part of the corresponding Green’s function is “small” when compared with its singular positive part, provided n ≥ 3. Moreover, the biharmonic Green’s function in balls B ⊂ R^n under Dirichlet (that is, clamped) boundary conditions is known explicitly and is positive. It has been known for some time that positivity is preserved under small regular perturbations of the domain, if n = 2. In the present paper, such a stability result is proved for n ≥ 3.
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probabilistic probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
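A minimal sketch of the idea (the counts are illustrative; the Dirichlet is the conjugate multivariate generalization of the beta, so a posterior draw is direct):

```python
import numpy as np

rng = np.random.default_rng(42)
counts = np.array([125, 12, 0, 3])    # observed transitions out of one state
prior = np.ones_like(counts)          # uniform Dirichlet prior handles zeros
rows = rng.dirichlet(counts + prior, size=1000)   # posterior row samples

# Every sampled row sums to 1 exactly, so probabilistic sensitivity analysis
# never produces logically inconsistent branch probabilities.
print(rows.mean(axis=0), rows.sum(axis=1).min())
```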
Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.
Ferrari, Alberto
2017-01-01
Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, the distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers mainly to use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set from a real experiment on animal communication.
NASA Astrophysics Data System (ADS)
Zhukovsky, K.; Oskolkov, D.
2018-03-01
A system of hyperbolic-type inhomogeneous differential equations (DE) is considered for non-Fourier heat transfer in thin films. Exact harmonic solutions to Guyer-Krumhansl-type heat equation and to the system of inhomogeneous DE are obtained in Cauchy- and Dirichlet-type conditions. The contribution of the ballistic-type heat transport, of the Cattaneo heat waves and of the Fourier heat diffusion is discussed and compared with each other in various conditions. The application of the study to the ballistic heat transport in thin films is performed. Rapid evolution of the ballistic quasi-temperature component in low-dimensional systems is elucidated and compared with slow evolution of its diffusive counterpart. The effect of the ballistic quasi-temperature component on the evolution of the complete quasi-temperature is explored. In this context, the influence of the Knudsen number and of Cauchy- and Dirichlet-type conditions on the evolution of the temperature distribution is explored. The comparative analysis of the obtained solutions is performed.
VizieR Online Data Catalog: Radio sources in the NCP region with the 21CMA (Zheng+, 2016)
NASA Astrophysics Data System (ADS)
Zheng, Q.; Wu, X.-P.; Johnston-Hollitt, M.; Gu, J.-H.; Xu, H.
2017-03-01
In the current work, we present the point radio sources observed with the 40 pods of the 21 Centimeter Array (21CMA) E-W baselines for a 12 hr integration made on 2013 April 13, centered on the North Celestial Pole (NCP). An extra deep sample with higher sensitivity from a longer integration time of up to years will be published later. We have detected a total of 624 radio sources over the central field within 3° in the frequency range 75-175 MHz and in the outer annulus of 3°-5° in the 75-125 MHz bands. By performing a Monte Carlo simulation, we have estimated a completeness of 50% at S~0.2 Jy. (1 data file).
Refsgaard, A; Jacobsen, T; Jacobsen, B; Ørum, J-E
2007-01-01
The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient, in order to analyse the effects of specific, localized basin water management plans. The paper also includes a land rent modelling approach which can be used to choose the most cost-effective measures and the location of these measures. As a forerunner to the use of basin-scale models in WFD basin water management plans, this project demonstrates the potential and limitations of comprehensive, integrated modelling tools.
Integration of SAR and DEM data: Geometrical considerations
NASA Technical Reports Server (NTRS)
Kropatsch, Walter G.
1991-01-01
General principles for integrating data from different sources are derived from the experience of registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions also in a DEM needs, in addition, the parameters of the flight path of the SAR sensor and the range projection model. The generation of the SAR layover and shadow maps is summarized and new extensions to this method are proposed.
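To make the layover/shadow step concrete, a 1-D range-profile sketch under an idealized side-looking geometry (the numbers and the simple horizon test are our illustration, not the paper's analytical method):

```python
import numpy as np

ground = np.linspace(5000.0, 15000.0, 2001)           # ground range (m)
h = np.interp(ground, [5000, 9000, 9500, 9700, 15000],
              [0.0, 0.0, 800.0, 50.0, 50.0])          # toy DEM profile (m)
sensor_alt = 8000.0                                    # platform altitude (m)

slant = np.hypot(ground, sensor_alt - h)               # slant range per cell
layover = np.r_[False, np.diff(slant) < 0]             # foreslope imaged "inverted"

look = np.arctan2(ground, sensor_alt - h)              # off-nadir angle per cell
shadow = look < np.maximum.accumulate(look)            # hidden behind a terrain horizon
print(layover.sum(), shadow.sum())
```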
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio Luigi
2016-12-15
Mediterranean areas are characterized by complex hydrogeological systems, where management of freshwater resources, mostly stored in karstic, coastal aquifers, is necessary and requires the application of numerical tools to detect and prevent deterioration of groundwater, mostly caused by overexploitation. In the Taranto area (southern Italy), the deep, karstic aquifer is the only source of freshwater and satisfies the main human activities. Preserving the quantity and quality of this system through management policies is therefore necessary, and this task can be addressed through modeling tools which take into account human impacts and the effects of climate change. A variable-density flow model was developed with SEAWAT to depict the "current" status of the saltwater intrusion, namely the status simulated over an average hydrogeological year. Considering the goals of this analysis and the scale at which the model was built, the equivalent porous medium approach was adopted to represent the deep aquifer. The effects that different flow boundary conditions along the coast have on the transport model were assessed. Furthermore, salinity stratification occurs within a strip spreading between 4 km and 7 km from the coast in the deep aquifer. The model predicts a similar phenomenon for some submarine freshwater springs, and modeling outcomes were positively compared with measurements found in the literature. Two scenarios were simulated to assess the effects of decreased rainfall and increased pumping on saline intrusion. Major differences in the concentration field with respect to the "current" status were found where the hydraulic conductivity of the deep aquifer is higher, and such differences are greater when Dirichlet flow boundary conditions are assigned. Furthermore, the Dirichlet boundary condition along the coast for transport modeling influences the concentration field in different scenarios at shallow depths; as such, concentration values simulated under stressed conditions are lower than those simulated under undisturbed conditions.
Diminishing Manufacturing Sources and Material Shortages (DMSMS) Guidebook
2006-11-01
www.dau.mil/registrar/enroll.aspx DoD Acquisition, Technology, and Logistics (AT&L) Integrated Framework Chart (IFC) lifecycle activities and...ROI) and Break-Even Point (BEP). Two analysts could look at the same data and generate different outcomes if they use different assumptions or...principal output of the BCA is the Break-Even Point (BEP), which shows the payback period of an alternative. It is found from a plot of the
NASA Astrophysics Data System (ADS)
Goltz, Mark N.; Huang, Junqi
2014-12-01
We thank Sun (2014) for his comment on our paper, Goltz et al. (2009). The commenter basically makes two points: (1) equation (6) in Goltz et al. (2009) is incorrect, and (2) screen loss should be further considered as a source of error in the modified integral pump test (MIPT) experiment. We will address each of these points, below.
Manufacturing and Integration Status of the JWST OSIM Optical Simulator
NASA Technical Reports Server (NTRS)
Sullivan, Joe; Eichhorn, William; vonHandorf, Rob; Sabatke, Derek; Barr, Nick; Nyquist, Rich; Pederson, Bob; Bennett, Rick; Volmer, Paul; Happs, Dave;
2010-01-01
OSIM is a full field, cryogenic, optical simulator of the James Webb Space Telescope (JWST) Optical Telescope Element (OTE). It provides simulated point source/star images for optical performance testing of the JWST Integrated Science Instrument Module (ISIM). OSIM is currently being assembled at the Goddard Space Flight Center (GSFC). In this paper, we describe the capabilities, design, manufacturing and integration status, and uses of the OSIM during the optical test program of ISIM and the Science Instruments. Where applicable, the ISIM tests are also described.
The Massive Star-Forming Regions Omnibus X-Ray Catalog
NASA Astrophysics Data System (ADS)
Townsley, Leisa K.; Broos, Patrick S.; Garmire, Gordon P.; Bouwman, Jeroen; Povich, Matthew S.; Feigelson, Eric D.; Getman, Konstantin V.; Kuhn, Michael A.
2014-07-01
We present the Massive Star-forming Regions (MSFRs) Omnibus X-ray Catalog (MOXC), a compendium of X-ray point sources from Chandra/ACIS observations of a selection of MSFRs across the Galaxy, plus 30 Doradus in the Large Magellanic Cloud. MOXC consists of 20,623 X-ray point sources from 12 MSFRs with distances ranging from 1.7 kpc to 50 kpc. Additionally, we show the morphology of the unresolved X-ray emission that remains after the cataloged X-ray point sources are excised from the ACIS data, in the context of Spitzer and WISE observations that trace the bubbles, ionization fronts, and photon-dominated regions that characterize MSFRs. In previous work, we have found that this unresolved X-ray emission is dominated by hot plasma from massive star wind shocks. This diffuse X-ray emission is found in every MOXC MSFR, clearly demonstrating that massive star feedback (and the several-million-degree plasmas that it generates) is an integral component of MSFR physics.
Yao, Hong; Li, Weixin; Qian, Xin
2015-01-01
Environmental safety in multi-district boundary regions has been one of the focuses in China and is mentioned many times in the Environmental Protection Act of 2014. Five types were categorized concerning the risk sources for surface water pollution in the multi-provincial boundary region of the Taihu basin: production enterprises, waste disposal sites, chemical storage sites, agricultural non-point sources and waterway transportations. Considering the hazard of risk sources, the purification property of environmental medium and the vulnerability of risk receptors, 52 specific attributes on the risk levels of each type of risk source were screened out. Continuous piecewise linear function model, expert consultation method and fuzzy integral model were used to calculate the integrated risk indexes (RI) to characterize the risk levels of pollution sources. In the studied area, 2716 pollution sources were characterized by RI values. There were 56 high-risk sources screened out as major risk sources, accounting for about 2% of the total. The numbers of sources with high-moderate, moderate, moderate-low and low pollution risk were 376, 1059, 101 and 1124, respectively, accounting for 14%, 38%, 5% and 41% of the total. The procedure proposed could be included in the integrated risk management systems of the multi-district boundary region of the Taihu basin. It could help decision makers to identify major risk sources in the risk prevention and reduction of surface water pollution. PMID:26308032
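As one concrete reading of the fuzzy-integral step, a discrete Choquet integral over per-attribute risk scores (the measure mu and the scores are invented for illustration; the paper's calibrated measure and its 52 attributes are not reproduced):

```python
def choquet(scores, mu):
    # Sort criteria by score and integrate the increments against mu.
    items = sorted(scores.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        coalition = frozenset(name for name, _ in items[i:])
        total += (value - prev) * mu(coalition)
        prev = value
    return total

scores = {"hazard": 0.7, "medium": 0.4, "receptor": 0.9}  # toy attribute risks

def mu(subset):
    # hypothetical monotone fuzzy measure, capped at 1
    weights = {"hazard": 0.5, "medium": 0.2, "receptor": 0.4}
    return min(1.0, sum(weights[c] for c in subset))

print(choquet(scores, mu))   # integrated risk index RI in [0, 1]
```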
An Integrated Chemical Environment to Support 21st-Century Toxicology.
Bell, Shannon M; Phillips, Jason; Sedykh, Alexander; Tandon, Arpit; Sprankle, Catherine; Morefield, Stephen Q; Shapiro, Andy; Allen, David; Shah, Ruchir; Maull, Elizabeth A; Casey, Warren M; Kleinstreuer, Nicole C
2017-05-25
Summary: Access to high-quality reference data is essential for the development, validation, and implementation of in vitro and in silico approaches that reduce and replace the use of animals in toxicity testing. Currently, these data must often be pooled from a variety of disparate sources to efficiently link a set of assay responses and model predictions to an outcome or hazard classification. To provide a central access point for these purposes, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods developed the Integrated Chemical Environment (ICE) web resource. The ICE data integrator allows users to retrieve and combine data sets and to develop hypotheses through data exploration. Open-source computational workflows and models will be available for download and application to local data. ICE currently includes curated in vivo test data, reference chemical information, in vitro assay data (including Tox21™/ToxCast™ high-throughput screening data), and in silico model predictions. Users can query these data collections focusing on end points of interest such as acute systemic toxicity, endocrine disruption, skin sensitization, and many others. ICE is publicly accessible at https://ice.ntp.niehs.nih.gov. https://doi.org/10.1289/EHP1759.
Technologies for autonomous integrated lab-on-chip systems for space missions
NASA Astrophysics Data System (ADS)
Nascetti, A.; Caputo, D.; Scipinotti, R.; de Cesare, G.
2016-11-01
Lab-on-chip devices are ideal candidates for use in space missions where experiment automation, system compactness, limited weight and low sample and reagent consumption are required. Currently, however, most microfluidic systems require external desktop instrumentation to operate and interrogate the chip, thus strongly limiting their use as stand-alone systems. In order to overcome the above-mentioned limitations, our research group is currently working on the design and fabrication of "true" lab-on-chip systems that integrate in a single device all the analytical steps from sample preparation to detection, without the need for bulky external components such as pumps, syringes, radiation sources or optical detection systems. Three critical points can be identified in achieving "true" lab-on-chip devices: sample handling, analytical detection and signal transduction. For each critical point, feasible solutions are presented and evaluated. The proposed microfluidic actuation and control are based on electrowetting on dielectrics, autonomous capillary networks and active valves. Analytical detection based on highly specific chemiluminescent reactions is used to avoid external radiation sources. Finally, the integration on the same chip of thin-film sensors based on hydrogenated amorphous silicon is discussed, showing practical results achieved in different sensing tasks.
40 CFR 413.04 - Standards for integrated facilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS ELECTROPLATING POINT SOURCE CATEGORY General Provisions § 413.04 Standards for... § 403.6(e) of EPA's General Pretreatment Regulations. In cases where electroplating process wastewaters... average standard for the electroplating wastewaters must be used. The 30 day average shall be determined...
The integration of satellite and airborne remote sensing, scientific visualization and decision support tools is discussed within the context of management techniques for minimizing the non-point source pollution load of inland waterways and the sustainability of food crop produc...
Passive lighting responsive three-dimensional integral imaging
NASA Astrophysics Data System (ADS)
Lou, Yimin; Hu, Juanmei
2017-11-01
A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting responsiveness and vivid 3D performance has been proposed and demonstrated. Some novel lighting-responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independent of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom in designing the II system and enables the virtual 3D image to interact with the real illumination environment in real time.
Use of speckle for determining the response characteristics of Doppler imaging radars
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1986-01-01
An optical model is developed for imaging radars such as the SAR on Seasat and the Shuttle Imaging Radar (SIR-B) by analyzing the Doppler shift of individual speckles in the image. The signal received at the spacecraft is treated in terms of a Fresnel-Kirchhoff integration over all backscattered radiation within a Huygens aperture at the earth. Account is taken of the movement of the spacecraft along the orbital path between emission and reception. The individual points are described by integration of the point source amplitude with a Green's function scattering kernel. Doppler data at each point furnish the coordinates for visual representations. A Rayleigh-Poisson model of the surface scattering characteristics is used with Monte Carlo methods to generate simulations of Doppler radar speckle that compare well with Seasat SAR and SIR-B data.
Memoized Online Variational Inference for Dirichlet Process Mixture Models
2014-06-27
breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation...models. In Uncertainty in Artificial Intelligence, 2008. [13] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential
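The truncated stick-breaking construction the fragment refers to can be sketched in a few lines (the concentration α and the truncation level are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, T = 1.0, 20                         # concentration, truncation level

v = rng.beta(1.0, alpha, size=T)           # stick-breaking proportions
v[-1] = 1.0                                # final break absorbs leftover mass
stick = np.cumprod(np.r_[1.0, 1.0 - v[:-1]])
weights = v * stick                        # DP mixture weights, sum to 1

print(weights.sum(), weights[-1])          # note the inflated final component
```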
ERIC Educational Resources Information Center
Brilleslyper, Michael A.; Wolverton, Robert H.
2008-01-01
In this article we consider an example suitable for investigation in many mid and upper level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic saw tooth…
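A standard instance of this example (our choice of the usual odd sawtooth, since the abstract is truncated): for f(x) = x on (−π, π), extended 2π-periodically,

```latex
f(x) \sim 2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin nx .
```

Dirichlet's test applies because the coefficients 1/n decrease to 0 and the partial sums of (−1)^{n+1} sin nx are uniformly bounded on any closed interval avoiding the jump points x = (2k+1)π; the series therefore converges uniformly there, while convergence cannot be uniform across the jumps, where the Gibbs phenomenon appears.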
Linguistic Extensions of Topic Models
ERIC Educational Resources Information Center
Boyd-Graber, Jordan
2010-01-01
Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…
Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation
2017-03-01
Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011. 15. L. Hannah, D. Blei and W. Powell. Dirichlet process mixtures of...34. S. Ghosh, A. Ungureanu, E. Sudderth, and D. Blei. A Spatial distance dependent Chinese restaurant process for image segmentation. In Neural
Development of a low background test facility for the SPICA-SAFARI on-ground calibration
NASA Astrophysics Data System (ADS)
Dieleman, P.; Laauwen, W. M.; Ferrari, L.; Ferlet, M.; Vandenbussche, B.; Meinsma, L.; Huisman, R.
2012-09-01
SAFARI is a far-infrared camera to be launched in 2021 onboard the SPICA satellite. SAFARI offers imaging spectroscopy and imaging photometry in the wavelength range of 34 to 210 μm with a detector NEP of 2×10^-19 W/√Hz. A cryogenic test facility for SAFARI on-ground calibration and characterization is being developed. The main design driver is the required low background of a few attowatts per pixel. This prohibits optical access to room temperature, and hence all test equipment needs to be inside the cryostat at 4.5 K. The instrument parameters to be verified are interfaces with the SPICA satellite, sensitivity, alignment, image quality, spectral response, frequency calibration, and point spread function. The instrument sensitivity is calibrated by a calibration source providing a spatially homogeneous signal at the attowatt level. This low light intensity is achieved by geometrical dilution of a 150 K source into an integrating sphere. The beam quality and point spread function are measured by a pinhole/mask plate wheel, back-illuminated by a second integrating sphere. This sphere is fed by a stable wide-band source, providing spectral lines via a cryogenic etalon.
An integral equation formulation for the diffraction from convex plates and polyhedra.
Asheim, Andreas; Svensson, U Peter
2013-06-01
A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.
Boundary Regularity for the Porous Medium Equation
NASA Astrophysics Data System (ADS)
Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana
2018-05-01
We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem, which establishes a barrier characterization of regular boundary points for general—not necessarily cylindrical—domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.
The Boundary Function Method. Fundamentals
NASA Astrophysics Data System (ADS)
Kot, V. A.
2017-03-01
The boundary function method is proposed for solving applied problems of mathematical physics in the region defined by a partial differential equation of the general form involving constant or variable coefficients with a Dirichlet, Neumann, or Robin boundary condition. In this method, the desired function is defined by a power polynomial, and a boundary function represented in the form of the desired function or its derivative at one of the boundary points is introduced. Different sequences of boundary equations have been set up with the use of differential operators. Systems of linear algebraic equations constructed on the basis of these sequences allow one to determine the coefficients of a power polynomial. Constitutive equations have been derived for initial boundary-value problems of all the main types. With these equations, an initial boundary-value problem is transformed into the Cauchy problem for the boundary function. The determination of the boundary function by its derivative with respect to the time coordinate completes the solution of the problem.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Rapid Airplane Parametric Input Design(RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.
2004-01-01
An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.
NASA Astrophysics Data System (ADS)
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating only as many backward adjoint equations as there are available measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
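A schematic of the correlation-maximization step (the forward model below is an invented toy surrogate; in the paper the simulated concentrations come from ADREA-HF runs driven by the adjoint-based source-receptor function):

```python
import numpy as np

rng = np.random.default_rng(1)
stations = rng.uniform(0.0, 500.0, size=(8, 2))       # sensor x, y (m)
obs_times = np.linspace(0.0, 600.0, 61)               # first-hour window (s)

def simulate(src_xy, t0, q):
    # Toy surrogate: amplitude decays with distance, signal delayed by a
    # fixed 5 m/s transport speed. Correlation is scale-invariant, so a
    # unit rate q = 1 suffices to recover location and start time; the
    # emitted quantity follows afterwards from the amplitude ratio.
    d = np.linalg.norm(stations - src_xy, axis=1)[:, None]
    t = obs_times[None, :] - t0 - d / 5.0
    return q * np.exp(-d / 100.0) * np.exp(-np.maximum(t, 0.0)) * (t > 0)

observed = simulate(np.array([120.0, 310.0]), 60.0, 2.0)   # synthetic "truth"

best, best_c = None, -2.0
for x in range(0, 501, 25):
    for y in range(0, 501, 25):
        for t0 in range(0, 301, 30):
            c = np.corrcoef(simulate(np.array([x, y], float), t0, 1.0).ravel(),
                            observed.ravel())[0, 1]
            if c > best_c:
                best, best_c = (x, y, t0), c
print(best, best_c)   # candidate location/start time maximizing correlation
```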
2017-08-20
liquid crystal cell was successfully employed as an active Q-switching element in the same type of chip lasers. The short laser pulses that were...switched mode-locked (QML) operation of those chip lasers. Further, a novel nematic liquid crystal cell was successfully employed as an active Q...gas spectroscopy and environmental monitoring, areas that hold immense significance and importance. However, laser source development at these
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
NASA Astrophysics Data System (ADS)
Wasklewicz, Thad; Zhu, Zhen; Gares, Paul
2017-12-01
Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with legacy datasets. Several technical challenges and data uncertainty issues persist when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown or unreported error that makes accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has changed or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations, studied through a numerical simulation and a physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typical of the range from legacy data to present high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed at anchor points in the digital terrain models are modeled using "states" in a stochastic estimator. Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor measurements observed at various times in history. The geometric relationship between an anchor point and a sensor measurement can be approximated via spatial correlation even when a sensor does not directly observe the anchor point. Findings from the numerical simulation indicate that the estimated error coincides with the actual error for certain sensors (kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor is integrated into the estimator, largely as a result of the low density of data added from these sources. The estimator provides a history of DEM estimation as well as the uncertainties and cross-correlations observed at anchor points. Our work provides preliminary evidence that our approach is valid for integrating legacy data with HRT and warrants further exploration and field validation.
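A minimal scalar sketch of the estimator idea, tracking one anchor point's elevation and variance through time as surveys of varying quality arrive (all values invented; the paper's estimator also carries spatial cross-correlations between anchor points):

```python
z, var = 100.0, 25.0      # prior elevation (m) and variance (m^2)
q = 0.05                  # process noise per year: slow surface change

# (year, measured elevation, sensor variance) -- 2D imagery is noisy,
# TLS/SfM-MVS far tighter, mirroring the sensor ranking reported above
surveys = [(1995, 101.2, 4.0), (2005, 99.4, 1.0), (2015, 99.1, 0.01)]

t_prev = 1990
for t, meas, r in surveys:
    var += q * (t - t_prev)                       # predict: uncertainty grows
    gain = var / (var + r)                        # Kalman gain
    z, var = z + gain * (meas - z), (1.0 - gain) * var   # measurement update
    t_prev = t
    print(t, round(z, 2), round(var, 3))
```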
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with 1/√t (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend to scale the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shift to applying a 1/√t time-domain taper and scaling the waveforms with r√2 for larger receiver offsets r. We call this the hybrid transformation, which is adapted for direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves. In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI to subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when applying no explicit correction to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance is best, in terms of model reproduction and the ability to reproduce the original data in a 3-D simulation, if inverted waveforms are obtained by the hybrid transformation.
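The near-offset branch of the hybrid transformation reduces to two single-trace operations, sketched below (dt, r and v_ph are invented; the offset-dependent blend toward the 1/√t taper with r√2 scaling at far offsets is omitted):

```python
import numpy as np

def line_source_near_offset(trace, dt, r, v_ph):
    """Convolve with 1/sqrt(t) (the pi/4 phase shift), then apply the
    near-offset amplitude correction sqrt(2 * r * v_ph)."""
    t = np.arange(1, trace.size + 1) * dt      # start at dt: avoid 1/sqrt(0)
    conv = np.convolve(trace, 1.0 / np.sqrt(t))[: trace.size] * dt
    return conv * np.sqrt(2.0 * r * v_ph)

dt, r, v_ph = 1e-3, 20.0, 300.0                # s, m, m/s (illustrative)
time = np.arange(500) * dt
trace = np.sin(2 * np.pi * 20 * time) * np.exp(-time / 0.2)
print(line_source_near_offset(trace, dt, r, v_ph)[:5])
```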
Guo, Z.; Zweibaum, N.; Shao, M.; ...
2016-04-19
The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve for pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin boundary conditions). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically make thermal hydraulic systems reach desired steady state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved by using direct and iterative solvers provided by the SuperLU code on high performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. In conclusion, this paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
Evaluating topic model interpretability from a primary care physician perspective.
Arnold, Corey W; Oh, Andrea; Chen, Shawn; Speier, William
2016-02-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view. Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each of the tasks. While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization.
Evaluating Topic Model Interpretability from a Primary Care Physician Perspective
Arnold, Corey W.; Oh, Andrea; Chen, Shawn; Speier, William
2015-01-01
Background and Objective: Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician’s point of view. Methods: Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each of the tasks. Results: While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. Conclusion: This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization. PMID:26614020
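As a rough illustration of the study design (not the authors' code), the sketch below fits latent Dirichlet allocation models with 50, 100, and 150 topics using scikit-learn and compares their log likelihoods; the toy documents stand in for the clinical report corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for de-identified clinical reports (illustrative only).
reports = [
    "chest pain shortness of breath ecg ordered",
    "type 2 diabetes glucose elevated insulin adjusted",
    "hypertension blood pressure medication refilled",
] * 50

X = CountVectorizer(stop_words="english").fit_transform(reports)
for k in (50, 100, 150):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    # Higher log likelihood need not mean more human-interpretable topics.
    print(k, round(lda.score(X), 1))
```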
Volume integrals associated with the inhomogeneous Helmholtz equation. Part 1: Ellipsoidal region
NASA Technical Reports Server (NTRS)
Fu, L. S.; Mura, T.
1983-01-01
Problems of wave phenomena in the fields of acoustics, electromagnetics and elasticity are often reduced to an integration of the inhomogeneous Helmholtz equation. Results are presented for volume integrals associated with the Helmholtz operator, ∇² + α², for the case of an ellipsoidal region. By using appropriate Taylor series expansions and the multinomial theorem, these volume integrals are obtained in series form for the regions r < r′ and r > r′, where r and r′ are distances from the origin to the point of observation and source, respectively. Derivatives of these integrals are easily evaluated. When the wave number approaches zero, the results reduce directly to the potentials of variable densities.
Strategies to support drug discovery through integration of systems and data.
Waller, Chris L; Shah, Ajay; Nolte, Matthias
2007-08-01
Much progress has been made over the past several years to provide technologies for the integration of drug discovery software applications and the underlying data bits. Integration at the application layer has focused primarily on developing and delivering applications that support specific workflows within the drug discovery arena. A fine balance between creating behemoth applications and providing business value must be maintained. Heterogeneous data sources have typically been integrated at the data level in an effort to provide a more holistic view of the data packages supporting key decision points. This review will highlight past attempts, current status, and potential future directions for systems and data integration strategies in support of drug discovery efforts.
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.
2017-08-01
We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.
A weighted anisotropic variant of the Caffarelli-Kohn-Nirenberg inequality and applications
NASA Astrophysics Data System (ADS)
Bahrouni, Anouar; Rădulescu, Vicenţiu D.; Repovš, Dušan D.
2018-04-01
We present a weighted version of the Caffarelli-Kohn-Nirenberg inequality in the framework of variable exponents. Combining this inequality with a variant of the fountain theorem yields the existence of infinitely many solutions for a class of non-homogeneous problems with Dirichlet boundary conditions.
The use of MACSYMA for solving elliptic boundary value problems
NASA Technical Reports Server (NTRS)
Thejll, Peter; Gilbert, Robert P.
1990-01-01
A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.
Test Design Project: Studies in Test Adequacy. Annual Report.
ERIC Educational Resources Information Center
Wilcox, Rand R.
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
NASA Astrophysics Data System (ADS)
Chernyshov, A. D.
2018-05-01
The analytical solution of the nonlinear heat conduction problem for a curvilinear region is obtained with the use of the fast-expansion method together with the method of extension of boundaries and pointwise technique of computing Fourier coefficients.
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
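For readers unfamiliar with the conjugate update the abstract describes, a minimal sketch follows; the outcome labels match the game, but the pseudo-count and data values are invented for illustration.

```python
import numpy as np

outcomes = ["side", "razorback", "trotter", "snouter", "leaning jowler"]
prior = np.array([34.9, 30.2, 8.8, 3.0, 0.6])   # Dirichlet pseudo-counts (hypothetical)
counts = np.array([700, 600, 170, 60, 10])      # observed roll tallies (hypothetical)

posterior = prior + counts                       # conjugacy: Dirichlet + multinomial
mean = posterior / posterior.sum()               # posterior mean scoring probabilities
for o, m in zip(outcomes, mean):
    print(f"{o}: {m:.3f}")
```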
Comment Data Mining to Estimate Student Performance Considering Consecutive Lessons
ERIC Educational Resources Information Center
Sorour, Shaymaa E.; Goda, Kazumasa; Mine, Tsunenori
2017-01-01
The purpose of this study is to examine different formats of comment data to predict student performance. Having students write comment data after every lesson can reflect students' learning attitudes, tendencies and learning activities involved with the lesson. In this research, Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic…
Time-integrated Searches for Point-like Sources of Neutrinos with the 40-string IceCube Detector
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brown, A. M.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. J.; Lauer, R.; Lehmann, R.; Lünemann, J.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Singh, K.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Woschnagg, K.; Xu, C.; Xu, X. 
W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; IceCube Collaboration
2011-05-01
We present the results of time-integrated searches for astrophysical neutrino sources in both the northern and southern skies. Data were collected using the partially completed IceCube detector in the 40-string configuration recorded between 2008 April 5 and 2009 May 20, totaling 375.5 days livetime. An unbinned maximum likelihood ratio method is used to search for astrophysical signals. The data sample contains 36,900 events: 14,121 from the northern sky, mostly muons induced by atmospheric neutrinos, and 22,779 from the southern sky, mostly high-energy atmospheric muons. The analysis includes searches for individual point sources and stacked searches for sources in a common class, sometimes including a spatial extent. While this analysis is sensitive to TeV-PeV energy neutrinos in the northern sky, it is primarily sensitive to neutrinos with energy greater than about 1 PeV in the southern sky. No evidence for a signal is found in any of the searches. Limits are set for neutrino fluxes from astrophysical sources over the entire sky and compared to predictions. The sensitivity is at least a factor of two better than previous searches (depending on declination), with 90% confidence level muon neutrino flux upper limits being between E² dΦ/dE ~ 2-200 × 10⁻¹² TeV cm⁻² s⁻¹ in the northern sky and between 3-700 × 10⁻¹² TeV cm⁻² s⁻¹ in the southern sky. The stacked source searches provide the best limits for specific source classes. The full IceCube detector is expected to improve the sensitivity to dΦ/dE ∝ E⁻² sources by another factor of two in the first year of operation.
The dynamics of multimodal integration: The averaging diffusion model.
Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L
2017-12-01
We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
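A toy simulation clarifies the key difference between the two decision variables: a summed evidence trace grows without bound, while the running mean stabilizes. The drift, noise level, and sample count below are arbitrary choices, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
drift, sigma, n = 0.1, 1.0, 200
samples = drift + sigma * rng.standard_normal(n)   # noisy evidence stream

summed = np.cumsum(samples)                 # classic diffusion: variance grows with time
averaged = summed / np.arange(1, n + 1)     # averaging model: estimate converges to drift

print(f"final sum = {summed[-1]:.2f}, final mean = {averaged[-1]:.3f}")
```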
NASA Astrophysics Data System (ADS)
Červený, Vlastislav; Pšenčík, Ivan
2017-08-01
Integral superposition of Gaussian beams is a useful generalization of the standard ray theory. It removes some of the deficiencies of the ray theory like its failure to describe properly behaviour of waves in caustic regions. It also leads to a more efficient computation of seismic wavefields since it does not require the time-consuming two-point ray tracing. We present the formula for a high-frequency elementary Green function expressed in terms of the integral superposition of Gaussian beams for inhomogeneous, isotropic or anisotropic, layered structures, based on the dynamic ray tracing (DRT) in Cartesian coordinates. For the evaluation of the superposition formula, it is sufficient to solve the DRT in Cartesian coordinates just for the point-source initial conditions. Moreover, instead of seeking 3 × 3 paraxial matrices in Cartesian coordinates, it is sufficient to seek just 3 × 2 parts of these matrices. The presented formulae can be used for the computation of the elementary Green function corresponding to an arbitrary direct, multiply reflected/transmitted, unconverted or converted, independently propagating elementary wave of any of the three modes, P, S1 and S2. Receivers distributed along or in a vicinity of a target surface may be situated at an arbitrary part of the medium, including ray-theory shadow regions. The elementary Green function formula can be used as a basis for the computation of wavefields generated by various types of point sources (explosive, moment tensor).
Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation
NASA Astrophysics Data System (ADS)
Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.
2012-09-01
The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative within fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, it drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem u_xxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. From the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem u_t = -u_xxxx. In addition, we study the eigenvalue problem u_xxxx = νu_xx, which is related to the stability of the linear time-dependent equation u_xxt = νu_xxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.
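The positivity claim is easy to probe numerically, e.g. with the standard second-order pentadiagonal discretization of u_xxxx (a simpler operator than the paper's fourth-order compact scheme, with a simple truncation closure at the boundary):

```python
import numpy as np

n, h = 50, 1.0 / 51
A = np.zeros((n, n))
stencil = (1.0, -4.0, 6.0, -4.0, 1.0)      # second-order biharmonic stencil
for i in range(n):
    for j, c in zip(range(i - 2, i + 3), stencil):
        if 0 <= j < n:                      # truncation: ghost values set to zero
            A[i, j] = c / h**4

eigs = np.linalg.eigvalsh(A)
print(eigs.min() > 0)   # positive spectrum -> stability of u_t = -u_xxxx
```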
Uncertainty Propagation for Terrestrial Mobile Laser Scanner
NASA Astrophysics Data System (ADS)
Mezian, C.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas
2016-06-01
Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and for registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but is rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows us to compute a covariance matrix per 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner in relation to the vehicle, and the direct georeferencing system. We assume that all the uncertainties follow Gaussian laws. The variances of the laser scanner measurements (two angles and one distance) are usually evaluated by the manufacturers, as is the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle system. Knowing the variances of all sources of uncertainty, we apply uncertainty propagation to compute the variance-covariance matrix of every 3D point. Such an uncertainty analysis enables us to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The resulting uncertainty values were illustrated using error ellipsoids on different datasets.
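The propagation step described above amounts to a first-order (Jacobian) transformation of the measurement covariance. A minimal sketch for a single spherical measurement (range plus two angles) follows; the spherical-to-Cartesian parameterization and the variance values are illustrative assumptions, not the paper's full pipeline.

```python
import numpy as np

def point_covariance(rho, theta, phi, var_rho, var_theta, var_phi):
    """Covariance of p = (x, y, z) from a spherical measurement (rho, theta, phi)."""
    st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    # Jacobian of x = rho*st*cp, y = rho*st*sp, z = rho*ct
    J = np.array([
        [st * cp, rho * ct * cp, -rho * st * sp],
        [st * sp, rho * ct * sp,  rho * st * cp],
        [ct,     -rho * st,       0.0],
    ])
    S = np.diag([var_rho, var_theta, var_phi])   # independent Gaussian errors
    return J @ S @ J.T                            # first-order propagation

print(point_covariance(50.0, 0.8, 1.2, 0.02**2, 1e-4**2, 1e-4**2))
```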
AN ACCURACY ASSESSMENT OF MULTIPLE MID-ATLANTIC SUB-PIXEL IMPERVIOUS SURFACE MAPS
Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of conc...
SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM
A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...
Six New Millisecond Pulsars From Arecibo Searches Of Fermi Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cromartie, H. T.; Camilo, F.; Kerr, M.
2016-02-25
We have discovered six radio millisecond pulsars (MSPs) in a search with the Arecibo telescope of 34 unidentified gamma-ray sources from the Fermi Large Area Telescope (LAT) 4-year point source catalog. Among the 34 sources, we also detected two MSPs previously discovered elsewhere. Each source was observed at a center frequency of 327 MHz, typically at three epochs with individual integration times of 15 minutes. The new MSP spin periods range from 1.99 to 4.66 ms. Five of the six pulsars are in interacting compact binaries (period ≤ 8.1 hr), while the sixth is a more typical neutron star-white dwarf binary with an 83-day orbital period. This is a higher proportion of interacting binaries than for equivalent Fermi-LAT searches elsewhere. The reason is that Arecibo’s large gain afforded us the opportunity to limit integration times to 15 minutes, which significantly increased our sensitivity to these highly accelerated systems. Seventeen of the remaining 26 gamma-ray sources are still categorized as strong MSP candidates, and will be re-searched.
SIX NEW MILLISECOND PULSARS FROM ARECIBO SEARCHES OF FERMI GAMMA-RAY SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cromartie, H. T.; Camilo, F.; Kerr, M.
2016-03-01
We have discovered six radio millisecond pulsars (MSPs) in a search with the Arecibo telescope of 34 unidentified gamma-ray sources from the Fermi Large Area Telescope (LAT) four-year point source catalog. Among the 34 sources, we also detected two MSPs previously discovered elsewhere. Each source was observed at a center frequency of 327 MHz, typically at three epochs with individual integration times of 15 minutes. The new MSP spin periods range from 1.99 to 4.66 ms. Five of the six pulsars are in interacting compact binaries (period ≤ 8.1 hr), while the sixth is a more typical neutron star-white dwarf binary with an 83-day orbital period. This is a higher proportion of interacting binaries than for equivalent Fermi-LAT searches elsewhere. The reason is that Arecibo's large gain afforded us the opportunity to limit integration times to 15 minutes, which significantly increased our sensitivity to these highly accelerated systems. Seventeen of the remaining 26 gamma-ray sources are still categorized as strong MSP candidates, and will be re-searched.
NASA Astrophysics Data System (ADS)
Ehret, G.; Amediek, A.; Wirth, M.; Fix, A.; Kiemle, C.; Quatrevalet, M.
2016-12-01
We report on a new method and on the first demonstration of quantifying emission rates from strong greenhouse gas (GHG) point sources using airborne Integrated Path Differential Absorption (IPDA) lidar measurements. In order to build trust in the emission rates self-reported by countries, verification against independent monitoring systems is a prerequisite for checking the reported budget. A significant fraction of the total anthropogenic emission of CO2 and CH4 originates from localized strong point sources such as large energy production sites or landfills. Neither is monitored with sufficient accuracy by the current observation system. There is a debate whether airborne remote sensing could fill this gap by inferring emission rates from budgeting or from Gaussian plume inversion approaches, whereby measurements of the GHG column abundance beneath the aircraft can be used to constrain inverse models. In contrast to passive sensors, the use of an active instrument like CHARM-F for such emission verification measurements is new. CHARM-F is a new airborne IPDA lidar devised for the German research aircraft HALO for the simultaneous measurement of the column-integrated dry-air mixing ratios of CO2 and CH4, commonly denoted as XCO2 and XCH4, respectively. It has successfully been tested in a series of flights over Central Europe to assess its performance under various reflectivity conditions and in strongly varying topography such as the Alps. The analysis of a methane plume measured in the crosswind direction of a coal mine ventilation shaft revealed an instantaneous emission rate of 9.9 ± 1.7 kt CH4 yr⁻¹. We discuss the methodology of our point-source estimation approach and give an outlook on the CoMet field experiment scheduled for 2017 for the measurement of anthropogenic and natural GHG emissions by a combination of active and passive remote sensing instruments on research aircraft.
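A hedged sketch of the cross-wind mass-balance idea behind such plume inversions: the emission rate equals the wind speed times the cross-wind integral of the column enhancement. The wind speed, plume shape, and unit conversion below are synthetic illustrations, not CHARM-F data.

```python
import numpy as np

u = 5.0                                    # assumed wind speed (m/s)
y = np.linspace(-2000.0, 2000.0, 401)      # cross-wind coordinate (m)
# Synthetic Gaussian enhancement of the CH4 column (kg/m^2):
delta_col = 2e-3 * np.exp(-(y / 300.0) ** 2)

# Trapezoidal cross-wind integral of the column enhancement (kg/m):
col_int = np.sum(0.5 * (delta_col[1:] + delta_col[:-1]) * np.diff(y))
flux_kg_s = u * col_int                    # mass flux through the plume cross-section
print(flux_kg_s * 3.156e7 / 1e6, "kt CH4 / yr")
```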
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-07-01
Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets
Gabasova, Evelina; Reid, John; Wernisch, Lorenz
2017-01-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm. PMID:29036190
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.
Gabasova, Evelina; Reid, John; Wernisch, Lorenz
2017-10-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Bi, Chuan-Xing; Zhang, Chuanzeng; Gao, Hai-Feng; Chen, Hai-Bo
2018-04-01
The vibration behavior of thin elastic structures can be noticeably influenced by the surrounding water, which represents a kind of heavy fluid. Since the feedback of the acoustic pressure onto the structure cannot be neglected in this case, a strongly coupled scheme between the structural and fluid domains is usually required. In this work, a coupled finite element and boundary element (FE-BE) solver is developed for the free vibration analysis of structures submerged in an infinite fluid domain or a semi-infinite fluid domain with a free water surface. The structure is modeled by the finite element method (FEM). The compressibility of the fluid is taken into account, and hence the Helmholtz equation serves as the governing equation of the fluid domain. The boundary element method (BEM) is employed to model the fluid domain, and a boundary integral formulation with a half-space fundamental solution is used to satisfy the Dirichlet boundary condition on the free water surface exactly. The resulting nonlinear eigenvalue problem (NEVP) is converted into a small linear one by using a contour integral method. Adequate modifications are suggested to improve the efficiency of the contour integral method and avoid missing the eigenfrequencies of interest. The Burton-Miller method is used to filter out the fictitious eigenfrequencies of the boundary integral formulations. Numerical examples are given to demonstrate the accuracy and applicability of the developed eigensolver, and also show that the fluid-loading effect strongly depends on both the water depth and the mode shapes.
Detecting Spatial Patterns of Natural Hazards from the Wikipedia Knowledge Base
NASA Astrophysics Data System (ADS)
Fan, J.; Stewart, K.
2015-07-01
The Wikipedia database is a data source of immense richness and variety. Included in this database are thousands of geotagged articles, including, for example, almost real-time updates on current and historic natural hazards. This includes user-contributed information about the location of natural hazards, the extent of the disasters, and many details relating to response, impact, and recovery. In this research, a computational framework is proposed to detect spatial patterns of natural hazards from the Wikipedia database by combining topic modeling methods with spatial analysis techniques. The computation is performed on the Neon Cluster, a high-performance computing cluster at the University of Iowa. This work uses wildfires as the exemplar hazard, but this framework is easily generalizable to other types of hazards, such as hurricanes or flooding. Latent Dirichlet Allocation (LDA) modeling is first employed to train the entire English Wikipedia dump, transforming the database dump into a 500-dimensional topic model. Over 230,000 geo-tagged articles are then extracted from the Wikipedia database, spatially covering the contiguous United States. The geo-tagged articles are converted into an LDA topic space based on the topic model, with each article being represented as a weighted multidimensional topic vector. By treating each article's topic vector as an observed point in geographic space, a probability surface is calculated for each of the topics. In this work, Wikipedia articles about wildfires are extracted from the Wikipedia database, forming a wildfire corpus and creating a basis for the topic vector analysis. The spatial distribution of wildfire outbreaks in the US is estimated by calculating the weighted sum of the topic probability surfaces using a map algebra approach, and mapped using GIS. To provide an evaluation of the approach, the estimation is compared to wildfire hazard potential maps created by the USDA Forest Service.
Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points
NASA Technical Reports Server (NTRS)
Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.
2009-01-01
This poster details a technique of bright-point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal-to-noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon-counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of the signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
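The detection rule lends itself to a compact sketch: slide a box across the image, form a counting-statistics signal-to-noise ratio, and keep peaks above threshold while honoring an exclusion mask for saturated pixels. This mimics the idea only; it is not the actual LEXTRCT implementation, and the background estimate is a deliberate simplification.

```python
import numpy as np

def find_bright_points(img, box=5, snr_min=5.0, exclude=None):
    """Return (row, col, snr) for box centers exceeding the S/N threshold."""
    hits, half = [], box // 2
    bkg = np.median(img)                      # crude global background estimate
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            if exclude is not None and exclude[i, j]:
                continue                      # skip saturated/flagged pixels
            cut = img[i - half:i + half + 1, j - half:j + half + 1]
            signal = cut.sum() - cut.size * bkg
            noise = np.sqrt(max(cut.sum(), 1.0))   # Poisson counting estimate
            if signal / noise > snr_min:
                hits.append((i, j, signal / noise))
    return hits
```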
Data Foundry: Data Warehousing and Integration for Scientific Data Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musick, R.; Critchlow, T.; Ganesh, M.
2000-02-29
Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as RedBrick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solution using relational databases, such as those sold by Oracle, IBM, Informix and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limit the number of sources that can be integrated into a single data warehouse, or make an approach based on warehousing too expensive to consider.
SpecOp: Optimal Extraction Software for Integral Field Unit Spectrographs
NASA Astrophysics Data System (ADS)
McCarron, Adam; Ciardullo, Robin; Eracleous, Michael
2018-01-01
The Hobby-Eberly Telescope’s new low resolution integral field spectrographs, LRS2-B and LRS2-R, each cover a 12”x6” area on the sky with 280 fibers and generate spectra with resolutions between R=1100 and R=1900. To extract 1-D spectra from the instrument’s 3D data cubes, a program is needed that is flexible enough to work for a wide variety of targets, including continuum point sources, emission line sources, and compact sources embedded in complex backgrounds. We therefore introduce SpecOp, a user-friendly python program for optimally extracting spectra from integral-field unit spectrographs. As input, SpecOp takes a sky-subtracted data cube consisting of images at each wavelength increment set by the instrument’s spectral resolution, and an error file for each count measurement. All of these files are generated by the current LRS2 reduction pipeline. The program then collapses the cube in the image plane using the optimal extraction algorithm detailed by Keith Horne (1986). The various user-selected options include the fraction of the total signal enclosed in a contour-defined region, the wavelength range to analyze, and the precision of the spatial profile calculation. SpecOp can output the weighted counts and errors at each wavelength in various table formats using python’s astropy package. We outline the algorithm used for extraction and explain how the software can be used to easily obtain high-quality 1-D spectra. We demonstrate the utility of the program by applying it to spectra of a variety of quasars and AGNs. In some of these targets, we extract the spectrum of a nuclear point source that is superposed on a spatially extended galaxy.
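The core of the Horne (1986) weighting that SpecOp applies per wavelength slice can be written in a few lines; the array shapes and the assumption of a spatial profile normalized along each row are our simplifications, not SpecOp's actual interface.

```python
import numpy as np

def optimal_extract(data, var, profile):
    """Optimal extraction per Horne (1986).

    data, var, profile : (n_wave, n_spatial) arrays; profile rows sum to 1.
    Returns the minimum-variance flux estimate and its variance per wavelength.
    """
    w = profile / var                               # inverse-variance profile weights
    flux = (w * data).sum(axis=1) / (w * profile).sum(axis=1)
    flux_var = 1.0 / (w * profile).sum(axis=1)      # variance of the weighted estimate
    return flux, flux_var
```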
McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Stedry, M.H.
1994-07-01
McSKY evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygonal cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through the source shields and the integral line-beam method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.
Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies
Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.
2009-01-01
Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601
AdS and Lifshitz black hole solutions in conformal gravity sourced with a scalar field
NASA Astrophysics Data System (ADS)
Herrera, Felipe; Vásquez, Yerko
2018-07-01
In this paper we obtain exact asymptotically anti-de Sitter black hole solutions and asymptotically Lifshitz black hole solutions with dynamical exponents z = 0 and z = 4 of four-dimensional conformal gravity coupled with a self-interacting conformally invariant scalar field. Then, we compute their thermodynamical quantities, such as the mass, the Wald entropy and the Hawking temperature. The mass expression is obtained by using the generalized off-shell Noether potential formulation. It is found that the anti-de Sitter black holes as well as the Lifshitz black holes with z = 0 have zero mass and zero entropy, although they have non-zero temperature. A similar behavior has been observed in previous works, where the integration constant is not associated with a conserved charge, and it can be interpreted as a kind of gravitational hair. On the other hand, the Lifshitz black holes with dynamical exponent z = 4 have non-zero conserved charges, and the first law of black hole thermodynamics holds. Also, we analyze the horizon thermodynamics for the Lifshitz black holes with z = 4, and we show that the first law of black hole thermodynamics arises from the field equations evaluated on the horizon. Furthermore, we study the propagation of a conformally coupled scalar field on these backgrounds and we find the quasinormal modes analytically in several cases. We find that for anti-de Sitter black holes and Lifshitz black holes with z = 4, there is a continuous spectrum of frequencies for Dirichlet boundary condition; however, we show that discrete sets of well defined quasinormal frequencies can be obtained by considering Neumann boundary conditions.
NASA Astrophysics Data System (ADS)
Its, Alexander; Its, Elizabeth
2018-04-01
We revisit the Helmholtz equation in a quarter-plane in the framework of the Riemann-Hilbert approach to linear boundary value problems suggested in late 1990s by A. Fokas. We show the role of the Sommerfeld radiation condition in Fokas' scheme.
A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors
ERIC Educational Resources Information Center
Miyazaki, Kei; Hoshino, Takahiro
2009-01-01
In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…
Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers
ERIC Educational Resources Information Center
Anaya, Leticia H.
2011-01-01
In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…
Vectorized multigrid Poisson solver for the CDC CYBER 205
NASA Technical Reports Server (NTRS)
Barkai, D.; Brandt, M. A.
1984-01-01
The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites
ERIC Educational Resources Information Center
Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep
2016-01-01
Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…
Nonparametric Bayesian predictive distributions for future order statistics
Richard A. Johnson; James W. Evans; David W. Green
1999-01-01
We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.
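For a feel of Dirichlet process mixtures (though with plain Gaussian components rather than the decomposable graphical models of the paper, and variational inference rather than stochastic search), scikit-learn's truncated implementation suffices:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Synthetic heterogeneous population: two homogeneous sub-groups in 3 dimensions.
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (80, 3))])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
# Unneeded components shrink toward zero weight, leaving ~2 effective clusters.
print(np.round(dpgmm.weights_, 3))
```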
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, the whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSIs pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for breast histopathological images. Specifically, the method introduces a local statistical feature capturing the morphology and distribution of nuclei, and employs the Gabor feature to describe texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
Cabarcos, Alba; Sanchez, Tamara; Seoane, Jose A; Aguiar-Pulido, Vanessa; Freire, Ana; Dorado, Julian; Pazos, Alejandro
2010-01-01
Medical practice today needs, at the patient point of care (POC), personalised knowledge that can be adjusted at any moment to the clinical needs of each patient, in order to support decision-making processes on the basis of personalised information. Achieving this requires adapting hospital information systems. There is thus a need for computational developments capable of retrieving and integrating the large amount of biomedical information available today while managing the complexity and diversity of the underlying systems. This paper describes a prototype that retrieves biomedical information from different sources, manages it to improve the results obtained and to reduce response time, and finally integrates it so that it is useful to the clinician, providing all the available information about the patient at the POC. It also provides tools that allow medical staff to communicate and share knowledge.
Scaled SFS method for Lambertian surface 3D measurement under point source lighting.
Ma, Long; Lyu, Yi; Pei, Xin; Hu, Yan Min; Sun, Feng Ming
2018-05-28
A Lambertian surface is an important assumption in shape from shading (SFS) and is widely used in many measurement settings. In this paper, a novel scaled SFS method is developed to measure the shape of a Lambertian surface with absolute dimensions. A more accurate light source model is derived for illumination by a simple point light source, and the relationship between the surface depth map and the recorded image grayscale is established by introducing the camera matrix into the model. Together with the constraints of brightness, smoothness and integrability, the dimensioned surface shape can be recovered from a single image using the scaled SFS method. Simulations show close agreement between the simulated structures and the reconstructions, with a root mean square error (RMSE) below 0.6 mm. A further experiment measuring the internal surface of a PVC tube yields an overall measurement error below 2%.
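For orientation, the image irradiance model that point-source SFS methods of this kind invert has the hedged general form (generic symbols, not the paper's exact notation)

\[
I(x, y) \;\propto\; \frac{\rho \,\big( \mathbf{n}(x,y) \cdot \mathbf{s}(x,y) \big)}{r(x,y)^{2}},
\]

where $\rho$ is the Lambertian albedo, $\mathbf{n}$ the unit surface normal, $\mathbf{s}$ the unit direction toward the point source, and $r$ the source-to-surface distance; the $1/r^2$ falloff is what distinguishes the point-source model from the classical distant-light SFS formulation and is what couples image brightness to absolute scale.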
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Clement, T. P.
2008-02-01
Multi-species reactive transport equations coupled through sorption and sequential first-order reactions are commonly used to model sites contaminated with radioactive wastes, chlorinated solvents and nitrogenous species. Although researchers have been attempting to solve various forms of these reactive transport equations for over 50 years, a general closed-form analytical solution to this problem is not available in the published literature. In Part I of this two-part article, we derive a closed-form analytical solution to this problem for spatially-varying initial conditions. The proposed solution procedure employs a combination of Laplace and linear transform methods to uncouple and solve the system of partial differential equations. Two distinct solutions are derived for Dirichlet and Cauchy boundary conditions each with Bateman-type source terms. We organize and present the final solutions in a common format that represents the solutions to both boundary conditions. In addition, we provide the mathematical concepts for deriving the solution within a generic framework that can be used for solving similar transport problems.
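A representative form of such a system, written here with assumed notation rather than the paper's own, is the one-dimensional advection-dispersion chain

\[
R_i \frac{\partial c_i}{\partial t} = D \frac{\partial^2 c_i}{\partial x^2} - v \frac{\partial c_i}{\partial x} - k_i c_i + y_i k_{i-1} c_{i-1}, \qquad i = 1, \dots, n, \quad k_0 \equiv 0,
\]

where $R_i$ is the retardation factor from sorption, $k_i$ the first-order decay rate, and $y_i$ the yield from the parent species; a Bateman-type Dirichlet boundary condition prescribes, for example, $c_i(0,t) = B_i e^{-\lambda_i t}$. A Laplace transform in $t$ combined with a linear transform that diagonalizes the reaction chain reduces the coupled system to independently solvable scalar problems.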
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal
2010-10-07
Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
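A minimal sketch of the approach with entirely hypothetical branching channels and concentration parameters (the paper's nested Dirichlet construction for real ion data is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: three measured heavy-fragment channels with known
# mean branching ratios; Dirichlet draws propagate their joint uncertainty
# while keeping every sample non-negative and summing to one.
measured_means = np.array([0.55, 0.30, 0.15])
concentration = 50.0  # larger -> samples concentrate around the means

samples = rng.dirichlet(concentration * measured_means, size=10_000)

# Unmeasured repartition of H atoms within channel 2: an uninformative
# Dirichlet(1, 1) split between hypothetical H-loss and H2-loss sub-channels.
split = rng.dirichlet([1.0, 1.0], size=10_000)
h_loss = samples[:, 1] * split[:, 0]
h2_loss = samples[:, 1] * split[:, 1]

print("mean branching ratios:", samples.mean(axis=0).round(3))
print("H-loss sub-channel mean:", round(h_loss.mean(), 3))
```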
NASA Astrophysics Data System (ADS)
Lorek, Dariusz
2016-12-01
The article presents a framework for integrating historical sources with elements of the geographical space recorded in unique cartographic materials. The aim of the project was to elaborate a method of integrating spatial data sources that would facilitate studying and presenting the phenomena of economic history. The proposed methodology for multimedia integration of old materials made it possible to demonstrate the successive stages of the transformation which was characteristic of the 19th-century space. The point of reference for this process of integrating information was topographic maps from the first half of the 19th century, while the research area comprised the castle complex in Kórnik together with the small town - the pre-industrial landscape in Wielkopolska (Greater Poland). On the basis of map and plan transformation, graphic processing of the scans of old drawings, texture mapping of the facades of historic buildings, and a 360° panorama, the source material collected was integrated. The final product was a few-minute-long video, composed of nine sequences. It captures the changing form of the castle building together with its facades, the castle park, and its further topographic and urban surroundings, since the beginning of the 19th century till the present day. For a topographic map sheet dating back to the first half of the 19th century, in which the hachuring method had been used to present land relief, a terrain model was generated. The transition from parallel to bird's-eye-view perspective served to demonstrate the distinctive character of the pre-industrial landscape.
NASA Astrophysics Data System (ADS)
Rumpfhuber, E.; Keller, G. R.; Velasco, A. A.
2005-12-01
Many large-scale experiments conduct both controlled-source and passive deployments to investigate the lithospheric structure of a targeted region. Many of these studies utilize each data set independently, resulting in different images of the Earth depending on the data set investigated. In general, formal integration of these data sets, such as joint inversions, with other data has not been performed. The CD-ROM experiment, which included both 2-D controlled-source and passive recording along a profile extending from southern Wyoming to northern New Mexico, serves as an excellent data set for developing a formal integration strategy between controlled-source and passive experiments. These data are ideal for developing this strategy because: 1) the analysis of refraction/wide-angle reflection data yields the Vp structure, and sometimes the Vs structure, of the crust and uppermost mantle; 2) analysis of the PmP phase (Moho reflection) yields estimates of the average Vp of the crust; and 3) receiver functions contain full-crustal reverberations and yield the Vp/Vs ratio, but do not constrain the absolute P and S velocities. Thus, a simple form of integration involves using the Vp/Vs ratio from receiver functions and the average Vp from refraction measurements to solve for the average Vs of the crust. When refraction/wide-angle reflection data and several nearby receiver functions are available, an integrated 2-D model can be derived. In receiver functions, the PS conversion gives the S-wave travel time (ts) through the crust along the raypath traveled from the Moho to the surface. Since the receiver function crustal reverberation gives the Vp/Vs ratio, it is also possible to use the arrival time of the converted phase, PS, to solve for the travel time of the direct teleseismic P-wave through the crust along the ray path. Raytracing can yield the point where the teleseismic wave intersects the Moho. In this approach, the conversion point is essentially a pseudo-shotpoint, so the converted arrival at the surface can be jointly modeled with refraction data using a 3-D inversion code. Employing the combined CD-ROM data sets, we investigate the joint inversion of controlled-source data and receiver functions.
NASA Astrophysics Data System (ADS)
Darwiche, Mahmoud Khalil M.
The research presented herein is a contribution to the understanding of the numerical modeling of fully nonlinear, transient water waves. The first part of the work involves the development of a time-domain model for the numerical generation of fully nonlinear, transient waves by a piston-type wavemaker in a three-dimensional, finite, rectangular tank. A time-domain boundary-integral model is developed for simulating the evolving fluid field. A robust nonsingular, adaptive integration technique for the assembly of the boundary-integral coefficient matrix is developed and tested. A parametric finite-difference technique for calculating the fluid-particle kinematics is also developed and tested. A novel compatibility and continuity condition is implemented to minimize the effect of the singularities that are inherent at the intersections of the various Dirichlet and/or Neumann subsurfaces. Results are presented which demonstrate the accuracy and convergence of the numerical model. The second portion of the work is a study of the interaction of the numerically-generated, fully nonlinear, transient waves with a bottom-mounted, surface-piercing, vertical, circular cylinder. The numerical model developed in the first part of this dissertation is extended to include the presence of the cylinder at the centerline of the basin. The diffraction of the numerically generated waves by the cylinder is simulated, and the particle kinematics of the diffracted flow field are calculated and reported. Again, numerical results showing the accuracy and convergence of the extended model are presented.
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous media. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM which performs required numerical integrations and superposition operations is described.
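A sketch of the superposition idea follows, assuming the classical constant-output continuous point-source solution in an infinite medium; the property values, geometry, and time-independent source strength are illustrative assumptions (FLLSSM itself accounts for the decaying heat output of the waste):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

# Hypothetical property and source values, for illustration only.
k_rock, alpha = 2.5, 1.1e-6      # conductivity W/(m K), diffusivity m^2/s
L, qprime = 3.0, 100.0           # canister length (m), line strength (W/m)

def line_source_rise(x, y, z, t, x0, y0):
    """Temperature rise from one vertical finite-length line source,
    obtained by integrating the continuous point-source solution
    dT = q' dz' / (4 pi k R) * erfc(R / (2 sqrt(alpha t))) over its length."""
    def integrand(zp):
        R = np.sqrt((x - x0)**2 + (y - y0)**2 + (z - zp)**2)
        return qprime / (4.0 * np.pi * k_rock * R) * erfc(R / (2.0 * np.sqrt(alpha * t)))
    value, _ = quad(integrand, 0.0, L)
    return value

# Superposition: sum the rises from a 3 x 3 canister storage pattern.
t = 10.0 * 365.25 * 24.0 * 3600.0   # 10 years in seconds
centers = [(ix * 10.0, iy * 10.0) for ix in range(3) for iy in range(3)]
dT = sum(line_source_rise(5.0, 5.0, 1.5, t, cx, cy) for cx, cy in centers)
print(f"temperature rise at point of interest ~ {dT:.2f} K")
```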
Neutron Imaging Control Report: FY 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, D. J.
2016-11-30
During the 2016 fiscal year, work began on the supervision and control systems for the neutron source currently under construction in the B194 accelerator caves. This source relies on a deuteron beam colliding with a high-speed stream of deuterium gas to create neutrons, which poses significant technical challenges. To help overcome those challenges, an integrated, operator-focused control architecture is required to collect and assimilate disparate data from a variety of measurement points, as well as provide the means to remotely control the system hardware.
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifiers (UMIs). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. In particular, model-based approaches for clustering large-scale single cell transcriptomic data remain under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that, overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available at www.pitt.edu/~wec47/singlecell.html. Contact: wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online.
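The central quantity in such a model is the Dirichlet-multinomial marginal that scores a cell's UMI count vector against a cluster. A minimal sketch follows; the toy dimensions and parameter values are assumptions, and DIMM-SC itself is a full R implementation with EM/Gibbs machinery:

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(x, alpha):
    """log P(x | alpha) for a count vector x when the multinomial
    probabilities are marginalized over a Dirichlet(alpha) prior."""
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(x + 1).sum()
            + gammaln(a0) - gammaln(n + a0)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())

# Hypothetical toy setting: two clusters over five genes.
rng = np.random.default_rng(0)
alphas = [np.array([50., 5., 5., 1., 1.]), np.array([1., 1., 5., 50., 5.])]
cell = rng.multinomial(300, alphas[1] / alphas[1].sum())  # a simulated cell

scores = [dirichlet_multinomial_logpmf(cell, a) for a in alphas]
print("assign cell to cluster:", int(np.argmax(scores)))  # -> 1
```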
The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-07-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]
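The 1/L behavior of the i.i.d. prior, and the fix provided by the Dirichlet construction, can be checked numerically. In the sketch below (all shapes and scales are assumptions), the i.i.d. prior draws gamma rates per locus, while the Dirichlet construction first draws the average rate and then partitions it across loci, so the prior on the mean rate does not shrink as L grows:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 0.5, 20_000  # prior mean rate, number of prior draws

for loci in (2, 10, 100, 500):
    # i.i.d. prior: variance of the average rate decays like 1/L.
    iid = rng.gamma(shape=2.0, scale=mu / 2.0, size=(n, loci))
    # Dirichlet prior: draw the mean rate once, then split it as
    # r = L * mu_bar * s with s ~ Dirichlet(alpha, ..., alpha).
    mu_bar = rng.gamma(shape=2.0, scale=mu / 2.0, size=n)
    s = rng.dirichlet(np.full(loci, 2.0), size=n)
    dirich = loci * mu_bar[:, None] * s

    print(f"L={loci:4d}  var(iid mean)={iid.mean(axis=1).var():.5f}  "
          f"var(Dirichlet mean)={dirich.mean(axis=1).var():.5f}")
```

The i.i.d. column shrinks toward zero while the Dirichlet column stays constant, matching the argument in the abstract.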
Code of Federal Regulations, 2011 CFR
2011-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp... mechanical pulp facilities where pulp and paper at groundwood mills are produced through the application of the thermo-mechanical process; mechanical pulp facilities where the integrated production of pulp and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... CATEGORY Pan, Dry Digestion, and Mechanical Reclaimed Rubber Subcategory § 428.92 Effluent limitations... pan, dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion reclaimed process, which may be discharged by a point source subject to the provisions of this...
Code of Federal Regulations, 2012 CFR
2012-07-01
... CATEGORY Pan, Dry Digestion, and Mechanical Reclaimed Rubber Subcategory § 428.92 Effluent limitations... pan, dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion reclaimed process, which may be discharged by a point source subject to the provisions of this...
Code of Federal Regulations, 2014 CFR
2014-07-01
... CATEGORY Pan, Dry Digestion, and Mechanical Reclaimed Rubber Subcategory § 428.92 Effluent limitations... pan, dry digestion, and mechanical reclaimed rubber processes which are integrated with a wet digestion reclaimed process, which may be discharged by a point source subject to the provisions of this...
Determining volume sensitive waters in Beaufort County, SC tidal creeks
Andrew Tweel; Denise Sanger; Anne Blair; John Leffler
2016-01-01
Non-point source pollution from stormwater runoff associated with large-scale land use changes threatens the integrity of ecologically and economically valuable estuarine ecosystems. Beaufort County, SC implemented volume-based stormwater regulations on the rationale that if volume discharge is controlled, contaminant loading will also be controlled.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp... mechanical pulp facilities where pulp and paper at groundwood mills are produced through the application of the thermo-mechanical process; mechanical pulp facilities where the integrated production of pulp and...
Experience on Mashup Development with End User Programming Environment
ERIC Educational Resources Information Center
Yue, Kwok-Bun
2010-01-01
Mashups, Web applications integrating data and functionality from other Web sources to provide a new service, have quickly become ubiquitous. Because of their role as a focal point in three important trends (Web 2.0, situational software applications, and end user development), mashups are a crucial emerging technology for information systems…
Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve
2018-04-30
Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a 'volcano effect' (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rate (and thus effective source size). More specifically, while the volcano effect is known to arise around point sources from a bacterium's temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium's chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.
A Dirichlet process model for classifying and forecasting epidemic curves.
Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V
2014-01-09
A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.
NASA Astrophysics Data System (ADS)
Eppeldauer, G. P.; Podobedov, V. B.; Cooksey, C. C.
2017-05-01
Calibration of the radiation emitted by UV sources peaking at 365 nm is necessary to verify the ASTM-required minimum irradiance of 1 mW/cm2 in certain tests of military material (ships, airplanes, etc.). These UV "black lights" are applied for crack recognition using fluorescent liquid penetrant inspection. At present, these nondestructive tests are performed using Hg lamps. The lack of a proper standard and the different spectral responsivities of the available UV meters cause significant measurement errors even when the same UV-365 source is measured. A pyroelectric radiometer standard with spectrally flat (constant) response in the UV-VIS range has been developed to solve the problem. The response curve of this standard, determined from spectral reflectance measurements, is converted into spectral irradiance responsivity with <0.5% (k=2) uncertainty by using an absolute tie point from a Si-trap detector traceable to the primary standard cryogenic radiometer. The flat pyroelectric radiometer standard can be used to perform uniform integrated irradiance measurements of all kinds of UV sources (with different peaks and distributions) without using any source standard. With this broadband calibration method, yearly spectral calibrations of the reference UV (LED) sources and irradiance meters are not needed. Field UV sources and meters can be calibrated against the pyroelectric radiometer standard for broadband (integrated) irradiance and integrated responsivity. Using the broadband measurement procedure, UV measurements give uniform results with significantly decreased uncertainties.
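The benefit of a spectrally flat standard follows from one line of algebra (notation assumed here): if the radiometer responsivity is constant, $R(\lambda) = R_0$, then its signal is

\[
S = \int R(\lambda)\, E_\lambda(\lambda)\, d\lambda = R_0 \int E_\lambda(\lambda)\, d\lambda,
\]

so the integrated irradiance is $E_{\mathrm{int}} = S / R_0$ regardless of the source's peak or spectral distribution, which is why a single calibration factor suffices for all UV sources.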
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
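A minimal sketch of the kernel-combination step with a fixed convex weight; the random stand-in features, the weight value, and the RBF kernels are assumptions, and a full multiple-kernel-learning solver would learn the weights jointly with the classifier:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Hypothetical stand-ins for the two modalities: CNN descriptors and
# 3D point-cloud descriptors extracted for the same image regions.
rng = np.random.default_rng(0)
X_cnn, X_3d = rng.normal(size=(200, 128)), rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)  # damaged / undamaged labels

K_cnn = rbf_kernel(X_cnn, gamma=1.0 / 128)
K_3d = rbf_kernel(X_3d, gamma=1.0 / 16)

beta = 0.7                                # fixed modality weight for this sketch
K = beta * K_cnn + (1.0 - beta) * K_3d    # convex combination is still a valid kernel

clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```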
An annular superposition integral for axisymmetric radiators.
Kelly, James F; McGough, Robert J
2007-02-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.
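For context, the Rayleigh-Sommerfeld integral that the annular superposition accelerates has the standard form below (one common convention, with the $e^{j\omega t}$ time factor suppressed; symbols are assumed, not the paper's notation):

\[
p(\mathbf{r}) = \frac{j \rho_0 c k}{2\pi} \int_S u(\mathbf{r}')\, \frac{e^{-jkR}}{R}\, dS', \qquad R = |\mathbf{r} - \mathbf{r}'|,
\]

where $u$ is the (apodized) normal surface velocity; direct evaluation converges slowly in the nearfield, which the exact annular double integral avoids.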
Local recovery of the compressional and shear speeds from the hyperbolic DN map
NASA Astrophysics Data System (ADS)
Stefanov, Plamen; Uhlmann, Gunther; Vasy, Andras
2018-01-01
We study the isotropic elastic wave equation in a bounded domain with boundary. We show that local knowledge of the Dirichlet-to-Neumann map determines uniquely the speed of the p-wave locally if there is a strictly convex foliation with respect to it, and similarly for the s-wave speed.
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
ERIC Educational Resources Information Center
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
Existence and uniqueness of steady state solutions of a nonlocal diffusive logistic equation
NASA Astrophysics Data System (ADS)
Sun, Linan; Shi, Junping; Wang, Yuwen
2013-08-01
In this paper, we consider a dynamical model of population biology which is of the classical Fisher type, but the competition interaction between individuals is nonlocal. The existence, uniqueness, and stability of the steady state solution of the nonlocal problem on a bounded interval with homogeneous Dirichlet boundary conditions are studied.
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
ERIC Educational Resources Information Center
Li, Dingcheng
2011-01-01
Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…
Quantum field between moving mirrors: A three dimensional example
NASA Technical Reports Server (NTRS)
Hacyan, S.; Jauregui, Roco; Villarreal, Carlos
1995-01-01
The scalar quantum field between uniformly moving plates in three-dimensional space is studied. Field equations for Dirichlet boundary conditions are solved exactly. Comparison of the resulting wavefunctions with their instantaneous static counterparts is performed via Bogolubov coefficients. Unlike the one-dimensional problem, 'particle' creation as well as squeezing may occur. The time-dependent Casimir energy is also evaluated.
Estimation of dynamic load of mercury in a river with BASINS-HSPF model
Ying Ouyang; John Higman; Jeff Hatten
2012-01-01
Purpose: Mercury (Hg) is a naturally occurring element and a pervasive toxic pollutant. This study investigated the dynamic loads of Hg from the Cedar-Ortega Rivers watershed into the Lower St. Johns River (LSJR), Florida, USA, using the Better Assessment Science Integrating Point and Nonpoint Sources (BASINS)-Hydrological Simulation Program-FORTRAN (HSPF) model.
Collaborative Action Research on Technology Integration for Science Learning
ERIC Educational Resources Information Center
Wang, Chien-hsing; Ke, Yi-Ting; Wu, Jin-Tong; Hsu, Wen-Hua
2012-01-01
This paper briefly reports the outcomes of an action research inquiry on the use of blogs, MS PowerPoint [PPT], and the Internet as learning tools with a science class of sixth graders for project-based learning. Multiple sources of data were essential to triangulate the key findings articulated in this paper. Corresponding to previous studies,…
Nitrogen (N) removal in watersheds is an important regulating ecosystem service that can help reduce N pollution in the nation’s waterways. However, processes that remove N such as denitrification are generally determined at point locations. Measures that integrate N proc...
NASA Astrophysics Data System (ADS)
Estrany, Joan; Martinez-Carreras, Nuria
2013-04-01
Tracers have been acknowledged as a useful tool to identify sediment sources, based upon a variety of techniques and chemical and physical sediment properties. Sediment fingerprinting supports the notion that changes in sedimentation rates are not just related to increased or reduced erosion and transport in the same areas, but also to the establishment of different pathways increasing sediment connectivity. The Na Borges is a Mediterranean lowland agricultural river basin (319 km2) where traditional soil and water conservation practices have been applied over millennia to provide effective protection of cultivated land. During the twentieth century, industrialisation and pressure from tourism activities have increased urbanised surfaces, which have impacts on the processes that control streamflow. Within this context, source material sampling in Na Borges focused on obtaining representative samples from potential sediment sources (comprising topsoil, i.e., 0-2 cm) susceptible to mobilisation by water and subsequent routing to the river channel network, while samples representing channel bank sources were collected from actively eroding channel margins and ditches. Samples of road dust and of solids from sewage treatment plants were also collected. During two hydrological years (2004-2006), representative suspended sediment samples for use in source fingerprinting studies were collected at four flow gauging stations and at eight secondary sampling points using time-integrating samplers. Likewise, representative bed-channel sediment samples were obtained using the resuspension approach at eight sampling points in the main stem of the Na Borges River. These deposits represent the fine sediment temporarily stored in the bed channel and were also used for tracing source contributions. A total of 102 individual time-integrated sediment samples, 40 bulk samples and 48 bed-sediment samples were collected. Upon return to the laboratory, source material samples were oven-dried at 40 °C, disaggregated using a pestle and mortar, and dry sieved to
Tribushinina, Elena
2013-06-01
The interpretation of size terms involves constructing contextually-relevant reference points by combining visual cues with knowledge of typical object sizes. This study aims to establish at what age children learn to integrate these two sources of information in the interpretation process and tests comprehension of the Dutch adjectives groot 'big' and klein 'small' by 2- to 7-year-old children. The results demonstrate that there is a gradual increase in the ability to inhibit visual cues and to use world knowledge for interpreting size terms. 2- and 3-year-old children only used the extremes of the perceptual range as reference points. From age four onwards, children, like adults, used a cut-off point in the mid-zone of a series. From age five on, children were able to integrate world knowledge and perceptual context. Although 7-year-olds could make subtle distinctions between the sizes of various object classes, their performance on incongruent items was not yet adult-like.
Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution
2013-01-01
Background: Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis are compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results: A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS is a feature selection method, while recursive partitioning is a learning tree algorithm that has been used for feature selection in the past. Conclusions: The proposed feature selection method performs well for both glycan chromatography datasets. It is computationally slower, but results in a lower misclassification rate and a higher sensitivity rate than both correlation-based feature selection and the classification tree method. PMID:23651459
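A sketch of the greedy forward-search skeleton; the cross-validated logistic regression used as the score here is a stand-in for the paper's criterion, which models the compositional "grouping variables" with beta/generalized-Dirichlet likelihoods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_forward_select(X, y, n_features):
    """At each step, add the single feature whose inclusion most
    improves the discrimination score of the selected set."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        best_f, best_s = None, -np.inf
        for f in remaining:
            cols = selected + [f]
            s = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, cols], y, cv=3).mean()
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(8), size=120)       # compositional rows sum to 1
y = (X[:, 2] + X[:, 5] > 0.35).astype(int)    # hypothetical group structure
print("selected features:", greedy_forward_select(X, y, 3))
```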
Homogenization of Winkler-Steklov spectral conditions in three-dimensional linear elasticity
NASA Astrophysics Data System (ADS)
Gómez, D.; Nazarov, S. A.; Pérez, M. E.
2018-04-01
We consider a homogenization Winkler-Steklov spectral problem that consists of the elasticity equations for a three-dimensional homogeneous anisotropic elastic body which has a plane part of the surface subject to alternating boundary conditions on small regions periodically placed along the plane. These conditions are of the Dirichlet type and of the Winkler-Steklov type, the latter containing the spectral parameter. The rest of the boundary of the body is fixed, and the period and size of the regions, where the spectral parameter arises, are of order $\varepsilon$. For fixed $\varepsilon$, the problem has a discrete spectrum, and we address the asymptotic behavior of the eigenvalues $\{\beta_k^{\varepsilon}\}_{k=1}^{\infty}$ as $\varepsilon \to 0$. We show that $\beta_k^{\varepsilon} = O(\varepsilon^{-1})$ for each fixed $k$, and we observe a common limit point for all the rescaled eigenvalues $\varepsilon \beta_k^{\varepsilon}$, while we make it evident that, although the periodicity of the structure only affects the boundary conditions, a band-gap structure of the spectrum is inherited asymptotically. Also, we provide the asymptotic behavior for certain "groups" of eigenmodes.
A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis
Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.
2015-01-01
Summary: The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
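For comparison, Harrell's c-index restricted to informative pairs can be computed directly as below; the toy data and the marker-survival link are assumptions:

```python
import numpy as np

def harrell_c_index(time, event, marker):
    """Among usable pairs (the shorter time is an observed event), count
    the fraction where the larger marker value belongs to the shorter
    survival time; marker ties count one half."""
    conc = usable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:  # informative pair
                usable += 1
                if marker[i] > marker[j]:
                    conc += 1.0
                elif marker[i] == marker[j]:
                    conc += 0.5
    return conc / usable

rng = np.random.default_rng(0)
m = rng.normal(size=200)
T = rng.exponential(np.exp(-m))        # higher marker -> shorter survival
C = rng.exponential(2.0, size=200)     # independent censoring times
t, e = np.minimum(T, C), (T <= C).astype(int)
print("Harrell's c-index:", round(harrell_c_index(t, e, m), 3))
```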
NASA Technical Reports Server (NTRS)
Parse, Joseph B.; Wert, J. A.
1991-01-01
Inhomogeneities in the spatial distribution of second phase particles in engineering materials are known to affect certain mechanical properties. Progress in this area has been hampered by the lack of a convenient method for quantitative description of the spatial distribution of the second phase. This study intends to develop a broadly applicable method for the quantitative analysis and description of the spatial distribution of second phase particles. The method was designed to operate on a desktop computer. The Dirichlet tessellation technique (geometrical method for dividing an area containing an array of points into a set of polygons uniquely associated with the individual particles) was selected as the basis of an analysis technique implemented on a PC. This technique is being applied to the production of Al sheet by PM processing methods; vacuum hot pressing, forging, and rolling. The effect of varying hot working parameters on the spatial distribution of aluminum oxide particles in consolidated sheet is being studied. Changes in distributions of properties such as through-thickness near-neighbor distance correlate with hot-working reduction.
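A sketch of the tessellation step using present-day tooling; the random centroids stand in for digitized particle positions (the original analysis ran on a desktop computer of the era):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(250, 2))  # hypothetical particle centroids

vor = Voronoi(pts)  # the Dirichlet tessellation: one polygon per particle

# Near-neighbor distances follow from pairs of points sharing a ridge
# (a polygon edge), i.e., the tessellation's neighbor relation.
pairs = vor.ridge_points
d = np.linalg.norm(pts[pairs[:, 0]] - pts[pairs[:, 1]], axis=1)
print("mean near-neighbor distance:", d.mean().round(2))
print("coefficient of variation:", (d.std() / d.mean()).round(3))
```

The spread of the near-neighbor distance distribution (e.g., its coefficient of variation) is one simple scalar measure of the spatial inhomogeneity of the second phase.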
NASA Astrophysics Data System (ADS)
Bhadauria, Ravi; Aluru, N. R.
2017-05-01
We propose an isothermal, one-dimensional, electroosmotic flow model for slit-shaped nanochannels. Nanoscale confinement effects are embedded into the transport model by incorporating the spatially varying solvent and ion concentration profiles that correspond to the electrochemical potential of mean force. The local viscosity is dependent on the solvent local density and is modeled using the local average density method. Excess contributions to the local viscosity are included using the Onsager-Fuoss expression that is dependent on the local ionic strength. A Dirichlet-type boundary condition is provided in the form of the slip velocity that is dependent on the macroscopic interfacial friction. This solvent-surface specific interfacial friction is estimated using a dynamical generalized Langevin equation based framework. The electroosmotic flow of Na+ and Cl- as single counterions and NaCl salt solvated in Extended Simple Point Charge (SPC/E) water confined between graphene and silicon slit-shaped nanochannels are considered as examples. The proposed model yields a good quantitative agreement with the solvent velocity profiles obtained from the non-equilibrium molecular dynamics simulations.
VizieR Online Data Catalog: ALMA 106GHz continuum observations in Chamaeleon I (Dunham+, 2016)
NASA Astrophysics Data System (ADS)
Dunham, M. M.; Offner, S. S. R.; Pineda, J. E.; Bourke, T. L.; Tobin, J. J.; Arce, H. G.; Chen, X.; Di Francesco, J.; Johnstone, D.; Lee, K. I.; Myers, P. C.; Price, D.; Sadavoy, S. I.; Schnee, S.
2018-02-01
We obtained ALMA observations of every source in Chamaeleon I detected in the single-dish 870 μm LABOCA survey by Belloche et al. (2011, J/A+A/527/A145), except for those listed as likely artifacts (1 source), residuals from bright sources (7 sources), or detections tentatively associated with YSOs (3 sources). We observed 73 sources from the initial list of 84 objects identified by Belloche et al. (2011, J/A+A/527/A145). We observed the 73 pointings using the ALMA Band 3 receivers during its Cycle 1 campaign between 2013 November 29 and 2014 March 08. Between 25 and 27 antennas were available for our observations, with the array configured in a relatively compact configuration to provide a resolution of approximately 2" FWHM (300 AU at the distance to Chamaeleon I). Each target was observed in a single pointing with approximately 1 minute of on-source integration time. Three out of the four available spectral windows were configured to measure the continuum at 101, 103, and 114 GHz, each with a bandwidth of 2 GHz, for a total continuum bandwidth of 6 GHz (2.8 mm) at a central frequency of 106 GHz. (2 data files).
International Space Station Electric Power System Performance Code-SPACE
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey; McKissock, David; Fincannon, James; Green, Robert; Kerslake, Thomas; Delleur, Ann; Follo, Jeffrey; Trudell, Jeffrey; Hoffman, David J.; Jannette, Anthony;
2005-01-01
The System Power Analysis for Capability Evaluation (SPACE) software analyzes and predicts the minute-by-minute state of the International Space Station (ISS) electrical power system (EPS) for upcoming missions, as well as EPS power generation capacity as a function of ISS configuration and orbital conditions. To complete the Certification of Flight Readiness (CoFR) process, in which each mission is certified for flight, every ISS system must thoroughly assess each proposed mission to verify that the system will support the planned mission operations; SPACE is the sole tool used to conduct these assessments for power system capability. SPACE is an integrated power system model that incorporates a variety of modules tied together with integration routines and graphical output. The modules include orbit mechanics, solar array pointing/shadowing/thermal/electrical performance, battery performance, and power management and distribution performance. These modules are tightly integrated within a flexible architecture featuring data-file-driven configurations, source- or load-driven operation, and event scripting. SPACE also predicts the amount of power available for a given system configuration, spacecraft orientation, solar-array-pointing conditions, orbit, and the like. In the source-driven mode, the model must assure that energy balance is achieved, meaning that energy removed from the batteries must be restored (or balanced) each and every orbit. This entails an optimization scheme to ensure that energy balance is maintained without violating any other constraints.
NASA Astrophysics Data System (ADS)
Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve
2018-07-01
Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a ‘volcano effect’ (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rate (and thus effective source size). More specifically, while the volcano effect is known to arise around point sources from a bacterium’s temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium’s chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.
Exact solutions for sound radiation from a moving monopole above an impedance plane.
Ochmann, Martin
2013-04-01
The acoustic field of a monopole source moving with constant velocity at constant height above an infinite locally reacting plane can be expressed in analytical form by combining the Lorentz transformation with the method of superimposing complex or real point sources. For a plane with masslike response, the solution in Lorentz space consists of a superposition of monopoles only and therefore, does not differ in principle from the solution for the corresponding stationary boundary value problem. However, by considering a frequency independent surface impedance, e.g., with pure absorbing behavior, the half-space Green's function is now comprised of not only a line of monopoles but also of dipoles. For certain field points at a special line g, this solution can be written explicitly by using an exponential integral. For arbitrary field points, the method of stationary phase leads to an asymptotic solution for the reflection coefficient which agrees with prior results from the literature.
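The Lorentz transformation referred to is the acoustic analogue, with the sound speed $c$ in place of the light speed. For a source moving at speed $v$ (Mach number $M = v/c$) along $x$, it reads

\[
x' = \gamma (x - v t), \qquad y' = y, \qquad z' = z, \qquad t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - M^2}},
\]

which leaves the wave operator $c^{-2}\partial_{tt} - \nabla^2$ invariant and renders the uniformly moving monopole stationary in the primed frame, where the superposition of complex or real point sources can then be applied.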
Rotationally symmetric viscous gas flows
NASA Astrophysics Data System (ADS)
Weigant, W.; Plotnikov, P. I.
2017-03-01
The Dirichlet boundary value problem for the Navier-Stokes equations of a barotropic viscous compressible fluid is considered. The flow region and the data of the problem are assumed to be invariant under rotations about a fixed axis. The existence of rotationally symmetric weak solutions for all adiabatic exponents from the interval (γ*,∞) with a critical exponent γ* < 4/3 is proved.
Latent Dirichlet Allocation (LDA) for Sentiment Analysis Toward Tourism Review in Indonesia
NASA Astrophysics Data System (ADS)
Putri, IR; Kusumaningrum, R.
2017-01-01
The tourism industry is a significant foreign exchange sector with considerable development potential in Indonesia. Compared to other Southeast Asian countries, such as Malaysia with 18 million tourists and Singapore with 20 million, Indonesia, the largest country in Southeast Asia, has failed to attract comparable tourist numbers: it drew only 8.8 million foreign tourists in 2013, and the number of foreign tourists tends to decrease each year. Apart from infrastructure problems, marketing and management also constitute obstacles to tourism growth. Stakeholders should respond to this problem with evaluation and self-analysis, capturing opportunities related to tourism satisfaction in tourists' reviews. Existing technology for this problem relies only on subjective statistics collected through random user voting or grading, so the results are not yet accountable. We therefore propose sentiment analysis with the probabilistic topic model Latent Dirichlet Allocation (LDA) to extract the general tendencies of tourist reviews into topics that can be classified into positive and negative sentiment.
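A minimal sketch of the LDA step on toy reviews; the corpus, vectorizer settings, and two-topic choice are assumptions, and mapping topics to positive or negative sentiment would be done afterwards from each topic's top words:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical toy reviews; a real study would use collected tourist reviews.
reviews = [
    "beautiful beach friendly locals great food",
    "dirty beach poor infrastructure long queues",
    "amazing temple tour helpful guide",
    "overpriced hotel rude staff bad service",
]

X = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Document-topic proportions; topics are then labeled positive/negative
# by inspecting their most probable words.
print(lda.transform(X).round(2))
```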
Synthesis and X-ray Crystallography of [Mg(H2O)6][AnO2(C2H5COO)3]2 (An = U, Np, or Pu).
Serezhkin, Viktor N; Grigoriev, Mikhail S; Abdulmyanov, Aleksey R; Fedoseev, Aleksandr M; Savchenkov, Anton V; Serezhkina, Larisa B
2016-08-01
Synthesis and X-ray crystallography of single crystals of [Mg(H2O)6][AnO2(C2H5COO)3]2, where An = U (I), Np (II), or Pu (III), are reported. Compounds I-III are isostructural and crystallize in the trigonal crystal system. The structures of I-III are built of hydrated magnesium cations [Mg(H2O)6](2+) and mononuclear [AnO2(C2H5COO)3](-) complexes, which belong to the AB(01)3 crystallochemical group of uranyl complexes (A = AnO2(2+), B(01) = C2H5COO(-)). Peculiarities of intermolecular interactions in the structures of [Mg(H2O)6][UO2(L)3]2 complexes depending on the carboxylate ion L (acetate, propionate, or n-butyrate) are investigated using the method of molecular Voronoi-Dirichlet polyhedra. Actinide contraction in the series of U(VI)-Np(VI)-Pu(VI) in compounds I-III is reflected in a decrease in the mean An═O bond lengths and in the volume and sphericity degree of Voronoi-Dirichlet polyhedra of An atoms.
Extending information retrieval methods to personalized genomic-based studies of disease.
Ye, Shuyun; Dawson, John A; Kendziorski, Christina
2014-01-01
Genomic-based studies of disease now involve diverse types of data collected on large groups of patients. A major challenge facing statistical scientists is how best to combine the data, extract important features, and comprehensively characterize the ways in which they affect an individual's disease course and likelihood of response to treatment. We have developed a survival-supervised latent Dirichlet allocation (survLDA) modeling framework to address these challenges. Latent Dirichlet allocation (LDA) models have proven extremely effective at identifying themes common across large collections of text, but applications to genomics have been limited. Our framework extends LDA to the genome by considering each patient as a "document" with "text" detailing his/her clinical events and genomic state. We then further extend the framework to allow for supervision by a time-to-event response. The model enables the efficient identification of collections of clinical and genomic features that co-occur within patient subgroups, and then characterizes each patient by those features. An application of survLDA to The Cancer Genome Atlas ovarian project identifies informative patient subgroups showing differential response to treatment, and validation in an independent cohort demonstrates the potential for patient-specific inference.
Diffusion Processes Satisfying a Conservation Law Constraint
Bakosi, J.; Ristorcelli, J. R.
2014-03-04
We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
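A minimal sketch of the two-component case, where the unit-sum constraint reduces the system to a single Wright-Fisher-type SDE on [0,1]; the drift parameters are toy assumptions, and the key feature is the diffusion coefficient vanishing at the boundaries, which is what keeps samples inside the allowed simplex:

```python
# Sketch: Euler-Maruyama simulation of a two-component Wright-Fisher-type
# diffusion. With x2 = 1 - x1, the conservation law reduces the system to
# one SDE; sqrt(x(1-x)) -> 0 at the boundaries enforces non-negativity.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.5, 2.0            # "mutation"-like drift rates (toy values)
dt, nsteps = 1e-4, 50_000
x = 0.5
for _ in range(nsteps):
    drift = 0.5 * (a * (1.0 - x) - b * x)
    diff = np.sqrt(max(x * (1.0 - x), 0.0))
    x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)   # guard against time-discretization overshoot
print("final state:", x, 1.0 - x)  # components stay non-negative and sum to 1
```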
Model for Semantically Rich Point Cloud Data
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Billen, R.
2017-10-01
This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python and a PostgreSQL database, allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
VizieR Online Data Catalog: Herschel-PACS and -SPIRE spectroscopy of 70 objects (Green+, 2016)
NASA Astrophysics Data System (ADS)
Green, J. D.; Yang, Y.-L.; Evans, N. J., II; Karska, A.; Herczeg, G.; van Dishoeck, E. F.; Lee, J.-E.; Larson, R. L.; Bouwman, J.
2016-10-01
We present the CDF (COPS-DIGIT-FOOSH) archive, with Herschel spectroscopic observations of 70 objects (protostars, young stellar objects, and FU Orionis objects) from the "Dust, Ice, and Gas in Time" (DIGIT) Key Project, "FU Orionis Objects Surveyed with Herschel" Open Time Program (FOOSH OT1), and "CO in Protostars" Open Time Program (COPS OT2) Herschel programs. These have been delivered to the Herschel archive and are available. The full source list is shown in Table 1. The full DIGIT spectroscopic sample consists of 63 sources: 24 Herbig Ae/Be stars (intermediate mass sources with circumstellar disks), 9 T Tauri stars (low mass young stars with circumstellar disks), and 30 protostars (young stars with significant envelope emission) observed with Photodetector Array Camera and Spectrometer (PACS) spectroscopy. DIGIT also included an additional wTTS (weak-line T Tauri star) sample that was observed photometrically and delivered separately. The wTTS sample is fully described by Cieza et al. 2013ApJ...762..100C. The full DIGIT embedded protostellar sample consisted of 30 Class 0/I targets, drawn from previous studies, focusing on protostars with high-quality Spitzer-IRS 5-40μm spectroscopy (summarized by Lahuis et al. 2006 c2d Spectroscopy Explanatory Supplement; Pasadena, CA: Spitzer Science Center), and UV, optical, infrared, and submillimeter complementary data. These objects are selected from some of the nearest and best-studied molecular clouds: Taurus (140pc; 6 targets), Ophiuchus (125pc; 7 targets), Perseus (230-250pc; 7 targets), R Corona Australis (130pc; 3 targets), Serpens (429pc; 2 targets), Chamaeleon (178pc; 1 target), and 4 additional isolated cores. PACS is a 5*5 array of 9.4''*9.4'' spatial pixels (spaxels) covering the spectral range from 50 to 210μm with λ/Δλ~1000-3000, divided into four segments, covering λ~50-75, 70-105, 100-145, and 140-210μm. The PACS spatial resolution ranges from ~9'' at the shortest wavelengths (50μm) to ~18'' at the longest (210μm), corresponding to 1000-4500AU at the distances of most sources. The nominal pointing rms of the telescope is 2''. For the DIGIT embedded protostars sample we utilized the full range of PACS (50-210μm) in two linked, pointed, chop/nod rangescans: a blue scan covering 50-75 and 100-150μm (SED B2A+short R1); and a red scan covering 70-105 and 140-210μm (SED B2B+long R1). We used 6 and 4 range repetitions respectively, for integration times of 6853 and 9088s (a total of ~16000s per target for the entire 50-210μm scan). Excluding overhead, 50% of the integration time is spent on source and 50% on sky. Thus the effective on-source integration times are 3088 and 4180s, for the blue and red scans, respectively. The total on-source integration time to achieve the entire 50-210μm scan is then 7268s. Most (21 of 33) disk sources were observed with the same procedure as the embedded objects. The other 12 sources have only partial spectral coverage: 8 Herbig Ae/Be sources (HD35187, HD203024, HD245906, HD142666, HD144432, HD141569, HD98922, and HD150193) and 4 T Tauri sources (HT Lup, RU Lup, RY Lup, and RNO90) were observed using only the blue scans (i.e., achieving a wavelength coverage only from SED B2A+short R1, 100-150μm). 9 of these 12 sources (all except HD35187, HD203024, and HD245906) were observed in a further limited wavelength range (60-72+120-134μm; referred to as "forsterite only" scans for their focus on the 69μm forsterite dust feature).
The FU Orionis Objects Surveyed with Herschel (FOOSH) program consisted of 21hrs of Herschel observing time: V1057Cyg, V1331Cyg, V1515Cyg, V1735Cyg, and FUOri were observed as part of FOOSH. For the FOOSH sample we again utilized the full range of PACS (50-210μm) in two linked, pointed, chop/nod rangescans: a blue scan covering 50-75 and 100-150μm (SED B2A+short R1); and a red scan covering 70-105 and 140-210μm (SED B2B+long R1). We used 6 and 4 range repetitions respectively, for integration times of 3530 and 4620s (a total of ~8000s per target and off-positions combined, for the entire 50-210μm scan; the on-source integration time is ~3000s). The telescope sky background was subtracted using two nod positions 6' from the source. The Spectral and Photometric Imaging REceiver (SPIRE; 194-670μm)/Fourier Transform Spectrometer (FTS) data were taken in a single pointing with sparse image sampling, high spectral resolution mode, over 1hr of integration time. The spectrum is divided into two orders covering the spectral ranges 194-325μm ("SSW"; Spectrograph Short Wavelengths) and 320-690μm ("SLW"; Spectrograph Long Wavelengths), with a resolution Δν of 1.44GHz and resolving power λ/Δλ~300-800, increasing at shorter wavelengths. The sample of 31 COPS (CO in ProtoStars) protostars observed with SPIRE-FTS includes 25 sources from the DIGIT and 6 from the WISH (Water in Star-forming regions with Herschel, PI: E. van Dishoeck; van Dishoeck et al. 2011PASP..123..138V; see also Nisini et al. 2010A&A...518L.120N; Kristensen et al. 2012A&A...542A...8K; Karska et al. 2013A&A...552A.141K; Wampfler et al. 2013A&A...552A..56W) key programs. A nearly identical sample was observed in CO J = 16->15 with HIFI (PI: L. Kristensen) and is presented in L. Kristensen et al. (2016, in preparation). This data set (COPS: SPIRE-FTS) is analyzed in a forthcoming paper (J. Green et al. 2016, in preparation). The SPIRE beamsize ranges from 17'' to 40'', equivalent to physical sizes of ~2000-10000AU at the distances of the COPS sources. The COPS SPIRE-FTS data were observed identically to the FOOSH SPIRE data, in a single pointing with sparse image sampling, high spectral resolution, in 1hr of integration time per source, with one exception: the IRS 44/46 data were observed in medium image sampling (e.g., complete spatial coverage within the inner 2 rings of spaxels), in 1.5hr, in order to better distinguish IRS44 (the comparatively brighter IR source; Green et al. 2013ApJ...770..123G, J. Green et al. 2016, in preparation) from IRS46. (2 data files).
A Neumann boundary term for gravity
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Raju, Avinash
2017-05-01
The Gibbons-Hawking-York (GHY) boundary term makes the Dirichlet problem for gravity well-defined, but no such general term seems to be known for Neumann boundary conditions. In this paper, we view Neumann not as fixing the normal derivative of the metric (“velocity”) at the boundary, but as fixing the functional derivative of the action with respect to the boundary metric (“momentum”). This leads directly to a new boundary term for gravity: the trace of the extrinsic curvature with a specific dimension-dependent coefficient. In three dimensions, this boundary term reduces to a “one-half” GHY term noted in the literature previously, and we observe that our action translates precisely to the Chern-Simons action with no extra boundary terms. In four dimensions, the boundary term vanishes, giving a natural Neumann interpretation to the standard Einstein-Hilbert action without boundary terms. We argue that in light of AdS/CFT, ours is a natural approach for defining a “microcanonical” path integral for gravity in the spirit of the (pre-AdS/CFT) work of Brown and York.
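Schematically, and only as a sketch consistent with the statements above (with κ = 8πG, induced boundary metric h, and extrinsic curvature K; the coefficient is fixed here by requiring the D = 3 and D = 4 limits quoted in the abstract, and should be checked against the paper):

```latex
% Dirichlet (GHY) action versus the Neumann action in D spacetime dimensions.
S_{\rm D} = \frac{1}{2\kappa}\int_{\mathcal{M}} d^{D}x\,\sqrt{-g}\,R
          + \frac{1}{\kappa}\oint_{\partial\mathcal{M}} d^{D-1}x\,\sqrt{|h|}\,K ,
\qquad
S_{\rm N} = \frac{1}{2\kappa}\int_{\mathcal{M}} d^{D}x\,\sqrt{-g}\,R
          + \frac{4-D}{2\kappa}\oint_{\partial\mathcal{M}} d^{D-1}x\,\sqrt{|h|}\,K .
% Checks: D = 3 gives the "one-half GHY" coefficient 1/(2 kappa);
% D = 4 makes the boundary term vanish, as stated in the abstract.
```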
Determinants and conformal anomalies of GJMS operators on spheres
NASA Astrophysics Data System (ADS)
Dowker, J. S.
2011-03-01
The conformal anomalies and functional determinants of the Branson-GJMS operators, P2k, on the d-dimensional sphere are evaluated in explicit terms for any d and k such that k <= d/2 (if d is even). The determinants are given in terms of multiple gamma functions and a rational multiplicative anomaly, which vanishes for odd d. Taking the mode system on the sphere as the union of Neumann and Dirichlet ones on the hemisphere is a basic part of the method and leads to a heuristic explanation of the non-existence of 'super-critical' operators, 2k > d for even d. Significant use is made of the Barnes zeta function. The results are given in terms of ratios of determinants of operators on a (d + 1)-dimensional bulk dual sphere. For odd dimensions, the log determinant is written in terms of multiple sine functions and agreement is found with holographic computations, yielding an integral over a Plancherel measure. The N-D determinant ratio is also found explicitly for even dimensions. Ehrhart polynomials are encountered.
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
Oak Ridge Spallation Neutron Source (ORSNS) target station design integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamy, T.; Booth, R.; Cleaves, J.
1996-06-01
The conceptual design for a 1- to 3-MW short pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance will be presented with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections by a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.
LDA boost classification: boosting by topics
NASA Astrophysics Data System (ADS)
Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li
2012-12-01
AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional vector space model easily leads to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features; in this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm while achieving higher precision. Moreover, a two-stage iterative weighting method called Cute Integration is proposed for improving accuracy by integrating weak classifiers into a strong classifier in a more rational way. Mutual information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized in generating the strong classifier. Experimental results reveal that LDABoost, which performs categorization in a low-dimensional space, has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
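A minimal sketch of the LDABoost idea (not the authors' code): represent documents by LDA topic proportions and boost a naive Bayes weak learner over those topic features. The toy corpus, labels, and hyperparameters are illustrative assumptions:

```python
# Sketch: boosting a naive Bayes weak classifier over LDA topic features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB

docs = ["cheap flights and hotel deals", "goal scored in the final minute",
        "stock prices fell sharply today", "the team won the championship"]
labels = [0, 1, 0, 1]   # toy classes: business vs. sports

X = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(X)

clf = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=10, random_state=0)
clf.fit(topics, labels)       # boosting happens in the low-dimensional topic space
print(clf.predict(topics))
```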
A Possible Magnetar Nature for IGR J16358-4726
NASA Technical Reports Server (NTRS)
Patel, S.; Zurita, J.; DelSanto, M.; Finger, M.; Koueliotou, C.; Eichler, D.; Gogus, E.; Ubertini, P.; Walter, R.; Woods, P.
2006-01-01
We present detailed spectral and timing analysis of the hard X-ray transient IGR J16358-4726 using multi-satellite archival observations. A study of the source flux time history over 6 years suggests that this transient's outbursts may recur at intervals of at most 1 year. Joint spectral fits using simultaneous Chandra/ACIS and INTEGRAL/ISGRI data reveal a spectrum well described by an absorbed cut-off power law model plus an Fe line. The pulsations initially reported using Chandra/ACIS were also detected in the INTEGRAL/ISGRI light curve and in subsequent XMM-Newton observations. Using the INTEGRAL data we identified a pulse spin-up of 94 s (Ṗ = 1.6 × 10^-4), which strongly points to a neutron star nature for IGR J16358-4726. Assuming that the spin-up is due to disc accretion, we estimate that the source magnetic field ranges between 10^13 and 10^15 G, depending on its distance, possibly supporting a magnetar nature for IGR J16358-4726.
The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding
NASA Technical Reports Server (NTRS)
Mgana, C. V. M.; Chang, I. D.
1982-01-01
The frequency spectrum was divided into high- and low-frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long-wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with an extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short-wavelength propagation in many limiting cases, as well as in the case of a semi-infinite plate in a uniform flow with a point source above the plate embedded in a different flow velocity, simulating an engine exhaust jet stream surrounding the source.
NASA Technical Reports Server (NTRS)
Lakota, Barbara Anne
1998-01-01
This thesis develops a method to model the acoustic field generated by a monopole source placed in a moving rectangular duct. The walls of the duct are assumed to be infinitesimally thin and the source is placed at the center of the duct. The total acoustic pressure is written in terms of the free-space pressure, or incident pressure, and the scattered pressure. The scattered pressure is the augmentation to the incident pressure due to the presence of the duct. It satisfies a homogeneous wave equation and is discontinuous across the duct walls. Utilizing an integral representation of the scattered pressure, a set of singular boundary integral equations governing the unknown jump in scattered pressure is derived. This equation is solved by the method of collocation after representing the jump in pressure as a double series of shape functions. The solution obtained is then substituted back into the integral representation to determine the scattered pressure, and the total acoustic pressure at any point in the field. A few examples are included to illustrate the influence of various geometric and kinematic parameters on the radiated sound field.
Deanna Osmond; Mazdak Arabi; Caela O' Connell; Dana Hoag; Dan Line; Marzieh Motallebi; Ali Tasdighi
2016-01-01
The Jordan Lake watershed is regulated by state rules intended to reduce nutrient loading from point sources and from both agricultural and urban nonpoint sources. The agricultural community is expected to reduce nutrient loading by specific amounts, ranging from 35 to 0 percent for nitrogen and from 5 to 0 percent for phosphorus.
Experimental demonstration of interferometric imaging using photonic integrated circuits.
Su, Tiehui; Scott, Ryan P; Ogden, Chad; Thurman, Samuel T; Kendrick, Richard L; Duncan, Alan; Yu, Runxiang; Yoo, S J B
2017-05-29
This paper reports the design, fabrication, and demonstration of a silica photonic integrated circuit (PIC) capable of conducting interferometric imaging with multiple baselines around λ = 1550 nm. The PIC consists of four sets of five waveguides (a total of twenty waveguides), each leading to a three-band spectrometer (a total of sixty waveguides), after which a tunable Mach-Zehnder interferometer (MZI) constructs interferograms from each pair of the waveguides. A total of thirty sets of interferograms (ten pairs of three spectral bands) is collected by the detector array at the output of the PIC. The optical path difference (OPD) of each interferometer baseline is kept to within 1 µm to maximize the visibility of the interference measurement. We constructed an experiment utilizing the two baselines for complex visibility measurement on a point source and a variable-width slit. We used the point source to demonstrate a near-unity value of the PIC instrumental visibility, and used the variable slit to demonstrate visibility measurement for a simple extended object. The experimental result demonstrates the visibility at baselines of 5 and 20 mm for slit widths of 0 to 500 µm, in good agreement with theoretical predictions.
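For a uniformly illuminated slit, the van Cittert-Zernike theorem gives a sinc-shaped visibility versus baseline, which is the theoretical prediction the slit measurement is compared against. A minimal sketch (the slit distance z is an assumption; the paper's actual geometry is not reproduced here):

```python
# Sketch: theoretical fringe visibility of a uniform slit,
# V(B) = sinc(B * w / (lambda * z)), with baseline B, slit width w,
# wavelength lambda, and slit-to-aperture distance z (assumed value).
import numpy as np

lam = 1550e-9                           # wavelength (m), as in the paper
z = 1.0                                 # slit distance (m), illustrative
widths = np.linspace(0.0, 500e-6, 6)    # slit widths 0-500 um

for B in (5e-3, 20e-3):                 # the two baselines, 5 and 20 mm
    V = np.abs(np.sinc(B * widths / (lam * z)))   # np.sinc(x)=sin(pi x)/(pi x)
    print(f"baseline {B*1e3:.0f} mm:", np.round(V, 3))
```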
Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Ajello, M.
2016-02-26
The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission towards the Galactic centre (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1 - 100 GeV from a 15° × 15° region about the direction of the GC, and implications for the interstellar emissions produced by cosmic ray (CR) particles interacting with the gas and radiation fields in the inner Galaxy and for the point sources detected. Specialised interstellar emission models (IEMs) are constructed that enable separation of the γ-ray emission from the inner ~ 1 kpc about the GC from the fore- and background emission from the Galaxy. Based on these models, the interstellar emission from CR electrons interacting with the interstellar radiation field via the inverse Compton (IC) process and CR nuclei inelastically scattering off the gas producing γ-rays via π⁰ decays from the inner ~ 1 kpc is determined. The IC contribution is found to be dominant in the region and strongly enhanced compared to previous studies. A catalog of point sources for the 15° × 15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy point source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs, including the Third Fermi-LAT Source Catalog (3FGL). In general, the spatial density of 1FIG sources differs from those in the 3FGL, which is attributed to the different treatments of the interstellar emission and energy ranges used by the respective analyses. Three 1FIG sources are found to spatially overlap with supernova remnants (SNRs) listed in Green's SNR catalog; these SNRs have not previously been associated with high-energy γ-ray sources. Most 3FGL sources with known multi-wavelength counterparts are also found. However, the majority of 1FIG point sources are unassociated. After subtracting the interstellar emission and point-source contributions from the data, a residual is found that is a sub-dominant fraction of the total flux. But it is brighter than the γ-ray emission associated with interstellar gas in the inner ~ 1 kpc derived for the IEMs used in this paper, and comparable to the integrated brightness of the point sources in the region for energies ≳ 3 GeV. If spatial templates that peak toward the GC are used to model the positive residual and included in the total model for the 15° × 15° region, the agreement with the data improves, but they do not account for all the residual structure. The spectrum of the positive residual modelled with these templates has a strong dependence on the choice of IEM.
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and, for the first time, give dose-equivalent responses, which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
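A sketch of the fitting step only: the specific three-parameter functional form below (κ d^a e^{-d/b}) is an assumption chosen for illustration, not the formula from the thesis, and the "Monte Carlo" responses are synthetic:

```python
# Sketch: least-squares fit of a line-beam response versus range to a
# generic three-parameter form kappa * d**a * exp(-d/b).
import numpy as np
from scipy.optimize import curve_fit

def lbrf(d, kappa, a, b):
    return kappa * d**a * np.exp(-d / b)

d = np.linspace(50.0, 2450.0, 25)                  # source-to-detector range (m)
rng = np.random.default_rng(1)
R = lbrf(d, 1e-15, 0.8, 600.0) * (1 + 0.02 * rng.standard_normal(d.size))

popt, _ = curve_fit(lbrf, d, R, p0=(1e-15, 1.0, 500.0))
print("fitted (kappa, a, b):", popt)
```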
Design and evaluation of an imaging spectrophotometer incorporating a uniform light source.
Noble, S D; Brown, R B; Crowe, T G
2012-03-01
Accounting for light that is diffusely scattered from a surface is one of the practical challenges in reflectance measurement. Integrating spheres are commonly used for this purpose in point measurements of reflectance and transmittance. This solution is not directly applicable to a spectral imaging application for which diffuse reflectance measurements are desired. In this paper, an imaging spectrophotometer design is presented that employs a uniform light source to provide diffuse illumination. This creates the inverse measurement geometry to the directional illumination/diffuse reflectance mode typically used for point measurements. The final system had a spectral range between 400 and 1000 nm with a 5.2 nm resolution, a field of view of approximately 0.5 m by 0.5 m, and millimeter spatial resolution. Testing results indicate illumination uniformity typically exceeding 95% and reflectance precision better than 1.7%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayce, D.; Khayat, R.E.; Derdouri, A.
The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.
Induced Voltage in an Open Wire
NASA Astrophysics Data System (ADS)
Morawetz, K.; Gilbert, M.; Trupp, A.
2017-07-01
A puzzle arising from Faraday's law has been considered and solved, concerning the question of what voltage is induced in an open wire by a time-varying homogeneous magnetic field. In contrast to closed wires, where the voltage is determined by the time variation of the magnetic field and the enclosed area, for an open wire we have to integrate the electric field along the wire. It is found that the longitudinal electric field with respect to the wave vector contributes 1/3, and the transverse field 2/3, of the induced voltage. In order to find the electric fields, one must know the sources of the magnetic field. The representation of a spatially homogeneous, time-varying magnetic field unavoidably implies a certain symmetry point or symmetry line, which depends on the geometry of the source. As a consequence, the induced voltage of an open wire is found to be given by the area covered with respect to this symmetry line or point, perpendicular to the magnetic field. This in turn makes it possible to find the symmetry points of a magnetic field source by measuring the voltage of an open wire placed at different angles in the magnetic field. We present exactly solvable models of the Maxwell equations for a symmetry point and for a symmetry line, respectively. The results are applicable to open-circuit problems such as corrosion, and to astrophysical applications.
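One concrete realization of the statement above, as a sketch for the symmetry-point case (symmetry point at r_s, field along z):

```latex
% For a homogeneous field B(t)\hat{z}, a vector potential centered on r_s gives
\mathbf{E}(\mathbf{r},t) = -\tfrac{1}{2}\,\dot{B}(t)\,
  \hat{\mathbf{z}}\times(\mathbf{r}-\mathbf{r}_s),
% so the voltage along an open wire C is
U = \int_{C}\mathbf{E}\cdot d\boldsymbol{\ell} = -\dot{B}(t)\,A_s ,
\qquad
A_s = \tfrac{1}{2}\int_{C}\bigl[(\mathbf{r}-\mathbf{r}_s)\times
  d\boldsymbol{\ell}\bigr]\cdot\hat{\mathbf{z}} ,
% where A_s is the signed area swept relative to the symmetry point --
% the "area covered with respect to this symmetry point" of the abstract.
```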
NASA Astrophysics Data System (ADS)
Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.
2017-12-01
Instantaneous power is a key parameter of ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows the integration accuracy to be controlled through the number of coordinate mesh points, and thus minimizes the number of irradiations at a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses made it possible to formulate basic calibration principles that are optimal for achieving maximum accuracy with minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and to use correction factors to the DT-mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and to calibrate 238U chambers against the responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated both by direct measurement of responses at the Gauss positions of a point source and by simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements. It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the standard monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the standard detectors are located. Owing to the low background, detectors of the neutron chambers do not need calibration in the reactor, because such calibration is actually a determination of the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
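A minimal sketch of the Gauss quadrature idea used above: the detector response is integrated over the source coordinate using only a few optimally placed point-source positions (the quadrature nodes). The response function and travel range here are toy assumptions:

```python
# Sketch: Gauss-Legendre quadrature of a detector response over a source
# coordinate; n is the number of point-source irradiation positions.
import numpy as np

def response(z):
    # hypothetical monitor response vs. source position z (arbitrary units)
    return np.exp(-0.5 * z**2)

a, b = -1.5, 1.5                               # source travel range (assumed)
n = 5                                          # number of irradiations
nodes, weights = np.polynomial.legendre.leggauss(n)
z = 0.5 * (b - a) * nodes + 0.5 * (b + a)      # map [-1, 1] -> [a, b]
integral = 0.5 * (b - a) * np.sum(weights * response(z))
print(f"{n}-point Gauss estimate of the full monitor response:", integral)
```

Increasing n refines the coordinate mesh, which is how the integration accuracy can be controlled against the number of irradiations.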
Ravel, André; Hurst, Matt; Petrica, Nicoleta; David, Julie; Mutschall, Steven K; Pintar, Katarina; Taboada, Eduardo N; Pollari, Frank
2017-01-01
Human campylobacteriosis is a common zoonosis with a significant burden in many countries. Its prevention is difficult because humans can be exposed to Campylobacter through various exposures: foodborne, waterborne or by contact with animals. This study aimed at attributing campylobacteriosis to sources at the point of exposure. It combined comparative exposure assessment and microbial subtype comparison with subtypes defined by comparative genomic fingerprinting (CGF). It used isolates from clinical cases and from eight potential exposure sources (chicken, cattle and pig manure, retail chicken, beef, pork and turkey meat, and surface water) collected within a single sentinel site of an integrated surveillance system for enteric pathogens in Canada. Overall, 1518 non-human isolates and 250 isolates from domestically-acquired human cases were subtyped and their subtype profiles analyzed for source attribution using two attribution models modified to include exposure. Exposure values were obtained from a concurrent comparative exposure assessment study undertaken in the same area. Based on CGF profiles, attribution was possible for 198 (79%) human cases. Both models provide comparable figures: chicken meat was the most important source (65-69% of attributable cases) whereas exposure to cattle (manure) ranked second (14-19% of attributable cases), the other sources being minor (including beef meat). In comparison with other attributions conducted at the point of production, the study highlights the fact that Campylobacter transmission from cattle to humans is rarely meat borne, calling for a closer look at local transmission from cattle to prevent campylobacteriosis, in addition to increasing safety along the chicken supply chain.
NASA Astrophysics Data System (ADS)
Cura, Rémi; Perret, Julien; Paparoditis, Nicolas
2017-05-01
In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-for-the-user compression at ratios of 2:1 to 4:1 or better, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.
An annular superposition integral for axisymmetric radiators
Kelly, James F.; McGough, Robert J.
2007-01-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a “smooth piston” function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity. PMID:17348500
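For reference, a standard statement of the baseline the annular method is compared against (not the paper's new integral): the Rayleigh-Sommerfeld expression for a baffled planar piston, written here with an assumed e^{+iωt} time convention (sign conventions vary between texts):

```latex
% Rayleigh-Sommerfeld nearfield pressure of a baffled planar source S with
% normal surface velocity u(r') at wavenumber k = omega / c:
p(\mathbf{r}) = \frac{i\rho c k}{2\pi}
  \int_{S} u(\mathbf{r}')\,\frac{e^{-ikR}}{R}\, dS',
\qquad R = \lvert \mathbf{r}-\mathbf{r}' \rvert ,
% with rho the fluid density and c the sound speed. The annular
% superposition method above replaces the slowly converging discretization
% of this integral for axisymmetric u.
```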
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
2014-01-01
... [37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph ... continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence ...
Sine-gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1995-11-01
Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.
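Schematically, the Langevin equation of stochastic quantization has the following form (a formal sketch only; the paper's mathematically rigorous version is formulated via Dirichlet forms):

```latex
% For Euclidean action S[phi] and cylindrical Wiener process W_t:
d\phi_t = -\tfrac{1}{2}\,\frac{\delta S}{\delta \phi}(\phi_t)\,dt + dW_t ,
% whose invariant measure is, formally,
d\mu \;\propto\; e^{-S[\phi]}\,\mathcal{D}\phi ,
% i.e. the Nelson/Euclidean measure referred to in the abstract.
```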
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.
ERIC Educational Resources Information Center
Kjeldsen, Tinne Hoff; Lützen, Jesper
2015-01-01
In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another…
The accurate solution of Poisson's equation by expansion in Chebyshev polynomials
NASA Technical Reports Server (NTRS)
Haidvogel, D. B.; Zang, T.
1979-01-01
A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
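A minimal sketch of the idea in one dimension (the paper treats the 2D Poisson problem): Chebyshev collocation for u'' = f on (-1,1) with homogeneous Dirichlet conditions, using the standard differentiation matrix on Chebyshev-Gauss-Lobatto points; the manufactured right-hand side is an illustrative test case:

```python
# Sketch: spectral Chebyshev collocation solve of u'' = f, u(+-1) = 0.
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix D and grid x (Trefethen's construction)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 16
D, x = cheb(N)
D2 = D @ D                                   # second-derivative matrix
f = -np.pi**2 * np.sin(np.pi * x)            # exact solution u = sin(pi x)
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2[1:N, 1:N], f[1:N])   # impose u = 0 at x = +-1
print("max error:", np.abs(u - np.sin(np.pi * x)).max())  # spectral accuracy
```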
Manifold Matching: Joint Optimization of Fidelity and Commensurability
2011-11-12
... identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated ... into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model ... from the geometric incommensurability phenomenon. Then q p implies that cca suffers from the spurious correlation phenomenon with high probability ...
The tunneling effect for a class of difference operators
NASA Astrophysics Data System (ADS)
Klein, Markus; Rosenberger, Elke
We analyze a general class of self-adjoint difference operators H𝜀 = T𝜀 + V𝜀 on ℓ2((𝜀ℤ)d), where V𝜀 is a multi-well potential and 𝜀 is a small parameter. We give a coherent review of our results on tunneling up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H𝜀 is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H𝜀, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H𝜀 converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝd located at several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H𝜀. These are obtained from eigenfunctions or quasimodes for the operator H𝜀, acting on L2(ℝd), via restriction to the lattice (𝜀ℤ)d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two “wells” (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
A practical guide to big data research in psychology.
Chen, Eric Evan; Wojcik, Sean P
2016-12-01
The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
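A minimal sketch of one of the tutorial topics mentioned above, classification with support vector machines; the toy corpus and labels are illustrative assumptions, not the article's walkthrough data:

```python
# Sketch: text classification with a linear SVM over TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["I love this product", "worst purchase ever",
         "absolutely fantastic", "terrible and disappointing"]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["fantastic purchase", "terrible product"]))
```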
Leveraging constraints and biotelemetry data to pinpoint repetitively used spatial features
Brost, Brian M.; Hooten, Mevin B.; Small, Robert J.
2016-01-01
Satellite telemetry devices collect valuable information concerning the sites visited by animals, including the location of central places like dens, nests, rookeries, or haul‐outs. Existing methods for estimating the location of central places from telemetry data require user‐specified thresholds and ignore common nuances like measurement error. We present a fully model‐based approach for locating central places from telemetry data that accounts for multiple sources of uncertainty and uses all of the available locational data. Our general framework consists of an observation model to account for large telemetry measurement error and animal movement, and a highly flexible mixture model specified using a Dirichlet process to identify the location of central places. We also quantify temporal patterns in central place use by incorporating ancillary behavioral data into the model; however, our framework is also suitable when no such behavioral data exist. We apply the model to a simulated data set as proof of concept. We then illustrate our framework by analyzing an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that exhibits fidelity to terrestrial haul‐out sites.
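A minimal sketch of the mixture component of such a model, using scikit-learn's truncated Dirichlet-process Gaussian mixture as a stand-in; the synthetic "telemetry" clusters are illustrative, and the full model in the paper additionally handles Argos measurement error and movement:

```python
# Sketch: locating repeatedly used central places with a (truncated)
# Dirichlet-process Gaussian mixture fit to telemetry locations.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
haulouts = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])   # true sites (toy)
locs = np.vstack([s + 0.3 * rng.standard_normal((200, 2)) for s in haulouts])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                    # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(locs)

used = dpgmm.weights_ > 0.01      # components the DP prior actually uses
print("estimated central places:\n", dpgmm.means_[used])
```

The Dirichlet process lets the data choose how many central places are occupied, rather than fixing that number in advance.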
Axial charges of N(1535) and N(1650) in lattice QCD with two flavors of dynamical quarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Toru T.; Kunihiro, Teiji
2008-07-01
We show the first lattice QCD results on the axial charge g_A^{N*N*} of N*(1535) and N*(1650). The measurements are performed with two flavors of dynamical quarks, employing the renormalization-group improved gauge action at β = 1.95 and the mean-field improved clover quark action with the hopping parameters κ = 0.1375, 0.1390, and 0.1400. In order to properly separate signals of N*(1535) and N*(1650), we construct 2×2 correlation matrices and diagonalize them. Wraparound contributions in the correlator, which can be another source of signal contamination, are eliminated by imposing the Dirichlet boundary condition in the temporal direction. We find that the axial charge of N*(1535) takes small values, g_A^{N*N*} ~ O(0.1), whereas that of N*(1650) is about 0.5, which is found to be independent of quark masses and consistent with the predictions of the naive nonrelativistic quark model.
Integrating data to acquire new knowledge: Three modes of integration in plant science.
Leonelli, Sabina
2013-12-01
This paper discusses what it means and what it takes to integrate data in order to acquire new knowledge about biological entities and processes. Maureen O'Malley and Orkun Soyer have pointed to the scientific work involved in data integration as important and distinct from the work required by other forms of integration, such as methodological and explanatory integration, which have been more successful in captivating the attention of philosophers of science. Here I explore what data integration involves in more detail and with a focus on the role of data-sharing tools, like online databases, in facilitating this process; and I point to the philosophical implications of focusing on data as a unit of analysis. I then analyse three cases of data integration in the field of plant science, each of which highlights a different mode of integration: (1) inter-level integration, which involves data documenting different features of the same species, aims to acquire an interdisciplinary understanding of organisms as complex wholes and is exemplified by research on Arabidopsis thaliana; (2) cross-species integration, which involves data acquired on different species, aims to understand plant biology in all its different manifestations and is exemplified by research on Miscanthus giganteus; and (3) translational integration, which involves data acquired from sources within as well as outside academia, aims at the provision of interventions to improve human health (e.g. by sustaining the environment in which humans thrive) and is exemplified by research on Phytophtora ramorum. Recognising the differences between these efforts sheds light on the dynamics and diverse outcomes of data dissemination and integrative research; and the relations between the social and institutional roles of science, the development of data-sharing infrastructures and the production of scientific knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dosimetry of 192Ir sources used for endovascular brachytherapy
NASA Astrophysics Data System (ADS)
Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.
2001-02-01
An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source to detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays extended with a volume-averaging correction factor was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements, performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS). They determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.
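A minimal sketch of the line-source integration underlying a Sievert-style dwell-time check: the dose rate at a point is approximated by summing point-source contributions along the active length. Inverse-square geometry only; filtration, anisotropy, and scatter are omitted, and the air-kerma strength and source length are assumed values:

```python
# Sketch: dose rate at a point from an Ir-192 line source, as a sum of
# point-source kernels with inverse-square falloff.
import numpy as np

Lambda = 1.11    # Ir-192 dose-rate constant, cGy/(h*U), approximate
Sk = 20000.0     # air-kerma strength (U), assumed
L = 3.0          # active source length (cm), assumed
r = 1.0          # perpendicular distance of calculation point (cm)

zs = np.linspace(-L / 2, L / 2, 200)       # point-source positions
d2 = r**2 + zs**2                          # squared distances to the point
dose_rate = Lambda * Sk * np.mean(1.0 / d2)   # average over source segments
print(f"dose rate at {r} cm: {dose_rate:.0f} cGy/h")

# A dwell-time estimate then follows as t = D_prescribed / dose_rate,
# before the geometry, attenuation, and scatter corrections a clinical
# calculation would require.
```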
The Nature of the X-Ray Binary IGR J19294+1816 from INTEGRAL, RXTE, and Swift Observations
NASA Technical Reports Server (NTRS)
Rodriquez, J.; Tomsick, J. A.; Bodaghee, A.; ZuritaHeras, J.-A.; Chaty, S.; Paizis, A.; Corbel, S.
2009-01-01
We report the results of a high-energy multi-instrumental campaign with INTEGRAL, RXTE, and Swift on the recently discovered INTEGRAL source IGR J19294+1816. The Swift/XRT data allow us to refine the position of the source to R.A. (J2000) = 19h 29m 55.9s, Decl. (J2000) = +18° 18' 38.4" (±3.5"), which in turn permits us to identify a candidate infrared counterpart. The Swift and RXTE spectra are well fitted with absorbed power laws with hard (Γ ≈ 1) photon indices. During the longest Swift observation, we obtained evidence of absorption in excess of the Galactic value, which may indicate some intrinsic absorption in this source. We detected strong pulsations (pulsed fraction P = 40%) at 12.43781 (±0.00003) s that we interpret as the spin period of a pulsar. All these results, coupled with the possible 117 day orbital period, point to IGR J19294+1816 being a high-mass X-ray binary (HMXB) with a Be companion star. However, while the long-term INTEGRAL/IBIS/ISGRI 18-40 keV light curve shows that the source spends most of its time in an undetectable state, we detect occurrences of short (2000-3000 s) and intense flares that are more typical of supergiant fast X-ray transients. We therefore cannot make firm conclusions on the type of system, and we discuss the possible implication of IGR J19294+1816 being a supergiant fast X-ray transient (SFXT).
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a wide-spread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. Model reduction methods are an encouraging strategy to tackle this problem. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step, as necessary for non-linear models, while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable, while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
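A minimal sketch of the core POD step described above: build a reduced basis from snapshots by SVD and Galerkin-project a linear system onto it. The snapshot matrix here is synthetic; in the application above, the snapshots would be groundwater heads simulated by the full model:

```python
# Sketch: POD basis from snapshots (SVD) + Galerkin projection of A u = b.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 500, 40, 5                   # state size, snapshots, basis size
snapshots = rng.standard_normal((n, 3)) @ rng.standard_normal((3, m)) \
            + 0.01 * rng.standard_normal((n, m))   # nearly rank-3 toy data

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                         # POD basis: leading left singular vectors

A = np.diag(np.linspace(1.0, 2.0, n))  # stand-in for the full-model matrix
b = rng.standard_normal(n)
Ar = Phi.T @ A @ Phi                   # r x r reduced system
ur = np.linalg.solve(Ar, Phi.T @ b)
u_approx = Phi @ ur                    # lift back to the full state space
print("reduced dim:", Ar.shape, "residual:", np.linalg.norm(A @ u_approx - b))
```

The non-linear and boundary-condition difficulties discussed above arise because, unlike this fixed matrix A, the full-model system matrices must be rebuilt at every time-step, which POD-DEIM and the proposed second-POD extension address in different ways.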
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Sperling, A.; Schuster, M.; Nevas, S.
2013-09-01
An LP3 radiation thermometer was absolutely calibrated at a newly developed monochromator-based set-up and at the TUneable Lasers in Photometry (TULIP) facility of PTB in the wavelength range from 400 nm to 1100 nm. At both facilities, the spectral radiation of the respective sources irradiates an integrating sphere, thus generating uniform radiance across its precision aperture. The spectral irradiance of the integrating sphere is determined via the effective area of a precision aperture and a Si trap detector, traceable to the primary cryogenic radiometer of PTB. Due to the limited output power from the monochromator, the absolute calibration there was performed with a measurement uncertainty of 0.17 % (k = 1), while the respective uncertainty at the TULIP facility is 0.14 %. Calibration results obtained at the two facilities were compared in terms of spectral radiance responsivity, effective wavelength and integral responsivity. It was found that the integral responsivity results at the two facilities agree within the expanded uncertainty (k = 2). To verify the calibration accuracy, the absolutely calibrated radiation thermometer was used to measure the thermodynamic freezing temperatures of the PTB gold fixed-point blackbody.
Physical and Economic Integration of Carbon Capture Methods with Sequestration Sinks
NASA Astrophysics Data System (ADS)
Murrell, G. R.; Thyne, G. D.
2007-12-01
Currently there are several different carbon capture technologies either available or in active development for coal-fired power plants. Each approach has different advantages, limitations and costs that must be integrated with the method of sequestration and the physiochemical properties of carbon dioxide to evaluate which approach is most cost effective. For large volume point sources such as coal-fired power stations, the only viable sequestration sinks are either oceanic or geological in nature. However, the capture processes and systems under consideration produce carbon dioxide at a variety of pressure and temperature conditions that must be made compatible with the sinks. Integration of all these factors provides a basis for meaningful economic comparisons between the alternatives. The high degree of compatibility between carbon dioxide produced by integrated gasification combined cycle technology and geological sequestration conditions makes it apparent that this coupling currently holds the advantage. Using a basis that includes complete source-to-sink sequestration costs, the relative cost benefit of pre-combustion IGCC compared to other post-combustion methods is on the order of 30%. Additional economic benefits arising from enhanced oil recovery revenues and potential sequestration credits further improve this coupling.
Isotopic Tracers for Delineating Non-Point Source Pollutants in Surface Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davisson, M L
2001-03-01
This study tested whether isotope measurements of surface water and dissolved constituents in surface water could be used as tracers of non-point source pollution. Oxygen-18 was used as a water tracer, while carbon-14, carbon-13, and deuterium were tested as tracers of DOC. Carbon-14 and carbon-13 were also used as tracers of dissolved inorganic carbon, and chlorine-36 and uranium isotopes were tested as tracers of other dissolved salts. In addition, large databases of water quality measurements were assembled for the Missouri River at St. Louis and the Sacramento-San Joaquin Delta in California to enhance interpretive results of the isotope measurements. Much of the water quality data has been under-interpreted and provides a valuable resource for investigative research, which this report exploits and integrates with the isotope measurements.
Large-Eddy Simulation of Chemically Reactive Pollutant Transport from a Point Source in Urban Area
NASA Astrophysics Data System (ADS)
Du, Tangzheng; Liu, Chun-Ho
2013-04-01
Most air pollutants are chemically reactive, so using an inert scalar as the tracer in pollutant dispersion modelling would often overlook their impact on urban inhabitants. In this study, large-eddy simulation (LES) is used to examine the plume dispersion of chemically reactive pollutants in a hypothetical atmospheric boundary layer (ABL) in neutral stratification. The irreversible chemistry mechanism of ozone (O3) titration is integrated into the LES model. Nitric oxide (NO) is emitted from an elevated point source in a rectangular spatial domain doped with O3. The LES results compare well with the wind tunnel results available in the literature. Afterwards, the LES model is applied to idealized two-dimensional (2D) street canyons of unity aspect ratio to study the behaviour of chemically reactive plumes over idealized urban roughness. The relations among the various reaction/turbulence time scales and the associated dimensionless numbers are analysed.
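For readers unfamiliar with the titration mechanism, a minimal box-model sketch (plain Python, outside any LES) shows how emitted NO consumes the background O3. The rate constant and mixing ratios below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Box-model sketch of the irreversible O3 titration, NO + O3 -> NO2 + O2,
# integrated with forward Euler; k is an illustrative rate constant in
# ppb^-1 s^-1, not a value taken from the paper.
k = 4.0e-4
no, o3, no2 = 100.0, 50.0, 0.0   # initial mixing ratios, ppb
dt, t_end = 0.1, 600.0
for _ in range(int(t_end / dt)):
    r = k * no * o3              # reaction rate, ppb/s
    no, o3, no2 = no - r * dt, o3 - r * dt, no2 + r * dt
print(f"NO={no:.1f}  O3={o3:.1f}  NO2={no2:.1f} ppb after {t_end:.0f} s")
```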
Near-field transport of 129I from a point source in an in-room disposal vault
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolar, M.; Leneveu, D.M.; Johnson, L.H.
1995-12-31
A very small number of disposal containers of heat generating nuclear waste may have initial manufacturing defects that would lead to pin-hole type failures at the time of or shortly after emplacement. For sufficiently long-lived containers, only the initial defects need to be considered in modeling of release rates from the disposal vault. Two approaches to modeling of near-field mass transport from a single point source within a disposal room have been compared: the finite-element code MOTIF (A Model Of Transport In Fractured/porous media) and a boundary integral method (BIM). These two approaches were found to give identical results for a simplified model of the disposal room without groundwater flow. MOTIF has then been used to study the effects of groundwater flow on the mass transport out of the emplacement room.
Simulating the evolution of non-point source pollutants in a shallow water environment.
Yan, Min; Kahawita, Rene
2007-03-01
Non-point source pollution originating from surface applied chemicals, in either liquid or solid form as part of agricultural activities, appears in the surface runoff caused by rainfall. The infiltration and transport of these pollutants has a significant impact on subsurface and riverine water quality. The present paper describes the development of a unified 2-D mathematical model incorporating individual models for infiltration, adsorption, solubility rate, advection and diffusion, which significantly improves current practice in the mathematical modeling of pollutant evolution in shallow water. The governing equations have been solved numerically using cubic spline integration. Experiments were conducted at the Hydrodynamics Laboratory of the Ecole Polytechnique de Montreal to validate the mathematical model. Good correspondence between the computed results and experimental data has been obtained. The model may be used to predict the ultimate fate of surface applied chemicals by evaluating the proportions that are dissolved, infiltrated into the subsurface or washed off.
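As a rough illustration of the transport core of such a model, here is a hedged sketch of explicit finite-difference integration of the 2-D advection-diffusion terms. The paper's actual scheme is cubic-spline based and includes the infiltration, adsorption and solubility terms omitted here; all parameter values below are illustrative.

```python
import numpy as np

# Explicit finite-difference stepping of the 2-D advection-diffusion terms
# (periodic boundaries via np.roll, for brevity). Infiltration, adsorption
# and solubility terms from the paper are omitted.
nx, ny, dx, dy = 100, 100, 1.0, 1.0
u, v = 0.5, 0.2                  # depth-averaged velocities, m/s
D = 0.1                          # diffusion coefficient, m^2/s
dt = 0.2                         # respects advective and diffusive limits here

c = np.zeros((nx, ny))
c[45:55, 45:55] = 1.0            # initial surface concentration patch

for _ in range(200):
    cx = (np.roll(c, -1, 0) - np.roll(c, 1, 0)) / (2 * dx)
    cy = (np.roll(c, -1, 1) - np.roll(c, 1, 1)) / (2 * dy)
    lap = ((np.roll(c, -1, 0) - 2 * c + np.roll(c, 1, 0)) / dx**2
           + (np.roll(c, -1, 1) - 2 * c + np.roll(c, 1, 1)) / dy**2)
    c += dt * (-u * cx - v * cy + D * lap)
print("total mass (conserved):", round(float(c.sum()), 6))
```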
Two frameworks for integrating knowledge in induction
NASA Technical Reports Server (NTRS)
Rosenbloom, Paul S.; Hirsh, Haym; Cohen, William W.; Smith, Benjamin D.
1994-01-01
The use of knowledge in inductive learning is critical for improving the quality of the concept definitions generated, reducing the number of examples required in order to learn effective concept definitions, and reducing the computation needed to find good concept definitions. Relevant knowledge may come in many forms (such as examples, descriptions, advice, and constraints) and from many sources (such as books, teachers, databases, and scientific instruments). How to extract the relevant knowledge from this plethora of possibilities, and then to integrate it so as to appropriately affect the induction process, is perhaps the key issue at this point in inductive learning. Here the focus is on the integration part of this problem; that is, how induction algorithms can, and do, utilize a range of extracted knowledge. Preliminary work on a transformational framework for defining knowledge-intensive inductive algorithms out of relatively knowledge-free algorithms is described, as is a more tentative problem-space framework that attempts to cover all induction algorithms within a single general approach. These frameworks help to organize what is known about current knowledge-intensive induction algorithms, and to point towards new algorithms.
Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip
Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.
2017-01-01
Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 105 copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028
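The quantitative step in digital assays of this kind rests on Poisson statistics over the well array. A short sketch under stated assumptions (the 224-well, 100 nl geometry from the abstract; a hypothetical positive-well count) illustrates the standard most-probable-number readout; this is the generic digital-quantification formula, not code from the paper.

```python
import numpy as np

# Poisson (most-probable-number) readout for digital amplification: from
# the fraction of positive microwells, infer the mean copies per well and
# hence the sample concentration. Well count and volume follow the
# abstract (224 wells of 100 nl); the positive count is illustrative.
n_wells, well_ul = 224, 0.1       # 100 nl = 0.1 microlitre per well
positive = 60                     # hypothetical number of positive wells

p_hat = positive / n_wells
lam = -np.log(1.0 - p_hat)        # mean copies per well (Poisson)
conc = lam / well_ul              # copies per microlitre of plasma
print(f"{lam:.3f} copies/well -> {conc:.1f} copies/uL")
```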
NASA Technical Reports Server (NTRS)
Dawson, K. S.; Holzapfel, W. L.; Carlstrom, J. E.; Joy, M.; LaRoque, S. J.; Reese, E. D.; Rose, M. Franklin (Technical Monitor)
2001-01-01
We have used the Berkeley-Illinois-Maryland-Association (BIMA) array outfitted with sensitive cm-wave receivers to expand our search for minute scale anisotropy of the Cosmic Microwave Background (CMB). The interferometer was placed in a compact configuration to obtain high brightness sensitivity on arcminute scales over its 6.6' FWHM field of view. The sensitivity of this experiment to flat band power peaks at a multipole of l = 5530 which corresponds to an angular scale of ~2'. We present the analysis of a total of 470 hours of on-source integration time on eleven independent fields which were selected based on their low IR contrast and lack of bright radio sources. Applying a Bayesian analysis to the visibility data, we find CMB anisotropy flat band power Q_flat = 6.1(+2.8/-4.8) microKelvin at 68% confidence. The confidence of a nonzero signal is 76% and we find an upper limit of Q_flat < 12.4 microKelvin at 95% confidence. We have supplemented our BIMA observations with concurrent observations at 4.8 GHz with the VLA to search for and remove point sources. We find the point sources make an insignificant contribution to the observed anisotropy.
NASA Astrophysics Data System (ADS)
Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-09-01
Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from these detailed, up-to-date data sources. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building up strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work and substantial financial requirements, which can be overcome by means of integrated LiDAR and digital image datasets. Compared to predominant studies on tree extraction, mainly in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over Istanbul, Turkey. The paper shows that the integrated datasets are a suitable technology and a viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition and extent of trees in the study area, useful to city planners and other decision makers in order to understand how much canopy cover exists, identify new planting, removal, or reforestation opportunities, and determine which locations have the greatest need or potential to maximize benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.
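A minimal sketch of the kind of fusion rule such workflows typically end in: an NDVI test on the multispectral bands combined with a height test on a LiDAR-derived canopy height model. The thresholds and the synthetic rasters are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

# Illustrative fusion rule for urban tree extraction: a pixel is labelled
# "tree" when it is both vegetated (high NDVI from the multispectral image)
# and elevated above ground (LiDAR canopy height model, i.e. DSM - DTM).
rng = np.random.default_rng(1)
nir = rng.uniform(0.0, 1.0, (512, 512))      # near-infrared band (synthetic)
red = rng.uniform(0.0, 1.0, (512, 512))      # red band (synthetic)
chm = rng.uniform(0.0, 20.0, (512, 512))     # canopy height model, metres

ndvi = (nir - red) / np.maximum(nir + red, 1e-9)
trees = (ndvi > 0.3) & (chm > 2.0)           # vegetated AND taller than 2 m
print("tree pixels:", int(trees.sum()))
```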
Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs
2014-01-01
… interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that … over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms …
Ages of Records in Random Walks
NASA Astrophysics Data System (ADS)
Szabó, Réka; Vető, Bálint
2016-12-01
We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
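The quantities in question are easy to probe by simulation. A small Monte Carlo sketch (walk length, trial count, and Gaussian steps chosen purely for illustration) estimates the average proportion of the longest record age:

```python
import numpy as np

# Monte Carlo sketch: the proportion of time occupied by the
# longest-lasting (upper) record of a random walk with a continuous,
# symmetric step distribution.
rng = np.random.default_rng(2)
n, trials = 5000, 200
props = []
for _ in range(trials):
    s = np.cumsum(rng.standard_normal(n))
    record_times, best = [0], s[0]
    for i in range(1, n):
        if s[i] > best:              # a new upper record is set at step i
            best = s[i]
            record_times.append(i)
    record_times.append(n)           # the current record lasts until step n
    ages = np.diff(record_times)     # lifetimes of successive records
    props.append(ages.max() / n)
print("mean proportion of the longest record age:", np.mean(props))
```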
1985-05-01
… non-zero Dirichlet boundary conditions and/or general mixed type boundary conditions. Note that Neumann type boundary conditions enter the problem by … human and various loading conditions for the definition of a generalized safety guideline of blast exposure. To model the response of a sheep torso …
Visibility of quantum graph spectrum from the vertices
NASA Astrophysics Data System (ADS)
Kühn, Christian; Rohleder, Jonathan
2018-03-01
We investigate the relation between the eigenvalues of the Laplacian with Kirchhoff vertex conditions on a finite metric graph and a corresponding Titchmarsh-Weyl function (a parameter-dependent Neumann-to-Dirichlet map). We give a complete description of all real resonances, including multiplicities, in terms of the edge lengths and the connectivity of the graph, and apply it to characterize all eigenvalues which are visible for the Titchmarsh-Weyl function.
A nonlinear ordinary differential equation associated with the quantum sojourn time
NASA Astrophysics Data System (ADS)
Benguria, Rafael D.; Duclos, Pierre; Fernández, Claudio; Sing-Long, Carlos
2010-11-01
We study a nonlinear ordinary differential equation on the half-line, with the Dirichlet boundary condition at the origin. This equation arises when studying the local maxima of the sojourn time for a free quantum particle whose states belong to an adequate subspace of the unit sphere of the corresponding Hilbert space. We establish several results concerning the existence and asymptotic behavior of the solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callias, C.J.
It has been known for a long time that the spectrum of the Sturm-Liouville operator $-\partial_x^2 + v(x)$ on a finite interval does not uniquely determine the potential $v(x)$. In fact there are infinite-dimensional isospectral classes of potentials [PT]. Highly singular problems have been addressed as well, notably the question of the isospectral classes of the harmonic oscillator on the real line [McK-T], and, more recently, of the singular Sturm-Liouville operator $-\partial_x^2 + \ell(\ell+1)/x^2 + v(x)$ on $[0,1]$ [GR]. In this paper we examine the question of whether the structure of isolated singularities in the potential is spectrally determined. As an example of the fruits of our efforts we were able to prove the following result for the Dirichlet problem: Suppose that $v(x) \in C^\infty([-1,1]\setminus\{0\})$ is real-valued and $v^{(k)}(1)$ for all $k$. Suppose that $xv(x)$ is infinitely differentiable at $x = 0$ from the right and from the left and $\lim_{x\to 0^+} (d/dx)^k\,xv(x) = (-1)^{k+1}\lim_{x\to 0^-} (d/dx)^k\,xv(x)$, so that $v(x) \sim \sum_{k=-1}^{\infty} v_k |x|^k$ as $x \to 0$, for some constants $v_k$. Suppose that $v_{-1} \ne 0$. Then the spectrum of the Sturm-Liouville operator with periodic boundary conditions at $\pm 1$ and Dirichlet conditions at $x = 0$ uniquely determines the sequence of asymptotic coefficients $v_{-1}, v_0, v_1, \ldots$ Potentials with the $1/x$ singularity arise in the wave equation for a vibrating rod of variable cross-section, when the cross-sectional area of the rod vanishes quadratically (as a function of the distance from the end of the rod) at one point. The main reason why we look at this problem is as a model that will give us an idea of what can be expected when one attempts to get information about singularities from the spectrum.
A Dirichlet process model for classifying and forecasting epidemic curves
2014-01-01
Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642
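The clustering behavior that lets a DP model flag epidemic curves unlike those seen before comes from its nonparametric prior. A minimal sketch of that prior via the Chinese restaurant process follows; the concentration parameter and sample size are illustrative, and this is the generic construction, not the authors' implementation.

```python
import numpy as np

# Chinese restaurant process view of the Dirichlet-process prior:
# observation i joins an existing cluster with probability proportional to
# its size, or opens a new cluster (here, a new epidemic-curve type) with
# probability proportional to the concentration parameter alpha.
rng = np.random.default_rng(3)
alpha, n = 1.0, 100
counts, labels = [], []              # cluster sizes and assignments
for i in range(n):
    probs = np.array(counts + [alpha], dtype=float)
    probs /= probs.sum()
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):
        counts.append(1)             # a previously unseen cluster
    else:
        counts[k] += 1
    labels.append(k)
print("number of clusters:", len(counts), "sizes:", counts)
```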
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance and great potential for practical real-time clinical use.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
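The gamma-Poisson marginalization at the heart of this construction can be checked empirically at the level of a single atom. A short sketch follows; the parameter values are illustrative, and note that NumPy's negative_binomial parameterizes by the success probability.

```python
import numpy as np

# Empirical check of the gamma-Poisson (negative binomial) mixture:
# lambda ~ Gamma(shape=r, scale=p/(1-p)) mixed with Poisson(lambda) is
# marginally NB(r, p), i.e. mean r*p/(1-p) and variance mean + mean^2/r.
rng = np.random.default_rng(4)
r, p, n = 3.0, 0.4, 200_000
lam = rng.gamma(shape=r, scale=p / (1 - p), size=n)
counts = rng.poisson(lam)
nb = rng.negative_binomial(r, 1 - p, size=n)   # NumPy's p is P(success)
print("mixture   mean/var:", counts.mean(), counts.var())
print("direct NB mean/var:", nb.mean(), nb.var())
```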
Gross, Alexander; Murthy, Dhiraj
2014-10-01
This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately make the argument that natural language processing is a critical interdisciplinary methodology for making better sense of social 'Big Data'; using LDA, we were able to successfully model nested discussion topics from forums and blog posts. Importantly, we found that LDA can move us beyond the state-of-the-art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
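For orientation, here is a minimal LDA run with scikit-learn on toy documents; the corpus, topic count, and top-word display are illustrative, and the paper's LDA variants are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny LDA run on toy "posts": fit two topics on a document-term count
# matrix and print each topic's highest-weight words.
docs = [
    "telescope observation galaxy survey data release",
    "galaxy cluster survey telescope imaging",
    "grant proposal funding deadline budget",
    "budget travel funding collaboration meeting",
    "imaging pipeline software release data",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)                       # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):        # topic-word weights
    top = comp.argsort()[-4:][::-1]
    print(f"topic {t}:", [vocab[i] for i in top])
```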
MILAMIN 2 - Fast MATLAB FEM solver
NASA Astrophysics Data System (ADS)
Dabrowski, Marcin; Krotkiewski, Marcin; Schmid, Daniel W.
2013-04-01
MILAMIN is a free and efficient MATLAB-based two-dimensional FEM solver utilizing unstructured meshes [Dabrowski et al., G-cubed (2008)]. The code consists of steady-state thermal diffusion and incompressible Stokes flow solvers implemented in approximately 200 lines of native MATLAB code. The brevity makes the code easily customizable. An important quality of MILAMIN is speed - it can handle millions of nodes within minutes on one CPU core of a standard desktop computer, and is faster than many commercial solutions. The new MILAMIN 2 allows three-dimensional modeling. It is designed as a set of functional modules that can be used as building blocks for efficient FEM simulations using MATLAB. The utilities are largely implemented as native MATLAB functions. For performance critical parts we use MUTILS - a suite of compiled MEX functions optimized for shared memory multi-core computers. The most important features of MILAMIN 2 are: 1. Modular approach to defining, tracking, and discretizing the geometry of the model 2. Interfaces to external mesh generators (e.g., Triangle, Fade2d, T3D) and mesh utilities (e.g., element type conversion, fast point location, boundary extraction) 3. Efficient computation of the stiffness matrix for a wide range of element types, anisotropic materials and three-dimensional problems 4. Fast global matrix assembly using a dedicated MEX function 5. Automatic integration rules 6. Flexible prescription (spatial, temporal, and field functions) and efficient application of Dirichlet, Neumann, and periodic boundary conditions 7. Treatment of transient and non-linear problems 8. Various iterative and multi-level solution strategies 9. Post-processing tools (e.g., numerical integration) 10. Visualization primitives using MATLAB, and VTK export functions We provide a large number of examples that show how to implement a custom FEM solver using the MILAMIN 2 framework. The examples are MATLAB scripts of increasing complexity that address a given technical topic (e.g., creating meshes, reordering nodes, applying boundary conditions), a given numerical topic (e.g., using various solution strategies, non-linear iterations), or that present a fully-developed solver designed to address a scientific topic (e.g., performing Stokes flow simulations in synthetic porous medium). References: Dabrowski, M., M. Krotkiewski, and D. W. Schmid MILAMIN: MATLAB-based finite element method solver for large problems, Geochem. Geophys. Geosyst., 9, Q04030, 2008
Path-integral method for the source apportionment of photochemical pollutants
NASA Astrophysics Data System (ADS)
Dunker, A. M.
2015-06-01
A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.
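The quadrature step is simple to illustrate. Below is a hedged toy version: a stand-in nonlinear "model" C(E) with two sources, its analytic first-order sensitivities, and a 3-point Gauss-Legendre rule along the proportional-control path E(t) = t*E1. Everything named here is an illustrative assumption, not the CAMx configuration; the apportioned contributions sum to the exact increment, as the method's underlying identity requires.

```python
import numpy as np

# Path-integral apportionment sketch: the concentration difference between
# background (E = 0) and full emissions (E = E1) equals the integral of the
# first-order sensitivities dC/dE_j along the path, evaluated here by
# Gauss-Legendre quadrature. C is an arbitrary nonlinear stand-in model.
E1 = np.array([2.0, 3.0])                   # emissions from two sources

def C(E):                                   # "photochemical model" stand-in
    return E[0] * E[1] + 0.5 * E[0] ** 2

def sens(E):                                # dC/dE_j, computed analytically
    return np.array([E[1] + E[0], E[0]])

# 3-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1].
x, w = np.polynomial.legendre.leggauss(3)
t, w = 0.5 * (x + 1.0), 0.5 * w

contrib = sum(wi * sens(ti * E1) * E1 for ti, wi in zip(t, w))
print("source contributions:", contrib)
print("sum vs direct difference:", contrib.sum(), C(E1) - C(np.zeros(2)))
```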
Power system monitoring and source control of the Space Station Freedom DC power system testbed
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.; Baez, Anastacio N.
1992-01-01
Unlike a terrestrial electric utility which can purchase power from a neighboring utility, the Space Station Freedom (SSF) has strictly limited energy resources; as a result, source control, system monitoring, system protection, and load management are essential to the safe and efficient operation of the SSF Electric Power System (EPS). These functions are being evaluated in the DC Power Management and Distribution (PMAD) Testbed which NASA LeRC has developed at the Power System Facility (PSF) located in Cleveland, Ohio. The testbed is an ideal platform to develop, integrate, and verify power system monitoring and control algorithms. State Estimation (SE) is a monitoring tool used extensively in terrestrial electric utilities to ensure safe power system operation. It uses redundant system information to calculate the actual state of the EPS, to isolate faulty sensors, to determine source operating points, to verify faults detected by subsidiary controllers, and to identify high impedance faults. Source control and monitoring safeguard the power generation and storage subsystems and ensure that the power system operates within safe limits while satisfying user demands with minimal interruptions. System monitoring functions, in coordination with hardware implemented schemes, provide for a complete fault protection system. The objective of this paper is to overview the development and integration of the state estimator and the source control algorithms.
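State estimation of the kind described reduces, on a linearized network model, to weighted least squares with residual screening. A minimal sketch follows; the toy network, measurement matrix, and injected sensor fault are all illustrative assumptions, not the testbed's configuration.

```python
import numpy as np

# Weighted-least-squares state estimation: redundant measurements
# z = H x + noise are combined to estimate the state x, and normalized
# residuals flag suspect sensors.
rng = np.random.default_rng(5)
x_true = np.array([1.0, 0.95, 0.97])        # true (toy) state
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -1.0, 0.0],             # redundant "flow" measurements
              [0.0, 1.0, -1.0]])
sigma = 0.01
z = H @ x_true + rng.normal(0, sigma, H.shape[0])
z[3] += 0.2                                 # inject a faulty sensor reading

W = np.eye(len(z)) / sigma**2
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
residuals = z - H @ x_hat
print("estimate:", x_hat)
print("normalized residuals:", residuals / sigma)  # large -> suspect sensor
```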
Building an Open Source Framework for Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Jagers, B.; Meijers, E.; Villars, M.
2015-12-01
In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. However, to properly understand the system we should look at the dynamics of water, sediments, water quality, and ecology throughout the whole system from catchment to coast, both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world with this challenging task, we began developing a new open source modeling environment, DeltaShell, several years ago. It integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models, including generic components for tracking sediment, water quality, and ecological quantities throughout the hydrological cycle. The open source approach, combined with a modular design based on open standards that allows for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
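A small sketch of the integrated-GP idea, stripped of the EIV machinery and the GIA correction: draw a rate history from a GP prior and integrate it numerically to obtain the sea-level curve whose derivative at any time is the rate at that time. The kernel, length-scale, and grid are illustrative assumptions.

```python
import numpy as np

# Integrated Gaussian-process sketch: r(t) ~ GP(0, k) is the rate of
# sea-level change, and the sea-level curve is its running integral
# (trapezoid rule on a fine grid).
rng = np.random.default_rng(6)
t = np.linspace(0, 2000, 400)                    # years
ell, s2 = 300.0, 1.0                             # RBF length-scale, variance
K = s2 * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell**2)
rate = rng.multivariate_normal(np.zeros(t.size), K + 1e-10 * np.eye(t.size))

dt = t[1] - t[0]
sea_level = np.concatenate(
    [[0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * dt)])
print("rate at t = 1000:", rate[200], "(toy units per year)")
print("sea-level range:", sea_level.min(), sea_level.max())
```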
Ockenden, M C; Quinton, J N; Favaretto, N; Deasy, C; Surridge, B
2014-07-01
Surface water quality in the UK and much of Western Europe has improved in recent decades, in response to better point source controls and the regulation of fertilizer, manure and slurry use. However, diffuse sources of pollution, such as leaching or runoff of nutrients from agricultural fields, and micro-point sources including farmyards, manure heaps and septic tank sewerage systems, particularly systems without soil adsorption beds, are now hypothesised to contribute a significant proportion of the nutrients delivered to surface watercourses. Tackling such sources in an integrated manner is vital, if improvements in freshwater quality are to continue. In this research, we consider the combined effect of constructing small field wetlands and improving a septic tank system on stream water quality within an agricultural catchment in Cumbria, UK. Water quality in the ditch-wetland system was monitored by manual sampling at fortnightly intervals (April-October 2011 and February-October 2012), with the septic tank improvement taking place in February 2012. Reductions in nutrient concentrations were observed through the catchment, by up to 60% when considering total phosphorus (TP) entering and leaving a wetland with a long residence time. Average fluxes of TP, soluble reactive phosphorus (SRP) and ammonium-N (NH4-N) at the head of the ditch system in 2011 (before septic tank improvement) compared to 2012 (after septic tank improvement) were reduced by 28%, 9% and 37% respectively. However, TP concentration data continue to show a clear dilution with increasing flow, indicating that the system remained point source dominated even after the septic tank improvement.
Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
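The plane-fitting step in the second phase can be sketched as a least-squares fit via SVD; the synthetic point group and the distance threshold below are illustrative stand-ins for the paper's partitioned clusters.

```python
import numpy as np

# Least-squares plane fit to a partitioned point group: the normal is the
# right singular vector with the smallest singular value of the centred
# coordinates; a point-to-plane distance test then masks the points the
# volume element encloses.
rng = np.random.default_rng(7)
pts = rng.uniform(0, 10, (1000, 3))
pts[:, 2] = 0.02 * rng.standard_normal(1000)   # noisy planar patch, z ~ 0

centroid = pts.mean(axis=0)
_, _, Vt = np.linalg.svd(pts - centroid)
normal = Vt[-1]                                # plane normal
dist = np.abs((pts - centroid) @ normal)       # point-to-plane distances
enclosed = dist < 0.05                         # illustrative threshold
print("plane normal:", np.round(normal, 3), "masked:", int(enclosed.sum()))
```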
[Access to health information sources in Spain. How to combat "infoxication"].
Navas-Martin, Miguel Ángel; Albornos-Muñoz, Laura; Escandell-García, Cintia
2012-01-01
The Internet has become a priceless source of health information for both patients and healthcare professionals. However, the universality and the abundance of information can lead to unfounded conclusions about health issues that confuse rather than clarify. This causes an intoxication by information: "infoxication". The question lies in knowing how to filter the information that is useful, accurate and relevant for our purposes. In this regard, integrative portals, such as the Biblioteca Virtual de Salud, compile information at different levels (international, national and regional) and across different types of resources (databases, repositories, bibliographic sources, etc.), becoming a starting point for obtaining quality information. Copyright © 2011 Elsevier España, S.L. All rights reserved.
High frequency sound propagation in a network of interconnecting streets
NASA Astrophysics Data System (ADS)
Hewett, D. P.
2012-12-01
We propose a new model for the propagation of acoustic energy from a time-harmonic point source through a network of interconnecting streets in the high frequency regime, in which the wavelength is small compared to typical macro-lengthscales such as street widths/lengths and building heights. Our model, which is based on geometrical acoustics (ray theory), represents the acoustic power flow from the source along any pathway through the network as the integral of a power density over the launch angle of a ray emanating from the source, and takes into account the key phenomena involved in the propagation, namely energy loss by wall absorption, energy redistribution at junctions, and, in 3D, energy loss to the atmosphere. The model predicts strongly anisotropic decay away from the source, with the power flow decaying exponentially in the number of junctions from the source, except along the axial directions of the network, where the decay is algebraic.
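A toy 2-D version of the power-flow integral for a single straight street makes the structure concrete; the geometry and reflection coefficient below are illustrative assumptions, and junction redistribution and atmospheric loss are omitted.

```python
import numpy as np

# A ray launched at angle theta to the axis of a street of length L and
# width w undergoes roughly L*|tan(theta)|/w wall reflections before
# reaching the far end, keeping a fraction alpha of its energy per bounce.
# Integrating the source power density over launch angle gives the power
# delivered to the street end.
L, w = 100.0, 10.0           # street length and width, metres
alpha = 0.8                  # wall energy reflection coefficient
theta = np.linspace(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3, 20001)
rho = 1.0 / (2.0 * np.pi)    # isotropic source power density per radian

n_bounce = L * np.abs(np.tan(theta)) / w
dtheta = theta[1] - theta[0]
power = float(np.sum(rho * alpha**n_bounce) * dtheta)
print(f"fraction of source power reaching the street end: {power:.4f}")
```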
Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array
NASA Astrophysics Data System (ADS)
Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann
2017-04-01
An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
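The pseudo-inverse variant is compact enough to sketch: build a matrix of monopole Green's functions from candidate source points to microphone positions and invert it with numpy.linalg.pinv. The geometry, frequency, and source strengths are illustrative assumptions, and the dual-LP deconvolution is not reproduced here.

```python
import numpy as np

# Moore-Penrose inversion of a monopole propagation model: solve G q = p
# for complex source strengths q from pressures p at the array positions.
rng = np.random.default_rng(8)
k = 2 * np.pi * 1000 / 340.0                      # wavenumber at 1 kHz
mics = np.c_[np.linspace(-1, 1, 24), np.full(24, 2.0), np.zeros(24)]
srcs = np.c_[np.linspace(-0.2, 0.2, 5), np.zeros(5), np.zeros(5)]

r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)         # monopole Green's functions

q_true = np.zeros(5, complex)
q_true[[1, 3]] = [1.0, 0.5j]                      # two active sources
p = G @ q_true + 1e-6 * rng.standard_normal(24)   # "measured" pressures

q_est = np.linalg.pinv(G) @ p
print("recovered |q|:", np.round(np.abs(q_est), 3))
```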
Aeronautical Systems Division (AFSC) Program Management Resource Document.
1981-09-30
… assure that the detail design adequately satisfies the requirements contained in the Part I Development Specifications and to allow the PM to formally … disciplines. He becomes a source of integrated information concerning a particular program and an interaction point for coordinating the diverse … consider cost, schedule, and technical factors not only individually, but also their interaction with each other. The fact that a program has …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Baxter, Van D.; Rice, C. Keith
For this study, we developed a new air source integrated heat pump (AS-IHP) model in EnergyPlus and conducted building energy simulations to demonstrate greater than 50% average energy savings, in comparison to a baseline heat pump with electric water heater, over 10 US cities, based on the EnergyPlus quick-service restaurant template building. We also assessed water heating energy saving potentials using AS-IHPs versus gas heating, and pointed out climate zones where AS-IHPs are promising.
Ground deposition of liquid droplets released from a point source in the atmospheric surface layer
NASA Astrophysics Data System (ADS)
Panneton, Bernard
1989-01-01
A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets, and droplet coalescence is inhibited by the rotational motion of the spray head. The two-dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions and crosswind-integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill's technique. Models of the crosswind-integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.
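For context, here is a hedged sketch of the simplest member of the model family evaluated above: a tilted Gaussian-plume estimate of the crosswind-integrated deposit. The dispersion law and all parameter values are illustrative assumptions, not those of the experiments.

```python
import numpy as np

# Tilted Gaussian-plume estimate of the crosswind-integrated ground
# deposit from an elevated point source of settling droplets: the plume
# axis descends at vs/U while spreading vertically as sigma_z(x).
Q = 1.0          # source strength, arbitrary units
H = 7.0          # release height, m (matches the experiments)
U = 3.0          # mean wind speed, m/s (illustrative)
vs = 0.5         # settling velocity, m/s (roughly a 150 micron droplet)

x = np.linspace(1.0, 200.0, 400)          # downwind distance, m
sigma_z = 0.1 * x**0.9                    # illustrative vertical dispersion
h_eff = H - vs * x / U                    # plume axis tilted by settling
dep = (Q * vs / (np.sqrt(2 * np.pi) * U * sigma_z)
       * np.exp(-0.5 * (h_eff / sigma_z) ** 2))
print("peak deposit at x =", float(x[np.argmax(dep)]), "m")
```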
A Possible Magnetar Nature for IGR J16358-4726
NASA Technical Reports Server (NTRS)
Patel, S. K.; Zurita, J.; DelSanto, M.; Finger, M.; Kouveliotou, C.; Eichler, D.; Gogus, E.; Ubertini, P.; Walter, R.; Woods, P.;
2007-01-01
We present detailed spectral and timing analysis of the hard X-ray transient IGR J16358-4726 using multisatellite archival observations. A study of the source flux time history over 6 yr suggests that lower luminosity transient outbursts can be occurring in intervals of at most 1 yr. Joint spectral fits of the higher luminosity outburst using simultaneous Chandra ACIS and INTEGRAL ISGRI data reveal a spectrum well described by an absorbed power-law model with a high-energy cutoff plus an Fe line. We detected the 1.6 hr pulsations initially reported using Chandra ACIS also in the INTEGRAL ISGRI light curve and in subsequent XMM-Newton observations. Using the INTEGRAL data, we identified a spin-up of 94 s (Pdot = 1.6 x 10^-4), which strongly points to a neutron star nature for IGR J16358-4726. Assuming that the spin-up is due to disk accretion, we estimate that the source magnetic field ranges between 10^13 and 10^15 G, depending on its distance, possibly supporting a magnetar nature for IGR J16358-4726.
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s_0 and a_0 Lamb waves is vividly apparent in the images.
Design and Evaluation of Large-Aperture Gallium Fixed-Point Blackbody
NASA Astrophysics Data System (ADS)
Khromchenko, V. B.; Mekhontsev, S. N.; Hanssen, L. M.
2009-02-01
To complement existing water bath blackbodies that now serve as NIST primary standard sources in the temperature range from 15 °C to 75 °C, a gallium fixed-point blackbody has been recently built. The main objectives of the project included creating an extended-area radiation source with a target emissivity of 0.9999 capable of operating either inside a cryo-vacuum chamber or in a standard laboratory environment. A minimum aperture diameter of 45 mm is necessary for the calibration of radiometers with a collimated input geometry or large spot size. This article describes the design and performance evaluation of the gallium fixed-point blackbody, including the calculation and measurements of directional effective emissivity, estimates of uncertainty due to the temperature drop across the interface between the pure metal and radiating surfaces, as well as the radiometrically obtained spatial uniformity of the radiance temperature and the melting plateau stability. Another important test is the measurement of the cavity reflectance, which was achieved by using total integrated scatter measurements at a laser wavelength of 10.6 μm. The result allows one to predict the performance under the low-background conditions of a cryo-chamber. Finally, results of the spectral radiance comparison with the NIST water-bath blackbody are provided. The experimental results are in good agreement with predicted values and demonstrate the potential of our approach. It is anticipated that, after completion of the characterization, a similar source operating at the water triple point will be constructed.
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression, and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples of when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and a categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees, and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable that affects the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
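A minimal sketch of the marginal-value linearization idea follows. It assumes a black-box portfolio value function and per-project costs (all names are ours), and it solves the linearized knapsack greedily by value/cost ratio, which is the LP-relaxation solution up to one fractional item; it is an illustration of the idea, not the authors' exact algorithm.

```python
import numpy as np

def marginal_value_heuristic(value, costs, budget, n_iter=20, seed=0):
    """Illustrative sketch: repeatedly linearize a non-convex portfolio
    value via marginal project values, then solve the induced knapsack
    greedily on the linearized objective."""
    costs = np.asarray(costs, dtype=float)
    n = costs.size
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n)              # random starting portfolio
    for _ in range(n_iter):
        marg = np.empty(n)
        for j in range(n):                      # marginal value of project j
            with_j, without_j = x.copy(), x.copy()
            with_j[j], without_j[j] = 1, 0
            marg[j] = value(with_j) - value(without_j)
        order = np.argsort(-marg / costs)       # greedy knapsack by ratio
        x_new, spent = np.zeros(n, dtype=int), 0.0
        for j in order:
            if marg[j] > 0 and spent + costs[j] <= budget:
                x_new[j], spent = 1, spent + costs[j]
        if np.array_equal(x_new, x):            # fixed point of the linearization
            break
        x = x_new
    return x

# Toy usage: a value function with pairwise synergies (hypothetical data)
v, Q = np.array([3.0, 2.0, 4.0]), 0.1 * np.ones((3, 3))
portfolio = marginal_value_heuristic(lambda x: x @ v + x @ Q @ x,
                                     costs=[1.0, 1.0, 2.0], budget=3.0)
```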
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory, through an algorithm, to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources, using mathematical symmetry properties of both the kernel in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and additive complexity (k − 1) × N for the case of sparse matrices and binary images, where k is the number of pixels other than zero and N is the total number of points in the image.
BASINs and WEPP Climate Assessment Tools (CAT): Case ...
EPA announced the release of the final report, BASINs and WEPP Climate Assessment Tools (CAT): Case Study Guide to Potential Applications. This report supports application of two recently developed water modeling tools, the Better Assessment Science Integrating point & Non-point Sources (BASINS) and the Water Erosion Prediction Project Climate Assessment Tool (WEPPCAT). The report presents a series of short, illustrative case studies designed to demonstrate the capabilities of these tools for conducting scenario-based assessments of the potential effects of climate change on streamflow and water quality.
NASA Astrophysics Data System (ADS)
Duperron, Matthieu; Carroll, Lee; Rensing, Marc; Collins, Sean; Zhao, Yan; Li, Yanlu; Baets, Roel; O'Brien, Peter
2017-02-01
The cost-effective integration of laser sources on Silicon Photonic Integrated Circuits (Si-PICs) is a key challenge to realizing the full potential of on-chip photonic solutions for telecommunication and medical applications. Hybrid integration can offer a route to high-yield solutions, using only known-good laser chips and simple free-space micro-optics to transport light from a discrete laser diode to a grating coupler on the Si-PIC. In this work, we describe a passively assembled micro-optical bench (MOB) for the hybrid integration of a 1550 nm, 20 MHz linewidth laser diode on a Si-PIC, developed for an on-chip interferometer-based medical device. A dual-lens MOB design minimizes aberrations in the laser spot transported to the standard grating coupler (15 μm × 12 μm) on the Si-PIC, and facilitates the inclusion of a sub-millimeter latched-garnet optical isolator. The 20 dB suppression from the isolator helps ensure the high-frequency stability of the laser diode, while the high thermal conductivity of the AlN submount (300 W/(m·°C)) and the close integration of a micro-bead thermistor ensure stable and efficient thermo-electric cooling of the laser diode, which helps minimise low-frequency drift during the approximately 15 s of operation needed for the point-of-care measurement. The dual-lens MOB is compatible with cost-effective, passively-aligned mass production and can be optimised for alternative PIC-based applications.
Integration of optical imaging with a small animal irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An
Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer-controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of the optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances ranging between 0.7 and 1 mm between the actual and optimized source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
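The registration described uses a standard pinhole camera model; in conventional notation (ours, not necessarily the authors'), the mapping from a 3D CBCT point to optical pixel coordinates is

\[ s\begin{pmatrix}u\\v\\1\end{pmatrix} = K\,[\,R \mid t\,]\begin{pmatrix}X\\Y\\Z\\1\end{pmatrix}, \qquad K = \begin{pmatrix}f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{pmatrix}, \]

where (X, Y, Z) are CBCT coordinates, (u, v) optical pixel coordinates, R and t the extrinsic rotation and translation estimated from the markers, K the camera intrinsics, and s a projective scale factor.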
Anderson Localization for Schrödinger Operators on with Strongly Mixing Potentials
NASA Astrophysics Data System (ADS)
Bourgain, Jean; Schlag, Wilhelm
In this paper we show that for a.e. x ∈ [0, 2π) the operators defined on as
Wireless Wearable Multisensory Suite and Real-Time Prediction of Obstructive Sleep Apnea Episodes.
Le, Trung Q; Cheng, Changqing; Sangasoongsong, Akkarapol; Wongdhamma, Woranat; Bukkapatnam, Satish T S
2013-01-01
Obstructive sleep apnea (OSA) is a common sleep disorder found in 24% of adult men and 9% of adult women. Although continuous positive airway pressure (CPAP) has emerged as a standard therapy for OSA, a majority of patients are not tolerant of this treatment, largely because of the uncomfortable nasal air delivery during their sleep. Recent advances in wireless communication and advanced ("big data") predictive analytics technologies offer radically new point-of-care treatment approaches for OSA episodes with unprecedented comfort and affordability. We introduce a Dirichlet process-based mixture Gaussian process (DPMG) model to predict the onset of sleep apnea episodes by analyzing complex cardiorespiratory signals gathered from a custom-designed wireless wearable multisensory suite. Extensive testing with signals from the multisensory suite as well as PhysioNet's OSA database suggests that the accuracy of offline OSA classification is 88%, while the accuracy of predicting an OSA episode 1 min ahead is 83% and 3 min ahead is 77%. Such accurate prediction of an impending OSA episode can be used to adaptively adjust CPAP airflow (toward improving the patient's adherence) or the torso posture (e.g., minor chin adjustments to maintain steady levels of airflow).
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available for predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, the Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predicting radiated sound pressure levels for full-scale configurations at relevant frequencies. Finally, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits and costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next-generation aircraft system noise prediction tools.
Characterization of mercury contamination in the Androscoggin River, Coos County, New Hampshire
Chalmers, Ann; Marvin-DiPasquale, Mark C.; Degnan, James R.; Coles, James; Agee, Jennifer L.; Luce, Darryl
2013-01-01
Concentrations of total mercury (THg) and MeHg in sediment, pore water, and biota in the Androscoggin River were elevated downstream from the former chloralkali facility compared with those at upstream reference sites. Sequential extraction of surface sediment showed a distinct difference in Hg speciation upstream compared with downstream from the contamination site. An upstream site was dominated by potassium hydroxide-extractable forms (for example, organic Hg or particle-bound Hg(II)), whereas sites downstream from the point source were dominated by more chemically recalcitrant forms (largely concentrated nitric acid-extractable), indicative of elemental mercury or mercurous chloride. At all sites, only a minor fraction (less than 0.1 percent) of THg existed in chemically labile forms (for example, water extractable or weak acid extractable). All metrics indicated that a greater percentage of mercury at an upstream site was available for Hg(II)-methylation compared with sites downstream from the point source, but the absolute concentration of bioavailable Hg(II) was greater downstream from the point source. In addition, the concentration of tin-reducible inorganic reactive mercury, a surrogate measure of bioavailable Hg(II), generally increased with distance downstream from the point source. Whereas concentrations of mercury species on a sediment-dry-weight basis generally reflected the relative location of the sample to the point source, river-reach-integrated mercury-species inventories and MeHg production potential (MPP) rates reflected the amount of fine-grained sediment in a given reach. THg concentrations in biota were significantly higher downstream from the point source compared with upstream reference sites for smallmouth bass, white sucker, crayfish, oligochaetes, bat fur, nestling tree swallow blood and feathers, adult tree swallow blood, and tree swallow eggs. As with tin-reducible inorganic reactive mercury, THg in smallmouth bass also increased with distance downstream from the point source. Toxicity tests and invertebrate community assessments suggested that invertebrates were not impaired at the current (2009 and 2010) levels of mercury contamination downstream from the point source. Concentrations of THg and MeHg in most water and sediment samples from the Androscoggin River were below U.S. Environmental Protection Agency (USEPA), Canadian Council of Ministers of the Environment, and probable-effects-level guidelines. Surface-water and sediment samples from the Androscoggin River had similar THg concentrations but lower MeHg concentrations compared with other rivers in the region. Concentrations of THg in fish tissue were all above regional and USEPA guidelines. Moreover, median THg concentrations in smallmouth bass from the Androscoggin River were significantly higher than those reported in regional surveys of rivers and streams nationwide and in the Northeastern United States and Canada. The higher concentrations of mercury in smallmouth bass suggest conditions may be more favorable for Hg(II)-methylation and bioaccumulation in the Androscoggin River compared with many other rivers in the United States and Canada.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza
2016-07-26
The source-count distribution as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so-far-undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10⁻¹¹ cm⁻² s⁻¹, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/−1.3) × 10⁻⁸ cm⁻² s⁻¹. The power-law index n₁ = 3.1 (+0.7/−0.5) for bright sources above the break hardens to n₂ = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10⁻¹¹ cm⁻² s⁻¹ at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
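In standard notation (ours), the broken power law reported above can be written as

\[ \frac{dN}{dS} \;\propto\; \begin{cases} S^{-n_1}, & S \ge S_b, \\ S^{-n_2}, & S < S_b, \end{cases} \qquad n_1 = 3.1^{+0.7}_{-0.5}, \quad n_2 = 1.97 \pm 0.03, \quad S_b = 2.1^{+1.0}_{-1.3} \times 10^{-8}\,\mathrm{cm^{-2}\,s^{-1}}, \]

with the possible second break constrained to \( S < 6.4 \times 10^{-11}\,\mathrm{cm^{-2}\,s^{-1}} \) at the 95% confidence level.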
Terrestrial laser scanning in monitoring of anthropogenic objects
NASA Astrophysics Data System (ADS)
Zaczek-Peplinska, Janina; Kowalska, Maria
2017-12-01
The registered xyz coordinates in the form of a point cloud captured by a terrestrial laser scanner, together with the intensity values (I) assigned to them, make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like point distribution (dependent on the distance between the scanner and the surveyed surface), angle of incidence, scan density, and intensity value have to be taken into consideration. A prerequisite for a correct analysis of point clouds registered during periodic measurements with a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of analyses integrating geometric and physical data in one point cloud: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show multiple possible uses of terrestrial laser scanning data and demonstrate the necessity of multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multi-source data analyses with regard to intensity value correction for the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner, and the integrated Z+F T-Cam thermal camera.
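A common first-order (Lambertian) correction for the incidence-angle effect mentioned above is sketched below; the cosine model and all names are our assumptions, not necessarily the authors' exact correction.

```python
import numpy as np

def correct_intensity(intensity, normals, beam_dirs):
    """First-order Lambertian incidence-angle correction (assumed model):
    I_corr = I / cos(alpha), where alpha is the angle between the local
    surface normal and the incoming laser beam direction."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    beam_dirs = beam_dirs / np.linalg.norm(beam_dirs, axis=1, keepdims=True)
    cos_alpha = np.abs(np.sum(normals * beam_dirs, axis=1))
    return intensity / np.clip(cos_alpha, 1e-3, None)  # guard grazing angles
```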
Wu, Yiping; Chen, Ji
2013-01-01
Understanding the physical processes of point-source (PS) and nonpoint-source (NPS) pollution is critical to evaluating river water quality and identifying major pollutant sources in a watershed. In this study, we used the physically based hydrological/water quality model, the Soil and Water Assessment Tool, to investigate the influence of PS and NPS pollution on the water quality of the East River (Dongjiang in Chinese) in southern China. Our results indicate that NPS pollution was the dominant contribution (>94%) to nutrient loads except for mineral phosphorus (50%). A comprehensive Water Quality Index (WQI) computed using eight key water quality variables demonstrates that water quality is better upstream than downstream despite the higher level of ammonium nitrogen found in upstream waters. Also, the temporal (seasonal) and spatial distributions of nutrient loads clearly indicate the critical time period (from late dry season to early wet season) and pollution source areas within the basin (middle and downstream agricultural lands), which resource managers can use to accomplish substantial reductions of NPS pollutant loadings. Overall, this study improves our understanding of the relationship between human activities and pollutant loads and further contributes to decision support for local watershed managers seeking to protect water quality in this region. In particular, the methods presented, such as integrating the WQI with watershed modeling and identifying the critical time period and pollution source areas, can be valuable for other researchers worldwide.
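The abstract does not give the WQI formula; a generic weighted-sum sketch (equal weights and sub-index scores are placeholders of ours) illustrates how eight variables might be combined into a single index:

```python
import numpy as np

# Assumed equal weights over eight water quality variables
weights = np.full(8, 1.0 / 8)

def wqi(subindex_scores, w=weights):
    """Weighted-sum WQI: subindex_scores are quality ratings (0-100)
    for each variable; returns the aggregate index."""
    return float(np.dot(w, subindex_scores))

print(wqi([90, 85, 70, 95, 60, 80, 88, 75]))  # -> 80.375
```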
UTD at TREC 2014: Query Expansion for Clinical Decision Support
2014-11-01
Description: A 62-year-old man sees a neurologist for progressive memory loss and jerking movements of the lower extremities. Neurologic examination confirms...infiltration. Summary: 62-year-old man with progressive memory loss and involuntary leg movements. Brain MRI reveals cortical atrophy, and cortical...latent topics produced by the Latent Dirichlet Allocation (LDA) on the TREC-CDS corpus of scientific articles. The position of words "loss" and "memory
Nondestructive Testing and Target Identification
2016-12-21
Dirichlet obstacle coated by a thin layer of non-absorbing media, IMA J. Appl. Math, 80, 1063-1098, (2015). Abstract: We consider the transmission...F. Cakoni, I. De Teresa, H. Haddar and P. Monk, Nondestructive testing of the delaminated interface between two materials, SIAM J. Appl. Math., 76...then they form a discrete set. 22. F. Cakoni, D. Colton, S. Meng and P. Monk, Steklov eigenvalues in inverse scattering, SIAM J. Appl. Math. 76, 1737
Single-grid spectral collocation for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte
1988-01-01
The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.
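As a one-dimensional toy analogue of the single-grid idea, the sketch below builds a Chebyshev Gauss-Lobatto grid and differentiation matrix (following Trefethen's classic construction) and solves u'' = f with homogeneous Dirichlet conditions by collocation on that one grid. It is an illustration of collocation on Gauss-Lobatto nodes, not the paper's Stokes/Navier-Stokes solver.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto nodes and differentiation matrix
    (Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                 # diagonal via row-sum trick
    return D, x

# Collocation for u'' = f on (-1, 1) with u(±1) = 0
N = 32
D, x = cheb(N)
D2 = D @ D
f = np.exp(x) * np.sin(2 * x)                   # arbitrary right-hand side
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2[1:N, 1:N], f[1:N])  # Dirichlet BCs by row/column deletion
```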
The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval
2006-07-01
Unigram language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses...the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function...In addition, the SD-based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework. We
Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies
2010-03-01
Probabilistic Latent Semantic Indexing (PLSI) is an automated indexing information retrieval model [20]. It is based on a statistical latent class model which is...uses a statistical foundation that is more accurate in finding hidden semantic relationships [20]. The model uses factor analysis of count data, number...principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function [20]. The statistical
TIGRESS highly-segmented high-purity germanium clover detector
NASA Astrophysics Data System (ADS)
Scraggs, H. C.; Pearson, C. J.; Hackman, G.; Smith, M. B.; Austin, R. A. E.; Ball, G. C.; Boston, A. J.; Bricault, P.; Chakrawarthy, R. S.; Churchman, R.; Cowan, N.; Cronkhite, G.; Cunningham, E. S.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hyland, B.; Jones, B.; Leslie, J. R.; Martin, J.-P.; Morris, D.; Morton, A. C.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Svensson, C. E.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.; Zimmerman, L.
2005-05-01
The TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS) will consist of twelve units of four high-purity germanium (HPGe) crystals in a common cryostat. The outer contacts of each crystal will be divided into four quadrants and two lateral segments for a total of eight outer contacts. The performance of a prototype HPGe four-crystal unit has been investigated. Integrated noise spectra for all contacts were measured. Energy resolutions, relative efficiencies for both individual crystals and for the entire unit, and peak-to-total ratios were measured with point-like sources. Position-dependent performance was measured by moving a collimated source across the face of the detector.
Boundary-integral modeling of cochlear hydrodynamics
NASA Astrophysics Data System (ADS)
Pozrikidis, C.
2008-04-01
A two-dimensional model that captures the essential features of the vibration of the basilar membrane of the cochlea is proposed. The flow due to the vibration of the stapes footplate and round window is modeled by a point source and a point sink, and the cochlear pressure is computed simultaneously with the oscillations of the basilar membrane. The mathematical formulation relies on the boundary-integral representation of the potential flow established far from the basilar membrane and cochlea side walls, neglecting the thin Stokes boundary layer lining these surfaces. The boundary-integral approach furnishes integral equations for the membrane vibration amplitude and pressure distribution on the upper or lower side of the membrane. Several approaches are discussed, and numerical solutions in the frequency domain are presented for a rectangular cochlea model using different membrane response functions. The numerical results reproduce and extend the theoretical predictions of previous authors and delineate the effect of physical and geometrical parameters. It is found that the membrane vibration depends weakly on the position of the membrane between the upper and lower wall of the cochlear channel and on the precise location of the oval and round windows. Solutions of the initial-value problem with a single-period sinusoidal impulse reveal the formation of a traveling wave packet that eventually disappears at the helicotrema.
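In two dimensions, the source/sink pair driving the flow can be written (in our notation, for the unbounded domain before the boundary-integral corrections are applied) as the free-space potential

\[ \phi(\mathbf{x}) \;=\; \frac{m}{2\pi}\,\ln\lvert \mathbf{x}-\mathbf{x}_s\rvert \;-\; \frac{m}{2\pi}\,\ln\lvert \mathbf{x}-\mathbf{x}_k\rvert, \]

where \( \mathbf{x}_s \) and \( \mathbf{x}_k \) mark the stapes footplate (source) and round window (sink), and m is the source strength; the boundary-integral representation then adds the contributions required by the basilar membrane and channel walls.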
Exactly solvable model of the two-dimensional electrical double layer.
Samaj, L; Bajnok, Z
2005-12-01
We consider equilibrium statistical mechanics of a simplified model for the ideal conductor electrode in an interface contact with a classical semi-infinite electrolyte, modeled by the two-dimensional Coulomb gas of pointlike unit charges in the stability-against-collapse regime of reduced inverse temperatures 0 ≤ β < 2. If there is a potential difference between the bulk interior of the electrolyte and the grounded electrode, the electrolyte region close to the electrode (known as the electrical double layer) carries some nonzero surface charge density. The model is mappable onto an integrable semi-infinite sine-Gordon theory with Dirichlet boundary conditions. The exact form-factor and boundary state information gained from the mapping provide asymptotic forms of the charge and number density profiles of electrolyte particles at large distances from the interface. The result for the asymptotic behavior of the induced electric potential, related to the charge density via the Poisson equation, confirms the validity of the concept of renormalized charge and the corresponding saturation hypothesis. It is documented on the nonperturbative result for the asymptotic density profile at a strictly nonzero β that the Debye-Hückel β → 0 limit is a delicate issue.
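For reference, the Poisson relation invoked above reads, in the two-dimensional Coulomb conventions where the pair potential between unit charges is \( -\ln r \) (our normalization),

\[ \Delta \phi(x) \;=\; -2\pi\,\rho(x), \]

with ρ the induced charge density profile and φ the induced electric potential whose large-distance asymptotics are discussed in the abstract.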
Systematic identification of latent disease-gene associations from PubMed articles.
Zhang, Yuji; Shen, Feichen; Mojarad, Majid Rastegar; Li, Dingcheng; Liu, Sijia; Tao, Cui; Yu, Yue; Liu, Hongfang
2018-01-01
Recent scientific advances have accumulated a tremendous amount of biomedical knowledge, providing novel insights into the relationships between molecular and cellular processes and diseases. Literature mining is one of the commonly used methods to retrieve and extract information from scientific publications for understanding these associations. However, due to the large data volume and complicated associations with noise, the interpretability of such association data for semantic knowledge discovery is challenging. In this study, we describe an integrative computational framework aiming to expedite the discovery of latent disease mechanisms by dissecting 146,245 disease-gene associations from over 25 million PubMed-indexed articles. We take advantage of both Latent Dirichlet Allocation (LDA) modeling and network-based analysis for their capabilities of detecting latent associations and reducing noise in large-volume data, respectively. Our results demonstrate that (1) the LDA-based modeling is able to group similar diseases into disease topics; (2) the disease-specific association networks follow the scale-free network property; (3) certain subnetwork patterns were enriched in the disease-specific association networks; and (4) genes were enriched in topic-specific biological processes. Our approach offers promising opportunities for latent disease-gene knowledge discovery in biomedical research.
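A minimal sketch of the LDA step follows, with a toy corpus and scikit-learn as the tooling (our choice for illustration, not necessarily the study's implementation):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy disease-association documents (placeholders, not study data)
docs = [
    "insulin glucose pancreas diabetes",
    "amyloid tau neuron memory alzheimer",
    "glucose obesity diabetes metabolism",
]
X = CountVectorizer().fit_transform(docs)            # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)                        # per-document topic proportions
```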
Electromagnetic and scalar diffraction by a right-angled wedge with a uniform surface impedance
NASA Technical Reports Server (NTRS)
Hwang, Y. M.
1974-01-01
The diffraction of an electromagnetic wave by a perfectly conducting right-angled wedge with one surface covered by a dielectric slab or absorber is considered. The effect of the coated surface is approximated by a uniform surface impedance. The solution of the normally incident electromagnetic problem is facilitated by introducing two scalar fields which satisfy a mixed boundary condition on one surface of the wedge and a Neumann or Dirichlet boundary condition on the other. A functional transformation is employed to simplify the boundary conditions so that eigenfunction expansions can be obtained for the resulting Green's functions. The eigenfunction expansions are transformed into integral representations, which are then evaluated asymptotically by the modified Pauli-Clemmow method of steepest descent. A far-zone approximation is made to obtain the scattered field, from which the diffraction coefficient is found for a scalar plane, cylindrical, or spherical wave incident on the edge. With the introduction of a ray-fixed coordinate system, the dyadic diffraction coefficient for plane or cylindrical EM waves normally incident on the edge is reduced to the sum of two dyads, which can be written alternatively as a 2 × 2 diagonal matrix.
Patterns and age distribution of ground-water flow to streams
Modica, E.; Reilly, T.E.; Pollock, D.W.
1997-01-01
Simulations of ground-water flow in a generic aquifer system were made to characterize the topology of ground-water flow in the stream subsystem and to evaluate its relation to deeper ground-water flow. The flow models are patterned after hydraulic characteristics of aquifers of the Atlantic Coastal Plain and are based on numerical solutions to three-dimensional, steady-state, unconfined flow. The models were used to evaluate the effects of aquifer horizontal-to-vertical hydraulic conductivity ratios, aquifer thickness, and areal recharge rates on flow in the stream subsystem. A particle tracker was used to determine flow paths in a stream subsystem, to establish the relation between ground-water seepage to points along a simulated stream and its source area of flow, and to determine ground-water residence time in stream subsystems. In a geometrically simple aquifer system with accretion, the source area of flow to streams resembles an elongated ellipse that tapers in the downgradient direction. Increased recharge causes an expansion of the stream subsystem. The source area of flow to the stream expands predominantly toward the stream headwaters. Baseflow gain is also increased along the reach of the stream. A thin aquifer restricts ground-water flow and causes the source area of flow to expand near stream headwaters and also shifts the start-of-flow to the drainage basin divide. Increased aquifer anisotropy causes a lateral expansion of the source area of flow to streams. Ground-water seepage to the stream channel originates both from near- and far-recharge locations. The range in the lengths of flow paths that terminate at a point on a stream increases in the downstream direction. Consequently, the age distribution of ground water that seeps into the stream is skewed progressively older with distance downstream. Base flow is an integration of ground water with varying age and potentially different water quality, depending on the source within the drainage basin. The quantitative results presented indicate that this integration can have a wide and complex residence time range and source distribution.
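The particle-tracking step can be illustrated with a toy forward tracker; the velocity field, stream geometry, and all names below are hypothetical, chosen only to show how a flow path and residence time (age) are accumulated:

```python
import numpy as np

def track_particle(x0, velocity, dt=1.0, max_steps=10_000, stream_y=0.0):
    """Toy forward particle tracker: integrate a steady velocity field
    with forward Euler until the particle discharges to the stream
    (y <= stream_y); returns the path and the residence time."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for step in range(max_steps):
        x = x + dt * np.asarray(velocity(x))
        path.append(x.copy())
        if x[1] <= stream_y:                       # reached the stream
            return np.array(path), (step + 1) * dt
    return np.array(path), np.inf                  # never discharged

# Example: a recharge-driven field sloping toward a stream at y = 0
path, age = track_particle([50.0, 10.0], lambda x: (-0.01 * x[0], -0.05))
```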
"Paper Machine" for Molecular Diagnostics.
Connelly, John T; Rolland, Jason P; Whitesides, George M
2015-08-04
Clinical tests based on primer-initiated amplification of specific nucleic acid sequences achieve high levels of sensitivity and specificity. Despite these desirable characteristics, these tests have not reached their full potential because their complexity and expense limit their usefulness to centralized laboratories. This paper describes a device that integrates sample preparation and loop-mediated isothermal amplification (LAMP) with end-point detection using a hand-held UV source and camera phone. The prototype device integrates paper microfluidics (to enable fluid handling) and a multilayer structure, or a "paper machine", that allows a central patterned paper strip to slide in and out of the fluidic path, and thus allows introduction of sample, wash buffers, amplification master mix, and detection reagents with minimal pipetting, in a hand-held, disposable device intended for point-of-care use in resource-limited environments. This device creates a dynamic seal that prevents evaporation during incubation at 65 °C for 1 h. This interval is sufficient to allow a LAMP reaction for the Escherichia coli malB gene to proceed with an analytical sensitivity of 1 double-stranded DNA target copy. Starting with human plasma spiked with whole, live E. coli cells, this paper demonstrates full integration of sample preparation with LAMP amplification and end-point detection with a limit of detection of 5 cells. Further, it shows that the method used to prepare the sample enables concentration of DNA from sample volumes commonly available from a fingerstick blood draw.
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework can achieve a speedup of tens of times in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method can provide fast integral curve retrieval for more complex data, such as unstructured mesh data.
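One simple way to get "controllable error" in curve storage, in the spirit of (though much simpler than) the hierarchical scheme described, is grid quantization plus delta encoding; the scheme and names below are our assumptions for illustration:

```python
import numpy as np

def compress_curve(points, step=1e-3):
    """Lossy curve compression sketch: quantize vertices to a grid of
    size `step` and delta-encode; per-axis error is bounded by step/2."""
    q = np.round(np.asarray(points) / step).astype(np.int32)
    return q[0], np.diff(q, axis=0)               # anchor + small integer deltas

def decompress_curve(anchor, deltas, step=1e-3):
    q = np.vstack([anchor, anchor + np.cumsum(deltas, axis=0)])
    return q * step

# Round-trip check on a random 3D integral curve
pts = np.cumsum(np.random.default_rng(1).normal(size=(1000, 3)) * 0.01, axis=0)
anchor, deltas = compress_curve(pts)
err = np.max(np.abs(decompress_curve(anchor, deltas) - pts))
assert err <= 5e-4 + 1e-12                        # within half a grid step
```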
NASA Astrophysics Data System (ADS)
Castillo-Cabrera, G.; García-Lamont, J.; Reyes-Barranca, M. A.; Moreno-Cadenas, J. A.; Escobosa-Echavarría, A.
2011-03-01
In this report, the performance of a particular pixel architecture is evaluated. It consists mainly of an optical sensor coupled to an amplifier. The circuit contains photoreceptors such as phototransistors and photodiodes. The circuit integrates two main blocks: (a) the pixel architecture, containing four p-channel transistors and a photoreceptor, and (b) a current source for biasing the signal-conditioning amplifier. The generated photocurrent is integrated on the gate capacitance of the input p-channel MOS transistor, then converted to voltage and amplified. The input transistor and current source are together implemented as a voltage amplifier having variable gain (between 10 dB and 32 dB). For characterisation purposes, this last fact is relevant since it gives a degree of freedom to the measurement of different kinds of photo-devices and is not limited to either a single operating point of the circuit or one kind and size of photo-sensor. The gain of the amplifier can be adjusted with an external DC power supply that also sets the DC quiescent point of the circuit. The design of the row-select transistor's aspect ratio used in the matrix array is critical for the pixel amplifier's performance. Based on circuit design data such as capacitance magnitude, integration time and voltage, and amplifier gain, characterisation of the whole architecture can be readily carried out and evaluated. For the specific technology used in this work, the spectral response of the photo-sensors reveals performance differences between phototransistors and photodiodes. Good agreement between simulation and measurement was obtained.
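The photocurrent-to-voltage path described above can be summarized, in an idealized form of our own notation, as

\[ V_{\mathrm{out}}(t) \;=\; A_v \cdot \frac{1}{C_g}\int_0^{t} I_{\mathrm{ph}}(\tau)\,d\tau, \qquad 10\ \mathrm{dB} \;\le\; 20\log_{10} A_v \;\le\; 32\ \mathrm{dB}, \]

where \( C_g \) is the gate capacitance of the input p-channel transistor, \( I_{\mathrm{ph}} \) the photocurrent, and \( A_v \) the externally adjustable amplifier gain.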
Assessment of Li/SOCL2 Battery Technology; Reserve, Thin-Cell Design. Volume 3
1990-06-01
power density and efficiency of an operating electrochemical system. The method is general - the examples to illustrate the selected points pertain to...System: Design, Manufacturing and QC Considerations), S. Szpak, P. A. Mosier-Boss, and J. J. Smith, 34th International Power Sources Symposium, Cherry...(i) the computer time required to evaluate the integral in Eqn. III, and (ii) the lack of generality in the attainable lineshapes. However, since this
Space-based Solar Power: Possible Defense Applications and Opportunities for NRL Contributions
2009-10-23
missions. At the spacecraft system level, a two-phase system can be used to transfer heat from a heat source (such as solar collectors and power...The solar arrays’ position allows them to radiate waste heat from both faces, as in conventional spacecraft practice. Both the antenna structure...Brayton cycle engine heated by a point-focus solar concentrator. NRL worked with NASA Glenn Research Center in developing means to integrate their
An Integrated Suite of Text and Data Mining Tools - Phase II
2005-08-30
Riverside, CA, USA; Mazda Motor Corp, Japan; Univ of Darmstadt, Darmstadt, Germany; Navy Center for Applied Research in Artificial Intelligence; Univ of...with Georgia Tech Research Corporation developed a desktop text-mining software tool named TechOASIS (known commercially as VantagePoint). By the...of this dataset and groups the Corporate Source items that co-occur with the found items. He decides he is only interested in the institutions
Kim, Jonathan J; Comstock, Jeff; Ryan, Peter; Heindel, Craig; Koenigsberger, Stephan
2016-11-01
In 2000, elevated nitrate concentrations ranging from 12 to 34 mg/L NO3-N were discovered in groundwater from numerous domestic bedrock wells adjacent to a large dairy farm in central Vermont. Long-term plots and contours of nitrate vs. time for bedrock wells showed "little/no", "moderate", and "large" change patterns that were spatially separable. The metasedimentary bedrock aquifer is strongly anisotropic, and groundwater flow is controlled by fractures, bedding/foliation, and basins and ridges in the bedrock surface. Integration of the nitrate concentration vs. time data and the physical and chemical aquifer characterization suggests two nitrate sources: a point source emanating from a waste ravine and a non-point source that encompasses the surrounding fields. Once removed, the point source of NO3 (manure deposited in a ravine) was exhausted, and NO3 dropped from 34 mg/L to <10 mg/L after ~10 years; however, persistence of NO3 in the 3 to 8 mg/L range (background) reflects the long-term flux of nitrates from nutrients applied to the farm fields surrounding the ravine over the years predating and including this study. Inferred groundwater flow rates from the waste ravine to either moderate-change wells in basin 2 or to the shallow bedrock zone beneath the large-change wells are 0.05 m/day, well within published bedrock aquifer flow rates. Enrichment of ¹⁵N and ¹⁸O in nitrate is consistent with lithotrophic denitrification of NO3 in the presence of dissolved Mn and Fe. Once the ravine point source was removed, denitrification and dilution collectively were responsible for the down-gradient decrease of nitrate in this bedrock aquifer. Denitrification was most influential when NO3-N was >10 mg/L. Our multidisciplinary methods of aquifer characterization are applicable to groundwater contamination in any complexly deformed and metamorphosed bedrock aquifer.
Huang, Yu; Sun, Jie; Li, Aimin; Xie, Xianchuan
2018-05-01
In this study, an integrated approach named the '333' strategy was applied to pollution control in the Jialu River, in northern China, which is heavily burdened with anthropogenic pollution. Due to a deficiency of the natural ecological inflow, the Jialu River receives predominantly industrial and municipal effluent. The '333' strategy is composed of three steps of pollution control, including industrial point-source pollution control, advanced treatment of municipal wastewater, and ecological restoration; three increasingly stringent emission standards; and three stages of reclamation. Phase 1 of the '333' strategy focuses on industrial point-source pollution control; phase 2 aims to harness municipal wastewater and minimize sewage effluents using novel techniques for advanced water purification; phase 3 focuses on the further purification of effluents flowing into the Jialu River with the employment of an engineering-based ecological restoration project. The application of the '333' strategy resulted in the development of novel techniques for water purification, including modified magnetic resins (NDMP resin), a two-stage internal circulation anaerobic reactor (IC reactor), and an ecological restoration system. The results indicate that water quality in the river was significantly improved, with increased concentrations of dissolved oxygen (DO), as well as reduction of COD by 42.8% and NH₃-N by 61.4%. In addition, it was observed that the total population of phytoplankton in treated river water notably increased from only one species prior to restoration to eight following restoration. This system also provides a tool for pollution control of other rivers similarly polluted by industrial and anthropogenic sources.
NASA Astrophysics Data System (ADS)
Messier, K. P.; Kane, E.; Bolich, R.; Serre, M. L.
2014-12-01
Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. Legacy contamination, or past releases of NO3-, is thought to be impacting current groundwater and surface water of North Carolina. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure known as constrained forward nonlinear regression and hyperparameter optimization (CFN-RHO) is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is then used to integrate the LUR model into a LUR-BME model of spatially/temporally varying groundwater NO3- concentrations. LUR-BME results in a leave-one-out cross-validation r² of 0.74 and 0.33 for monitoring and private wells, respectively, effectively predicting within spatial covariance ranges. The major finding regarding legacy sources of NO3- in this study is that the LUR-BME models show the geographical extent of low-level contamination of deeper drinking-water aquifers is beyond that of the shallower monitoring wells. Groundwater NO3- in monitoring wells is highly variable, with many areas predicted above the current Environmental Protection Agency standard of 10 mg/L. Contrarily, the private-well results depict widespread, low-level NO3- concentrations. This evidence supports that, in addition to downward transport, there is also significant outward transport of groundwater NO3- in the drinking-water aquifer to areas outside the range of sources. Results indicate that the deeper aquifers are potentially acting as a reservoir that is not only deeper but also covers a larger geographical area than the reservoir formed by the shallow aquifers. Results are of interest to agencies that regulate surface water and drinking-water sources impacted by the effects of legacy NO3- sources. Additionally, the results can provide guidance on factors affecting the point-level variability of groundwater NO3- and areas where monitoring is needed to reduce uncertainty. Lastly, LUR-BME predictions can be integrated into surface-water models for more accurate management of non-point sources of nitrogen.
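The leave-one-out cross-validation r² quoted above can be computed generically as sketched below; the LUR-BME estimator itself is abstracted behind a fit/predict pair, and all names are ours:

```python
import numpy as np

def loocv_r2(model_fit, X, y):
    """Leave-one-out cross-validated r^2: model_fit(X, y) must return a
    fitted object with a .predict method (any estimator can stand in
    for the study's LUR-BME model)."""
    y = np.asarray(y, dtype=float)
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                    # hold out sample i
        preds[i] = model_fit(X[mask], y[mask]).predict(X[i:i + 1])[0]
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Usage with any scikit-learn regressor, e.g.:
#   from sklearn.linear_model import LinearRegression
#   r2 = loocv_r2(lambda X, y: LinearRegression().fit(X, y), X, y)
```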
NASA Astrophysics Data System (ADS)
Shuler, Christopher K.; El-Kadi, Aly I.; Dulai, Henrietta; Glenn, Craig R.; Fackrell, Joseph
2017-12-01
This study presents a modeling framework for quantifying human impacts and for partitioning the sources of contamination related to water quality in the mixed-use landscape of a small tropical volcanic island. On Tutuila, the main island of American Samoa, production wells in the most populated region (the Tafuna-Leone Plain) produce most of the island's drinking water. However, much of this water has been deemed unsafe to drink since 2009. Tutuila has three predominant anthropogenic non-point groundwater pollution sources of concern: on-site disposal systems (OSDS), agricultural chemicals, and pig manure. These sources are broadly distributed throughout the landscape and are located near many drinking-water wells. Water quality analyses show a link between elevated levels of total dissolved groundwater nitrogen (TN) and areas with high non-point-source pollution density, suggesting that TN can be used as a tracer of groundwater contamination from these sources. The modeling framework used in this study integrates land-use information, hydrological data, and water quality analyses with nitrogen loading and transport models. The approach utilizes a numerical groundwater flow model, a nitrogen-loading model, and a multi-species contaminant transport model. Nitrogen from each source is modeled as an independent component in order to trace the impact from individual land-use activities. Model results are calibrated and validated with dissolved groundwater TN concentrations and inorganic δ¹⁵N values, respectively. Results indicate that OSDS contribute significantly more TN to Tutuila's aquifers than other sources, and thus should be prioritized in future water-quality management efforts.
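Modeling nitrogen from each source as an independent component amounts, in conventional advection-dispersion form (our notation, not necessarily the model's exact equations), to solving

\[ \frac{\partial C_i}{\partial t} \;=\; \nabla\cdot\bigl(D\,\nabla C_i\bigr) \;-\; \nabla\cdot\bigl(\mathbf{v}\,C_i\bigr) \;+\; S_i, \qquad i \in \{\mathrm{OSDS},\ \mathrm{agriculture},\ \mathrm{pig\ manure}\}, \]

one transport equation per source term \( S_i \) over the shared flow field \( \mathbf{v} \), so that the simulated TN at any well can be attributed source by source.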