DISTRIBUTED RC NETWORKS WITH RATIONAL TRANSFER FUNCTIONS
A distributed RC circuit analogous to a continuously tapped transmission line can be made to have a rational short-circuit transfer admittance and one rational short-circuit driving-point admittance. A subcircuit of the same structure has a rational open-circuit transfer impedance and one rational open-circuit driving-point impedance. Hence, rational transfer functions may be obtained while considering either generator impedance or load
Discriminating topology in galaxy distributions using network analysis
NASA Astrophysics Data System (ADS)
Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl
2016-07-01
The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher order correlations to break degeneracies. We demonstrate that an alternative approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^(-1.5). We then generate Lévy walks matching the correlation function and abundance of the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the simulated galaxies display filamentary structures, which are absent in Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure the diameter, giant component, and transitivity of networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is quantitatively not a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
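As an aside, the friends-of-friends network construction used in this abstract is easy to sketch. The following is a minimal illustration (not the authors' code; the linking length and the toy points are invented for the example), building the graph and measuring two of the quantities mentioned, the giant component and the transitivity:

```python
from collections import deque
from itertools import combinations

def fof_graph(points, b):
    """Friends-of-friends: link every pair of points separated by less than b."""
    adj = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        if sum((p - q) ** 2 for p, q in zip(points[i], points[j])) < b * b:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def giant_component(adj):
    """Largest connected component, found by breadth-first search."""
    seen, best = set(), set()
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def transitivity(adj):
    """Global clustering coefficient: closed triples / connected triples."""
    closed = triples = 0
    for u in adj:
        nbrs = list(adj[u])
        triples += len(nbrs) * (len(nbrs) - 1) // 2
        # neighbor pairs of u that are themselves linked close a triple
        closed += sum(1 for x, y in combinations(nbrs, 2) if y in adj[x])
    return closed / triples if triples else 0.0
```

The diameter mentioned in the abstract can be obtained similarly by running a breadth-first search from each node of the giant component and taking the largest shortest-path distance.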
Rocket measurement of auroral partial parallel distribution functions
NASA Astrophysics Data System (ADS)
Lin, C.-A.
1980-01-01
The auroral partial parallel distribution functions are obtained by using the observed energy spectra of electrons. The experiment package was launched by a Nike-Tomahawk rocket from Poker Flat, Alaska over a bright auroral band and covered an altitude range of up to 180 km. Calculated partial distribution functions are presented with emphasis on their slopes. The implications of the slopes are discussed. It should be pointed out that the slope of the partial parallel distribution function obtained from one energy spectrum will be changed by superposing another energy spectrum on it.
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density pnXY(x,y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for pnXY(x,y) gives an excellent description of the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
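The final bound in this abstract (the optimal cycle costs at least twice the optimal assignment) can be checked numerically on small instances. Below is a brute-force sketch, assuming the squared Euclidean distance as cost and invented sample points; it is illustrative only, not the authors' method:

```python
from itertools import permutations

def sq_cost(a, b):
    """Squared Euclidean distance between two points on the line."""
    return (a - b) ** 2

def optimal_assignment_cost(red, blue):
    """Minimum-cost perfect matching between the two point sets (brute force)."""
    return min(sum(sq_cost(r, b) for r, b in zip(red, p))
               for p in permutations(blue))

def optimal_cycle_cost(red, blue):
    """Minimum-cost bipartite Hamiltonian cycle alternating red/blue points."""
    n = len(red)
    best = float("inf")
    for pr in permutations(red):
        for pb in permutations(blue):
            # visit pr[0], pb[0], pr[1], pb[1], ..., pb[n-1], back to pr[0]
            c = sum(sq_cost(pr[i], pb[i]) + sq_cost(pb[i], pr[(i + 1) % n])
                    for i in range(n))
            best = min(best, c)
    return best
```

For points drawn in a compact interval, every instance tried this way satisfies cycle cost ≥ 2 × assignment cost, consistent with the stated theorem.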
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
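For illustration, a two-point empirical copula can be estimated by rank-transforming each marginal to the unit interval and counting pairs. The sketch below is generic (not the authors' pipeline) and uses invented data:

```python
def ranks(xs):
    """Rank-transform samples to (0, 1): empirical CDF value of each sample."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = (pos + 1) / (len(xs) + 1)
    return r

def empirical_copula(xs, ys, u, v):
    """C(u, v): fraction of pairs whose rank-transformed values lie below (u, v)."""
    rx, ry = ranks(xs), ranks(ys)
    return sum(1 for a, b in zip(rx, ry) if a <= u and b <= v) / len(xs)
```

For comonotone data the empirical copula approaches min(u, v); for independent data it approaches u·v; a Gaussian copula interpolates between these depending on its correlation parameter, which is the form the authors find fits the evolved density field.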
Empirical study on human acupuncture point network
NASA Astrophysics Data System (ADS)
Li, Jian; Shen, Dan; Chang, Hui; He, Da-Ren
2007-03-01
Chinese medical theory is ancient and profound, but its understanding remains qualitative and vague. Acupuncture is unique and effective in Chinese clinical practice, and the human acupuncture points play a mysterious and special role; however, to date there is no modern scientific understanding of these points. For this reason, we use complex network theory, one of the frontiers of statistical physics, to describe the human acupuncture points and their connections. In the network, nodes are defined as acupuncture points, and two nodes are connected by an edge when they are used in the medical treatment of a common disease; each disease is defined as an act. Some statistical properties have been obtained. The results show that the degree distribution, the act degree distribution, and the dependence of the clustering coefficient on both of them obey an SPL distribution function, a form that interpolates between a power law and an exponential decay. These results may be helpful for understanding Chinese medical theory.
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
Statistical methods for investigating quiescence and other temporal seismicity patterns
Matthews, M.V.; Reasenberg, P.A.
1988-01-01
We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Particle detection and non-detection in a quantum time of arrival measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sombillo, Denny Lane B., E-mail: dsombillo@nip.upd.edu.ph; Galapon, Eric A.
2016-01-15
The standard time-of-arrival distribution cannot reproduce both the temporal and the spatial profile of the modulus squared of the time-evolved wave function for an arbitrary initial state. In particular, the time-of-arrival distribution gives a non-vanishing probability even if the wave function is zero at a given point for all values of time. This poses a problem in the standard formulation of quantum mechanics where one quantizes a classical observable and uses its spectral resolution to calculate the corresponding distribution. In this work, we show that the modulus squared of the time-evolved wave function is in fact contained in one of the degenerate eigenfunctions of the quantized time-of-arrival operator. This generalizes our understanding of the quantum arrival phenomenon where particle detection is not a necessary requirement, thereby providing a direct link between time-of-arrival quantization and the outcomes of the two-slit experiment. -- Highlights: •The time-evolved position density is contained in the standard TOA distribution. •Particle may quantum mechanically arrive at a given point without being detected. •The eigenstates of the standard TOA operator are linked to the two-slit experiment.
NASA Astrophysics Data System (ADS)
Qian, Shang-Wu; Gu, Zhi-Yu
2001-12-01
Using Feynman's path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution P_L^n for the winding number n and the partition function P_L of the entangled system around a ribbon segment chain. We find that when the width 2a of the ribbon segment chain increases, the partition function decreases exponentially, whereas the free energy increases by an amount proportional to the square of the width. When the width tends to zero we obtain the same results as those of a single chain with one singular point.
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
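A one-dimensional Robbins-Monro iteration of the kind described can be sketched as follows. The contraction g(x) = 0.5x + 1 (fixed point x* = 2), the noise level, and the 1/n step sizes are invented for the example; the paper's algorithm is multidimensional and more general:

```python
import random

def robbins_monro_fixed_point(noisy_g, theta0, n_steps=50000):
    """Robbins-Monro iteration toward the fixed point of a contractive map g:
    theta_{n+1} = theta_n + a_n * (G_n - theta_n), where G_n is a noisy
    observation of g(theta_n) and a_n = 1/n (sum a_n diverges, sum a_n^2
    converges), so the iterates converge to theta* with g(theta*) = theta*."""
    theta = theta0
    for n in range(1, n_steps + 1):
        theta += (1.0 / n) * (noisy_g(theta) - theta)
    return theta

# Toy regression function: a contraction with fixed point 2, observed
# through additive Gaussian noise (seeded for reproducibility).
rng = random.Random(0)
estimate = robbins_monro_fixed_point(
    lambda x: 0.5 * x + 1 + rng.gauss(0.0, 0.1), 0.0)
```

The same recursion, run on the score function of a mixture likelihood, is how the abstract's maximum likelihood application would proceed.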
Ploetz, Elizabeth A; Karunaweera, Sadish; Smith, Paul E
2015-01-28
Fluctuation solution theory has provided an alternative view of many liquid mixture properties in terms of particle number fluctuations. The particle number fluctuations can also be related to integrals of the corresponding two body distribution functions between molecular pairs in order to provide a more physical picture of solution behavior and molecule affinities. Here, we extend this type of approach to provide expressions for higher order triplet and quadruplet fluctuations, and thereby integrals over the corresponding distribution functions, all of which can be obtained from available experimental thermodynamic data. The fluctuations and integrals are then determined using the International Association for the Properties of Water and Steam Formulation 1995 (IAPWS-95) equation of state for the liquid phase of pure water. The results indicate small, but significant, deviations from a Gaussian distribution for the molecules in this system. The pressure and temperature dependence of the fluctuations and integrals, as well as the limiting behavior as one approaches both the triple point and the critical point, are also examined.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
NASA Astrophysics Data System (ADS)
Reimberg, Paulo; Bernardeau, Francis
2018-01-01
We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We were able to quantify precisely how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.
Distributed intrusion monitoring system with fiber link backup and on-line fault diagnosis functions
NASA Astrophysics Data System (ADS)
Xu, Jiwei; Wu, Huijuan; Xiao, Shunkun
2014-12-01
A novel multi-channel distributed optical fiber intrusion monitoring system with smart fiber link backup and on-line fault diagnosis functions was proposed. A 1×N optical switch was intelligently controlled by a peripheral interface controller (PIC) to expand the fiber link from one channel to several, lowering the cost of long or ultra-long distance intrusion monitoring and strengthening the intelligent link backup function. At the same time, a sliding window auto-correlation method was presented to identify and locate the broken or fault point of the cable. The experimental results showed that the proposed multi-channel system performed well, especially whenever a broken cable was detected: it could locate the broken or fault point accurately by itself and switch to its backup sensing link immediately, ensuring that the security system operated stably without idling for even a minute. It was successfully applied in a field test for security monitoring of the 220-km-long national borderline in China.
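A generic sliding-window auto-correlation locator can be sketched as follows. This is only an illustration of the idea, not the authors' algorithm: the toy trace, the lag-1 statistic, and the zero threshold are all assumptions. The window slides along the recorded signal, and the position where the windowed autocorrelation changes character marks the candidate fault point:

```python
import math

def lag1_autocorr(window):
    """Lag-1 autocorrelation of one window of samples (mean-removed)."""
    m = sum(window) / len(window)
    num = sum((a - m) * (b - m) for a, b in zip(window, window[1:]))
    den = sum((a - m) ** 2 for a in window)
    return num / den if den else 0.0

def first_anomalous_window(signal, wlen, threshold=0.0):
    """Index of the first window start whose lag-1 autocorrelation drops
    below `threshold` -- a crude change/fault locator."""
    for start in range(len(signal) - wlen + 1):
        if lag1_autocorr(signal[start:start + wlen]) < threshold:
            return start
    return None

# Toy trace: a smooth section followed by an abrupt change in character,
# standing in for the backscatter signature before/after a break.
trace = [math.sin(0.1 * i) for i in range(200)] + [(-1.0) ** i for i in range(200)]
```

Smooth signal gives a strongly positive lag-1 autocorrelation, while the sample-to-sample flipping after the change drives it strongly negative, so the detector fires near the transition.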
Structural frequency functions for an impulsive, distributed forcing function
NASA Technical Reports Server (NTRS)
Bateman, Vesta I.
1987-01-01
The response of a penetrator structure to a spatially distributed mechanical impulse with a magnitude approaching field test force levels (1-2 Mlb) was measured. The frequency response function calculated from the response to this unique forcing function is compared to frequency response functions calculated from responses to point forces of about 2000 pounds. The results show that the strain gages installed on the penetrator case respond similarly to a point axial force and to a spatially distributed axial force. This result suggests that the distributed axial force generated in a penetration event may be reconstructed as a point axial force when the penetrator behaves in a linear manner.
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
External calibration of polarimetric radars using point and distributed targets
NASA Technical Reports Server (NTRS)
Yueh, S. H.; Kong, J. A.; Shin, R. T.
1991-01-01
Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point-target response is derived. The problem of polarimetric calibration using two point targets and one distributed target then reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
Bose-Einstein condensation and independent production of pions
NASA Astrophysics Data System (ADS)
Bialas, A.; Zalewski, K.
1998-09-01
The influence of the HBT effect on the momentum spectra of independently produced pions is studied using the method developed earlier for discussion of multiplicity distributions. It is shown that in this case all the spectra and multiparticle correlation functions are expressible in terms of one function of two momenta. It is also shown that at the critical point all pions are attracted into one quantum state and thus form a Bose-Einstein condensate.
Methods and limitations in radar target imagery
NASA Astrophysics Data System (ADS)
Bertrand, P.
An analytical examination of the reflectivity of radar targets is presented for the two-dimensional case of flat targets. A complex backscattering coefficient is defined for the amplitude and phase of the received field in comparison with the emitted field. The coefficient depends on the frequency of the emitted signal and the orientation of the target with respect to the transmitter. The target reflection is modeled in terms of the density of illuminated, colored points independent of one another. The target therefore is represented as an infinite family of densities indexed by the observation angle. Attention is given to the reflectivity parameters and their distribution function, and to the joint distribution function for the color, position, and directivity of bright points. It is shown that a fundamental ambiguity exists between the localization of the illuminated points and the determination of their directivity and color.
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe and, with it, the roles of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate n-point correlation function estimators for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n^2) and O(n^3) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in Python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown, and we discuss the application of this approach to the 3-point correlation function.
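The voxel idea described in this abstract can be sketched in a few lines: pre-assign each galaxy to a coarse cell of side comparable to the maximum separation of interest, so that each point is only tested against points in its own and adjacent cells. This is a generic illustration of the technique, not the authors' open-sourced code; the function and parameter names are ours.

```python
import itertools
import math
from collections import defaultdict

def voxel_pair_counts(points, r_max):
    """Count unordered point pairs with separation <= r_max.

    Brute force is O(n^2); bucketing the points into voxels of side r_max
    means each point is tested only against points in its own and adjacent
    voxels. (Illustrative sketch of the approach in the abstract, not the
    authors' released code.)
    """
    grid = defaultdict(list)
    for p in points:
        grid[tuple(int(math.floor(c / r_max)) for c in p)].append(p)
    dim = len(points[0])
    pairs = 0
    for key, cell in grid.items():
        for off in itertools.product((-1, 0, 1), repeat=dim):
            other = grid.get(tuple(k + o for k, o in zip(key, off)), [])
            for p in cell:
                for q in other:
                    # tuple ordering ensures each unordered pair counts once
                    if p < q and sum((a - b) ** 2 for a, b in zip(p, q)) <= r_max ** 2:
                        pairs += 1
    return pairs
```

For separations much smaller than the box size, most voxel lookups hit empty or sparse cells, which is where the savings over the all-pairs count come from.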
Application of two procedures for dual-point design of transonic airfoils
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Campbell, Richard L.; Allison, Dennis O.
1994-01-01
Two dual-point design procedures were developed to reduce the objective function of a baseline airfoil at two design points. The first procedure to develop a redesigned airfoil used a weighted average of the shapes of two intermediate airfoils redesigned at each of the two design points. The second procedure used a weighted average of two pressure distributions obtained from an intermediate airfoil redesigned at each of the two design points. Each procedure was used to design a new airfoil with reduced wave drag at the cruise condition without increasing the wave drag or pitching moment at the climb condition. Two cycles of the airfoil shape-averaging procedure successfully designed a new airfoil that reduced the objective function and satisfied the constraints. One cycle of the target (desired) pressure-averaging procedure was used to design two new airfoils that reduced the objective function and came close to satisfying the constraints.
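The shape-averaging step in the first procedure amounts to a weighted blend of two airfoil surface ordinates sampled at common chordwise stations. The sketch below is a minimal illustration of that blending under the assumption of a shared station layout; it is not the NASA design code, and the weight choice is left to the user.

```python
def blend_airfoils(y1, y2, w):
    """Weighted average of two airfoil surface ordinates sampled at the
    same chordwise stations: y = w*y1 + (1 - w)*y2.

    Illustrative sketch of the shape-averaging step; the station layout
    and the weighting used in the dual-point procedure are assumptions
    here, not the reported implementation.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return [w * a + (1.0 - w) * b for a, b in zip(y1, y2)]
```

The same one-liner applies to the second procedure with pressure coefficients in place of ordinates, since both are blends of two point-designed intermediates.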
Thermalization of Wightman functions in AdS/CFT and quasinormal modes
NASA Astrophysics Data System (ADS)
Keränen, Ville; Kleinert, Philipp
2016-07-01
We study the time evolution of Wightman two-point functions of scalar fields in AdS3-Vaidya, a spacetime undergoing gravitational collapse. In the boundary field theory, the collapse corresponds to a quench process where the dual (1+1)-dimensional CFT is taken out of equilibrium and subsequently thermalizes. From the two-point function, we extract an effective occupation number in the boundary theory and study how it approaches the thermal Bose-Einstein distribution. We find that the Wightman functions, as well as the effective occupation numbers, thermalize with a rate set by the lowest quasinormal mode of the scalar field in the BTZ black hole background. We give a heuristic argument for the quasinormal decay, which is expected to apply to more general Vaidya spacetimes also in higher dimensions. This suggests a unified picture in which thermalization times of one- and two-point functions are determined by the lowest quasinormal mode. Finally, we study how these results compare to previous calculations of two-point functions based on the geodesic approximation.
Aeroacoustic catastrophes: upstream cusp beaming in Lilley's equation.
Stone, J T; Self, R H; Howls, C J
2017-05-01
The downstream propagation of high-frequency acoustic waves from a point source in a subsonic jet obeying Lilley's equation is well known to be organized around the so-called 'cone of silence', a fold catastrophe across which the amplitude may be modelled uniformly using Airy functions. Here we show that acoustic waves not only unexpectedly propagate upstream, but also are organized at constant distance from the point source around a cusp catastrophe with amplitude modelled locally by the Pearcey function. Furthermore, the cone of silence is revealed to be a cross-section of a swallowtail catastrophe. One consequence of these discoveries is that the peak acoustic field upstream is not only structurally stable but also at a similar level to the known downstream field. The fine structure of the upstream cusp is blurred out by distributions of symmetric acoustic sources, but peak upstream acoustic beaming persists when asymmetries are introduced, from either arrays of discrete point sources or perturbed continuum ring source distributions. These results may pose interesting questions for future novel jet-aircraft engine designs where asymmetric source distributions arise.
NASA Astrophysics Data System (ADS)
Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre
2006-12-01
The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f̄. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. Finally, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
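The step-length adjustment described in the abstract follows the usual trust-region pattern: compare the actual reduction in the (noisy) function values to the reduction the model predicted, and grow or shrink the radius accordingly. A minimal textbook-style sketch of that rule; the thresholds and scaling factors are illustrative assumptions, and the paper's acceptance test for noisy values is more involved.

```python
def update_radius(actual_reduction, predicted_reduction, radius,
                  eta_good=0.75, eta_bad=0.25, grow=2.0, shrink=0.5):
    """One trust-region radius update: take larger steps when the model
    and function agree, smaller steps when the model is less accurate.

    Generic textbook rule shown for illustration; thresholds/factors are
    assumptions, not the paper's specific algorithm.
    """
    rho = actual_reduction / predicted_reduction  # agreement ratio
    if rho >= eta_good:
        return radius * grow      # model predicted well: expand
    if rho < eta_bad:
        return radius * shrink    # poor agreement: contract
    return radius                 # acceptable agreement: keep radius
```

With noisy evaluations the ratio rho itself is a random variable, which is exactly why the paper's convergence analysis needs the i.i.d., mean-zero, finite-variance noise assumptions.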
NASA Astrophysics Data System (ADS)
Dudek, Mirosław R.; Mleczko, Józef
Surprisingly, very little is known about the mathematical modeling of peaks in the binding-affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This occurs when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity of the cell surface coverage. In this case the affinity of cells to bind the antibodies should be described by a process more complex than a pure Poisson point process. We suggest using a doubly stochastic Poisson process, in which the points are replaced by a binomial point process, resulting in the Neyman distribution. This distribution can have a strongly multimodal character, with the number of modes depending on the concentrations of antibodies and epitopes. All this means that it is possible to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even when some peaks result from more than one binding mechanism.
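The doubly stochastic construction behind Neyman-type distributions is easy to simulate: draw a random number of clusters, then a random number of events per cluster. The sketch below uses the common Neyman Type A variant (Poisson clusters, Poisson events per cluster) as a stand-in; the paper's binomial-point-process version differs in the cluster-size law, and the identification of the two rates with epitope and antibody concentrations is our labelling, not the paper's.

```python
import math

def neyman_type_a_sample(lam, phi, rng):
    """Draw one count from a Neyman Type A distribution: a Poisson(lam)
    number of clusters, each contributing a Poisson(phi) number of events.

    Such doubly stochastic counts can produce multi-modal histograms, the
    effect invoked for the binding-affinity peaks. Sketch only: lam and phi
    loosely play the roles of the two concentrations in the abstract.
    """
    def poisson(mean):
        # Knuth's multiplicative method; adequate for small means
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    clusters = poisson(lam)
    return sum(poisson(phi) for _ in range(clusters))
```

The mean of the compound count is lam*phi, which gives a quick sanity check on the sampler.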
One-loop gravitational wave spectrum in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Fröb, Markus B.; Roura, Albert; Verdaguer, Enric
2012-08-01
The two-point function for tensor metric perturbations around de Sitter spacetime including one-loop corrections from massless conformally coupled scalar fields is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this result. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is strictly speaking ill-defined when loop corrections are included.
NASA Astrophysics Data System (ADS)
Dmochowski, Jacek P.; Bikson, Marom; Parra, Lucas C.
2012-10-01
Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode-cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown.
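The central structural point above, that in the spherical-harmonic domain the head model acts as a scalar multiplication on each coefficient, can be captured in a few lines. The transfer values h_l below are placeholders; the paper derives the actual values for a multi-shell spherical head, and nothing about their magnitudes is assumed here.

```python
def apply_transfer(coeffs, h):
    """Forward model in the spherical-harmonic (Fourier) domain: each
    (l, m) coefficient of the scalp current density is scaled by the
    degree-dependent transfer value h[l] to give the corresponding
    coefficient of the electric potential in the brain.

    Structural sketch only; the transfer values derived in the paper for
    the multi-shell head model are not reproduced here.
    """
    return {(l, m): h[l] * c for (l, m), c in coeffs.items()}
```

The 'backward problem' mentioned in the abstract is then, structurally, a division by h[l] for each degree, subject to the usual caveat that small h[l] amplify noise.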
The Center for Astrophysics Redshift Survey - Recent results
NASA Technical Reports Server (NTRS)
Geller, Margaret J.; Huchra, John P.
1989-01-01
Six strips of the CfA redshift survey extension are now complete. The data continue to support a picture in which galaxies are on thin sheets which nearly surround vast low-density voids. The largest structures are comparable with the extent of the survey. Voids like the one in Bootes are a common feature of the large-scale distribution of galaxies. The issue of fair samples of the galaxy distribution is discussed, examining statistical measures of the galaxy distribution including the two-point correlation functions.
Application of the mobility power flow approach to structural response from distributed loading
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
The problem of the vibration power flow through coupled substructures when one of the substructures is subjected to a distributed load is addressed. In all the work performed thus far, point force excitation was considered. However, in the case of the excitation of an aircraft fuselage, distributed loading on the whole surface of a panel can be as important as the excitation from directly applied forces at defined locations on the structures. Thus using a mobility power flow approach, expressions are developed for the transmission of vibrational power between two coupled plate substructures in an L configuration, with one of the surfaces of one of the plate substructures being subjected to a distributed load. The types of distributed loads that are considered are a force load with an arbitrary function in space and a distributed load similar to that from acoustic excitation.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate, and it is the simplest special case of the Weibull family. In this paper our effort is to introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian approach and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
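For independent exponential causes of failure, the net and crude probabilities named in the abstract have simple closed forms. The sketch below shows these standard competing-risks identities; it does not reproduce the paper's Bayesian point and interval estimation of the rates.

```python
import math

def crude_failure_probability(lams, j, t):
    """Crude probability of failure from cause j by time t when all causes
    are independent exponentials with rates lams:

        Q_j(t) = (lam_j / lam_tot) * (1 - exp(-lam_tot * t)).

    The net probability for cause j acting alone is 1 - exp(-lam_j * t).
    Standard identities shown as a sketch; the Bayesian estimation of the
    rates in the paper is not reproduced here.
    """
    lam_tot = sum(lams)
    return (lams[j] / lam_tot) * (1.0 - math.exp(-lam_tot * t))
```

A useful check is that the crude probabilities over all causes sum to the overall failure probability 1 - exp(-lam_tot * t).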
Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon
NASA Astrophysics Data System (ADS)
Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.
1997-02-01
We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M, g_ab), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M, g_ab) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x in the sense that the difference between this two-point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x, x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ² or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.
1981-02-01
monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one's...possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ...r(θ) is negative for θ < μ and positive for θ > μ, U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for
Probability distribution of the entanglement across a cut at an infinite-randomness fixed point
NASA Astrophysics Data System (ADS)
Devakul, Trithep; Majumdar, Satya N.; Huse, David A.
2017-03-01
We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^(-ψ(k)), where k ≡ S/ln[L/L0], the large deviation function ψ(k) is found explicitly, and L0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model which we calculate for large L via a mapping to Majorana fermions.
Leherte, Laurence; Vercauteren, Daniel P
2014-02-01
Reduced point charge models of amino acids are designed, (i) from local extrema positions in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions, and (ii) from local maxima positions in promolecular electron density distribution functions. Corresponding charge values are fitted versus all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable Molecular Dynamics trajectories of an Ubiquitin-ligand complex (PDB: 1Q0W), under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are treated as virtual sites with null mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; special attention to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions is needed. Results obtained at various temperatures suggest that the use of reduced point charge models allows one to probe local potential hyper-surface minima that are similar to the all-atom ones, but are characterized by lower energy barriers. It enables one to generate various conformations of the protein complex more rapidly than the all-atom point charge representation. Copyright © 2013 Elsevier Inc. All rights reserved.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ -stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α . We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ -stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
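The diffusion entropy analysis described above can be sketched compactly: build diffusion trajectories from overlapping windows of the fluctuation record, histogram the window sums, and track the Shannon entropy of that PDF as the window grows. For a self-similar process S(t) = A + δ·ln t, so the slope of S against ln t estimates the scaling exponent. This is a minimal stdlib sketch with an arbitrary bin count, not the authors' analysis code.

```python
import math

def diffusion_entropy(increments, window, bins=40):
    """Shannon entropy of the diffusion PDF built from overlapping windows
    of length `window`. For a self-similar process S(t) = A + delta*ln(t),
    so comparing entropies at two window sizes estimates delta.

    Minimal sketch of diffusion entropy analysis; the bin count and the
    use of overlapping windows are illustrative choices.
    """
    sums = [sum(increments[i:i + window])
            for i in range(len(increments) - window + 1)]
    lo, hi = min(sums), max(sums)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for s in sums:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    n = len(sums)
    # entropy of the discretized PDF, corrected for the bin width
    return -sum(c / n * math.log(c / (n * width)) for c in counts if c)
```

For ordinary Gaussian increments the recovered exponent is δ ≈ 0.5; heavy-tailed μ-stable noise shifts δ away from that diffusive value, which is the signature the paper exploits.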
Weak values of a quantum observable and the cross-Wigner distribution.
de Gosson, Maurice A; de Gosson, Serge M
2012-01-09
We study the weak values of a quantum observable from the point of view of the Wigner formalism. The main actor here is the cross-Wigner transform of two functions, which is in disguise the cross-ambiguity function familiar from radar theory and time-frequency analysis. It allows us to express weak values using a complex probability distribution. We suggest that our approach seems to confirm that the weak value of an observable is, as conjectured by several authors, due to the interference of two wavefunctions, one coming from the past, and the other from the future.
Dynamics of a durable commodity market involving trade at disequilibrium
NASA Astrophysics Data System (ADS)
Panchuk, A.; Puu, T.
2018-05-01
The present work considers a simple model of a durable commodity market involving two agents who trade stocks of two different types. Stock commodities, in contrast to flow commodities, remain on the market from period to period; consequently, neither a unique demand function nor a unique supply function exists. We also set up exact conditions for trade at disequilibrium, an issue usually neglected though it is a fact of reality. The induced iterative system has an infinite number of fixed points and path-dependent dynamics. We show that a typical orbit is either attracted to one of the fixed points or eventually sticks at a no-trade point. In the latter case the stock distribution always remains the same, while the price displays periodic or chaotic oscillations.
NASA Astrophysics Data System (ADS)
Maćkowiak-Pawłowska, Maja; Przybyła, Piotr
2018-05-01
Incomplete particle identification limits the experimentally available phase-space region for identified-particle analysis. This problem affects ongoing fluctuation and correlation studies, including the search for the critical point of strongly interacting matter performed at the SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth-order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n=2 and n=3. Moreover, we present an open-source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified-particle multiplicity distributions from the measured ones, provided the response function of the detector is known.
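As a baseline for what the identity method must recover, the raw moments of a multiplicity distribution computed from perfectly identified counts are straightforward. The helper below is ours for illustration; it is not the identity-method unfolding itself, which corrects these moments for misidentification using the detector response.

```python
def raw_moments(multiplicities, n_max):
    """Raw moments <N^k>, k = 1..n_max, of a sample of event multiplicities.

    Baseline computation for perfectly identified particles; the identity
    method recovers these 'true' moments when identification is incomplete.
    """
    n = len(multiplicities)
    return [sum(m ** k for m in multiplicities) / n
            for k in range(1, n_max + 1)]
```

Fluctuation observables such as scaled variance or skewness are then simple combinations of these raw moments.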
Evaluation of the image quality of telescopes using the star test
NASA Astrophysics Data System (ADS)
Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.
2004-10-01
The Point Spread Function (PSF) or star test is one of the main criteria for assessing the quality of the image formed by a telescope. In a real system the distribution of irradiance in the image of a point source is given by the PSF, a function which is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if the aberrations present in the optical system are already known, diffraction theory may be used to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using genetic algorithms to perform an optimization process starting from the PSF instead of the more traditional method of fitting an aberration polynomial. We show that this method of phase retrieval is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.
2013-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test.
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
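The validation scheme used above, a one-variable least-squares fit checked by leave-one-out cross validation, can be sketched in a few lines: each patient's rectum weight is predicted from a line fitted to the other patients' (overlap-ratio, weight) pairs. This is a sketch of the validation mechanics only; the study's feature definition and weight normalisation are described in the abstract, not re-derived here.

```python
def loo_predictions(x, y):
    """Leave-one-out cross validation of a one-variable least-squares fit
    (here: overlap-volume ratio -> rectum weight). Each target value is
    predicted from a line fitted to all the other samples.

    Sketch of the validation scheme only, not the study's code.
    """
    preds = []
    for i in range(len(x)):
        xs = [v for j, v in enumerate(x) if j != i]
        ys = [v for j, v in enumerate(y) if j != i]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((v - mx) ** 2 for v in xs)
        slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sxx
        preds.append(my + slope * (x[i] - mx))  # predict held-out sample
    return preds
```

Comparing these held-out predictions against the IOM weights is what supports the "six times closer" claim in the Results.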
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. Amore » regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, usingl{sub 2} distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. 
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
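The weight-prediction step described above reduces to a one-variable regression validated by leave-one-out cross validation. The sketch below illustrates that workflow on synthetic data; the overlap ratios, noise level, and linear model are illustrative assumptions, not the 24-patient dataset or the fitted model from the study.

```python
# Sketch: predict a rectum weight from an anatomical overlap ratio via
# one-variable linear regression, validated by leave-one-out cross validation.
# All data below are synthetic stand-ins for the paper's patient cohort.
import numpy as np

rng = np.random.default_rng(0)
overlap_ratio = rng.uniform(0.2, 2.0, 24)            # hypothetical rectum/bladder PTV-overlap ratios
rectum_weight = 0.3 + 0.25 * overlap_ratio + rng.normal(0, 0.02, 24)

def fit_predict(x_train, y_train, x_test):
    slope, intercept = np.polyfit(x_train, y_train, 1)
    return slope * x_test + intercept

# Leave-one-out: refit on 23 "patients", predict the held-out one.
errors = []
for i in range(len(overlap_ratio)):
    mask = np.arange(len(overlap_ratio)) != i
    pred = fit_predict(overlap_ratio[mask], rectum_weight[mask], overlap_ratio[i])
    errors.append(abs(pred - rectum_weight[i]))

# The bladder weight would then be one minus the rectum and femoral head weights.
print(f"mean LOO absolute error: {np.mean(errors):.4f}")
```

The held-out error directly measures how well geometry alone predicts the weight, which is the paper's proof-of-concept question.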
Electron Distribution Functions in the Diffusion Region of Asymmetric Magnetic Reconnection
NASA Technical Reports Server (NTRS)
Bessho, N.; Chen, L.-J.; Hesse, M.
2016-01-01
We study electron distribution functions in a diffusion region of antiparallel asymmetric reconnection by means of particle-in-cell simulations and analytical theory. At the electron stagnation point, the electron distribution comprises a crescent-shaped population and a core component. The crescent-shaped distribution is due to electrons coming from the magnetosheath toward the stagnation point and accelerated mainly by the electric field normal to the current sheet. Only a part of the magnetosheath electrons can reach the stagnation point and form the crescent-shaped distribution, which has a boundary in the shape of a parabolic curve. The penetration length of magnetosheath electrons into the magnetosphere is derived. We expect that satellite observations can detect crescent-shaped electron distributions during magnetopause reconnection.
Beyond Poisson-Boltzmann: Fluctuation effects and correlation functions
NASA Astrophysics Data System (ADS)
Netz, R. R.; Orland, H.
2000-02-01
We formulate the exact non-linear field theory for a fluctuating counter-ion distribution in the presence of a fixed, arbitrary charge distribution. The Poisson-Boltzmann equation is obtained as the saddle-point of the field-theoretic action, and the effects of counter-ion fluctuations are included by a loop-wise expansion around this saddle point. The Poisson equation is obeyed at each order in this loop expansion. We explicitly give the expansion of the Gibbs potential up to two loops. We then apply our field-theoretic formalism to the case of a single impenetrable wall with counter ions only (in the absence of salt ions). We obtain the fluctuation corrections to the electrostatic potential and the counter-ion density to one-loop order without further approximations. The relative importance of fluctuation corrections is controlled by a single parameter, which is proportional to the cube of the counter-ion valency and to the surface charge density. The effective interactions and correlation functions between charged particles close to the charged wall are obtained on the one-loop level.
Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl
2016-09-15
We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps in finding the minimax approximation. Strategies for an algorithm that is sufficiently robust without pre-tabulated initial guesses have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that, altogether, three times fewer function evaluations are required when applying it to typical non-relativistic and relativistic quantum chemical systems.
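The bracketing-and-bisection baseline mentioned above can be sketched generically: scan a grid for sign changes of the numerical derivative, then refine each bracket by bisection. The test function and grid size below are illustrative stand-ins, not the paper's Laplace-transform error distribution.

```python
# Generic bracketing-and-bisection extremum search: locate sign changes of
# f'(x) on a grid, then bisect each bracket. A sketch of the kind of baseline
# the paper compares against; f is a stand-in function.
import math

def df(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def find_extrema(f, a, b, n_grid=201, tol=1e-10):
    xs = [a + (b - a) * i / n_grid for i in range(n_grid + 1)]
    extrema = []
    for lo, hi in zip(xs, xs[1:]):
        if df(f, lo) * df(f, hi) < 0:          # derivative changes sign: bracket
            while hi - lo > tol:               # bisection on f'
                mid = 0.5 * (lo + hi)
                if df(f, lo) * df(f, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            extrema.append(0.5 * (lo + hi))
    return extrema

# Example: sin(x) on [0, 2*pi] has extrema at pi/2 and 3*pi/2.
print(find_extrema(math.sin, 0.0, 2 * math.pi))   # approximately [pi/2, 3*pi/2]
```

Each grid cell costs two derivative evaluations plus the bisection refinement, which is the function-evaluation budget the paper's non-heuristic approach reduces.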
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
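The quicksort-variation top-k sort the paper builds on can be sketched as a partial quicksort that recurses only into the partitions needed to determine the k largest elements, leaving the rest unsorted. This is a minimal generic version, not the paper's cascading, cost-optimized variant.

```python
# Minimal top-k partial quicksort: partition around a pivot and recurse only
# where the k largest elements can lie. A generic sketch of the building block,
# not the paper's cascading algorithm.
def top_k(a, k):
    """Return the k largest elements of a in descending order."""
    if k <= 0 or not a:
        return []
    pivot = a[len(a) // 2]
    larger = [x for x in a if x > pivot]
    equal = [x for x in a if x == pivot]
    smaller = [x for x in a if x < pivot]
    if k <= len(larger):
        return top_k(larger, k)
    if k <= len(larger) + len(equal):
        return top_k(larger, len(larger)) + equal[:k - len(larger)]
    return (top_k(larger, len(larger)) + equal
            + top_k(smaller, k - len(larger) - len(equal)))

print(top_k([5, 1, 9, 3, 7, 2, 8], 3))   # [9, 8, 7]
```

The payoff is that partitions known to fall entirely below the k-th largest element are never sorted, which is what makes a cascading, step-by-step choice of k worthwhile.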
Peculiar velocity effect on galaxy correlation functions in nonlinear clustering regime
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
1994-03-01
We studied the distortion of the apparent distribution of galaxies in redshift space contaminated by the peculiar velocity effect. Specifically we obtained the expressions for N-point correlation functions in redshift space with given functional form for velocity distribution f(v) and evaluated two- and three-point correlation functions quantitatively. The effect of velocity correlations is also discussed. When the two-point correlation function in real space has a power-law form, ξ_r(r) ∝ r^(-γ), the redshift-space counterpart on small scales also has a power-law form but with an increased power-law index: ξ_s(s) ∝ s^(1-γ). When the three-point correlation function has the hierarchical form and the two-point correlation function has the power-law form in real space, the hierarchical form of the three-point correlation function is almost preserved in redshift space. The above analytic results are compared with the direct analysis based on N-body simulation data for cold dark matter models. Implications for the hierarchical clustering ansatz are discussed in detail.
SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, A; Ahmad, M; Chen, Z
2014-06-01
Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It is not only flexible enough to work with any linear or non-linear regression but can also provide information on the minimal number of sampling points, critical sampling distributions, and suitable analytical functions for absolute film-dose calibration. Methods: The technique of leave-one-out (LOO) cross validation is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points (“outliers”) compared to other cohorts, and a bootstrap fitting process follows to seek any possibilities of using perturbations for further improvement. After outliers were reconfirmed by traditional t-test statistics and eliminated, another LOOP feedback produced the final result. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of fitting functions, polynomial and rational, was evaluated. Results: (1) LOOP demonstrates sensitive outlier recognition through the statistical correlation between leaving an outlier out and an exceptionally better goodness-of-fit. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions where other “robust fits”, e.g., Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function performs much better than the polynomial. Even with 5 data points including one outlier, LOOP with a rational function can restore values to more than 95% of their reference values, while the polynomial fitting completely failed under the same conditions.
Conclusion: LOOP can cooperate with any fitting routine, functioning as a “robust fit”. In addition, it can be set as a benchmark for film-dose calibration fitting performance.
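The core leave-one-out idea behind LOOP can be reduced to a few lines: refit the calibration curve with each point held out and flag the point whose exclusion improves the goodness-of-fit far more than the others. The cubic-polynomial model and synthetic film response below are stand-ins for the paper's rational-function fits and measured data.

```python
# Stripped-down leave-one-out outlier detection for curve fitting: the point
# whose removal most improves the residual sum of squares is the suspect.
# Model and data are illustrative, not the paper's rational fit.
import numpy as np

dose = np.linspace(0, 800, 9)                     # cGy
signal = 1.0 - np.exp(-dose / 300.0)              # idealized film response
signal[4] += 0.15                                 # inject one outlier

def rss(x, y):
    """Residual sum of squares of a cubic polynomial fit."""
    coeffs = np.polyfit(x, y, 3)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

full = rss(dose, signal)
improvement = []
for i in range(len(dose)):
    mask = np.arange(len(dose)) != i
    improvement.append(full - rss(dose[mask], signal[mask]))

outlier = int(np.argmax(improvement))
print(f"suspected outlier at index {outlier}")
```

In LOOP this detection step is followed by perturbed bootstrap refits and a t-test confirmation before a point is actually discarded.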
An efficient distribution method for nonlinear transport problems in stochastic porous media
NASA Astrophysics Data System (ADS)
Ibrahima, F.; Tchelepi, H.; Meyer, D. W.
2015-12-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, crucial information that is rarely available in other approaches, such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
Are fractal dimensions of the spatial distribution of mineral deposits meaningful?
Raines, G.L.
2008-01-01
It has been proposed that the spatial distribution of mineral deposits is bifractal. An implication of this property is that the number of deposits in a permissive area is a function of the shape of the area. This is because the fractal density functions of deposits are dependent on the distance from known deposits. A long thin permissive area with most of the deposits in one end, such as the Alaskan porphyry permissive area, has a major portion of the area far from known deposits and consequently a low density of deposits associated with most of the permissive area. On the other hand, a more equi-dimensioned permissive area, such as the Arizona porphyry permissive area, has a more uniform density of deposits. Another implication of the fractal distribution is that the Poisson assumption typically used for estimating deposit numbers is invalid. Based on datasets of mineral deposits classified by type as inputs, the distributions of many different deposit types are found to have characteristically two fractal dimensions over separate non-overlapping spatial scales in the range of 5-1000 km. In particular, one typically observes a local dimension at spatial scales less than 30-60 km, and a regional dimension at larger spatial scales. The deposit type, geologic setting, and sample size influence the fractal dimensions. The consequence of the geologic setting can be diminished by using deposits classified by type. The crossover point between the two fractal domains is proportional to the median size of the deposit type. A plot of the crossover points for porphyry copper deposits from different geologic domains against median deposit sizes defines linear relationships and identifies regions that are significantly underexplored. Plots of the fractal dimension can also be used to define density functions from which the number of undiscovered deposits can be estimated. 
This density function is only dependent on the distribution of deposits and is independent of the definition of the permissive area. Density functions for porphyry copper deposits appear to be significantly different for regions in the Andes, Mexico, United States, and western Canada. Consequently, depending on which regional density function is used, quite different estimates of numbers of undiscovered deposits can be obtained. These fractal properties suggest that geologic studies based on mapping at scales of 1:24,000 to 1:100,000 may not recognize processes that are important in the formation of mineral deposits at scales larger than the crossover points at 30-60 km. © 2008 International Association for Mathematical Geology.
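A fractal dimension of a point distribution, and a crossover between two scaling regimes, is typically read off a log-log plot of box counts against box size. The sketch below estimates the dimension of a uniform random 2-D point set by box counting; the point set is illustrative, not a mineral-deposit dataset, and a bifractal set would show two distinct slopes instead of one.

```python
# Box-counting dimension estimate: count occupied boxes N(s) at several box
# sizes s and fit the slope of log N versus log(1/s). Uniform random points
# in the unit square should give a dimension near 2.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((5000, 2))

def box_counts(points, sizes):
    counts = []
    for s in sizes:
        cells = np.floor(points / s).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    return np.array(counts)

sizes = np.array([0.2, 0.1, 0.05, 0.025])
n = box_counts(pts, sizes)
slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(n), 1)
print(f"estimated fractal dimension: {slope:.2f}")
```

For a bifractal deposit distribution, fitting separate slopes below and above the crossover scale would recover the local and regional dimensions the paper describes.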
Density functional theory and molecular dynamics study of the uranyl ion (UO₂)²⁺.
Rodríguez-Jeangros, Nicolás; Seminario, Jorge M
2014-03-01
The detection of uranium, especially in water, is very important; the uranyl ion (UO₂)²⁺ is one of its most abundant moieties. Here, we report analyses and simulations of uranyl in water using ab initio modified force fields for water with improved parameters and charges for uranyl. We use a TIP4P model, which allows us to obtain accurate water properties such as the boiling point and the second and third shells of water molecules in the radial distribution function, thanks to a fictitious charge that corrects the 3-point models by reproducing the exact dipole moment of the water molecule. We also introduced non-bonded interaction parameters for the water-uranyl intermolecular force field. Special care was taken in testing the effect of a range of uranyl charges on the structure of uranyl-water complexes. Atomic charges of the solvated ion in water were obtained using density functional theory (DFT) calculations taking into account the presence of nitrate ions in the solution, forming a neutral ensemble. DFT-based force fields were calculated such that water properties, such as the boiling point and the pair distribution function, are preserved. Finally, molecular dynamics simulations of a water box containing uranyl cations and nitrate anions were performed at room temperature. The three peaks in the oxygen-oxygen radial distribution function for water were found to be preserved in the presence of uranyl, thanks to the improved interaction parameters and charges. Also, we found three shells of water molecules surrounding the uranyl ion instead of two, as was previously thought.
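The solvation shells discussed above are diagnosed with the radial distribution function g(r). A minimal g(r) for a periodic box can be written as below; the random ideal-gas configuration is a stand-in for an MD trajectory, so its g(r) should simply fluctuate around 1 (no shell structure).

```python
# Minimal radial distribution function g(r) with the minimum-image convention.
# For an uncorrelated (ideal-gas) configuration, g(r) ~ 1 at all r; shells in
# a real MD trajectory would appear as peaks.
import numpy as np

rng = np.random.default_rng(2)
L = 10.0                                         # box edge
pos = rng.random((500, 3)) * L

def rdf(positions, box, r_max, n_bins):
    n = len(positions)
    rho = n / box ** 3
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # normalize pair counts by the ideal-gas expectation (n/2) * rho * shell
    return edges[:-1], hist / (shell * rho * n / 2)

r, g = rdf(pos, L, r_max=4.0, n_bins=20)
print(g.mean())                                  # ~1 for an ideal gas
```

Applied to oxygen-oxygen or uranyl-oxygen pairs from a trajectory, the peak positions of g(r) give the shell radii reported in the abstract.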
Obtaining the phase in the star test using genetic algorithms
NASA Astrophysics Data System (ADS)
Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro
2004-10-01
The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems; the irradiance distribution at the image of a point source (such as a star) is given by the Point Spread Function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the source point. On the other hand, if we know the aberrations introduced by the optical system, we can use diffraction theory to calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to noise introduced in recording the image. Results of this method are shown.
Statistics of primordial density perturbations from discrete seed masses
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Bertschinger, Edmund
1991-01-01
The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.
Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.
Mori, Fumito; Mochizuki, Atsushi
2017-07-14
Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics depend generally on the network topology. Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, and is independent of network topology, provided a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle.
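The expectation-one result is easy to check numerically for the simplest case covered by the theorem: truth tables drawn uniformly at random on a fixed arbitrary wiring. The brute-force sketch below enumerates all 2^N states of a tiny network; the topology and sizes are arbitrary choices for illustration.

```python
# Numerical check: with uniformly random Boolean functions on a fixed wiring,
# the number of fixed points averages to one. Brute-force over all 2^N states
# for a tiny N; the 2-input ring topology is an arbitrary example.
import itertools, random

random.seed(0)
N = 4
inputs = [[(i + 1) % N, (i + 2) % N] for i in range(N)]   # fixed arbitrary topology

def count_fixed_points(tables):
    count = 0
    for state in itertools.product((0, 1), repeat=N):
        nxt = tuple(tables[i][(state[inputs[i][0]], state[inputs[i][1]])]
                    for i in range(N))
        count += nxt == state
    return count

trials = 2000
total = 0
for _ in range(trials):
    # draw each node's 2-input truth table uniformly at random
    tables = [{key: random.randint(0, 1)
               for key in itertools.product((0, 1), repeat=2)}
              for _ in range(N)]
    total += count_fixed_points(tables)

print(total / trials)          # ~1.0
```

Each of the 2^N states is fixed with probability 2^(-N) under uniform draws, so the expectation is exactly one regardless of the wiring, in line with the theorem.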
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
EFFECT OF CORRELATIONS ON THE TRANSPORT COEFFICIENTS OF A PLASMA (in French)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.; de Gottal, Ph.
1961-01-01
A closed formula is obtained for the long-lived correlations in an inhomogeneous plasma; it is expressed in terms of the one-particle distribution function. This forms an appropriate starting point for a rigorous theory of transport phenomena in plasmas, including the effect of molecular correlations. An expression is obtained for the thermal conductivity. (auth)
The distribution of first-passage times and durations in FOREX and future markets
NASA Astrophysics Data System (ADS)
Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico
2009-07-01
Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is adopted. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) depends strongly on the choice of a cut-off parameter t_max, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as the average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage time in a market: the Weibull distribution with a power-law tail. This distribution bridges the gap between theoretical and empirical results more effectively than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory.
Moreover, we discuss the limitations of our distributions by applying them to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.
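For a pure Weibull duration distribution, the renewal-theory average waiting time seen by an observer has a closed form, w = E[t^2] / (2 E[t]), expressible through gamma functions of the shape m and scale tau. The sketch below evaluates it and cross-checks with Monte Carlo sampling; the parameter values are illustrative, not fitted market data.

```python
# Renewal-theory average waiting time for Weibull durations:
#   w = E[t^2] / (2 E[t]),  E[t^k] = tau^k * Gamma(1 + k/m).
# Parameters are illustrative; for m = 1 (exponential) w reduces to tau.
import math, random

def weibull_waiting_time(m, tau):
    mean = tau * math.gamma(1.0 + 1.0 / m)
    second = tau ** 2 * math.gamma(1.0 + 2.0 / m)
    return second / (2.0 * mean)

# Monte Carlo cross-check with sampled durations.
random.seed(3)
m, tau = 0.6, 50.0
samples = [random.weibullvariate(tau, m) for _ in range(200_000)]
mc = sum(t * t for t in samples) / (2.0 * sum(samples))
print(weibull_waiting_time(m, tau), mc)
```

The strong growth of w as m drops below 1 is the long-duration effect that makes the cut-off sensitivity of the Mittag-Leffler alternative problematic.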
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2016-10-03
A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement, and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first is when atmospheric turbulence is the dominant effect relative to generalized pointing errors, and the second is when the generalized pointing error is the dominant effect relative to atmospheric turbulence. The second FSO scenario has not been studied in depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.
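The Beckmann distribution referred to above is the distribution of the radial displacement r = sqrt(x^2 + y^2) when the horizontal and elevation offsets are independent Gaussians with distinct means (the nonzero boresight errors) and distinct standard deviations (the per-axis jitters). A quick Monte Carlo sketch, with made-up parameter values:

```python
# Monte Carlo sampling of a Beckmann-distributed radial pointing error:
# r = sqrt(x^2 + y^2) with x ~ N(mu_x, s_x^2), y ~ N(mu_y, s_y^2).
# The means are the boresight errors; the sigmas are the axis jitters.
# All parameter values are illustrative.
import random, statistics

random.seed(6)
mu_x, mu_y = 0.5, 0.2      # nonzero boresight error per axis
s_x, s_y = 1.0, 1.5        # different jitter per axis

r = [(random.gauss(mu_x, s_x) ** 2 + random.gauss(mu_y, s_y) ** 2) ** 0.5
     for _ in range(100_000)]
mean_r = statistics.mean(r)
rms = statistics.mean([v * v for v in r]) ** 0.5   # E[r^2] has a closed form
print(mean_r, rms)
```

A useful sanity check is E[r^2] = mu_x^2 + mu_y^2 + s_x^2 + s_y^2 exactly; closed-form approximations such as the paper's replace the intractable PDF of r itself.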
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravtsov, V.E., E-mail: kravtsov@ictp.it; Landau Institute for Theoretical Physics, 2 Kosygina st., 117940 Moscow; Yudson, V.I., E-mail: yudson@isan.troitsk.ru
Highlights: > Statistics of normalized eigenfunctions in one-dimensional Anderson localization at E = 0 is studied. > Moments of the inverse participation ratio are calculated. > An equation for the generating function is derived at E = 0. > An exact solution for the generating function at E = 0 is obtained. > The relation of the generating function to the phase distribution function is established. - Abstract: The one-dimensional (1d) Anderson model (AM), i.e. a tight-binding chain with random uncorrelated on-site energies, has statistical anomalies at any rational point f = 2a/λ_E, where a is the lattice constant and λ_E is the de Broglie wavelength. We develop a regular approach to anomalous statistics of normalized eigenfunctions ψ(r) at such commensurability points. The approach is based on an exact integral transfer-matrix equation for a generating function Φ_r(u, φ) (u and φ have the meaning of the squared amplitude and phase of eigenfunctions; r is the position of the observation point). This generating function can be used to compute local statistics of eigenfunctions of the 1d AM at any disorder and to address the problem of higher-order anomalies at f = p/q with q > 2. The descender of the generating function, P_r(φ) ≡ Φ_r(u = 0, φ), is shown to be the distribution function of phase, which determines the Lyapunov exponent and the local density of states. In the leading order in the small disorder we derived a second-order partial differential equation for the r-independent ('zero-mode') component Φ(u, φ) at the E = 0 (f = 1/2) anomaly. This equation is nonseparable in the variables u and φ. Yet, we show that due to a hidden symmetry, it is integrable and we construct an exact solution for Φ(u, φ) explicitly in quadratures.
Using this solution we computed the moments I_m = N⟨|ψ|^(2m)⟩ (m ≥ 1) for a chain of length N → ∞ and found an essential difference between their m-behavior at the center-of-band anomaly and for energies outside this anomaly. Outside the anomaly the 'extrinsic' localization length defined from the Lyapunov exponent coincides with that defined from the inverse participation ratio (the 'intrinsic' localization length). This is not the case at the E = 0 anomaly, where the extrinsic localization length is smaller than the intrinsic one. At E = 0 one also observes an anomalous enhancement of large moments compatible with the existence of yet another, much smaller characteristic length scale.
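The quantities in the abstract are straightforward to probe numerically: diagonalize a 1-D Anderson chain with random on-site energies and compute the inverse participation ratio of an eigenstate near E = 0. The chain length and disorder strength below are illustrative; the paper's analysis concerns weak disorder and the N → ∞ limit.

```python
# 1-D Anderson model: random on-site energies plus nearest-neighbor hopping.
# Compute the inverse participation ratio (IPR) of the eigenstate closest to
# E = 0; 1/IPR is a rough participation (localization) length in sites.
# Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
N, W = 400, 1.0                                  # chain length, disorder width
H = np.diag(rng.uniform(-W / 2, W / 2, N))       # random on-site energies
H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # hopping

energies, states = np.linalg.eigh(H)
k = int(np.argmin(np.abs(energies)))             # eigenstate nearest E = 0
psi = states[:, k]
ipr = float(np.sum(psi ** 4))                    # I_2 for this normalized state
print(f"E = {energies[k]:.4f}, IPR = {ipr:.4f}, participation ~ {1.0 / ipr:.1f} sites")
```

Averaging such IPR moments over disorder realizations, at E = 0 and away from it, is the numerical counterpart of the intrinsic-versus-extrinsic localization length comparison made above.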
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
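The integration step described above, recovering a global deflection from discrete strain readings, can be sketched for a cantilevered beam: surface strain gives curvature (kappa = strain / c, with c the distance from the neutral axis), and the tip deflection follows from the moment-area integral w(L) = ∫ kappa(x) (L - x) dx, here evaluated with the trapezoidal rule over five point sensors. The load case and dimensions are illustrative, not the paper's experiment.

```python
# Tip-displacement estimate from five point strain sensors on a cantilever,
# using the moment-area integral and the trapezoidal rule. The end-load case
# below has the closed form w(L) = A * L^3 / 3 for comparison.
import numpy as np

L, c = 1.0, 0.005                 # beam length (m), surface-to-neutral-axis distance (m)
x = np.linspace(0.0, L, 5)        # five point-sensor stations

# End-loaded cantilever: curvature kappa = A * (L - x), where A lumps the
# load and bending stiffness, so the surface strain is c * A * (L - x).
A = 0.02
strain = c * A * (L - x)

kappa = strain / c                                   # curvature from strain
y = kappa * (L - x)                                  # moment-area integrand
tip = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))   # trapezoidal rule
exact = A * L ** 3 / 3.0
print(tip, exact)
```

The few-percent quadrature error with only five stations is the kind of discretization effect the paper's numerical-integration-rule comparison quantifies, on top of gage-factor and placement uncertainty.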
Geometry-dependent distributed polarizability models for the water molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loboda, Oleksandr; Ingrosso, Francesca; Ruiz-López, Manuel F.
2016-01-21
Geometry-dependent distributed polarizability models have been constructed by fits to ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set for the water molecule in the field of a point charge. The investigated models include (i) charge-flow polarizabilities between chemically bonded atoms, (ii) isotropic or anisotropic dipolar polarizabilities on the oxygen atom or on all atoms, and (iii) combinations of models (i) and (ii). For each model, the polarizability parameters have been optimized to reproduce the induction energy of a water molecule polarized by a point charge successively occupying a grid of points surrounding the molecule. The quality of the models is ascertained by examining their ability to reproduce these induction energies as well as the molecular dipolar and quadrupolar polarizabilities. The geometry dependence of the distributed polarizability models has been explored by changing bond lengths and the HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For each considered model, the distributed polarizability components have been fitted as a function of the geometry by a Taylor expansion in monomer coordinate displacements up to the sum of powers equal to 4.
Dynamical topology and statistical properties of spatiotemporal chaos.
Zhuang, Quntao; Gao, Xun; Ouyang, Qi; Wang, Hongli
2012-12-01
For spatiotemporal chaos described by partial differential equations, there are generally locations where the dynamical variable achieves its local extremum or where the time partial derivative of the variable vanishes instantaneously. To a large extent, the location and movement of these topologically special points determine the qualitative structure of the disordered states. We analyze numerically the statistical properties of the topologically special points in one-dimensional spatiotemporal chaos. The probability distribution functions for the number of points, their lifespan, and the distance covered during their lifetime are obtained from numerical simulations. Mathematically, we establish a probabilistic model to describe the dynamics of these topologically special points. In spite of the different definitions in different spatiotemporal chaotic systems, the dynamics of these special points can be described in a uniform approach.
Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution
NASA Astrophysics Data System (ADS)
Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa
2018-03-01
A problem often faced by industries that manage and distribute vegetables is how to distribute them so that their quality is properly maintained. The problems encountered include selecting an optimal route with little travel time, the so-called Traveling Salesman Problem (TSP). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi-compartment), and 5 vehicles. For a single distribution run, one vehicle can deliver to at most 4 market points from 1 particular warehouse, and each vehicle can accommodate at most 100 kg.
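The permutation encoding and order-based operators mentioned above can be sketched in a few lines: a chromosome is a permutation of market points, order crossover (OX) recombines two parents while preserving relative order, and a swap acts as an order-based mutation. Fitness evaluation and the capacity and warehouse constraints of the full VRP are omitted; all names and sizes are illustrative.

```python
# Toy GA operators for a permutation-encoded VRP: order crossover (OX) and
# swap mutation, then a split of the permutation into per-vehicle routes.
# Fitness, capacity (100 kg), and warehouse assignment are omitted.
import random

random.seed(5)
markets = list(range(20))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                          # copy a slice from parent 1
    fill = [g for g in p2 if g not in child]      # keep parent-2 relative order
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(route, rate=0.1):
    route = route[:]
    if random.random() < rate:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return route

p1 = random.sample(markets, 20)
p2 = random.sample(markets, 20)
child = swap_mutation(order_crossover(p1, p2))
# Split the permutation into routes of at most 4 markets per vehicle:
routes = [child[i:i + 4] for i in range(0, 20, 4)]
print(routes)
```

Both operators return valid permutations, so every market is visited exactly once; a fitness function would then score total travel time per route split.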
Schoville, Benjamin J; Brown, Kyle S; Harris, Jacob A; Wilkins, Jayne
2016-01-01
The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies is the unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages: Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape, consistent with armature tips at KP1 and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed.
Statistical representation of a spray as a point process
NASA Astrophysics Data System (ADS)
Subramaniam, S.
2000-10-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf) relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the ddf is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, then even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Moreover, the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1980-01-01
Population model coefficients were chosen to simulate a saturated 2^4 fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.
NASA Astrophysics Data System (ADS)
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which some trajectory is transformed to an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points, which gives a discrete representation of the radio wave ray, is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
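The NEB adjustment described, transverse displacement against the cost gradient plus spring forces distributing points along the path, can be sketched for a generic 2-D optical-path-like cost. This is a schematic illustration under assumed step sizes and a made-up cost gradient, not the authors' ionospheric implementation.

```python
import numpy as np

def neb_relax(path, grad_cost, k_spring=1.0, step=0.01, iters=500):
    """Minimal NEB-style relaxation.  Interior points move against the
    component of the cost gradient transverse to the local tangent,
    while spring forces acting along the tangent keep the points evenly
    distributed; the two endpoints stay fixed (boundary conditions)."""
    path = np.array(path, dtype=float)
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)                 # local tangent estimate
            g = grad_cost(path[i])
            g_perp = g - np.dot(g, tau) * tau          # transverse force only
            spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                                 - np.linalg.norm(path[i] - path[i - 1]))
            path[i] += -step * g_perp + step * spring * tau
    return path
```

With a quadratic "valley" cost, an initially bowed chain relaxes onto the straight line between its fixed endpoints, mimicking how the ray chain settles into a stationary point of the optical path functional.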
NASA Astrophysics Data System (ADS)
Klemm, Richard A.; Davis, Andrew E.; Wang, Qing X.; Yamamoto, Takashi; Cerkoney, Daniel P.; Reid, Candy; Koopman, Maximiliaan L.; Minami, Hidetoshi; Kashiwagi, Takanari; Rain, Joseph R.; Doty, Constance M.; Sedlack, Michael A.; Morales, Manuel A.; Watanabe, Chiharu; Tsujimoto, Manabu; Delfanazari, Kaveh; Kadowaki, Kazuo
2017-12-01
We show for high-symmetry disk, square, or equilateral triangular thin microstrip antennas of any composition, respectively obeying C∞v, C4v, and C3v point group symmetries, that the transverse magnetic electromagnetic cavity mode wave functions are restricted in form to those that are one-dimensional representations of those point groups. Plots of the common nodal points of the ten lowest-energy non-radiating two-dimensional representations of each of these three symmetries are presented. For comparison with symmetry-broken disk intrinsic Josephson junction microstrip antennas constructed from the highly anisotropic layered superconductor Bi2Sr2CaCu2O8+δ (BSCCO), we present plots of the ten lowest frequency orthonormal wave functions and of their emission power angular distributions. These results are compared with previous results for square and equilateral triangular thin microstrip antennas.
Distribution functions of probabilistic automata
NASA Technical Reports Server (NTRS)
Vatan, F.
2001-01-01
Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1, ..., k-1} and, hence, consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w : X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much simpler method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
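The construction of X(w) as a radix-k expansion can be illustrated by sampling a simple binary-emitting probabilistic automaton and estimating F(x) empirically. The dictionary encoding of transitions and emission probabilities below is an assumption made for illustration.

```python
import random

def sample_X(transitions, emit_prob, depth=30, rng=random):
    """Run a binary-emitting probabilistic automaton for `depth` steps
    and read the emitted word w as the radix-2 expansion of X(w) in
    [0, 1].  transitions[state][bit] gives the next state; emit_prob[state]
    is the probability of emitting bit 1 in that state."""
    state, x = 0, 0.0
    for d in range(1, depth + 1):
        bit = 1 if rng.random() < emit_prob[state] else 0
        x += bit * 2.0 ** -d
        state = transitions[state][bit]
    return x

def empirical_cdf(samples, x):
    """Estimate F(x) = Prob{ w : X(w) < x } from samples."""
    return sum(s < x for s in samples) / len(samples)
```

For the single-state fair-coin automaton the bits are i.i.d. Bernoulli(1/2), so X(w) is (up to truncation at `depth`) uniform on [0, 1] and F(x) is close to x.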
The probability density function (PDF) of Lagrangian Turbulence
NASA Astrophysics Data System (ADS)
Birnir, B.
2012-12-01
The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully-developed turbulence is a generic noise determined by the general theorems in probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative noise (multiplying the fluid velocity) in the stochastic Navier-Stokes equation. We let this multiplicative noise consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared to theory. The statistical theory of turbulence, including the scaling of the structure functions of turbulence, is determined by the invariant measure of the Navier-Stokes equation, and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In contrast to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. A suitable projection of the Kolmogorov-Hopf equations is the differential equation determining the generalized hyperbolic distributions. Then we compare these PDFs with DNS results and experimental data.
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework for a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
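The Weibull tail fit mentioned above can be sketched with the standard maximum-likelihood fixed-point iteration for the shape parameter (damped here for stability). This is a generic fitting sketch under assumed synthetic data, not the catalog analysis itself.

```python
import math, random

def weibull_mle(times, iters=200):
    """Maximum-likelihood Weibull fit: the standard fixed-point
    iteration for the shape k (with damping), then the closed-form
    expression for the scale lam given k."""
    n = len(times)
    mean_log = sum(math.log(t) for t in times) / n
    k = 1.0
    for _ in range(iters):
        tk = [t ** k for t in times]
        num = sum(x * math.log(t) for x, t in zip(tk, times))
        k_new = 1.0 / (num / sum(tk) - mean_log)
        k = 0.5 * (k + k_new)                      # damped update
    lam = (sum(t ** k for t in times) / n) ** (1.0 / k)
    return k, lam
```

A shape estimate k < 1 would indicate the heavy, clustered interevent-time tails typical of the earthquake catalogs discussed above.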
NASA Astrophysics Data System (ADS)
Jasiulewicz-Kaczmarek, Małgorzata; Wyczółkowski, Ryszard; Gładysiak, Violetta
2017-12-01
Water distribution systems are one of the basic elements of the contemporary technical infrastructure of urban and rural areas. They are complex engineering systems composed of transmission networks and auxiliary equipment (e.g. controllers, checkouts), territorially scattered over a large area. From the point of view of water distribution system operation, the basic features are functional variability, resulting from the need to adjust the system to temporary fluctuations in the demand for water, and territorial dispersion. The main research questions are: What external factors should be taken into account when developing an effective water distribution policy? Do the size and nature of the water distribution system significantly affect the exploitation policy implemented? These questions have shaped the objectives of the research and the method of its implementation.
Statistical approach to partial equilibrium analysis
NASA Astrophysics Data System (ADS)
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named the willingness price, is highlighted and forms the basis of the whole theory. The supply and demand functions are formulated as the distributions of the corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of the excess demand function are analyzed, and the necessary conditions for the existence and uniqueness of the equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
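The idea that supply and demand arise as distributions of willing exchange over the willingness price can be illustrated with a toy discrete market that locates the equilibrium as the price of zero excess demand. The buyer/seller arrays and the grid search below are illustrative assumptions, not the paper's formalism.

```python
def equilibrium_price(buyer_wtp, seller_wta, prices):
    """Grid search for the market-clearing price: demand at price p is
    the number of buyers willing to pay at least p, supply the number
    of sellers willing to accept at most p; pick the price minimizing
    the absolute excess demand."""
    best_p, best_gap = None, float("inf")
    for p in prices:
        demand = sum(1 for w in buyer_wtp if w >= p)
        supply = sum(1 for w in seller_wta if w <= p)
        if abs(demand - supply) < best_gap:
            best_p, best_gap = p, abs(demand - supply)
    return best_p
```

With symmetric uniform willingness prices on both sides, demand and supply cross at the midpoint of the price range, as the laws of supply and demand suggest.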
NASA Astrophysics Data System (ADS)
Bovy, Jo; Hogg, David W.; Roweis, Sam T.
2011-06-01
We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
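A minimal one-dimensional, single-Gaussian analogue of this error-deconvolution EM can be sketched as follows: each data point carries its own known noise variance, the E-step computes the per-point posterior mean and variance of the underlying value, and the M-step updates the underlying Gaussian. This is a simplified sketch, not the authors' full mixture algorithm.

```python
import random

def deconvolve_gaussian(y, s2, iters=200):
    """EM for the parameters (mu, v) of an underlying Gaussian N(mu, v)
    observed through per-point Gaussian noise of known variance s2[i]:
    a one-dimensional, one-component sketch of error deconvolution."""
    n = len(y)
    mu = sum(y) / n
    v = sum((yi - mu) ** 2 for yi in y) / n
    for _ in range(iters):
        post_mean, post_var = [], []
        for yi, si in zip(y, s2):
            gain = v / (v + si)
            post_mean.append(mu + gain * (yi - mu))   # E[x_i | y_i]
            post_var.append(v * (1.0 - gain))         # Var[x_i | y_i]
        mu = sum(post_mean) / n
        v = sum((b - mu) ** 2 + pv
                for b, pv in zip(post_mean, post_var)) / n
    return mu, v
```

Because the noise variance is known, the recovered v estimates the intrinsic spread of the underlying distribution rather than the (larger) spread of the noisy observations.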
Third law of thermodynamics in the presence of a heat flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camacho, J.
1995-01-01
Following a maximum entropy formalism, we study a one-dimensional crystal under a heat flux. We obtain the phonon distribution function and evaluate the nonequilibrium temperature, the specific heat, and the entropy as functions of the internal energy and the heat flux, in both the quantum and the classical limits. Some analogies between the behavior of equilibrium systems at low absolute temperature and nonequilibrium steady states under high values of the heat flux are shown, which point to a possible generalization of the third law in nonequilibrium situations.
Spatial Point Pattern Analysis of Neurons Using Ripley's K-Function in 3D
Jafari-Mamaghani, Mehrdad; Andersson, Mikael; Krieger, Patrik
2010-01-01
The aim of this paper is to apply a non-parametric statistical tool, Ripley's K-function, to analyze the 3-dimensional distribution of pyramidal neurons. Ripley's K-function is a widely used tool in spatial point pattern analysis. There are several approaches in 2D domains in which this function is computed and analyzed. Drawing consistent inferences on the underlying 3D point pattern distributions in various applications is of great importance, as the acquisition of 3D biological data now poses less of a challenge due to technological progress. To date, most applications of Ripley's K-function in 3D domains do not address edge correction, which is discussed thoroughly in this paper. The main goal is to extend the theoretical and practical utilization of Ripley's K-function and corresponding tests, based on bootstrap resampling, from 2D to 3D domains. PMID:20577588
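A naive 3D estimator of Ripley's K can be written directly from its definition, with no edge correction; for complete spatial randomness K(r) should approach (4/3)*pi*r^3, and the downward bias of this estimator near the sample boundary is exactly what motivates the edge-correction discussion above. The uniform-cube setup is an assumption for illustration.

```python
import math, random

def ripley_k_3d(points, r, volume):
    """Naive 3D Ripley's K estimate (no edge correction): the mean
    number of further points within distance r of a typical point,
    divided by the intensity lam = n / volume."""
    n = len(points)
    lam = n / volume
    count = 0
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(points[i], points[j]) <= r:
                count += 1
    return count / (n * lam)
```

For points near the boundary part of the search sphere falls outside the observation window, so the naive estimate tends to land somewhat below the theoretical CSR value, which is what edge-corrected estimators compensate for.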
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest yet paradigmatic problems in optimization theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
Chord-length and free-path distribution functions for many-body systems
NASA Astrophysics Data System (ADS)
Lu, Binglin; Torquato, S.
1993-04-01
We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or "phases." The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the "mean intercept length" or "mean chord length." The chord-length distribution function is of importance in transport phenomena and problems involving "discrete free paths" of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a particle of radius a undergoing discrete free-path motion in the heterogeneous material, and we show that it is actually the chord-length distribution function for the system in which the "pore space" is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal-path function gives the probability of finding a line segment of length z wholly in one of the "phases" when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydisperse spheres.
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. To this end, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study, whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.
Experimental design for dynamics identification of cellular processes.
Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T
2014-03-01
We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
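The MINE selection rule described above, choosing the next measurement point as the one with the largest model-output variance under the current parameter distribution, can be sketched with Monte Carlo parameter samples. The exponential-decay model and the sampling scheme below are illustrative assumptions, not the systems studied in the paper.

```python
import math, random

def next_experiment(model, param_samples, candidate_times):
    """MINE-style selection: evaluate the model at every candidate time
    for each sampled parameter set and return the time at which the
    model-output variance (over the parameter distribution) is largest."""
    best_t, best_var = None, -1.0
    for t in candidate_times:
        ys = [model(t, theta) for theta in param_samples]
        mean = sum(ys) / len(ys)
        var = sum((y - mean) ** 2 for y in ys) / len(ys)
        if var > best_var:
            best_t, best_var = t, var
    return best_t, best_var
```

For a decay y = exp(-k t) with k uncertain around 1, the output variance vanishes at t = 0 and at large t, so the most informative measurement falls at an intermediate time near 1/k.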
Time-frequency analysis of backscattered signals from diffuse radar targets
NASA Astrophysics Data System (ADS)
Kenny, O. P.; Boashash, B.
1993-06-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss the time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent, or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time-of-flight positron emission tomography techniques on time-frequency images is effective for estimating the scattering function of the target.
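A discrete Wigner-Ville distribution can be sketched as the FFT over the lag variable of the instantaneous autocorrelation. This is a bare pseudo-WVD without smoothing windows, an illustration rather than the authors' radar processing chain.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a complex (analytic)
    signal: for each time n, FFT over the lag m of the instantaneous
    autocorrelation x[n+m] * conj(x[n-m]) (no smoothing window)."""
    n = len(x)
    w = np.zeros((n, n), dtype=complex)
    for t in range(n):
        taumax = min(t, n - 1 - t)                 # lags staying in-bounds
        r = np.zeros(n, dtype=complex)
        for m in range(-taumax, taumax + 1):
            r[m % n] = x[t + m] * np.conj(x[t - m])
        w[t] = np.fft.fft(r)
    return w.real
```

For a pure tone at normalized frequency f0 the lag product equals exp(j*4*pi*f0*m), so the energy concentrates in frequency bin 2*f0*N, which is the WVD's characteristic frequency-doubling in the lag transform.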
Gluon and Wilson loop TMDs for hadrons of spin ≤ 1
NASA Astrophysics Data System (ADS)
Boer, Daniël; Cotogno, Sabrina; van Daal, Tom; Mulders, Piet J.; Signori, Andrea; Zhou, Ya-Jin
2016-10-01
In this paper we consider the parametrizations of gluon transverse momentum dependent (TMD) correlators in terms of TMD parton distribution functions (PDFs). These functions, referred to as TMDs, are defined as the Fourier transforms of hadronic matrix elements of nonlocal combinations of gluon fields. The nonlocality is bridged by gauge links, which have characteristic paths (future or past pointing), giving rise to a process dependence that breaks universality. For gluons, the specific correlator with one future and one past pointing gauge link is, in the limit of small x, related to a correlator of a single Wilson loop. We present the parametrization of Wilson loop correlators in terms of Wilson loop TMDs and discuss the relation between these functions and the small-x "dipole" gluon TMDs. This analysis shows which gluon TMDs are leading or suppressed in the small-x limit. We discuss hadronic targets that are unpolarized, vector polarized (relevant for spin-1/2 and spin-1 hadrons), and tensor polarized (relevant for spin-1 hadrons). The latter are of interest for studies with a future Electron-Ion Collider with polarized deuterons.
First On-Site True Gamma-Ray Imaging-Spectroscopy of Contamination near Fukushima Plant
Tomono, Dai; Mizumoto, Tetsuya; Takada, Atsushi; Komura, Shotaro; Matsuoka, Yoshihiro; Mizumura, Yoshitaka; Oda, Makoto; Tanimori, Toru
2017-01-01
We have developed an Electron Tracking Compton Camera (ETCC), which provides a well-defined Point Spread Function (PSF) by reconstructing the direction of each gamma ray as a point, and which realizes simultaneous measurement of the brightness and spectrum of MeV gamma-rays for the first time. Here, we present the results of our on-site pilot gamma-imaging-spectroscopy with the ETCC at three contaminated locations in the vicinity of the Fukushima Daiichi Nuclear Power Plant in Japan in 2014. The obtained distribution of brightness (or emissivity) from remote-sensing observations is unambiguously converted into the dose distribution. We confirm that the dose distribution is consistent with that taken by conventional mapping measurements with a dosimeter physically placed at each grid point. Furthermore, its imaging spectroscopy, boosted by Compton-edge-free spectra, reveals complex radioactive features in a quantitative manner around each individual target point in the background-dominated environment. Notably, we successfully identify a "micro hot spot" of residual caesium contamination even in an already decontaminated area. These results show that the ETCC performs exactly as geometrical optics predicts, demonstrate its versatility in field radiation measurement, and reveal its potential for application in many fields, including the nuclear industry, the medical field, and astronomy. PMID:28155883
Representations and uses of light distribution functions
NASA Astrophysics Data System (ADS)
Lalonde, Paul Albert
1998-11-01
At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on representations of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, the data require some form of interpolation to use. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms.
The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to evaluate efficiently some integrals that appear in shading computations, which allows fast, accurate computation of local shading. It can be used to represent light fields and to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
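The kind of wavelet compression described, keeping only the largest-magnitude coefficients and reconstructing, can be illustrated in one dimension with an orthonormal Haar transform. The dissertation works with higher-dimensional LDF data; this 1-D sketch only shows the thresholding mechanism and assumes power-of-two signal lengths.

```python
import numpy as np

def haar(x):
    """Orthonormal Haar transform of a length-2^k array."""
    x = np.asarray(x, float).copy()
    out = []
    while len(x) > 1:
        avg = (x[::2] + x[1::2]) / np.sqrt(2)
        det = (x[::2] - x[1::2]) / np.sqrt(2)
        out.insert(0, det)                  # finer details go later
        x = avg
    return np.concatenate([x] + out)

def ihaar(c):
    """Inverse orthonormal Haar transform."""
    c = np.asarray(c, float)
    x, pos = c[:1], 1
    while pos < len(c):
        det = c[pos:2 * pos]
        nxt = np.empty(2 * pos)
        nxt[::2] = (x + det) / np.sqrt(2)
        nxt[1::2] = (x - det) / np.sqrt(2)
        x, pos = nxt, 2 * pos
    return x

def compress(signal, keep):
    """Zero all but the `keep` largest-magnitude Haar coefficients,
    then reconstruct the signal."""
    c = haar(signal)
    thresh = np.sort(np.abs(c))[-keep]
    c[np.abs(c) < thresh] = 0.0
    return ihaar(c)
```

Piecewise-smooth data such as sampled BRDF slices concentrate energy in few coefficients, which is why aggressive thresholding induces only a small reconstruction error.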
Fan, Yuting; Li, Jianqiang; Xu, Kun; Chen, Hao; Lu, Xun; Dai, Yitang; Yin, Feifei; Ji, Yuefeng; Lin, Jintong
2013-09-09
In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function in simulcast radio-over-fiber-based distributed antenna systems (RoF-DASs), where multiple remote antenna units (RAUs) are connected to one wireless local-area network (WLAN) access point (AP) with fiber links of different lengths. We also present an analytical model to evaluate the throughput of the systems in the presence of both the inter-RAU hidden-node problem and the fiber-length difference effect. In the model, the unequal delay induced by the different fiber lengths is involved both in the backoff stage and in the calculation of Ts and Tc, the periods of time during which the channel is sensed busy due to a successful transmission or a collision, respectively. The throughput performance of WLAN-RoF-DAS in both the basic access and the request to send/clear to send (RTS/CTS) exchange modes is evaluated with the help of the derived model.
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one-point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
Tsuchiya, Y
2001-08-01
A concise theoretical treatment has been developed to describe the optical responses of a highly scattering inhomogeneous medium using functions of the photon path distribution (PPD). The treatment is based on the microscopic Beer-Lambert law and has been found to yield a complete set of optical responses for time- and frequency-domain measurements. The PPD is defined for possible photons having a total zigzag pathlength of l between the points of light input and detection. Such a distribution is independent of the absorption properties of the medium and can be uniquely determined for the medium under quantification. Therefore, the PPD can be calculated with an imaginary reference medium having the same optical properties as the medium under quantification except for the absence of absorption. One of the advantages of this method is that the optical responses (the total attenuation, the mean pathlength, etc.) are expressed as functions of the PPD and the absorption distribution.
A Unimodal Model for Double Observer Distance Sampling Surveys.
Becker, Earl F; Christ, Aaron M
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but it can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
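The two-piece normal detection function described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact parameterization (in particular, how covariates enter the scale parameters is omitted), and all names are hypothetical:

```python
import math

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unnormalized two-piece normal detection function.

    Unimodal with a single apex at `apex`; the two scale
    parameters allow different spreads on each side, giving the
    skewed unimodal shapes seen in aerial survey data while
    preserving a unique apex for the point independence assumption.
    """
    sigma = sigma_left if x < apex else sigma_right
    return math.exp(-((x - apex) ** 2) / (2.0 * sigma ** 2))
```

Because the apex location is a single explicit parameter, covariates can shift the scales without creating a second mode.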
correlcalc: Two-point correlation function from redshift surveys
NASA Astrophysics Data System (ADS)
Rohin, Yeluripati
2017-11-01
correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelized code suitable for running on clusters as well as personal computers. It takes the redshift (z), right ascension (RA), and declination (DEC) of galaxies and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realizations of the (3D) anisotropic 2pCF. Optionally, it makes HEALPix maps of the survey for visualization.
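Codes like correlcalc estimate the 2pCF from pair counts between the data and random catalogs. As a sketch of the idea, here is the widely used Landy-Szalay estimator; that correlcalc uses this exact estimator is an assumption, and the pair counting itself (the expensive BallTree step) is taken as given:

```python
def landy_szalay(dd, dr, rr, n_data, n_rand):
    """Landy-Szalay estimator xi = (DD~ - 2 DR~ + RR~) / RR~,
    where ~ denotes raw pair counts in a separation bin
    normalized by the number of possible pairs for each
    catalog combination (data-data, data-random, random-random)."""
    dd_norm = dd / (n_data * (n_data - 1) / 2.0)
    dr_norm = dr / (float(n_data) * n_rand)
    rr_norm = rr / (n_rand * (n_rand - 1) / 2.0)
    return (dd_norm - 2.0 * dr_norm + rr_norm) / rr_norm
```

For an unclustered (random-like) data set the three normalized counts agree and the estimator returns zero; an excess of data-data pairs gives a positive correlation.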
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K
2015-10-15
We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
Circuit Design Optimization Using Genetic Algorithm with Parameterized Uniform Crossover
NASA Astrophysics Data System (ADS)
Bao, Zhiguo; Watanabe, Takahiro
Evolvable hardware (EHW) is a new research field concerning the use of Evolutionary Algorithms (EAs) to construct electronic systems. EHW refers in a narrow sense to the use of evolutionary mechanisms as the algorithmic drivers for system design, and in a general sense to the capability of a hardware system to develop and improve itself. The Genetic Algorithm (GA) is one typical EA. We propose optimal circuit design using GA with parameterized uniform crossover (GApuc) and a fitness function composed of circuit complexity, power, and signal delay. Parameterized uniform crossover is much more likely to distribute its disruptive trials in an unbiased manner over larger portions of the search space; it therefore has more exploratory power than one- and two-point crossover, giving more chances of finding better solutions. Its effectiveness is shown by experiments. From the results, we can see that the best elite fitness, the average fitness of the correct circuits, and the number of correct circuits of GApuc are better than those of GA with one-point or two-point crossover. The best optimal circuits generated by GApuc are 10.18% and 6.08% better in evaluation value than those generated by GA with one-point and two-point crossover, respectively.
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities; they are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
NASA Astrophysics Data System (ADS)
Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.
1995-05-01
Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray-integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that result in the best fit to the ray-integral concentration data. This new method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
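The SBFM forward model, a concentration field built from a superposition of bivariate Gaussians compared against path-integrated measurements, can be sketched as follows. The component parameterization and function names are hypothetical (the paper's exact basis, including any rotation of the Gaussians, is not reproduced), and the simulated-annealing fit itself is omitted:

```python
import math

def concentration(x, y, components):
    """Concentration field as a superposition of axis-aligned
    bivariate Gaussians; each component is a tuple
    (amplitude, x0, y0, sigma_x, sigma_y)."""
    total = 0.0
    for amp, x0, y0, sx, sy in components:
        total += amp * math.exp(
            -((x - x0) ** 2) / (2.0 * sx ** 2)
            - ((y - y0) ** 2) / (2.0 * sy ** 2)
        )
    return total

def ray_integral(components, p0, p1, steps=2000):
    """Path-integrated concentration along the chord from p0 to p1
    (the quantity an OP-FTIR beam measures), by the midpoint rule."""
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        total += concentration(x0 + t * (x1 - x0), y0 + t * (y1 - y0), components)
    return total * length / steps
```

A minimizer would adjust the component parameters until `ray_integral` reproduces the measured beam data along every path.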
NASA Astrophysics Data System (ADS)
Delfani, M. R.; Latifi Shahandashti, M.
2017-09-01
In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.
Antenna reconfiguration verification and validation
NASA Technical Reports Server (NTRS)
Becker, Robert C. (Inventor); Meyers, David W. (Inventor); Muldoon, Kelly P. (Inventor); Carlson, Douglas R. (Inventor); Drexler, Jerome P. (Inventor)
2009-01-01
A method of testing the electrical functionality of an optically controlled switch in a reconfigurable antenna is provided. The method includes configuring one or more conductive paths between one or more feed points and one or more test points with switches in the reconfigurable antenna, applying one or more test signals to the one or more feed points, monitoring the one or more test points in response to the one or more test signals, and determining the functionality of the switch based upon the monitoring of the one or more test points.
Kohut, Sviataslau V; Staroverov, Viktor N
2013-10-28
The exchange-correlation potential of Kohn-Sham density-functional theory, vXC(r), can be thought of as an electrostatic potential produced by the static charge distribution qXC(r) = -(1∕4π)∇(2)vXC(r). The total exchange-correlation charge, QXC = ∫qXC(r) dr, determines the rate of the asymptotic decay of vXC(r). If QXC ≠ 0, the potential falls off as QXC∕r; if QXC = 0, the decay is faster than coulombic. According to this rule, exchange-correlation potentials derived from standard generalized gradient approximations (GGAs) should have QXC = 0, but accurate numerical calculations give QXC ≠ 0. We resolve this paradox by showing that the charge density qXC(r) associated with every GGA consists of two types of contributions: a continuous distribution and point charges arising from the singularities of vXC(r) at each nucleus. Numerical integration of qXC(r) accounts for the continuous charge but misses the point charges. When the point-charge contributions are included, one obtains the correct QXC value. These findings provide an important caveat for attempts to devise asymptotically correct Kohn-Sham potentials by modeling the distribution qXC(r).
Exact short-time height distribution for the flat Kardar-Parisi-Zhang interface
NASA Astrophysics Data System (ADS)
Smith, Naftali R.; Meerson, Baruch
2018-05-01
We determine the exact short-time distribution -ln P_f(H, t) = S_f(H)/√t of the one-point height H = h(x = 0, t) of an evolving 1+1 Kardar-Parisi-Zhang (KPZ) interface for flat initial condition. This is achieved by combining (i) the optimal fluctuation method, (ii) a time-reversal symmetry of the KPZ equation in 1+1 dimension, and (iii) the recently determined exact short-time height distribution -ln P_st(H, t) = S_st(H)/√t for stationary initial condition. In studying the large-deviation function S_st(H) of the latter, one encounters two branches: an analytic and a nonanalytic one. The analytic branch is nonphysical beyond a critical value of H where a second-order dynamical phase transition occurs. Here we show that, remarkably, it is the analytic branch of S_st(H) which determines the large-deviation function S_f(H) of the flat interface via a simple mapping S_f(H) = 2^{-3/2} S_st(2H).
Di Vito, Alessia; Fanfoni, Massimo; Tomellini, Massimo
2010-12-01
Starting from a stochastic two-dimensional process, we studied the transformation of points into disks and squares following a protocol according to which, at any step, the island size increases proportionally to the corresponding Voronoi tessera. Two interaction mechanisms among islands have been dealt with: coalescence and impingement. We studied the evolution of the island density and of the island size distribution functions as a function of the island collision mechanism, for both Poissonian and correlated spatial distributions of points. The island size distribution functions have been found to be invariant with the fraction of transformed phase for a given stochastic process. The n(Θ) curve describing the island decay has been found to be independent of the shape (apart from high correlation degrees) and of the interaction mechanism.
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
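The point-biserial correlation at the centre of this study can be computed with the standard sample formula (this is a generic implementation, not code from the paper):

```python
import math

def point_biserial(binary, y):
    """Point-biserial correlation between a 0/1 variable and a
    continuous variable y:
        r_pb = (mean1 - mean0) / s_y * sqrt(p * (1 - p)),
    where p is the proportion of ones and s_y is the population
    standard deviation of y."""
    n = len(y)
    ones = [v for b, v in zip(binary, y) if b == 1]
    zeros = [v for b, v in zip(binary, y) if b == 0]
    p = len(ones) / n
    mean1 = sum(ones) / len(ones)
    mean0 = sum(zeros) / len(zeros)
    mean_y = sum(y) / n
    s_y = math.sqrt(sum((v - mean_y) ** 2 for v in y) / n)
    return (mean1 - mean0) / s_y * math.sqrt(p * (1.0 - p))
```

Note that when the continuous variable is itself two-valued and perfectly aligned with the dichotomy, r_pb reaches 1 exactly, consistent with the paper's point that the maximum is not always below 1.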
A Gibbs point field model for the spatial pattern of coronary capillaries
NASA Astrophysics Data System (ADS)
Karch, R.; Neumann, M.; Neumann, F.; Ullrich, R.; Neumüller, J.; Schreiner, W.
2006-09-01
We propose a Gibbs point field model for the pattern of coronary capillaries in transverse histologic sections from human hearts, based on the physiology of oxygen supply from capillaries to tissue. To specify the potential energy function of the Gibbs point field, we draw on an analogy between the equation of steady-state oxygen diffusion from an array of parallel capillaries to the surrounding tissue and Poisson's equation for the electrostatic potential of a two-dimensional distribution of identical point charges. The influence of factors other than diffusion is treated as a thermal disturbance. On this basis, we arrive at the well-known two-dimensional one-component plasma, a system of identical point charges exhibiting a weak (logarithmic) repulsive interaction that is completely characterized by a single dimensionless parameter. By variation of this parameter, the model is able to reproduce many characteristics of real capillary patterns.
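The weak logarithmic repulsion of the two-dimensional one-component plasma can be sketched as a pairwise potential energy; this minimal version omits the neutralizing-background term and any normalization used in the paper, and `gamma` is a hypothetical stand-in for the model's single dimensionless coupling parameter:

```python
import math

def ocp_pair_energy(points, gamma):
    """Pairwise energy of identical point charges in 2D with
    logarithmic repulsion: U = -gamma * sum_{i<j} ln r_ij.
    Uses ln r = 0.5 * ln(r^2) to avoid a square root."""
    u = 0.0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            u -= gamma * 0.5 * math.log(dx * dx + dy * dy)
    return u
```

The energy diverges as any two points approach, so low-energy (low-"temperature") configurations spread the points apart, mimicking the mutual avoidance of capillaries.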
Temperature distribution model for the semiconductor dew point detector
NASA Astrophysics Data System (ADS)
Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.
2001-08-01
The simulation results of the temperature distribution in a new type of silicon dew point detector are presented in this paper. Calculations were done with the SMACEF simulation program. The fabricated structures, apart from the impedance detector used for dew point detection, contained a resistive four-terminal thermometer and two heaters. Two detector structures, the first one located on a silicon membrane and the second one placed on the bulk material, were compared in this paper.
Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.
2008-11-01
It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line ℝ. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space ℝ^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in ℝ^d. We also demonstrate that spin-polarized fermionic systems in ℝ^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite-wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| as k → 0. The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^{d+1}, which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d)r^{d+1}] for large r and finite d, where κ(d) is a positive d-dependent constant.
We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
Testing the anisotropy in the angular distribution of Fermi/GBM gamma-ray bursts
NASA Astrophysics Data System (ADS)
Tarnopolski, M.
2017-12-01
Gamma-ray bursts (GRBs) were confirmed to be of extragalactic origin due to their isotropic angular distribution, combined with the fact that they exhibited an intensity distribution that deviated strongly from the -3/2 power law. This finding was later confirmed with the first redshift, equal to at least z = 0.835, measured for GRB970508. Despite this result, the data from CGRO/BATSE and Swift/BAT indicate that long GRBs are indeed distributed isotropically, but the distribution of short GRBs is anisotropic. Fermi/GBM has detected 1669 GRBs to date, and their sky distribution is examined in this paper. A number of statistical tests are applied: nearest-neighbour analysis, fractal dimension, dipole and quadrupole moments of the distribution function decomposed into spherical harmonics, the binomial test, and the two-point angular correlation function. Monte Carlo benchmark testing of each test is performed in order to evaluate its reliability. It is found that short GRBs are distributed anisotropically in the sky, while long ones have an isotropic distribution. The probability that these results are not a chance occurrence is equal to at least 99.98 per cent and 30.68 per cent for short and long GRBs, respectively. The cosmological context of this finding and its relation to large-scale structures is discussed.
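The simplest of the anisotropy diagnostics mentioned above, the dipole moment of the angular distribution, can be sketched as follows; this illustrates the statistic (and the Monte Carlo baseline of isotropic points it is compared against), not the paper's implementation:

```python
import math
import random

def dipole_amplitude(points):
    """Norm of the mean unit vector of a set of sky positions given
    as 3D unit vectors: ~1/sqrt(N) for isotropy, approaching 1 when
    the points pile up on one side of the sky."""
    n = len(points)
    sx = sum(p[0] for p in points) / n
    sy = sum(p[1] for p in points) / n
    sz = sum(p[2] for p in points) / n
    return math.sqrt(sx * sx + sy * sy + sz * sz)

def random_isotropic(n, seed=0):
    """n points drawn uniformly on the unit sphere, for Monte Carlo
    benchmarking (uniform in cos(theta) and azimuth)."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)              # uniform cos(theta)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        points.append((r * math.cos(phi), r * math.sin(phi), z))
    return points
```

An observed dipole amplitude is judged anisotropic when it lies far out in the distribution of amplitudes from many isotropic mock catalogues of the same size.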
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Kasahara, A.; Yagi, Y.
2017-12-01
The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of the projected images and to mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms with theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio.
We will present performance-tests of the new formulations by using synthetic waveforms and the real data of the Mw 8.3 2015 Illapel Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of the earthquakes.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of the in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full-array image of a point-like source by extracting the pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the ray-tracing model with a 1.2 arcmin half-power diameter is consistent with the image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the ray tracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1 σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
The statistics of peaks of Gaussian random fields. [cosmological density fluctuations
NASA Technical Reports Server (NTRS)
Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.
1986-01-01
A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
Zeng, Xiaodong; Bao, Xiaoyi; Chhoa, Chia Yee; Bremner, Theodore W; Brown, Anthony W; DeMerchant, Michael D; Ferrier, Graham; Kalamkarov, Alexander L; Georgiades, Anastasis V
2002-08-20
The strain measurement of a 1.65-m reinforced concrete beam by use of a distributed fiber strain sensor with a 50-cm spatial resolution and 5-cm readout resolution is reported. The strain-measurement accuracy is +/-15 microepsilon (microm/m) according to the system calibration in the laboratory environment with non-uniform-distributed strain and +/-5 microepsilon with uniform strain distribution. The strain distribution has been measured for one-point and two-point loading patterns for optical fibers embedded in pultruded glass fiber reinforced polymer (GFRP) rods and those bonded to steel reinforcing bars. In the one-point loading case, the strain deviations are +/-7 and +/-15 microepsilon for fibers embedded in the GFRP rods and fibers bonded to steel reinforcing bars, respectively, whereas the strain deviation is +/-20 microepsilon for the two-point loading case.
Asymptotic One-Point Functions in Gauge-String Duality with Defects.
Buhl-Mortensen, Isak; de Leeuw, Marius; Ipsen, Asger C; Kristjansen, Charlotte; Wilhelm, Matthias
2017-12-29
We take the first step in extending the integrability approach to one-point functions in AdS/dCFT to higher loop orders. More precisely, we argue that the formula encoding all tree-level one-point functions of SU(2) operators in the defect version of N=4 supersymmetric Yang-Mills theory, dual to the D5-D3 probe-brane system with flux, has a natural asymptotic generalization to higher loop orders. The asymptotic formula correctly encodes the information about the one-loop correction to the one-point functions of nonprotected operators once dressed by a simple flux-dependent factor, as we demonstrate by an explicit computation involving a novel object denoted as an amputated matrix product state. Furthermore, when applied to the Berenstein-Maldacena-Nastase vacuum state, the asymptotic formula gives a result for the one-point function which in a certain double-scaling limit agrees with that obtained in the dual string theory up to wrapping order.
Behavior of Triple Langmuir Probes in Non-Equilibrium Plasmas
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Ratcliffe, Alicia C.
2018-01-01
The triple Langmuir probe is an electrostatic probe in which three probe tips collect current when inserted into a plasma. The triple probe differs from a simple single Langmuir probe in the nature of the voltage applied to the probe tips. In the single probe, a swept voltage is applied to the probe tip to acquire a waveform showing the collected current as a function of applied voltage (I-V curve). In a triple probe, three probe tips are electrically coupled to each other with constant voltages applied between each of the tips. The voltages are selected such that they would represent three points on the single Langmuir probe I-V curve. Elimination of the voltage sweep makes it possible to measure time-varying plasma properties in transient plasmas. Under the assumption of a Maxwellian plasma, one can determine the time-varying plasma temperature T(sub e)(t) and number density n(sub e)(t) from the applied voltage levels and the time-histories of the collected currents. In the present paper we examine the theory of triple probe operation, specifically focusing on the assumption of a Maxwellian plasma. Triple probe measurements have been widely employed for a number of pulsed and time-varying plasmas, including pulsed plasma thrusters (PPTs), dense plasma focus devices, plasma flows, and fusion experiments. While the equilibrium assumption may be justified for some applications, it is unlikely that it is fully justifiable for all pulsed and time-varying plasmas or for all times during the pulse of a plasma device. To examine a simple non-equilibrium plasma case, we return to basic governing equations of probe current collection and compute the current to the probes for a distribution function consisting of two Maxwellian distributions with different temperatures (the two-temperature Maxwellian).
A variation of this method is also employed, where one of the Maxwellians is offset from zero (in velocity space) to add a suprathermal beam of electrons to the tail of the main Maxwellian distribution (the bump-on-the-tail distribution function). For a range of parameters in these non-Maxwellian distributions, we compute the current collection to the probes. We compare the distribution function that was assumed a priori with the distribution function one would infer when applying standard triple probe theory to analyze the collected currents. For the assumed class of non-Maxwellian distribution functions this serves to illustrate the effect a non-Maxwellian plasma would have on results interpreted using the equilibrium triple probe current collection theory, allowing us to state the magnitudes of these deviations as a function of the assumed distribution function properties.
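A two-temperature Maxwellian of the kind assumed above can be sketched as a weighted sum of two one-dimensional Maxwellian velocity distributions. Parameter names are hypothetical, electrons are assumed, and the bump-on-the-tail variant (an offset second Maxwellian) is omitted:

```python
import math

M_E = 9.109e-31   # electron mass [kg]
K_B = 1.381e-23   # Boltzmann constant [J/K]

def maxwellian_1d(v, n, t):
    """One-dimensional electron Maxwellian velocity distribution
    with density n and temperature t (kelvin); integrates to n."""
    a = M_E / (2.0 * K_B * t)
    return n * math.sqrt(a / math.pi) * math.exp(-a * v * v)

def two_temp_maxwellian(v, n1, t1, n2, t2):
    """Two-temperature distribution: weighted sum of a cold and a
    hot Maxwellian, the simple non-equilibrium case considered
    in the text."""
    return maxwellian_1d(v, n1, t1) + maxwellian_1d(v, n2, t2)
```

Even a small hot component dominates the high-velocity tail, which is what distorts the currents collected at fixed probe biases relative to the single-Maxwellian theory.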
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions, using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point-design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach for achieving such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
The correlation function for density perturbations in an expanding universe. II - Nonlinear theory
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1977-01-01
A formalism is developed to find the two-point and higher-order correlation functions for a given distribution of sizes and shapes of perturbations which are randomly placed in three-dimensional space. The perturbations are described by two parameters, such as central density and size, and the two-point correlation function is explicitly related to the luminosity function of groups and clusters of galaxies.
Octanol-water distribution of engineered nanomaterials.
Hristovski, Kiril D; Westerhoff, Paul K; Posner, Jonathan D
2011-01-01
The goal of this study was to examine the effects of pH and ionic strength on octanol-water distribution of five model engineered nanomaterials. Distribution experiments resulted in a spectrum of three broadly classified scenarios: distribution in the aqueous phase, distribution in the octanol, and distribution into the octanol-water interface. Two distribution coefficients were derived to describe the distribution of nanoparticles among octanol, water and their interface. The results show that particle surface charge, surface functionalization, and composition, as well as the solvent ionic strength and presence of natural organic matter, dramatically impact this distribution. Distributions of nanoparticles into the interface were significant for nanomaterials that exhibit low surface charge in natural pH ranges. Increased ionic strengths also contributed to increased distributions of nanoparticle into the interface. Similarly to the octanol-water distribution coefficients, which represent a starting point in predicting the environmental fate, bioavailability and transport of organic pollutants, distribution coefficients such as the ones described in this study could help to easily predict the fate, bioavailability, and transport of engineered nanomaterials in the environment.
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
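A minimal sketch of the Hamiltonian Monte Carlo machinery the abstract builds on: a plain leapfrog integrator with a single time scale, applied to a one-dimensional standard-normal target. The multiple-time-scale integration and the polymer mapping of the paper are not reproduced here:

```python
import math
import random

def hmc_sample(logp_grad, logp, x0, n_samples=2000, eps=0.1, n_leap=20, seed=1):
    """Minimal Hamiltonian Monte Carlo on a 1-D target (sketch).

    `logp` is the log-density and `logp_grad` its derivative. A leapfrog
    integrator simulates Hamiltonian dynamics; a Metropolis test corrects
    the discretization error, so the chain targets exp(logp) exactly.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                 # resample momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * logp_grad(x_new)   # half step in momentum
        for _ in range(n_leap - 1):
            x_new += eps * p_new                # full step in position
            p_new += eps * logp_grad(x_new)     # full step in momentum
        x_new += eps * p_new
        p_new += 0.5 * eps * logp_grad(x_new)   # final half step
        h_old = -logp(x) + 0.5 * p * p
        h_new = -logp(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random() + 1e-300) < h_old - h_new:
            x = x_new                           # Metropolis accept
        samples.append(x)
    return samples

# Standard normal target: logp(x) = -x^2/2 (up to a constant), gradient -x.
draws = hmc_sample(lambda x: -x, lambda x: -0.5 * x * x, x0=0.0)
```

With the paper's polymer analogy, each "bead" position would be one coordinate of a much higher-dimensional `x`, but the accept/reject structure is the same.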
Features of development process displacement of earth’s surface when dredging coal in Eastern Donbas
NASA Astrophysics Data System (ADS)
Posylniy, Yu V.; Versilov, S. O.; Shurygin, D. N.; Kalinchenko, V. M.
2017-10-01
The results of studies of the earth's surface displacement process caused by mining adjacent longwalls are presented. It is established that the actual distributions of surface subsidence along the dip and rise of the seam, under the same boundary conditions of the settlement process, differ both from each other and from the subsidence distribution recommended by the structure-protection rules. Applying a new boundary criterion, a relative subsidence of 0.03, reduces the two distributions to a single one, which still differs from the subsidence distribution of the protection rules. The use of a new geometrical element, a virtual point of the subsidence trough, allows the actual subsidence distribution to be transformed into the model distribution of the structure-protection rules. When the subsidence curves are transformed, the boundary points shift and, consequently, so do the boundary angles.
Trading efficiency for effectiveness in similarity-based indexing for image databases
NASA Astrophysics Data System (ADS)
Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.
1995-11-01
Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.
HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2005-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
Hybrid Neural Network and Support Vector Machine Method for Optimization
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2007-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
Theory of Random Copolymer Fractionation in Columns
NASA Astrophysics Data System (ADS)
Enders, Sabine
Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, and their physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase-equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to molecular weight and chemical composition in every obtained fraction, depending on the operating conditions, and can be used to optimize the fractionation effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, Soumya D.; Nayak, Rajesh K.
The space-based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10^-4 to 10^-3 Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude, the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear, and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources in the presence of white Gaussian noise.
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution, and emergency response, and for estimating pressure on the environment and human exposure and health risks. However, most studies have relied on census data, because detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method that uses remote sensing imagery and location-based data to model population distribution at the functional-zone level. First, urban functional zones within a city were mapped from high-resolution remote sensing images and POIs. The workflow for extracting functional zones has five parts: (1) urban land-use classification; (2) segmenting images in the built-up area; (3) identifying functional segments from POIs; (4) identifying functional blocks from functional segments and weight coefficients; (5) assessing accuracy with validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially non-stationary relationship between light digital number (DN) and the population density of sampling points, and used both methods to predict the population distribution over the study area. The R² of the GWR model was of the order of 0.7, with significant variation over the region compared with the traditional OLS model; the result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the GWR prediction correlated well with the light values (Fig. 3). The results showed that: (1) population density is not linearly correlated with light brightness in the global model; (2) VIIRS night-time light data can estimate population density at the city level when integrated with functional zones; (3) GWR is a robust model for mapping population distribution, as the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming better prediction accuracy. The method therefore provides detailed population density information for microscale citizen studies.
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...
2016-05-10
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action ("Lefschetz thimble"). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign-problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
TURBULENCE-GENERATED PROTON-SCALE STRUCTURES IN THE TERRESTRIAL MAGNETOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vörös, Zoltán; Narita, Yasuhito; Yordanova, Emiliya
2016-03-01
Recent results of numerical magnetohydrodynamic simulations suggest that in collisionless space plasmas, turbulence can spontaneously generate thin current sheets. These coherent structures can partially explain the intermittency and the non-homogenous distribution of localized plasma heating in turbulence. In this Letter, Cluster multi-point observations are used to investigate the distribution of magnetic field discontinuities and the associated small-scale current sheets in the terrestrial magnetosheath downstream of a quasi-parallel bow shock. It is shown experimentally, for the first time, that the strongest turbulence-generated current sheets occupy the long tails of probability distribution functions associated with extremal values of magnetic field partial derivatives. During the analyzed one-hour time interval, about a hundred strong discontinuities, possibly proton-scale current sheets, were observed.
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
NASA Astrophysics Data System (ADS)
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while meeting customer demands. Each point must be visited exactly once, by one vehicle on one route, and the total demand on a route must not exceed the capacity of the vehicle assigned to it. VRPs vary with real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a variant in which vehicles have different capacities and costs; there are two types of vehicles in our problem. This study uses real-world data obtained from a company operating in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS, and the optimal solution is found in a reasonable time.
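The routing constraints described above (every point visited exactly once, route demand within the assigned vehicle's capacity) can be illustrated with a toy greedy construction for a two-vehicle heterogeneous fleet. This is a didactic sketch with invented data, not the GAMS optimization model from the study:

```python
import math

def greedy_routes(depot, customers, demands, capacities):
    """Greedy nearest-neighbour route construction for a heterogeneous fleet.

    Each vehicle (with its own capacity) starts at the depot and repeatedly
    visits the nearest unserved customer whose demand still fits, so every
    customer is visited at most once, by exactly one vehicle.
    """
    unserved = set(customers)
    routes = []
    for cap in capacities:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [c for c in unserved if load + demands[c] <= cap]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))  # nearest fit
            route.append(nxt)
            load += demands[nxt]
            unserved.discard(nxt)
            pos = nxt
        routes.append(route)
    return routes, unserved

# Illustrative instance: 4 customers, two vehicles with capacities 5 and 6.
depot = (0.0, 0.0)
customers = [(1, 0), (2, 1), (0, 3), (4, 4)]
demands = {(1, 0): 2, (2, 1): 3, (0, 3): 2, (4, 4): 4}
routes, leftover = greedy_routes(depot, customers, demands, capacities=[5, 6])
```

A greedy heuristic like this only produces a feasible starting solution; the exact model in the study minimizes total cost over all feasible assignments.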
Modeling a space-based quantum link that includes an adaptive optics system
NASA Astrophysics Data System (ADS)
Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.
2017-10-01
Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses are comprised of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings which can then be converted to keys for encryption systems. When these keys are incorporated along with symmetric encryption techniques such as a one-time pad, then this method of key generation and encryption is resistant to future advances in quantum computing which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments, and finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground and the effects of turbulence on the transmitted optical pulse is described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth orbit to ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system as well as the future work necessary to show this impact is described.
NASA Astrophysics Data System (ADS)
Orlov, Timofey; Sadkov, Sergey; Panchenko, Evgeniy; Zverev, Andrey
2017-04-01
Peatlands occupy a significant share of the cryolithozone area. They are currently experiencing intense disturbance from oil and gas field development, as well as from the construction of infrastructure. This makes peatland studies important, including those forecasting peatland evolution. Earlier we conducted a similar probabilistic modelling for areas of thermokarst development. Its principal points are: 1. The appearance of a thermokarst depression within a given area is a random event whose probability is directly proportional to the size of the area (Δs). For small sites the probability of one thermokarst depression appearing is much greater than that of several appearing, i.e. p_1 = γΔs + o(Δs), p_k = o(Δs) for k = 2, 3, ... 2. The growth of a new thermokarst depression is a random variable independent of the growth of other depressions. It proceeds by thermoabrasion and hence is directly proportional to the amount of heat in the lake and inversely proportional to the lateral surface area of the lake depression. From this model we obtain analytically the two main laws of the morphological pattern of lake-thermokarst plains. First, the number of thermokarst depressions (centres) in a random plot obeys the Poisson law: P(k, s) = ((γs)^k / k!) e^{-γs}, where γ is the average number of depressions per unit area and s is the area of a trial site. Second, the diameters of thermokarst lakes follow a lognormal distribution at any time, i.e. the density is f_d(x, t) = 1/(√(2π) σ x √t) · e^{-(ln x - at)² / (2σ²t)}.
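The two laws can be checked by simulation: Poisson-distributed counts of depression centres with mean γs, and lognormal diameters whose log-scale grows as σ√t. The sampler below is a sketch with illustrative parameter values (γ = 0.5, s = 4, σ = 0.3, t = 9, and zero drift), not the authors' code:

```python
import math
import random

def simulate_thermokarst(gamma, s, sigma, t, n_plots=20_000, seed=2):
    """Sample the two morphological laws of the thermokarst model (sketch).

    Counts of depression centres per trial plot of area s follow a Poisson
    law with mean gamma*s; lake diameters at time t are lognormal with
    log-scale sigma*sqrt(t) (drift set to zero for simplicity).
    """
    rng = random.Random(seed)
    lam = gamma * s
    counts = []
    for _ in range(n_plots):
        # Poisson(lam) via inversion by sequential search
        k, p, u = 0, math.exp(-lam), rng.random()
        c = p
        while u > c:
            k += 1
            p *= lam / k
            c += p
        counts.append(k)
    diameters = [math.exp(sigma * math.sqrt(t) * rng.gauss(0.0, 1.0))
                 for _ in range(n_plots)]
    return counts, diameters

counts, diameters = simulate_thermokarst(gamma=0.5, s=4.0, sigma=0.3, t=9.0)
mean_count = sum(counts) / len(counts)   # should approach gamma*s = 2.0
```

With zero drift the median diameter stays at e^0 = 1 while the spread of the distribution widens with time, which is the qualitative signature of the lognormal growth law.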
NASA Astrophysics Data System (ADS)
Sienkiewicz, J.; Holyst, J. A.
2005-05-01
We have examined the topology of 21 public transport networks in Poland. Our data exhibit several universal features in the considered systems when they are analyzed from the point of view of evolving networks. Depending on the assumed definition of the network topology, the degree distribution can follow a power law p(k) ~ k^{-γ} or can be described by an exponential function p(k) ~ exp(-αk). In the first case one observes that mean distances between two nodes are a linear function of the logarithm of the product of their degrees.
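A crude way to distinguish the two regimes is to compare how well log p(k) fits a straight line in log k (power law) against a straight line in k (exponential). The classifier below is a sketch applied to noise-free synthetic tails, not the authors' analysis:

```python
import math

def fit_loglinear(xs, ys):
    """Ordinary least-squares slope and intercept of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def classify_tail(ks, pks):
    """Pick the better linear fit of log p(k): against log k (power law)
    or against k (exponential). A heuristic sketch, not a rigorous test."""
    logp = [math.log(p) for p in pks]
    mean_logp = sum(logp) / len(logp)

    def r2(xs):
        slope, icpt = fit_loglinear(xs, logp)
        ss_res = sum((y - (slope * x + icpt)) ** 2 for x, y in zip(xs, logp))
        ss_tot = sum((y - mean_logp) ** 2 for y in logp)
        return 1.0 - ss_res / ss_tot

    return "power law" if r2([math.log(k) for k in ks]) > r2(list(map(float, ks))) else "exponential"

ks = list(range(1, 30))
power = [k ** -2.5 for k in ks]           # p(k) ~ k^-gamma with gamma = 2.5
expo = [math.exp(-0.4 * k) for k in ks]   # p(k) ~ exp(-alpha k) with alpha = 0.4
```

On real degree data one would bin and bootstrap before trusting such a comparison; here the synthetic tails are exact, so each regime wins its own fit.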
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method.
These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
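The saturation quantiles mentioned above (P10, P50, P90) are plain empirical percentiles of the saturation distribution. A minimal interpolating estimator, applied here to illustrative clipped-Gaussian samples rather than output of the distribution method:

```python
import random

def quantile(samples, q):
    """Empirical q-quantile by linear interpolation between order statistics.

    In reservoir practice P10/P50/P90 denote the 10th/50th/90th percentiles
    of, e.g., the water-saturation distribution at a point.
    """
    xs = sorted(samples)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    frac = pos - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

# Illustrative saturation samples: Gaussian clipped to the physical range [0, 1].
rng = random.Random(3)
sat = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(10_001)]
p10, p50, p90 = (quantile(sat, q) for q in (0.1, 0.5, 0.9))
```

Once the PDF/CDF is known analytically, as in the distribution method, these quantiles come from inverting the CDF instead of sorting samples.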
Local gravity field modeling using spherical radial basis functions and a genetic algorithm
NASA Astrophysics Data System (ADS)
Mahbuby, Hany; Safari, Abdolreza; Foroughi, Ismael
2017-05-01
Spherical Radial Basis Functions (SRBFs) can express the local gravity field model of the Earth if they are parameterized optimally on or below the Bjerhammar sphere. This parameterization is generally defined as the shape of the base functions, their number, center locations, bandwidths, and scale coefficients. The number/location and bandwidths of the base functions are the most important parameters for accurately representing the gravity field; once they are determined, the scale coefficients can then be computed accordingly. In this study, the point-mass kernel, as the simplest shape of SRBFs, is chosen to evaluate the synthesized free-air gravity anomalies over the rough area in Auvergne and GNSS/Leveling points (synthetic height anomalies) are used to validate the results. A two-step automatic approach is proposed to determine the optimum distribution of the base functions. First, the location of the base functions and their bandwidths are found using the genetic algorithm; second, the conjugate gradient least squares method is employed to estimate the scale coefficients. The proposed methodology shows promising results. On the one hand, when using the genetic algorithm, the base functions do not need to be set to a regular grid and they can move according to the roughness of topography. In this way, the models meet the desired accuracy with a low number of base functions. On the other hand, the conjugate gradient method removes the bias between derived quasigeoid heights from the model and from the GNSS/leveling points; this means there is no need for a corrector surface. The numerical test on the area of interest revealed an RMS of 0.48 mGal for the differences between predicted and observed gravity anomalies, and a corresponding 9 cm for the differences in GNSS/leveling points.
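The point-mass kernel is essentially a reciprocal distance, and once the base-function positions are fixed the scale coefficients follow from linear least squares. The sketch below solves a tiny synthetic configuration with dense normal equations; the paper instead uses conjugate-gradient least squares, and a genetic algorithm for the positions and bandwidths:

```python
import math

def pointmass_design(obs_pts, src_pts):
    """Design matrix of reciprocal-distance (point-mass) kernels:
    A[i][j] = 1 / |P_i - Q_j| for observation P_i and source Q_j."""
    return [[1.0 / math.dist(p, q) for q in src_pts] for p in obs_pts]

def solve_normal_equations(A, y):
    """Tiny dense least squares via normal equations and Gaussian elimination.
    A sketch only; real gravity-field work uses conjugate-gradient LSQ."""
    m, n = len(A), len(A[0])
    N = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    b = [sum(A[k][i] * y[k] for k in range(m)) for i in range(n)]
    for i in range(n):                     # forward elimination
        for j in range(i + 1, n):
            f = N[j][i] / N[i][i]
            for k in range(i, n):
                N[j][k] -= f * N[i][k]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution
        x[i] = (b[i] - sum(N[i][j] * x[j] for j in range(i + 1, n))) / N[i][i]
    return x

# Synthetic test: a signal generated by two buried "point masses" below a
# 4x4 grid of surface observations is recovered exactly from noise-free data.
src = [(0.0, 0.0, -2.0), (3.0, 1.0, -3.0)]
obs = [(float(i), float(j), 0.0) for i in range(4) for j in range(4)]
A = pointmass_design(obs, src)
true_c = [5.0, -2.0]
y = [sum(a * c for a, c in zip(row, true_c)) for row in A]
coef = solve_normal_equations(A, y)
```

In the paper the source depths (i.e. bandwidths) and horizontal positions are themselves unknowns, which is exactly the nonlinear part the genetic algorithm handles.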
A second generation distributed point polarizable water model.
Kumar, Revati; Wang, Fang-Fang; Jenness, Glen R; Jordan, Kenneth D
2010-01-07
A distributed point polarizable model (DPP2) for water, with explicit terms for charge penetration, induction, and charge transfer, is introduced. The DPP2 model accurately describes the interaction energies in small and large water clusters and also gives an average internal energy per molecule and radial distribution functions of liquid water in good agreement with experiment. A key to the success of the model is its accurate description of the individual terms in the n-body expansion of the interaction energies.
NASA Astrophysics Data System (ADS)
Hautmann, F.; Jung, H.; Krämer, M.; Mulders, P. J.; Nocera, E. R.; Rogers, T. C.; Signori, A.
2014-12-01
Transverse-momentum-dependent distributions (TMDs) are extensions of collinear parton distributions and are important in high-energy physics from both theoretical and phenomenological points of view. In this manual we introduce the library TMDlib, a tool to collect transverse-momentum-dependent parton distribution functions (TMD PDFs) and fragmentation functions (TMD FFs), together with an online plotting tool, TMDplotter. We provide a description of the program components and of the different physical frameworks the user can access via the available parameterisations.
Hautmann, F; Jung, H; Krämer, M; Mulders, P J; Nocera, E R; Rogers, T C; Signori, A
Transverse-momentum-dependent distributions (TMDs) are extensions of collinear parton distributions and are important in high-energy physics from both theoretical and phenomenological points of view. In this manual we introduce the library TMDlib, a tool to collect transverse-momentum-dependent parton distribution functions (TMD PDFs) and fragmentation functions (TMD FFs), together with an online plotting tool, TMDplotter. We provide a description of the program components and of the different physical frameworks the user can access via the available parameterisations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza
The source-count distribution of gamma-ray sources as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1(+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1(+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-26
The source-count distribution of gamma-ray sources as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1(+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1(+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
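The broken power law quoted in the abstract can be written down directly. The function below uses the reported best-fit values (break flux, n1, n2) with an arbitrary overall normalization, as a sketch rather than the paper's full likelihood model:

```python
def broken_power_law(s, s_b, n1, n2, a=1.0):
    """Broken power law dN/dS: index n1 above the break flux s_b, n2 below,
    continuous at the break. Normalization `a` is arbitrary here (sketch)."""
    if s >= s_b:
        return a * (s / s_b) ** (-n1)
    return a * (s / s_b) ** (-n2)

# Values quoted in the abstract: break at 2.1e-8 cm^-2 s^-1, n1 = 3.1, n2 = 1.97.
s_b = 2.1e-8
dnds_bright = broken_power_law(3 * s_b, s_b, 3.1, 1.97)   # above the break
dnds_faint = broken_power_law(s_b / 3, s_b, 3.1, 1.97)    # below the break
```

The hardening below the break (n2 < 2) is what keeps the integrated flux of the faint population finite, which is why the second-break constraint matters.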
Bonetti, Marco; Pagano, Marcello
2005-03-15
The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogenous setting.
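The interpoint distance distribution is simply the CDF of the pairwise separations. Its empirical (sample) version for a small point set can be computed directly; the unit square below is an illustrative example, not the leukaemia data:

```python
import math

def interpoint_distance_cdf(points, d):
    """Empirical interpoint distance distribution F(d): the fraction of
    unordered point pairs whose separation is <= d."""
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    hits = sum(1 for i, j in pairs if math.dist(points[i], points[j]) <= d)
    return hits / len(pairs)

# Four corners of a unit square: 4 side-length pairs and 2 diagonal pairs.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
f_side = interpoint_distance_cdf(square, 1.0)   # 4 of the 6 pairs are <= 1 apart
```

Clustering shows up as excess mass of F(d) at small d relative to a homogeneous reference, which is the comparison the paper's tests formalize.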
Variations in plasma wave intensity with distance along the electron foreshock boundary at Venus
NASA Technical Reports Server (NTRS)
Crawford, G. K.; Strangeway, R. J.; Russell, C. T.
1991-01-01
Plasma waves are observed in the solar wind upstream of the Venus bow shock by the Pioneer Venus Orbiter. These wave signatures occur during periods when the interplanetary magnetic field through the spacecraft position intersects the bow shock, thereby placing the spacecraft in the foreshock region. Wave intensity is analyzed as a function of distance along the electron foreshock boundary. It is found that the peak wave intensity may increase along the foreshock boundary from the tangent point to a maximum value at several Venus radii, then decrease in intensity with subsequent increase in distance. These observations could be associated with the instability process: the instability of the distribution function increasing with distance from the tangent point to saturation at the peak. Thermalization of the beam for distances beyond this point could reduce the distribution function instability resulting in weaker wave signatures.
The energy associated with MHD waves generation in the solar wind plasma
NASA Technical Reports Server (NTRS)
delaTorre, A.
1995-01-01
Gyrotropic symmetry is usually assumed in measurements of electron distribution functions in the heliosphere. This prevents the calculation of a net current perpendicular to the magnetic field lines. Previous theoretical results derived by one of the authors for a collisionless plasma with isotropic electrons in a strong magnetic field have shown that the excitation of MHD modes becomes possible when the external perpendicular current is non-zero. We consider then that any anisotropic electron population can be thought of as 'external', interacting with the remaining plasma through the self-consistent electromagnetic field. From this point of view any perpendicular current may be due to the anisotropic electrons, or to an external source like a stream, or to both. As perpendicular currents cannot be derived from the measured distribution functions, we resort to Ampere's law and experimental data of magnetic field fluctuations. The transfer of energy between MHD modes and external currents is then discussed.
Scattering and the Point Spread Function of the New Generation Space Telescope
NASA Technical Reports Server (NTRS)
Schreur, Julian J.
1996-01-01
Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower-petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and having an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10⁻² near the center to 10⁻⁶ near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity.
The total fraction of the light scattered from this point spread function is called the total integrated scattering (TIS), and the fraction remaining is called the Strehl ratio. The angular distribution of the scattered light is called the angle-resolved scattering (ARS); it shows a strong spike centered on a scattering angle of zero and a broad, less intense distribution at larger angles. It is this scattered light, and its effect on the point spread function, that is the focus of this study.
Linear dispersion properties of ring velocity distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandas, Marek, E-mail: marek.vandas@asu.cas.cz; Hellinger, Petr; Institute of Atmospheric Physics, AS CR, Bocni II/1401, CZ-14100 Prague
2015-06-15
Linear properties of ring velocity distribution functions are investigated. The dispersion tensor in a form similar to the case of a Maxwellian distribution function, but for a general distribution function separable in velocities, is presented. Analytical forms of the dispersion tensor are derived for two cases of ring velocity distribution functions: one obtained from physical arguments and one for the usual, ad hoc ring distribution. The analytical expressions involve generalized hypergeometric, Kampé de Fériet functions of two arguments. For a set of plasma parameters, the two ring distribution functions are compared. At parallel propagation with respect to the ambient magnetic field, the two ring distributions give the same results, identical to the corresponding bi-Maxwellian distribution. At oblique propagation, the two ring distributions give similar results only for strong instabilities, whereas for weak growth rates their predictions are significantly different; the two ring distributions have different marginal stability conditions.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
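The non-ergodic behavior of occupation times can be illustrated with a toy two-site CTRW; the sketch below is illustrative only (not the authors' derivation; the Pareto waiting-time choice and all parameter values are assumptions) and shows how heavy-tailed sojourn times spread the occupation fraction away from 1/2:

```python
import numpy as np

def occupation_fraction(alpha=0.5, t_max=1_000.0, rng=None):
    """Fraction of time a two-state CTRW spends in state 0 up to t_max.

    Waiting times are Pareto distributed with tail index `alpha`; for
    alpha < 1 the mean waiting time diverges (the non-ergodic phase of
    the abstract) and repeated runs yield a broad distribution of
    occupation fractions instead of concentrating near 1/2.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, state, time_in_0 = 0.0, 0, 0.0
    while t < t_max:
        wait = rng.pareto(alpha) + 1.0     # heavy-tailed waiting time, min 1
        stay = min(wait, t_max - t)        # clip the final sojourn at t_max
        if state == 0:
            time_in_0 += stay
        t += stay
        state = 1 - state                  # unbiased hop to the other site
    return time_in_0 / t_max

rng = np.random.default_rng(0)
fractions = [occupation_fraction(rng=rng) for _ in range(200)]
print(min(fractions), max(fractions))
```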
Identification and Characterization of Domesticated Bacterial Transposases
Gallie, Jenna; Rainey, Paul B.
2017-01-01
Selfish genetic elements, such as insertion sequences and transposons, are found in most genomes. Transposons are usually identifiable by their high copy number within genomes. In contrast, REP-associated tyrosine transposases (RAYTs), a recently described class of bacterial transposase, are typically present at just one copy per genome. This suggests that RAYTs no longer copy themselves and thus no longer function as typical transposases. Motivated by this possibility, we interrogated thousands of fully sequenced bacterial genomes in order to determine patterns of RAYT diversity, their distribution across chromosomes and accessory elements, and their rate of duplication. RAYTs encompass exceptional diversity and are divisible into at least five distinct groups. They possess features more similar to housekeeping genes than to insertion sequences, are predominantly vertically transmitted, and have persisted through evolutionary time to the point where they are now found in 24% of all species for which at least one fully sequenced genome is available. Overall, the genomic distribution of RAYTs suggests that they have been co-opted by host genomes to perform a function that benefits the host cell. PMID:28910967
Axion excursions of the landscape during inflation
NASA Astrophysics Data System (ADS)
Palma, Gonzalo A.; Riquelme, Walter
2017-07-01
Because of their quantum fluctuations, axion fields had a chance to experience field excursions traversing many minima of their potentials during inflation. We study this situation by analyzing the dynamics of an axion field ψ, present during inflation, with a periodic potential given by v(ψ) = Λ⁴[1 − cos(ψ/f)]. By assuming that the vacuum expectation value of the field is stabilized at one of its minima, say ψ = 0, we compute every n-point correlation function of ψ up to first order in Λ⁴ using the in-in formalism. This computation allows us to identify the distribution function describing the probability of measuring ψ at a particular amplitude during inflation. Because ψ is able to tunnel between the barriers of the potential, we find that the probability distribution function consists of a non-Gaussian multimodal distribution such that the probability of measuring ψ at a minimum of v(ψ) different from ψ = 0 increases with time. As a result, at the end of inflation, different patches of the Universe are characterized by different values of the axion field amplitude, leading to important cosmological phenomenology: (a) Isocurvature fluctuations induced by the axion at the end of inflation could be highly non-Gaussian. (b) If the axion defines the strength of standard model couplings, then one is led to a concrete realization of the multiverse. (c) If the axion corresponds to dark matter, one is led to the possibility that, within our observable Universe, dark matter started with a nontrivial initial condition, implying novel signatures for future surveys.
Ray tracing the Wigner distribution function for optical simulations
NASA Astrophysics Data System (ADS)
Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul
2018-01-01
We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.
van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik
2014-01-01
When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions, each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property: there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed point, and how to statistically assess the probability that the observed response times are indeed generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy-to-use tests of strategic behavior. PMID:25170893
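The fixed-point property itself is easy to demonstrate numerically; the following Python sketch (not the accompanying R package; the two base densities and all values are invented) locates the crossing of two hypothetical base densities and checks that the mixture density there is independent of the mixture proportion p:

```python
import numpy as np

def gauss_pdf(t, mu, sigma):
    """Normal density, used here as a stand-in for a base RT distribution."""
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two hypothetical base RT densities, one per strategy (values invented).
mu1, mu2, sigma = 0.5, 0.8, 0.1
f1 = lambda t: gauss_pdf(t, mu1, sigma)   # fast strategy
f2 = lambda t: gauss_pdf(t, mu2, sigma)   # slow strategy

# The fixed point is where the base densities cross; locate it by bisection.
lo, hi = mu1, mu2
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f1(mid) - f2(mid) > 0:
        lo = mid
    else:
        hi = mid
t_star = 0.5 * (lo + hi)

# At t_star the mixture density p*f1 + (1-p)*f2 is the same for every p.
mixes = [p * f1(t_star) + (1 - p) * f2(t_star) for p in (0.2, 0.5, 0.9)]
print(t_star, mixes)
```

For equal-variance Gaussian bases the fixed point is simply the midpoint of the two means; the bisection generalizes to any pair of densities that cross once.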
An Experiment on Thermionic Emission: Back to the Good Old Triode
ERIC Educational Resources Information Center
Azooz, A. A.
2007-01-01
A simple experiment to study thermionic emission, the Richardson-Dushman equation and the energy distribution function of thermionic electrons emitted from a hot cathode using a triode vacuum tube is described. It is pointed out that such a distribution function is directly proportional to the first derivative of the Edison anode current with…
NASA Astrophysics Data System (ADS)
Ackerman, T. R.; Pizzuto, J. E.
2016-12-01
Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank) representing the youngest 50% of eroded sediment. Then (until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment), the distributions are well fit by heavy tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. 
Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
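An empirical survivor function of the kind analyzed above is a few lines of code; this Python sketch uses synthetic storage times (an invented exponential-plus-Pareto mixture for illustration, not the model output):

```python
import numpy as np

def survivor_function(ages):
    """Empirical survivor function S(t) = P(storage time > t).

    Returns the sorted ages and the fraction of samples exceeding each;
    plotting log S against log t reveals exponential versus heavy-tailed
    (power-law) regimes like those described in the abstract.
    """
    ages = np.sort(np.asarray(ages, dtype=float))
    n = len(ages)
    surv = 1.0 - np.arange(1, n + 1) / n
    return ages, surv

# Synthetic storage times: a light-tailed (exponential) young population
# mixed with a heavy-tailed (Pareto) old one. All parameters invented.
rng = np.random.default_rng(1)
ages = np.concatenate([rng.exponential(500.0, 5000),
                       500.0 * (1.0 + rng.pareto(1.0, 5000))])
t, S = survivor_function(ages)
print(t[0], S[0], S[-1])
```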
M-Estimation for Discrete Data. Asymptotic Distribution Theory and Implications.
1985-10-01
outlying data points, can be specified in a direct way, since the influence function of an M-estimator is proportional to its score function; see Hampel... consistently estimates β when the model is correct. Suppose now that β ∈ ℝ¹. The influence function at F of an M-estimator for β has the form ... influence function at F. This assumes, of course, that the estimator is asymptotically normal at F. The truncation point c(f) determines the bounds
CDF-XL: computing cumulative distribution functions of reaction time data in Excel.
Houghton, George; Grange, James A
2011-12-01
In experimental psychology, central tendencies of reaction time (RT) distributions are used to compare different experimental conditions. This emphasis on the central tendency ignores additional information that may be derived from the RT distribution itself. One method for analysing RT distributions is to construct cumulative distribution frequency plots (CDFs; Ratcliff, Psychological Bulletin 86:446-461, 1979). However, this method is difficult to implement in widely available software, severely restricting its use. In this report, we present an Excel-based program, CDF-XL, for constructing and analysing CDFs, with the aim of making such techniques more readily accessible to researchers, including students (CDF-XL can be downloaded free of charge from the Psychonomic Society's online archive). CDF-XL functions as an Excel workbook and starts from the raw experimental data, organised into three columns (Subject, Condition, and RT) on an Input Data worksheet (a point-and-click utility is provided for achieving this format from a broader data set). No further preprocessing or sorting of the data is required. With one click of a button, CDF-XL will generate two forms of cumulative analysis: (1) "standard" CDFs, based on percentiles of participant RT distributions (by condition), and (2) a related analysis employing the participant means of rank-ordered RT bins. Both analyses involve partitioning the data in similar ways, but the first uses a "median"-type measure at the participant level, while the latter uses the mean. The results are presented in three formats: (i) by participants, suitable for entry into further statistical analysis; (ii) grand means by condition; and (iii) completed CDF plots in Excel charts.
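The "standard" CDF analysis can be approximated outside Excel as well; a minimal Python stand-in (not part of CDF-XL; function names and toy data invented) computes per-participant percentiles of the RT distribution and averages them across participants:

```python
import numpy as np

def rt_percentiles(rts, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Percentile points of one participant's RTs in one condition.

    A minimal stand-in for the 'standard' CDF analysis: compute the same
    percentile points for every participant/condition, then average
    across participants to obtain the group-level CDF.
    """
    return np.percentile(np.asarray(rts, dtype=float),
                         [100 * p for p in probs])

# Toy data: two participants in one condition (RTs in ms, invented).
p1 = [320, 350, 400, 430, 500, 610]
p2 = [300, 340, 380, 420, 480, 590]
group_cdf = np.mean([rt_percentiles(p1), rt_percentiles(p2)], axis=0)
print(group_cdf)
```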
Transforming Functions by Rescaling Axes
ERIC Educational Resources Information Center
Ferguson, Robert
2017-01-01
Students are often asked to plot a generalised parent function from their knowledge of a parent function. One approach is to sketch the parent function, choose a few points on the parent function curve, transform and plot these points, and use the transformed points as a guide to sketching the generalised parent function. Another approach is to…
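The point-mapping approach can be made concrete; a short Python sketch (names and values invented) maps a point on a parent function y = f(x) to the corresponding point on the generalised curve y = a·f(b(x − c)) + d:

```python
def transform_point(x0, y0, a=1.0, b=1.0, c=0.0, d=0.0):
    """Map a point (x0, y0) on y = f(x) to the corresponding point on
    y = a*f(b*(x - c)) + d.

    Horizontal changes invert: x is scaled by 1/b and then shifted by c;
    vertical changes apply directly: y is scaled by a and shifted by d.
    """
    return x0 / b + c, a * y0 + d

# Example with the parent f(x) = x**2: the point (2, 4) lies on it.
x, y = transform_point(2, 4, a=3, b=2, c=1, d=-5)
print(x, y)
# Check the image satisfies the transformed equation y = 3*(2*(x-1))**2 - 5:
print(3 * (2 * (x - 1)) ** 2 - 5 == y)
```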
Design of ultrasonically-activatable nanoparticles using low boiling point perfluorocarbons.
Sheeran, Paul S; Luois, Samantha H; Mullin, Lee B; Matsunaga, Terry O; Dayton, Paul A
2012-04-01
Recently, an interest has developed in designing biomaterials for medical ultrasonics that can provide the acoustic activity of microbubbles, but with improved stability in vivo and a smaller size distribution for extravascular interrogation. One proposed alternative is the phase-change contrast agent. Phase-change contrast agents (PCCAs) consist of perfluorocarbons (PFCs) that are initially in liquid form, but can then be vaporized with acoustic energy. Crucial parameters for PCCAs include their sensitivity to acoustic energy, their size distribution, and their stability, and this manuscript provides insight into the custom design of PCCAs for balancing these parameters. Specifically, the relationship between size, thermal stability and sensitivity to ultrasound as a function of PFC boiling point and ambient temperature is illustrated. Emulsion stability and sensitivity can be 'tuned' by mixing PFCs in the gaseous state prior to condensation. Novel observations illustrate that stable droplets can be generated from PFCs with extremely low boiling points, such as octafluoropropane (b.p. -36.7 °C), which can be vaporized with acoustic parameters lower than previously observed. Results demonstrate the potential for low boiling point PFCs as a useful new class of compounds for activatable agents, which can be tailored to the desired application. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rotondi, Renata; Varini, Elisa
2016-04-01
The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but the results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by failure processes that allow a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katanin, A. A., E-mail: katanin@mail.ru
We consider formulations of the functional renormalization-group (fRG) flow for correlated electronic systems with the dynamical mean-field theory as a starting point. We classify the corresponding renormalization-group schemes into those neglecting one-particle irreducible six-point vertices (with respect to the local Green’s functions) and those neglecting one-particle reducible six-point vertices. The former class is represented by the recently introduced DMF²RG approach [31], but also by the scale-dependent generalization of the one-particle irreducible representation (with respect to local Green’s functions, 1PI-LGF) of the generating functional [20]. The second class is represented by the fRG flow within the dual fermion approach [16, 32]. We compare formulations of the fRG approach in each of these cases and suggest their further application to study 2D systems within the Hubbard model.
Estimate feedstock processability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amorelli, A.; Amos, Y.D.; Halsig, C.P.
1992-06-01
Currently, one of the major environmental pressures is to further reduce sulfur levels in middle distillate products. This paper reports that the key to this is understanding the reactivities of individual sulfur components in the feedstocks to be treated. The major sulfur species in middle distillates are aromatic compounds, predominantly benzothiophenes and dibenzothiophenes. However, in straight-run materials, significant quantities of aliphatic sulfur compounds and further higher-boiling benzothiophenes are also expected. Simultaneous simulated distillation with a gas chromatograph microwave-induced plasma atomic emission detector (SIMDIS/AED) is used for middle distillate characterization of sulfur distribution as a function of boiling point. It is able to discriminate between middle distillate feed types, such as cracked and straight-run gas oils, and has shown that similar feeds with different total sulfur contents (unevenly distributed throughout a feedstock) have the same normalized sulfur distribution.
Kota, V K B; Chavda, N D; Sahu, R
2006-04-01
Interacting many-particle systems with a mean-field one-body part plus a chaos-generating random two-body interaction of strength λ exhibit Poisson to Gaussian orthogonal ensemble and Breit-Wigner (BW) to Gaussian transitions in level fluctuations and strength functions, with transition points marked by λ = λ_c and λ = λ_F, respectively; λ_F > λ_c. For these systems a theory for the matrix elements of one-body transition operators is available, valid in the Gaussian domain with λ > λ_F, in terms of orbital occupation numbers, level densities, and an integral involving a bivariate Gaussian in the initial and final energies. Here we show that, using a bivariate t distribution, the theory extends from the Gaussian regime down into the BW regime, up to λ = λ_c. This is well tested in numerical calculations for 6 spinless fermions in 12 single-particle states.
NASA Astrophysics Data System (ADS)
Henriquez, Miguel F.; Thompson, Derek S.; Kenily, Shane; Khaziev, Rinat; Good, Timothy N.; McIlvain, Julianne; Siddiqui, M. Umair; Curreli, Davide; Scime, Earl E.
2016-10-01
Understanding particle distributions in plasma boundary regions is critical to predicting plasma-surface interactions. Ions in the presheath exhibit complex behavior because of collisions and due to the presence of boundary-localized electric fields. Complete understanding of particle dynamics is necessary for understanding the critical problems of tokamak wall loading and Hall thruster channel wall erosion. We report measurements of 3D argon ion velocity distribution functions (IVDFs) in the vicinity of an absorbing boundary oriented obliquely to a background magnetic field. Measurements were obtained via argon ion laser induced fluorescence throughout a spatial volume upstream of the boundary. These distribution functions reveal kinetic details that provide a point-to-point check on particle-in-cell and 1D3V Boltzmann simulations. We present the results of this comparison and discuss some implications for plasma boundary interaction physics.
Injection System for Multi-Well Injection Using a Single Pump
Wovkulich, Karen; Stute, Martin; Protus, Thomas J.; Mailloux, Brian J.; Chillrud, Steven N.
2015-01-01
Many hydrological and geochemical studies rely on data resulting from injection of tracers and chemicals into groundwater wells. The even distribution of liquids to multiple injection points can be challenging or expensive, especially when using multiple pumps. An injection system was designed using one chemical metering pump to evenly distribute the desired influent simultaneously to 15 individual injection points through an injection manifold. The system was constructed with only one metal part contacting the fluid due to the low pH of the injection solutions. The injection manifold system was used during a three-month pilot scale injection experiment at the Vineland Chemical Company Superfund site. During the two injection phases of the experiment (Phase I = 0.27 L/min total flow, Phase II = 0.56 L/min total flow), flow measurements were made 20 times over three months; an even distribution of flow to each injection well was maintained (RSD <4%). This durable system is expandable to at least 16 injection points and should be adaptable to other injection experiments that require distribution of air-stable liquids to multiple injection points with a single pump. PMID:26140014
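The evenness criterion quoted above (RSD < 4%) is a one-line computation; a Python sketch with invented flow readings for 15 injection points:

```python
import numpy as np

def relative_std(flows):
    """Relative standard deviation (%) of per-well flow rates, the
    evenness metric quoted in the abstract (RSD < 4%)."""
    flows = np.asarray(flows, dtype=float)
    return 100.0 * flows.std(ddof=1) / flows.mean()

# Hypothetical readings for 15 injection points (L/min); values invented.
flows = [0.0180, 0.0182, 0.0179, 0.0181, 0.0178, 0.0183, 0.0180,
         0.0179, 0.0181, 0.0180, 0.0182, 0.0178, 0.0181, 0.0180, 0.0179]
print(relative_std(flows))
```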
Formation and distribution of fragments in the spontaneous fission of 240Pu
NASA Astrophysics Data System (ADS)
Sadhukhan, Jhilam; Zhang, Chunli; Nazarewicz, Witold; Schunck, Nicolas
2017-12-01
Background: Fission is a fundamental decay mode of heavy atomic nuclei. The prevalent theoretical approach is based on mean-field theory and its extensions where fission is modeled as a large amplitude motion of a nucleus in a multidimensional collective space. One of the important observables characterizing fission is the charge and mass distribution of fission fragments. Purpose: The goal of this Rapid Communication is to better understand the structure of fission fragment distributions by investigating the competition between the static structure of the collective manifold and the stochastic dynamics. In particular, we study the characteristics of the tails of yield distributions, which correspond to very asymmetric fission into a very heavy and a very light fragment. Methods: We use the stochastic Langevin framework to simulate the nuclear evolution after the system tunnels through the multidimensional potential barrier. For a representative sample of different initial configurations along the outer turning-point line, we define effective fission paths by computing a large number of Langevin trajectories. We extract the relative contribution of each such path to the fragment distribution. We then use nucleon localization functions along effective fission pathways to analyze the characteristics of prefragments at prescission configurations. Results: We find that non-Newtonian Langevin trajectories, strongly impacted by the random force, produce the tails of the fission fragment distribution of 240Pu. The prefragments deduced from nucleon localizations are formed early and change little as the nucleus evolves towards scission. On the other hand, the system contains many nucleons that are not localized in the prefragments even near the scission point. Such nucleons are distributed rapidly at scission to form the final fragments. 
Fission prefragments extracted from direct integration of the density and from the localization functions typically differ by more than 30 nucleons even near scission. Conclusions: Our Rapid Communication shows that only theoretical models of fission that account for some form of stochastic dynamics can give an accurate description of the structure of fragment distributions. In particular, it should be nearly impossible to predict the tails of these distributions within the standard formulation of time-dependent density-functional theory. At the same time, the large number of nonlocalized nucleons during fission suggests that adiabatic approaches where the interplay between intrinsic excitations and collective dynamics is neglected are ill suited to describe fission fragment properties, in particular, their excitation energy.
NASA Astrophysics Data System (ADS)
Gsponer, Andre
2009-01-01
The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the self-energy of a point electric charge is worked out in detail: the Coulomb potential and field are defined as Colombeau generalized functions, and integrals of nonlinear expressions corresponding to products of distributions (such as the square of the Coulomb field and the square of the delta function) are calculated. Finally, the methods introduced in Gsponer (2007 Eur. J. Phys. 28 267, 2007 Eur. J. Phys. 28 1021 and 2007 Eur. J. Phys. 28 1241) to deal with point-like singularities in classical electrodynamics are confirmed.
Analysis of the proton longitudinal structure function from the gluon distribution function
NASA Astrophysics Data System (ADS)
Boroun, G. R.; Rezaei, B.
2012-11-01
We make a critical, next-to-leading-order study of the relationship between the longitudinal structure function F_L and the gluon distribution proposed in Cooper-Sarkar et al. (Z. Phys. C 39:281, 1988; Acta Phys. Pol. B 34:2911, 2003), which is frequently used to extract the gluon distribution from the proton longitudinal structure function at small x. The gluon density is obtained by expansion at particular choices of the expansion point and compared with the hard-Pomeron behavior for the gluon density. Comparisons with H1 data are made and predictions for the proposed best approach are also provided.
The general Lie group and similarity solutions for the one-dimensional Vlasov-Maxwell equations
NASA Technical Reports Server (NTRS)
Roberts, D.
1985-01-01
The general Lie point transformation group and the associated reduced differential equations and similarity forms for the solutions are derived here for the coupled (nonlinear) Vlasov-Maxwell equations in one spatial dimension. The case of one species in a background is shown to admit a larger group than the multispecies case. Previous exact solutions are shown to be special cases of the above solutions, and many of the new solutions are found to constrain the form of the distribution function much more than, for example, the BGK solutions do. The individual generators of the Lie group are used to find the possible subgroups. Finally, a simple physical argument is given to show that the asymptotic solution for a one-species, one-dimensional plasma is one of the general similarity solutions.
Statistics of voids in hierarchical universes
NASA Technical Reports Server (NTRS)
Fry, J. N.
1986-01-01
As an alternative to N-point galaxy correlation function statistics, one can consider the distribution of holes, i.e., the probability that a volume of given size and shape is empty of galaxies. The probability of voids arising from a variety of hierarchical patterns of clustering is considered, and the results are compared with numerical simulations and with observations. A scaling relation required by the hierarchical pattern of higher order correlation functions is seen to be obeyed in the simulations, and the numerical results show a clear difference between neutrino models and cold-particle models; voids are more likely in neutrino universes. Observational data cannot yet distinguish between the models, but are close to being able to do so.
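The void probability described above can be estimated directly by Monte Carlo: drop random spheres on the point set and count the fraction that land empty. A minimal sketch in Python (illustrative, not the paper's method; for an unclustered Poisson process the estimate can be checked against the exact P0 = exp(-nV)):

```python
import random
import math

def void_probability(points, radius, n_trials=2000, seed=1):
    """Estimate the void probability function P0(r): the chance that a
    randomly placed sphere of the given radius contains no points.
    Points live in the unit cube; sphere centres are kept away from the
    boundary to avoid edge effects."""
    rng = random.Random(seed)
    r2 = radius * radius
    lo, hi = radius, 1.0 - radius
    empty = 0
    for _ in range(n_trials):
        cx, cy, cz = (rng.uniform(lo, hi) for _ in range(3))
        if not any((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 < r2
                   for x, y, z in points):
            empty += 1
    return empty / n_trials

# Sanity check against the Poisson result P0 = exp(-n*V):
rng = random.Random(0)
pts = [(rng.random(), rng.random(), rng.random()) for _ in range(500)]
r = 0.05
p0 = void_probability(pts, r)
expected = math.exp(-500 * (4.0 / 3.0) * math.pi * r ** 3)
```

For clustered (hierarchical) point sets, P0 exceeds the Poisson value at fixed density, which is the effect the abstract exploits.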
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in psychology. Reaction times are often modeled with the ex-Gaussian distribution because it provides a good fit to many empirical datasets. The complexity of this distribution makes the use of computational tools essential; therefore, there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed for Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages of and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in the literature in this area). The analysis allows one to identify outliers in the empirical datasets and to determine judiciously whether there is a need for data trimming and, if so, at which points it should be done.
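The ex-Gaussian density itself is straightforward to evaluate from the standard exponentially-modified-Gaussian formula. The sketch below is plain Python and does not reproduce the ExGUtils API (the function name is ours); it also checks numerically that the mean comes out as mu + tau:

```python
import math

def exgauss_pdf(x, mu, sigma, tau):
    """Density of the ex-Gaussian: the sum of a Normal(mu, sigma)
    variable and an independent Exponential(tau) variable."""
    lam = 1.0 / tau
    arg = lam / 2.0 * (2.0 * mu + lam * sigma ** 2 - 2.0 * x)
    z = (mu + lam * sigma ** 2 - x) / (sigma * math.sqrt(2.0))
    return lam / 2.0 * math.exp(arg) * math.erfc(z)

# Crude numerical check on typical reaction-time scales (milliseconds):
# the density should integrate to 1 and have mean mu + tau.
mu, sigma, tau = 300.0, 30.0, 100.0
xs = [mu - 200 + 0.5 * i for i in range(4000)]   # integration grid
mass = sum(exgauss_pdf(x, mu, sigma, tau) for x in xs) * 0.5
mean = sum(x * exgauss_pdf(x, mu, sigma, tau) for x in xs) * 0.5
```

The positive skew comes entirely from the exponential component, which is why tau is often read as the "decision" tail of the reaction-time distribution.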
Electromagnetic Compatibility Testing Studies
NASA Technical Reports Server (NTRS)
Trost, Thomas F.; Mitra, Atindra K.
1996-01-01
This report discusses results on analytical models, and on measurement and simulation of statistical properties, from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna to D-dot sensor, were experimentally validated in our chamber. Two examples are presented of the measurement and calculation of chamber Q, one for each of the models. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls, and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points validate a theoretical spatial correlation function on the chamber walls, and also yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation, employing a rectangular cavity with a moving wall, shows agreement with the measurements. We find that the lowest frequency at which the theoretical spatial correlation function is valid in our chamber is considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions are made for future studies related to EMC testing.
Murgich, Juan; Franco, Héctor J; San-Blas, Gioconda
2006-08-24
The molecular charge distribution of flucytosine (4-amino-5-fluoro-2-pyrimidone), uracil, 5-fluorouracil, and thymine was studied by means of density functional theory (DFT) calculations. The resulting distributions were analyzed by means of the atoms in molecules (AIM) theory. Bonds were characterized through vectors formed with the charge density value, its Laplacian, and the bond ellipticity calculated at the bond critical point (BCP). Within each set of C=O, C-H, and N-H bonds, these vectors showed little dispersion. C-C bonds formed three different subsets: one with a significant degree of double bonding, a second corresponding to single bonds with a finite ellipticity produced by hyperconjugation, and a third formed by a pure single bond. In N-C bonds, a decrease in bond length (an increase in double bond character) was not reflected in an increase in ellipticity, as it was in all the C-C bonds studied. It was also found that substitution influenced the N-C, C-O, and C-C bond ellipticities much more than the density and its Laplacian at the BCP. The Laplacian of the charge density pointed to the existence of both bonding and nonbonding maxima in the valence shell charge concentration of N, O, and F, while only bonding maxima were found for the C atoms. The nonbonding maxima were related to the sites for electrophilic attack and H bonding in O and N, while sites of nucleophilic attack were suggested by the holes in the valence shell of the C atoms of the carbonyl groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEneaney, William M.
2004-08-15
Stochastic games under imperfect information are typically computationally intractable, even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the form of the problem considered here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional and is obtained via dynamic programming, but it has a nonstandard form due to the necessity of an expanded state variable. Under a saddle-point assumption, certainty equivalence is obtained and the proposed function is indeed an information state.
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution, using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This optimistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
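For the log-normal case, the simplest poverty measure, the headcount ratio below a poverty line, has a closed form, which makes the claimed monotonicities easy to check numerically. A sketch (headcount ratio only, not the authors' modified deprivation function; the numbers are illustrative):

```python
import math

def headcount_poverty(mean_income, sigma, poverty_line):
    """Fraction of the population below the poverty line when income is
    log-normal with the given mean and log-scale parameter sigma.
    Uses mu = ln(mean) - sigma^2/2 so that exp(mu + sigma^2/2) = mean."""
    mu = math.log(mean_income) - sigma ** 2 / 2.0
    z = (math.log(poverty_line) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Rising mean income lowers poverty; a wider distribution (larger sigma
# at fixed mean) raises it, for a poverty line below the mean.
h_base = headcount_poverty(100.0, 0.5, 40.0)
h_richer = headcount_poverty(150.0, 0.5, 40.0)
h_wider = headcount_poverty(100.0, 0.9, 40.0)
```

This reproduces the log-normal behaviour described in the abstract; the Pareto case, with its inflexion point, requires the authors' full poverty function.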
NASA Astrophysics Data System (ADS)
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2018-04-01
We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
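For reference, the standard 5-parameter HOD that GRAND-HOD generalizes has a simple closed form: an erf step for centrals and a power law for satellites. A minimal sketch with illustrative (not fitted) parameter values; the function names are ours, not the GRAND-HOD API:

```python
import math

def mean_central(M, logMmin, sigma_logM):
    """Mean number of central galaxies in a halo of mass M (Msun/h),
    from the standard 5-parameter HOD: a smoothed step in log M."""
    return 0.5 * (1.0 + math.erf((math.log10(M) - logMmin) / sigma_logM))

def mean_satellite(M, logMmin, sigma_logM, logM0, logM1, alpha):
    """Mean number of satellites: a power law above the cutoff mass M0,
    modulated by the central occupation."""
    M0, M1 = 10.0 ** logM0, 10.0 ** logM1
    if M <= M0:
        return 0.0
    return mean_central(M, logMmin, sigma_logM) * ((M - M0) / M1) ** alpha

# Illustrative parameter values:
params = dict(logMmin=13.3, sigma_logM=0.85, logM0=13.2, logM1=14.1, alpha=1.0)
n_cen = mean_central(1e14, params["logMmin"], params["sigma_logM"])
n_sat = mean_satellite(1e14, **params)
```

The generalizations in the abstract (satellite profile, velocity bias, assembly bias) modify how galaxies are drawn given these mean occupations, not the mean occupations themselves.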
Factors Associated with the Risk of Falls of Nursing Home Residents Aged 80 or Older.
Álvarez Barbosa, Francisco; Del Pozo-Cruz, Borja; Del Pozo-Cruz, Jesús; Alfonso-Rosa, Rosa M; Sañudo Corrales, Borja; Rogers, Michael E
2016-01-01
Falls are the leading cause of mortality and morbidity in older adults and represent one of the major and most costly public health problems worldwide. The aim was to evaluate the influence of lower limb muscle performance, static balance, functional independence, and quality of life on fall risk as assessed with the timed up and go (TUG) test. Cross-sectional study. Fifty-two residents aged 80 or older were assessed and assigned to one of two study groups (no risk of falls; risk of falls) according to the time to complete the TUG test. A Kistler force platform and a linear transducer were used to determine lower limb muscle performance. Postural stability (static balance) was measured by recording the center of pressure. The EuroQol-5 dimension was used to assess health-related quality of life, and the Barthel index was used to examine functional status. Student's t-test was performed to evaluate the differences between groups. Correlations between variables were analyzed using the Spearman or Pearson coefficient. ROC (receiver operating characteristic) analysis was used to determine the cut-off points related to a decrease in the risk of a fall. Participants in the no-fall-risk group showed better lower limb performance, quality of life, and functional status. Cut-off points were determined for each outcome. Risk of falls in nursing home residents over the age of 80 is associated with lower limb muscle performance, functional status, and quality of life. Cut-off points can be used by clinicians when working toward fall prevention and could help in determining the optimal lower limb muscle performance level for preventing falls. © 2015 Association of Rehabilitation Nurses.
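The ROC cut-off step can be sketched compactly: scan each observed score as a candidate threshold and keep the one maximizing Youden's J = sensitivity + specificity - 1 (a common criterion; the study does not state which it used). The TUG times below are hypothetical, not the study's data:

```python
def best_cutoff(scores_pos, scores_neg):
    """Pick the cut-off maximizing Youden's J, treating scores at or
    above the threshold as positive predictions (e.g. longer TUG time
    = at risk of falling).  Returns (J, cutoff, sensitivity, specificity)."""
    best = None
    for c in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s >= c for s in scores_pos) / len(scores_pos)
        spec = sum(s < c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best

# Hypothetical TUG times (seconds) for fallers vs. non-fallers:
fallers = [16.2, 18.5, 21.0, 14.9, 19.3, 22.8]
non_fallers = [9.1, 11.4, 12.0, 10.2, 13.5, 12.8]
j, cutoff, sens, spec = best_cutoff(fallers, non_fallers)
```

With perfectly separated groups, as in this toy data, the scan returns J = 1 at the smallest positive-group score; real data trade sensitivity against specificity.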
Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto
2005-08-01
Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point that has not been visited in the preceding mu steps (the deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (the transient) and a final periodic part of p steps (the attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^{(mu,d)}(t, p) is the relevant quantity, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_infinity^{(1,d)}(t, p) = [Gamma(1 + I_d^{-1}) (t + I_d^{-1}) / Gamma(t + p + I_d^{-1})] delta_{p,2}, where t = 0, 1, 2, ..., Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d -> infinity, and the random map model, which even for mu = 0 presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) proportional to p^{-1}]: S_N^{(0,rm)}(t, p) = Gamma(N) / {Gamma[N + 1 - (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
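The walk itself is easy to simulate. A sketch for the case mu = 1, where the move rule reduces to the nearest-neighbour map and the attractor is always a pair of mutually nearest neighbours, which is the Kronecker delta factor delta(p,2) in the abstract (the transient/period bookkeeping here is one common convention and may differ from the paper's by a fixed offset):

```python
import random

def tourist_walk(points, start, mu=1, max_steps=10000):
    """Deterministic tourist walk: from the current point, move to the
    nearest point not visited in the preceding mu steps.  Returns the
    transient length t and the attractor period p, found by detecting
    the first repeated memory state."""
    def nearest(forbidden):
        cx, cy = points[forbidden[-1]]            # current position
        best, best_d = None, None
        for i, (x, y) in enumerate(points):
            if i in forbidden:
                continue
            d = (x - cx) ** 2 + (y - cy) ** 2
            if best is None or d < best_d:
                best, best_d = i, d
        return best

    path = [start]
    seen = {}                                     # memory state -> step index
    for step in range(max_steps):
        state = tuple(path[-mu:])                 # the window fixes the next move
        if state in seen:
            return seen[state], step - seen[state]   # (t, p)
        seen[state] = step
        path.append(nearest(path[-mu:]))
    raise RuntimeError("no attractor found within max_steps")

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(100)]
t, p = tourist_walk(pts, start=0)
```

Running this from every start point and histogramming (t, p) is how the analytical joint distribution above can be checked numerically.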
Binder model system to be used for determination of prepolymer functionality
NASA Technical Reports Server (NTRS)
Martinelli, F. J.; Hodgkin, J. H.
1971-01-01
Development of a method for determining the functionality distribution of prepolymers used for rocket binders is discussed. Research has been concerned with accurately determining the gel point of a model polyester system containing a single trifunctional crosslinker, and the application of these methods to more complicated model systems containing a second trifunctional crosslinker, monofunctional ingredients, or a higher functionality crosslinker. Correlations of observed with theoretical gel points for these systems would allow the methods to be applied directly to prepolymers.
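For the idealized single-crosslinker case, classical Flory-Stockmayer theory gives the gel point in closed form: gelation occurs at critical conversion alpha_c = 1/(f - 1) for an f-functional branch unit. A one-line sketch (the report's mixed-functionality and monofunctional-diluent systems require the more general theory):

```python
def flory_gel_conversion(f):
    """Flory-Stockmayer critical conversion for gelation of an ideal
    f-functional system: alpha_c = 1 / (f - 1).  Below f = 3 no
    network can form, so gelation never occurs."""
    if f <= 2:
        raise ValueError("gelation requires functionality greater than 2")
    return 1.0 / (f - 1)

# A trifunctional crosslinker (f = 3) gels at 50% conversion of its
# reactive groups; higher functionality gels at lower conversion.
```

Comparing conversions like these with observed gel points is the correlation step the report describes for validating the method on model systems.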
Advanced Inverter Functions and Communication Protocols for Distribution Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Palmintier, Bryan; Baggu, Murali
2016-05-05
This paper aims at identifying the advanced features required by distribution management system (DMS) service providers to bring inverter-connected distributed energy resources into use as an intelligent grid resource. This work explores the standard functions needed in the future DMS for enterprise integration of distributed energy resources (DER). Important DMS functionalities, such as DER management in aggregate groups, including the discovery of capabilities, status monitoring, and dispatch of real and reactive power, are addressed in this paper. It is intended to provide the industry with a point of reference for DER integration with other utility applications and to provide guidance to research and standards development organizations.
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and the Gaussianity of initial conditions, independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm that, in practice, achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
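The conventional finite-sampling estimator that the paper improves upon can be sketched in a few lines (illustrative Python on a 2-D toy catalog; the paper's algorithm is equivalent to taking the number of sampling cells to infinity, eliminating the sampling error this version suffers from):

```python
import random
from collections import Counter

def counts_in_cells(points, cell_size, n_cells=5000, seed=7):
    """Conventional counts-in-cells estimator: drop n_cells random
    square cells on the unit square and tally P_N, the fraction of
    cells containing exactly N galaxies."""
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(n_cells):
        x0 = rng.uniform(0.0, 1.0 - cell_size)
        y0 = rng.uniform(0.0, 1.0 - cell_size)
        n = sum(x0 <= x < x0 + cell_size and y0 <= y < y0 + cell_size
                for x, y in points)
        tally[n] += 1
    return {n: c / n_cells for n, c in sorted(tally.items())}

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(300)]
pn = counts_in_cells(pts, cell_size=0.1)
```

The scatter of P_N between reruns with different cell seeds is exactly the "measurement error" the abstract refers to.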
Visualizing Distributions from Multi-Return Lidar Data to Understand Forest Structure
NASA Technical Reports Server (NTRS)
Kao, David L.; Kramer, Marc; Luo, Alison; Dungan, Jennifer; Pang, Alex
2004-01-01
Spatially distributed probability density functions (pdfs) are becoming relevant to Earth scientists and ecologists because of stochastic models and new sensors that provide numerous realizations or data points per unit area. One source of such data is multi-return airborne lidar, a type of laser that records multiple returns for each pulse of light sent towards the ground. Data from multi-return lidar are a vital tool in helping us understand the structure of forest canopies over large extents. This paper presents several new visualization tools that allow scientists to rapidly explore, interpret, and discover characteristic distributions within the entire spatial field. The major contribution of this work is a paradigm shift that allows ecologists to think of and analyze their data in terms of the distribution. This reveals information on the modality and shape of the distribution that was previously inaccessible. The tools allow scientists to depart from traditional parametric statistical analyses and to associate multimodal distribution characteristics with forest structures. Examples are given using data from High Island, southeast Alaska.
Resilience-based optimal design of water distribution network
NASA Astrophysics Data System (ADS)
Suribabu, C. R.
2017-11-01
Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may yield an economical network configuration that is not a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures of the ability of a network to withstand failure scenarios. To improve the resilience of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates a procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resilience indices.
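Combining objectives with distinct metrics can be sketched as a min-max scaling followed by a weighted sum (one common normalization scheme, shown here as an illustration; the paper's exact procedure may differ, and the bounds and weight below are arbitrary):

```python
def combined_objective(cost, resilience, cost_bounds, res_bounds, w=0.5):
    """Weighted single objective for the two-objective pipe-sizing
    problem: minimize normalized cost while maximizing normalized
    resilience.  Both terms are scaled to [0, 1] using known bounds so
    that distinct metrics (currency vs. a resilience index) can be mixed.
    Lower returned values are better."""
    c_lo, c_hi = cost_bounds
    r_lo, r_hi = res_bounds
    c_norm = (cost - c_lo) / (c_hi - c_lo)
    r_norm = (resilience - r_lo) / (r_hi - r_lo)
    return w * c_norm + (1.0 - w) * (1.0 - r_norm)

# A cheaper, more resilient design should score lower than a costlier,
# less resilient one (illustrative numbers):
f_good = combined_objective(2.0e6, 0.8, (1.0e6, 5.0e6), (0.2, 1.0))
f_poor = combined_objective(4.0e6, 0.4, (1.0e6, 5.0e6), (0.2, 1.0))
```

A scalarized objective like this is what a single-objective optimizer such as differential evolution can then minimize directly.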
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1990-01-01
The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
Simulation study of entropy production in the one-dimensional Vlasov system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie
2016-07-15
The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy production in the cases of a random field, linear Landau damping, and the bump-on-tail instability is computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing coarse-grain averaging length. When the distribution function differs only slightly from a Maxwellian distribution, the converged value agrees with the result computed using the definition of thermodynamic entropy. The choice of averaging length used to compute the coarse-grain averaged distribution function is discussed.
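The coarse-grain average and the resulting entropy increase can be illustrated in a few lines (a 1-D toy on a sampled distribution, not the Vlasov solver): fine-grained Vlasov dynamics conserves the Gibbs entropy exactly, but box-averaging a filamented distribution raises it, because -f ln f is concave.

```python
import math

def coarse_grain(f, width):
    """Box-average a sampled distribution function over blocks of
    `width` grid cells, keeping the grid length (ragged tail dropped)."""
    out = []
    for i in range(0, len(f) - len(f) % width, width):
        block = f[i:i + width]
        out.extend([sum(block) / width] * width)
    return out

def entropy(f, dv):
    """Gibbs entropy S = -sum f ln f dv on the grid (f >= 0)."""
    return -sum(fi * math.log(fi) * dv for fi in f if fi > 0.0)

# Filamentation toy: f alternates between 0.9 and 0.1; averaging over
# pairs washes the filaments out and increases S.
f_fine = [0.9, 0.1] * 50
f_cg = coarse_grain(f_fine, 2)
```

Varying the averaging width and watching the entropy production settle mimics the convergence study described in the abstract.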
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at delta(t) of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median delta(t) of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
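For the point-mass case, the delay between the two images has a standard closed form (see e.g. Schneider, Ehlers & Falco). A sketch showing that a ~10^6 solar-mass lens at moderate impact parameter gives delays of the tens-of-seconds scale quoted above (the specific mass and impact parameter are illustrative):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
MSUN = 1.989e30        # kg

def point_lens_delay(mass_msun, y, z_lens=0.0):
    """Time delay (seconds) between the two images of a point-mass lens,
    for source offset y in units of the Einstein radius:
    dt = (4GM/c^3)(1+z_l) [ y*s/2 + ln((s+y)/(s-y)) ],  s = sqrt(y^2+4)."""
    s = math.sqrt(y * y + 4.0)
    geom = 0.5 * y * s + math.log((s + y) / (s - y))
    return 4.0 * G * mass_msun * MSUN / C ** 3 * (1.0 + z_lens) * geom

# A 1e6 Msun lens at y = 1 gives a delay of order tens of seconds:
dt = point_lens_delay(1e6, 1.0)
```

The prefactor 4GM/c^3 sets the scale (about 20 microseconds per solar mass), which is why only fairly massive compact lenses produce resolvable burst "echoes".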
Nonlinear response from transport theory and quantum field theory at finite temperature
NASA Astrophysics Data System (ADS)
Carrington, M. E.; Defu, Hou; Kobes, R.
2001-07-01
We study the nonlinear response in weakly coupled hot φ4 theory. We obtain an expression for a quadratic shear viscous response coefficient using two different formalisms: transport theory and response theory. The transport theory calculation is done by assuming a local equilibrium form for the distribution function and expanding in the gradient of the local four dimensional velocity field. By performing a Chapman-Enskog expansion on the Boltzmann equation we obtain a hierarchy of equations for the coefficients of the expanded distribution function. To do the response theory calculation we use Zubarev's techniques in nonequilibrium statistical mechanics to derive a generalized Kubo formula. Using this formula allows us to obtain the quadratic shear viscous response from the three-point retarded Green function of the viscous shear stress tensor. We use the closed time path formalism of real time finite temperature field theory to show that this three-point function can be calculated by writing it as an integral equation involving a four-point vertex. This four-point vertex can in turn be obtained from an integral equation which represents the resummation of an infinite series of ladder and extended-ladder diagrams. The connection between transport theory and response theory is made when we show that the integral equation for this four-point vertex has exactly the same form as the equation obtained from the Boltzmann equation for the coefficient of the quadratic term of the gradient expansion of the distribution function. We conclude that calculating the quadratic shear viscous response using transport theory and keeping terms that are quadratic in the gradient of the velocity field in the Chapman-Enskog expansion of the Boltzmann equation is equivalent to calculating the quadratic shear viscous response from response theory using the next-to-linear response Kubo formula, with a vertex given by an infinite resummation of ladder and extended-ladder diagrams.
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2011-06-01
Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
NASA Astrophysics Data System (ADS)
Zhang, J.; Liu, Q.; Li, X.; Niu, H.; Cai, E.
2015-12-01
In recent years, wireless sensor networks (WSNs) have emerged as a way to collect Earth observation data at relatively low cost and labor load, although their observations are still point data. To learn the spatial distribution of a land surface parameter, interpolating the point data is necessary. Taking soil moisture (SM) as an example, its spatial distribution is critical information for agricultural management and for hydrological and ecological research. This study developed a method to interpolate WSN-measured SM to acquire its spatial distribution in a 5 km x 5 km study area located in the middle reaches of the Heihe River, western China. As SM is related to many factors, such as topography, soil type, and vegetation, even the WSN observation grid is not dense enough to capture the SM distribution pattern. Our idea is to revise the traditional kriging algorithm by introducing spectral variables, i.e., vegetation index (VI) and albedo, from satellite imagery as supplementary information to aid the interpolation. Thus, the new extended-kriging algorithm operates on the combined spatial and spectral space. To run the algorithm, we first need to estimate the SM variance function, which is also extended to the combined space. As the number of WSN samples in the study area is not enough to gather robust statistics, we have to assume that the SM variance function is invariant over time. The variance function is therefore estimated from an SM map derived from the airborne CASI/TASI images acquired on July 10, 2012, and then applied to interpolate WSN data in that season. Data analysis indicates that the new algorithm can provide more detail on the spatial variation of SM. Leave-one-out cross-validation is then adopted to estimate the interpolation accuracy. Although a reasonable accuracy is achieved, the result is not yet satisfactory. Besides improving the algorithm, the uncertainties in WSN measurements may also need to be controlled in our further work.
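The idea of letting satellite covariates inform the interpolation can be illustrated with a much simpler scheme than kriging: inverse-distance weighting in the combined spatial-spectral space. This is a stand-in for the extended-kriging algorithm, not a reproduction of it; the spectral weight and the station values below are arbitrary.

```python
def extended_idw(stations, query, w_spec=0.5, power=2.0):
    """Inverse-distance interpolation in a combined space: the distance
    mixes spatial separation with separation in satellite covariates
    (e.g. NDVI and albedo), so a query pixel borrows more from stations
    that are both nearby and spectrally similar.
    stations: list of ((x, y, ndvi, albedo), soil_moisture);
    query: (x, y, ndvi, albedo)."""
    qx, qy, qv, qa = query
    num = den = 0.0
    for (x, y, v, a), sm in stations:
        d2 = ((x - qx) ** 2 + (y - qy) ** 2
              + w_spec * ((v - qv) ** 2 + (a - qa) ** 2))
        if d2 == 0.0:
            return sm                      # exact hit on a station
        wgt = 1.0 / d2 ** (power / 2.0)
        num += wgt * sm
        den += wgt
    return num / den

stations = [((0.0, 0.0, 0.5, 0.2), 0.30), ((1.0, 1.0, 0.2, 0.3), 0.10)]
sm = extended_idw(stations, (0.1, 0.1, 0.5, 0.2))
```

Kriging replaces these ad hoc weights with ones derived from the fitted variance (variogram) function, which is exactly the part the paper extends to the combined space.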
BINARY CORRELATIONS IN IONIZED GASES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.; Taylor, H.S.
1961-01-01
An equation of evolution for the binary distribution function in a classical, homogeneous, nonequilibrium plasma was derived. It is shown that the asymptotic (long-time) solution of this equation is the Debye distribution, thus providing a rigorous dynamical derivation of the equilibrium distribution. This proof is free from the fundamental conceptual difficulties of conventional equilibrium derivations. Out of equilibrium, a closed formula was obtained for the long-lived correlations in terms of the momentum distribution function. These results should form an appropriate starting point for a rigorous theory of transport phenomena in plasmas, including the effect of molecular correlations.
NASA Astrophysics Data System (ADS)
Kjellander, Roland
2006-04-01
It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r^-6 interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r^-1 and the dispersion r^-6 potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r^-6 potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r^-8, for large r. The pair distribution function acquires, at the same time, an r^-10 decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r^-6 interaction is switched on. When the Coulomb interaction is turned off but the dispersion r^-6 pair potential is kept, the decay of the pair distribution function for large r goes over from the r^-10 to an r^-6 behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.
Research on distributed optical fiber sensing data processing method based on LabVIEW
NASA Astrophysics Data System (ADS)
Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing
2018-01-01
Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method of distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system, developed in LabVIEW, adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values of the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
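The wavelet-denoising step can be sketched in a few lines. The following is a minimal single-level Haar transform with soft thresholding in Python; it illustrates the general idea only, since the paper's system is implemented in LabVIEW and the abstract does not specify the wavelet family, decomposition depth, or threshold rule.

```python
def haar_denoise(signal, threshold):
    """Single-level Haar wavelet denoising with soft thresholding.

    Minimal sketch of the wavelet-denoising idea; the real system may use
    a different wavelet, more decomposition levels, and another threshold.
    Assumes an even-length input for simplicity.
    """
    assert len(signal) % 2 == 0, "even-length input assumed"
    s2 = 0.5 ** 0.5
    # Forward transform: pairwise averages (approximation) and differences (detail).
    approx = [(a + b) * s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s2 for a, b in zip(signal[::2], signal[1::2])]

    def soft(x):  # soft thresholding shrinks small (noise-dominated) coefficients to zero
        if x > threshold:
            return x - threshold
        if x < -threshold:
            return x + threshold
        return 0.0

    detail = [soft(d) for d in detail]
    # Inverse single-level Haar transform.
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s2)
        out.append((a - d) * s2)
    return out
```

A smooth signal passes through unchanged, while high-frequency alternation below the threshold is suppressed.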
Analysis of liquid-metal-jet impingement cooling in a corner region and for a row of jets
NASA Technical Reports Server (NTRS)
Siegel, R.
1975-01-01
A conformal mapping method was used to analyze liquid-metal-jet impingement heat transfer. The jet flow region and energy equation are transformed to correspond to uniform flow in a parallel plate channel with nonuniform heat addition along a portion of one wall. The exact solution for the wall-temperature distribution was obtained in the transformed channel, and the results are mapped back into the physical plane. Two geometries are analyzed. One is for a single slot jet directed either into an interior corner formed by two flat plates, or over the external sides of the corner; the flat plates are uniformly heated, and the corner can have various included angles. The heat-transfer coefficient at the stagnation point at the apex of the plates is obtained as a function of the corner angle, and temperature distributions are calculated along the heated walls. The second geometry is an infinite row of uniformly spaced parallel slot jets impinging normally against a uniformly heated plate. The heat-transfer behavior is obtained as a function of the spacing between the jets. Results are given for several jet Peclet numbers from 5 to 50.
Two-point correlation functions in inhomogeneous and anisotropic cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br
Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.
N-point functions in rolling tachyon background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jokela, Niko; Keski-Vakkuri, Esko; Department of Physics, P.O. Box 64, FIN-00014, University of Helsinki
2009-04-15
We study n-point boundary correlation functions in timelike boundary Liouville theory, relevant for open string multiproduction by a decaying unstable D brane. We give an exact result for the one-point function of the tachyon vertex operator and show that it is consistent with a previously proposed relation to a conserved charge in string theory. We also discuss when the one-point amplitude vanishes. Using a straightforward perturbative expansion, we find an explicit expression for a tachyon n-point amplitude for all n; however, the result is still a toy model. The calculation uses a new asymptotic approximation for Toeplitz determinants, derived by relating the system to a Dyson gas at finite temperature.
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
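The permissible-region shrinking idea can be illustrated on a toy linear inverse problem. The sketch below uses a plain L2 misfit with projected gradient descent instead of the paper's L1 objective and normalized Green's functions; the matrix `G`, step size, number of rounds, and shrink fraction are all illustrative assumptions, not the published algorithm.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def shrink_reconstruct(G, y, rounds=3, steps=200, eta=0.05, keep_frac=0.6):
    """Toy permissible-region shrinking: fit y ~ G x with x >= 0, then
    repeatedly discard the grid points with the smallest reconstructed
    values, so the allowed source region shrinks round by round."""
    n = len(G[0])
    Gt = transpose(G)
    allowed = set(range(n))          # permissible region: initially every grid point
    x = [0.0] * n
    for _ in range(rounds):
        for _ in range(steps):       # projected gradient descent on ||G x - y||^2
            r = [gi - yi for gi, yi in zip(matvec(G, x), y)]
            grad = matvec(Gt, r)
            x = [max(0.0, xi - eta * gi) if i in allowed else 0.0
                 for i, (xi, gi) in enumerate(zip(x, grad))]
        # shrink: keep only the strongest candidate source locations
        keep = max(1, int(keep_frac * len(allowed)))
        ranked = sorted(allowed, key=lambda i: -x[i])
        allowed = set(ranked[:keep])
        x = [xi if i in allowed else 0.0 for i, xi in enumerate(x)]
    return x
```

With a well-conditioned `G` the surviving region collapses onto the true source support while the fitted amplitude converges.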
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or variants of them, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV, or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory, and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
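The contrast between stable statistics (mean, variance) and unstable extremes (minimum, maximum) is easy to reproduce with uniformly distributed sampling points. The sketch below draws sample "doses" from an assumed lognormal toy model (an illustration, not the paper's dose engine) and returns the four statistics; comparing runs with increasing numbers of sampling points shows the mean settling while the extremes keep drifting.

```python
import random
import statistics

def dose_stats(n, seed=1):
    """Sample n dose points from a toy lognormal 'dose distribution'
    (an assumed stand-in) and return (mean, variance, min, max).
    D_mean and the variance are stable sampling statistics, while
    D_min/D_max are extreme values that keep changing as n grows."""
    rng = random.Random(seed)
    doses = [rng.lognormvariate(0.0, 0.5) for _ in range(n)]
    return (statistics.fmean(doses), statistics.variance(doses),
            min(doses), max(doses))
```

Running `dose_stats(1000)` and `dose_stats(100000)` gives nearly identical means, but the maximum of the larger sample is farther out in the tail.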
Blanc, F; Gouteux, J P; Cuisance, D; Pounekrozou, E; N'Dokoué, F; Le Gall, F
1991-06-01
Two trapping methods were compared during a survey of the distribution of tsetse flies in the Mbororo cattle breeding area of the Central African Republic: (a) several traps dispersed throughout the riverine forest galleries, remaining only one day at each site; (b) one sentinel trap placed at the cattle drinking point and remaining for several days. The latter method was more reliable and is therefore recommended. The concentration of tsetse flies at the drinking points was negligible during the rainy season.
The N-terminal tropomyosin- and actin-binding sites are important for leiomodin 2's function.
Ly, Thu; Moroz, Natalia; Pappas, Christopher T; Novak, Stefanie M; Tolkatchev, Dmitri; Wooldridge, Dayton; Mayfield, Rachel M; Helms, Gregory; Gregorio, Carol C; Kostyukova, Alla S
2016-08-15
Leiomodin is a potent actin nucleator related to tropomodulin, a capping protein localized at the pointed end of the thin filaments. Mutations in leiomodin-3 are associated with lethal nemaline myopathy in humans, and leiomodin-2-knockout mice present with dilated cardiomyopathy. The arrangement of the N-terminal actin- and tropomyosin-binding sites in leiomodin is contradictory and functionally not well understood. Using one-dimensional nuclear magnetic resonance and the pointed-end actin polymerization assay, we find that leiomodin-2, a major cardiac isoform, has an N-terminal actin-binding site located within residues 43-90. Moreover, for the first time, we obtain evidence that there are additional interactions with actin within residues 124-201. Here we establish that leiomodin interacts with only one tropomyosin molecule, and this is the only site of interaction between leiomodin and tropomyosin. Introduction of mutations in both actin- and tropomyosin-binding sites of leiomodin affected its localization at the pointed ends of the thin filaments in cardiomyocytes. On the basis of our new findings, we propose a model in which leiomodin regulates actin polymerization dynamics in myocytes by acting as a leaky cap at thin filament pointed ends. © 2016 Ly, Moroz, et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy
NASA Astrophysics Data System (ADS)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco
2016-08-01
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
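A broken power law for dN/dS, continuous at the break flux, can be written as a one-line model. In the sketch below the break flux, normalization, and the index above the break are illustrative placeholders; only the below-break index is taken from the range quoted in the abstract.

```python
def broken_power_law(S, S_b, n1=1.97, n2=2.6, norm=1.0):
    """Differential source-count distribution dN/dS modelled as a broken
    power law, continuous at the break flux S_b:
        dN/dS = norm * (S/S_b)^-n1  for S <= S_b
        dN/dS = norm * (S/S_b)^-n2  for S  > S_b
    n1 is within the below-break range quoted in the abstract (1.95-2.0);
    n2, S_b and norm are illustrative assumptions, not fitted values."""
    if S <= S_b:
        return norm * (S / S_b) ** (-n1)
    return norm * (S / S_b) ** (-n2)
```

Because both branches equal `norm` at `S = S_b`, the model is continuous at the break by construction.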
[Spatial point patterns of Antarctic krill fishery in the northern Antarctic Peninsula].
Yang, Xiao Ming; Li, Yi Xin; Zhu, Guo Ping
2016-12-01
As a key species in the Antarctic ecosystem, Antarctic krill (hereafter krill) often shows an aggregated spatial distribution, which in turn shapes the spatial patterns of krill fishing operations. This study was based on fishing data collected from two Chinese krill fishing vessels: vessel A, a professional krill fishing vessel, and vessel B, a vessel that shifted between the Chilean jack mackerel (Trachurus murphyi) fishing ground and the krill fishing ground. To explore the spatial distribution patterns and their ecological implications for two clearly different fishing fleets under high and low nominal catch per unit effort (CPUE), the present study analyzed the spatial distribution characteristics of the krill fishery in the northern Antarctic Peninsula from the viewpoint of spatial point patterns, in three respects: (1) the two vessels' point pattern characteristics for higher and lower CPUEs at different scales; (2) the correlation of the bivariate point patterns between points of higher and lower CPUE; and (3) the correlation patterns of CPUE. Based on Ripley's L function and the mark correlation function, the results showed that the point patterns of the higher and lower catches were similar, both showing an aggregated distribution in the study windows at all scales. The aggregation intensity of krill fishing was near its maximum at the 15 km spatial scale and remained high at scales of 15-50 km. The aggregation intensity of the krill fishery point patterns could be ranked as: higher CPUE of vessel A > lower CPUE of vessel B > higher CPUE of vessel B > higher CPUE of vessel B. The higher and lower CPUEs of vessel A were positively correlated at spatial scales of 0-75 km and randomly related beyond the 75 km scale, whereas vessel B showed positive correlation at all spatial scales.
The point events of higher and lower CPUEs were synchronized, showing significant correlations at most spatial scales because of the dynamic nature and complexity of krill aggregation patterns. The distribution of vessel A's CPUEs was positively correlated at scales of 0-44 km but negatively correlated at scales of 44-80 km. The distribution of vessel B's CPUEs was negatively correlated at scales of 50-70 km, with no significant correlations at other scales. The CPUE mark point patterns showed a negative correlation, indicating significant intraspecific competition for space and prey. There were significant differences in spatial point pattern distribution between vessel A, with higher fishing capacity, and vessel B, with lower fishing capacity. The results suggest that the professional krill fishing vessel is better suited for spatial point pattern analysis and scientific fishery surveys.
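Ripley's K and L functions used in this kind of analysis have a simple naive estimator (no edge correction), sketched here for a toy 2-D point set. For complete spatial randomness L(r) ≈ r, while L(r) > r indicates aggregation of the kind reported above. The estimator below is the textbook form, not the specific implementation used in the study.

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K estimator for a 2-D point pattern:
    K(r) = (area / n^2) * number of ordered pairs within distance r.
    No edge correction is applied (a simplifying assumption)."""
    n = len(points)
    pairs = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                pairs += 1
    return area * pairs / (n * n)

def ripley_l(points, r, area):
    """Variance-stabilized L function: L(r) = sqrt(K(r) / pi)."""
    return math.sqrt(ripley_k(points, r, area) / math.pi)
```

Plotting L(r) − r against r over a range of scales is the usual way to read off the aggregation scales discussed in the abstract.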
NASA Astrophysics Data System (ADS)
Zhong, Rui; Wang, Qingshan; Tang, Jinyuan; Shuai, Cijun; Liang, Qian
2018-02-01
This paper presents the first known vibration characteristics of moderately thick functionally graded carbon nanotube reinforced composite rectangular plates on a Pasternak foundation with arbitrary boundary conditions and internal line supports, on the basis of the first-order shear deformation theory. Different distributions of single-walled carbon nanotubes (SWCNTs) along the thickness are considered: a uniform distribution and three kinds of functionally graded distributions of carbon nanotubes along the thickness direction of the plates are studied. The solution, carried out using an enhanced Ritz method, mainly involves three steps: first, creating the Lagrangian energy function by the energy principle; second, as the main innovation, choosing the modified Fourier series as the basis of the admissible functions of the plates to eliminate all relevant discontinuities of the displacements and their derivatives at the edges; and last, solving for the natural frequencies and the associated mode shapes by means of the Ritz variational energy method. In this study, the influences of the volume fraction of CNTs, the distribution type of CNTs, boundary restraint parameters, the location of the internal line supports, and foundation coefficients on the natural frequencies and mode shapes of the FG-CNT reinforced composite rectangular plates are presented.
Beyond Zipf's Law: The Lavalette Rank Function and Its Properties.
Fontanelli, Oscar; Miramontes, Pedro; Yang, Yaning; Cocho, Germinal; Li, Wentian
Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value, giving the Lavalette rank function, the probability density function can be derived analytically. We also show both computationally and analytically that the Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of the Lavalette rank function in several datasets. We also address three analysis issues: the statistical testing of the Lavalette fitting function, the comparison between Zipf's law and the lognormal distribution through the Lavalette function, and the comparison between the lognormal distribution and the Lavalette distribution.
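For reference, the Beta rank function is f(r) = A(N+1−r)^b / r^a, and setting a = b gives the Lavalette rank function f(r) = A((N+1−r)/r)^b (constants involving N absorbed into A). A minimal sketch:

```python
def lavalette(r, N, A=1.0, b=1.0):
    """Lavalette rank function: the Beta rank function
    f(r) = A * (N + 1 - r)**b / r**a with a = b, i.e.
    f(r) = A * ((N + 1 - r) / r)**b for ranks r = 1..N.
    Normalization constants are absorbed into A."""
    return A * ((N + 1 - r) / r) ** b

# The equal-exponent form bends DOWN at the tail (r near N) as well as
# up at the head (r near 1), unlike a pure Zipf power law f(r) ~ r**-b.
```

A handy consequence of the symmetric form is f(r) · f(N+1−r) = A², which pure power laws do not satisfy.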
Statistical analysis of trypanosomes' motility
NASA Astrophysics Data System (ADS)
Zaburdaev, Vasily; Uppaluri, Sravanti; Pfohl, Thomas; Engstler, Markus; Stark, Holger; Friedrich, Rudolf
2010-03-01
Trypanosomes are parasites causing sleeping sickness. The way they move in the blood stream and penetrate various obstacles is an area of active research. Our goal was to investigate free trypanosome motion in planar geometry. Our analysis of trypanosome trajectories reveals that there are two correlation times: one associated with the fast motion of the body, and a second with the slower rotational diffusion of the trypanosome as a point object. We propose a system of Langevin equations to model such motion. One of its peculiarities is the presence of multiplicative noise, predicting a higher noise level at higher velocities of the trypanosome. Theoretical and numerical results give a comprehensive description of the experimental data, such as the mean squared displacement, velocity distribution and autocorrelation function.
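A Langevin equation with multiplicative noise of the kind described can be integrated with the Euler-Maruyama scheme. The drift and noise terms below, dv = −γv dt + (σ0 + α|v|) dW, are illustrative choices that capture the stated feature (noise amplitude growing with speed); they are not the authors' fitted model.

```python
import math
import random

def simulate_velocity(steps=10000, dt=1e-3, gamma=1.0, sigma0=0.2,
                      alpha=0.5, seed=0):
    """Euler-Maruyama integration of a toy Langevin equation with
    multiplicative noise:
        dv = -gamma * v * dt + (sigma0 + alpha * |v|) * dW.
    The noise term grows with |v|, so faster motion is noisier, the
    qualitative feature reported for trypanosome motility.  All
    parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    v = 0.0
    trajectory = []
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))          # Wiener increment
        v += -gamma * v * dt + (sigma0 + alpha * abs(v)) * dW
        trajectory.append(v)
    return trajectory
```

The resulting velocity distribution has heavier tails than the corresponding additive-noise process, which is how multiplicative noise shows up in the data.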
A tri-reference point theory of decision making under risk.
Wang, X T; Johnson, Joseph G
2012-11-01
The tri-reference point (TRP) theory takes into account minimum requirements (MR), the status quo (SQ), and goals (G) in decision making under risk. The 3 reference points demarcate risky outcomes and risk perception into 4 functional regions: success (expected value of x ≥ G), gain (SQ < x < G), loss (MR ≤ x < SQ), and failure (x < MR). The psychological impact of achieving or failing to achieve these reference points is rank ordered as MR > G > SQ. We present TRP assumptions and value functions and a mathematical formalization of the theory. We conducted empirical tests of crucial TRP predictions using both explicit and implicit reference points. We show that decision makers consider both G and MR and give greater weight to MR than G, indicating failure aversion (i.e., the disutility of a failure is greater than the utility of a success in the same task) in addition to loss aversion (i.e., the disutility of a loss is greater than the utility of the same amount of gain). Captured by a double-S shaped value function with 3 inflection points, risk preferences switched between risk seeking and risk aversion when the distribution of a gamble straddled a different reference point. The existence of MR (not G) significantly shifted choice preference toward risk aversion even when the outcome distribution of a gamble was well above the MR. Single reference point based models such as prospect theory cannot consistently account for these findings. The TRP theory provides simple guidelines for evaluating risky choices for individuals and organizational management. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman-filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
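The proposed truncated-Gaussian-plus-delta form has simple closed-form moments. The sketch below computes the mean of a scalar state with a physical lower bound `lo` (e.g. zero sea-ice concentration), where the Gaussian mass below the bound is placed in a delta at the bound rather than redistributed. It is a one-dimensional illustration of the construction, not the ensemble implementation.

```python
import math

def gauss_cdf(x, mu, sigma):
    """Standard Gaussian CDF evaluated at x for N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def truncated_mean(mu, sigma, lo=0.0):
    """Mean of the pdf proposed in the abstract: N(mu, sigma^2) for
    x > lo, plus a delta at x = lo carrying all the mass P(x <= lo).
    E[x] = lo * P(x <= lo) + integral_{lo}^{inf} x N(x; mu, sigma^2) dx,
    with the tail integral in closed form: mu*(1-Phi(z)) + sigma*phi(z),
    z = (lo - mu) / sigma."""
    p_delta = gauss_cdf(lo, mu, sigma)          # mass collected in the delta at lo
    z = (lo - mu) / sigma
    pdf_z = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    tail = mu * (1.0 - p_delta) + sigma * pdf_z  # ∫_lo^∞ x N(x) dx
    return lo * p_delta + tail
```

Note that, unlike the equal-redistribution scheme, `p_delta` is the finite probability that the state sits exactly on the bound, which is the point the abstract emphasizes.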
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
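The contrast between random and systematic sample-point placement can be seen on a two-dimensional toy integral. The Monte Carlo routine below places points randomly; the "systematic" routine uses a plain midpoint grid as a stand-in, since Conroy's actual number-theoretic node patterns are not reproduced here.

```python
import random

def monte_carlo_integrate(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over the unit square:
    sample points are distributed randomly over the region."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

def systematic_integrate(f, m):
    """Systematic counterpart in the spirit of the Conroy method: the
    sample points fill the region in a regular pattern.  Here a plain
    m x m midpoint grid is used as an illustrative stand-in for
    Conroy's closed symmetrical node patterns."""
    h = 1.0 / m
    return sum(f((i + 0.5) * h, (j + 0.5) * h)
               for i in range(m) for j in range(m)) * h * h
```

For the smooth test integrand f(x, y) = x·y (exact value 1/4 over the unit square), the systematic rule converges much faster than the random sampling, which is the practical argument for systematic point distributions.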
2013-05-01
...representation of a centralized control system on a turbine engine. All actuators and sensors are point-to-point cabled to the controller (FADEC), which ... electronics themselves. Figure 1: Centralized Control System. Each function resides within the FADEC and uses unique point-to-point analog ... distributed control system on the same turbine engine. The actuators and sensors interface to Smart Nodes which, in turn, communicate to the FADEC via ...
Freeform lens generation for quasi-far-field successive illumination targets
NASA Astrophysics Data System (ADS)
Zhuang, Zhenfeng; Thibault, Simon
2018-07-01
A predefined mapping to tailor one or more freeform surfaces is employed to build a freeform illumination system. The emergent rays from the light source corresponding to the prescribed target mesh for a pre-determined lighting distance are mapped by a point-to-point algorithm with respect to the freeform optics, which limits design flexibility. To overcome this limitation and find optimum designs, a freeform lens is exploited to produce the desired rectangular illumination distribution at successive target planes at quasi-far-field lighting distances. It is generated using numerical solutions to find an initial starting point, and an appropriate approach to obtain variables for parameterization of the freeform surface is introduced. The relative standard deviation, a useful figure of merit for the analysis, is set up as the merit function with respect to illumination non-uniformity at the successive sampled target planes. Therefore, the proposed scheme ensures the irradiance distribution over the specified lighting distance range. A design example of a freeform illumination system, composed of a spherical surface and a freeform surface, is given to produce the desired irradiance distribution within the lighting distance range. An optical performance with low non-uniformity and high efficiency is achieved. Compared with the conventional approach, the uniformity of the sampled targets is dramatically enhanced; meanwhile, the design offers a large tolerance to LED size.
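The merit function described, the relative standard deviation of the irradiance summed over the successive sampled target planes, is straightforward to compute. A minimal sketch (how the planes are sampled and where the irradiance values come from is left abstract):

```python
import math

def relative_std_dev(values):
    """Relative standard deviation (RSD = std / mean) of a set of
    irradiance samples on one target plane; lower RSD means more
    uniform illumination."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

def merit(planes):
    """Toy merit function: sum the RSD of the sampled irradiance over
    the successive target planes, as the abstract describes.  How the
    planes are sampled is an implementation detail not specified here."""
    return sum(relative_std_dev(p) for p in planes)
```

An optimizer then adjusts the freeform-surface parameters to drive `merit` toward zero across the whole lighting-distance range.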
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
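The key computational point, that a concave log-likelihood makes maximum-likelihood and MAP decoding a reliable optimization, can be shown in a scalar toy version: one neuron, one time bin, spike count y ~ Poisson(exp(w·x)) with a Gaussian prior on the stimulus x. The weight `w` and prior variance below are illustrative assumptions; the paper's models are full population point-process GLMs.

```python
import math

def map_decode(y, w=1.0, prior_var=4.0, iters=50):
    """MAP stimulus estimate for a toy one-neuron, one-bin encoding model:
    y ~ Poisson(exp(w * x)), prior x ~ N(0, prior_var).  The log posterior
        y*w*x - exp(w*x) - x**2 / (2 * prior_var)
    is concave in x (its second derivative is strictly negative), so
    Newton's method converges reliably -- a scalar sketch of the property
    the paper exploits for efficient MAP decoding."""
    x = 0.0
    for _ in range(iters):
        rate = math.exp(w * x)
        grad = y * w - w * rate - x / prior_var
        hess = -w * w * rate - 1.0 / prior_var   # always < 0: concavity
        x -= grad / hess                          # Newton step
    return x
```

With a very flat prior the MAP estimate reduces to the maximum-likelihood answer x = ln(y)/w, while a tight prior pulls the estimate toward the prior mean.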
Spatial distribution of pollutants in the area of the former CHP plant
NASA Astrophysics Data System (ADS)
Cichowicz, Robert
2018-01-01
The quality of atmospheric air and its level of pollution are now among the most important issues connected with life on Earth. The frequent nuisances and exceedances of pollution standards, often described in the media, are generated by both low-emission sources and mobile sources. Local organized energy emission sources, such as local boiler houses or CHP plants, also have an impact on air pollution. At the same time it is important to remember that the role of local power stations in shaping air pollution immission fields depends on the height of the emitters and the functioning of waste gas treatment installations. Analysis of air pollution distribution was carried out in two series, i.e. 2 and 10 weeks after closure of the CHP plant. As a reference point, the largest intersection of streets located in the immediate vicinity of the plant was selected, from which virtual circles were drawn every 50 meters and 31 measuring points were located. As a result, carbon dioxide, hydrogen sulfide and ammonia levels could be observed and analyzed as a function of the distance from the street intersection.
Measurement of argon neutral velocity distribution functions near an absorbing boundary in a plasma
NASA Astrophysics Data System (ADS)
Short, Zachary; Thompson, Derek; Good, Timothy; Scime, Earl
2016-10-01
Neutral particle distributions are critical to the study of plasma boundary interactions, where ion-neutral collisions, e.g. via charge exchange, may modify energetic particle populations impacting the boundary surface. Neutral particle behavior at absorbing boundaries thus underlies a number of important plasma physics issues, such as wall loading in fusion devices and anomalous erosion in Hall thruster channels. Neutral velocity distribution functions (NVDFs) are measured using laser-induced fluorescence (LIF). Our LIF scheme excites the 1s4 non-metastable state of neutral argon with 667.913 nm photons. The subsequent decay emission at 750.590 nm is recorded synchronously with injection laser frequency. Measurements are performed near a grounded boundary immersed in a cylindrical helicon plasma, with the boundary plate oriented at an oblique angle to the magnetic field. NVDFs are recorded in multiple velocity dimensions and in a three-dimensional volume, enabling point-to-point comparisons with NVDF predictions from particle-in-cell models as well as comparisons with ion velocity distribution function measurements obtained in the same regions through Ar-II LIF. This work is supported by US National Science Foundation Grant Number PHYS-1360278.
NASA Astrophysics Data System (ADS)
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of the empty space is found to be a useful descriptive quantity of the forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that log(log(L(r))) transformation is suitable for analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube then has been logarithm-transformed and the resulting values became the input of parameter estimation at each point (point of interest, POI). This way at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can be typically approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates both in horizontal and in vertical directions, of them the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest. 
In low-relief areas horizontal similarity is more typical, whereas in higher-relief areas horizontal similarity fades out over short distances. Some of the input data were acquired in the framework of the ChangeHabitats2 project, financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
Entanglement between atomic thermal states and coherent or squeezed photons in a damping cavity
NASA Astrophysics Data System (ADS)
Yadollahi, F.; Safaiee, R.; Golshan, M. M.
2018-02-01
In the present study, the standard Jaynes-Cummings model, in a lossy cavity, is employed to characterize the entanglement between atoms and photons when the former are initially in a thermal state (mixed ensemble) while the latter are described by either coherent or squeezed distributions. The whole system is thus assumed to be in equilibrium with a heat reservoir at a finite temperature T, and the measure of negativity is used to determine the time evolution of atom-photon entanglement. To this end, the master equation for the density matrix, in the secular approximation, is solved and a partial transposition of the result is made. The degree of atom-photon entanglement is then numerically computed, through the negativity, as a function of time and temperature. Moreover, to justify the behavior of the atom-photon entanglement, we employ the total density matrix so obtained to compute and analyze the time evolution of the initial photonic coherent or squeezed probability distributions and the squeezing parameters. On more practical points, our results demonstrate that as the initial photon mean number increases, the atom-photon entanglement decays at a faster pace for the coherent distribution than for the squeezed one. Moreover, it is shown that the degree of atom-photon entanglement is much higher and more stable for the squeezed distribution than for the coherent one. Consequently, we conclude that the time intervals during which the atom-photon entanglement is distillable are longer for the squeezed distribution. It is also illustrated that, as the temperature increases, the rate of approaching separability is faster for the coherent initial distribution. The novel point of the present report is the calculation of the dynamical density matrix (containing all physical information) for the combined atom-photon system in a lossy cavity, as well as the corresponding negativity, at finite temperature.
Democracy, Equal Citizenship, and Education
ERIC Educational Resources Information Center
Callan, Eamonn
2016-01-01
Two appealing principles of educational distribution--equality and sufficiency--are comparatively assessed. The initial point of comparison is the distribution of civic educational goods. One reason to favor equality in educational distribution rather than sufficiency is the elimination of undeserved positional advantage in access to labor…
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1979-01-01
The evolution of the two-point correlation function for the large-scale distribution of galaxies in an expanding universe is studied on the assumption that the perturbation densities lie in a Gaussian distribution centered on any given mass scale. The perturbations are evolved according to the Friedmann equation, and the correlation function for the resulting distribution of perturbations at the present epoch is calculated. It is found that: (1) the computed correlation function gives a satisfactory fit to the observed function in cosmological models with a density parameter (Omega) of approximately unity, provided that a certain free parameter is suitably adjusted; (2) the power-law slope in the nonlinear regime reflects the initial fluctuation spectrum, provided that the density profile of individual perturbations declines more rapidly than the -2.4 power of distance; and (3) both positive and negative contributions to the correlation function are predicted for cosmological models with Omega less than unity.
Spacecraft solid state power distribution switch
NASA Technical Reports Server (NTRS)
Praver, G. A.; Theisinger, P. C.
1986-01-01
As a spacecraft performs its mission, various loads are connected to the spacecraft power bus in response to commands from an on-board computer, a function called power distribution. For the Mariner Mark II set of planetary missions, the power bus is 30 V dc, and when loads are connected or disconnected, both the bus and the power-return side must be switched. In addition, the power distribution function must be immune to single-point failures and, when power is first applied, all switches must be in a known state. Traditionally, these requirements have been met by electromechanical latching relays. This paper describes a solid state switch which not only satisfies the requirements but incorporates several additional features, including soft turn-on, a programmable current trip point with noise immunity, instantaneous current limiting, and direct telemetry of load currents and switch status. A breadboard of the design has been constructed and some initial test results are included.
Chloride Channelopathies of ClC-2
Bi, Miao Miao; Hong, Sen; Zhou, Hong Yan; Wang, Hong Wei; Wang, Li Na; Zheng, Ya Juan
2014-01-01
Chloride channels (ClCs) have gained worldwide interest because of their molecular diversity, widespread distribution in mammalian tissues and organs, and their link to various human diseases. Nine different ClCs have been molecularly identified and functionally characterized in mammals. ClC-2 is one of nine mammalian members of the ClC family. It possesses unique biophysical characteristics, pharmacological properties, and molecular features that distinguish it from other ClC family members. ClC-2 has wide organ/tissue distribution and is ubiquitously expressed. Published studies consistently point to a high degree of conservation of ClC-2 function and regulation across various species from nematodes to humans over vast evolutionary time spans. ClC-2 has been intensively and extensively studied over the past two decades, leading to the accumulation of a plethora of information to advance our understanding of its pathophysiological functions; however, many controversies still exist. It is necessary to analyze the research findings, and integrate different views to have a better understanding of ClC-2. This review focuses on ClC-2 only, providing an analytical overview of the available literature. Nearly every aspect of ClC-2 is discussed in the review: molecular features, biophysical characteristics, pharmacological properties, cellular function, regulation of expression and function, and channelopathies. PMID:24378849
Point pattern analysis applied to flood and landslide damage events in Switzerland (1972-2009)
NASA Astrophysics Data System (ADS)
Barbería, Laura; Schulte, Lothar; Carvalho, Filipe; Peña, Juan Carlos
2017-04-01
Damage caused by meteorological and hydrological extreme events depends on many factors: not only on hazard, but also on exposure and vulnerability. In order to reach a better understanding of the relation of these complex factors, their spatial pattern and underlying processes, the spatial dependency between values of damage recorded at sites of different distances can be investigated by point pattern analysis. For the Swiss flood and landslide damage database (1972-2009), first steps of point pattern analysis have been carried out. The most severe events were selected (severe, very severe and catastrophic, according to the GEES classification, a total of 784 damage points) and Ripley's K-test and L-test have been performed, amongst others. For this purpose, R's spatstat library has been used. The results confirm that the damage points present a statistically significant clustered pattern, which could be connected to the prevalence of damage near watercourses and also to the rainfall distribution of each event, together with other factors. On the other hand, bivariate analysis shows there is no segregated pattern depending on process type: flood/debris flow vs. landslide. This close relation points to a coupling between slope and fluvial processes, connectivity between small-size and middle-size catchments, and the influence of the spatial distribution of precipitation, temperature (snow melt and snow line) and other predisposing factors such as soil moisture, land cover and environmental conditions. Therefore, further studies will investigate the relationship between the spatial pattern and one or more covariates, such as elevation, distance from a watercourse or land use. The final goal is to fit a regression model to the data, so that the adjusted model predicts the intensity of the point process as a function of the above-mentioned covariates.
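The abstract's analysis uses R's spatstat; as an illustration only, the same Ripley's K and Besag's L statistics can be sketched in Python with a naive estimator (no edge correction) on hypothetical synthetic point patterns:

```python
import numpy as np

def ripley_k(points, rs, area):
    """Naive Ripley's K estimator for a 2-D point pattern (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-pairs
    intensity = n / area
    return np.array([(d < r).sum() / (n * intensity) for r in rs])

def besag_l(points, rs, area):
    """Besag's L-transform; L(r) - r > 0 indicates clustering at scale r."""
    return np.sqrt(ripley_k(points, rs, area) / np.pi)

rng = np.random.default_rng(0)
csr = rng.uniform(0.0, 1.0, size=(500, 2))       # complete spatial randomness
centers = rng.uniform(0.2, 0.8, size=(10, 2))
clustered = np.concatenate([rng.normal(c, 0.02, size=(50, 2)) for c in centers])

rs = np.linspace(0.01, 0.15, 15)
excess_csr = besag_l(csr, rs, 1.0) - rs          # fluctuates around zero
excess_clu = besag_l(clustered, rs, 1.0) - rs    # clearly positive: clustering
```

In practice the edge-corrected estimators in spatstat (Kest, Lest) should be preferred; this sketch only conveys the logic of the test.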
An Analysis of Our Cable Distribution System: Its Current and Future Capabilities.
ERIC Educational Resources Information Center
Clarke, Tobin de Leon
Three goals have been set for San Joaquin Delta College Learning Resource Center's cable distribution system: it is to be made useable, useful, and flexible. Presently the system consists of a microwave dish installed on one building which points to a relay station with approximately one and one half miles of cable pulled to various locations. A…
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that, for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0 the distribution cannot be a pure power law but must include an exponential cutoff, a point that may have been ignored in previous studies.
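As a sketch with synthetic values (not the Beijing smart-card data), the truncated power-law form P(t) ∝ t^(-α) e^(-t/τ) is linear in log space, log P(t) = log c - α log t - t/τ, so both parameters can be recovered by ordinary least squares:

```python
import numpy as np

# log P(t) = log c - alpha*log(t) - t/tau is linear in (1, log t, t),
# so ordinary least squares in log space recovers both parameters.
rng = np.random.default_rng(1)
alpha_true, tau_true = 0.8, 30.0                 # hypothetical parameter values
t = np.linspace(1.0, 200.0, 400)
log_p = -alpha_true * np.log(t) - t / tau_true + rng.normal(0.0, 0.02, t.size)

A = np.column_stack([np.ones_like(t), np.log(t), t])
coef, *_ = np.linalg.lstsq(A, log_p, rcond=None)
alpha_hat = -coef[1]        # fitted power-law exponent (< 1.0, as in the paper)
tau_hat = -1.0 / coef[2]    # fitted exponential-cutoff scale
```

A pure power law would make the same regression return a cutoff coefficient near zero, which is one quick way to test the paper's claim on empirical interval data.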
NASA Astrophysics Data System (ADS)
Schäfer, L.; Dierksheide, U.; Klaas, M.; Schröder, W.
2011-03-01
A new method to describe statistical information from passive scalar fields has been proposed by Wang and Peters ["The length-scale distribution function of the distance between extremal points in passive scalar turbulence," J. Fluid Mech. 554, 457 (2006)]. They used direct numerical simulations (DNS) of homogeneous shear flow to introduce the innovative concept. This novel method determines the local minimum and maximum points of the fluctuating scalar field via gradient trajectories, starting from every grid point in the direction of the steepest ascending and descending scalar gradients. Relying on gradient trajectories, a dissipation element is defined as the region of all the grid points whose trajectories share the same pair of maximum and minimum points. The procedure has also been successfully applied to various DNS fields of homogeneous shear turbulence using the three velocity components and the kinetic energy as scalar fields [L. Wang and N. Peters, "Length-scale distribution functions and conditional means for various fields in turbulence," J. Fluid Mech. 608, 113 (2008)]. In this spirit, dissipation elements are, for the first time, determined from experimental data of a fully developed turbulent channel flow. The dissipation elements are deduced from the gradients of the instantaneous fluctuations of the three velocity components u', v', and w' and the instantaneous kinetic energy k', respectively. The measurements are conducted at a Reynolds number of 1.7 × 10⁴ based on the channel half-height δ and the bulk velocity U. The required three-dimensional velocity data are obtained by investigating a 17.75 × 17.75 × 6 mm³ (0.355δ × 0.355δ × 0.12δ) test volume using tomographic particle-image velocimetry. Detection and analysis of dissipation elements from the experimental velocity data are discussed in detail.
The statistical results are compared to the DNS data from Wang and Peters ["The length-scale distribution function of the distance between extremal points in passive scalar turbulence," J. Fluid Mech. 554, 457 (2006); "Length-scale distribution functions and conditional means for various fields in turbulence," J. Fluid Mech. 608, 113 (2008)]. Similar characteristics have been found, in particular the exponential decay of the PDFs at large dissipation-element lengths. In agreement with the DNS results, over 99% of the experimental dissipation elements possess a length that is smaller than three times the average element length.
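The gradient-trajectory idea can be illustrated on a toy 2-D scalar field (a sketch only, not the authors' tomographic-PIV pipeline): from any grid point, repeatedly step to the largest-valued neighbour until a local maximum is reached; pairing each point's ascending and descending endpoints partitions the field into dissipation elements.

```python
import numpy as np

def ascend(field, start):
    """Follow the steepest-ascent grid path from `start` to a local maximum."""
    pos = start
    while True:
        i, j = pos
        neighbours = [(a, b)
                      for a in range(max(i - 1, 0), min(i + 2, field.shape[0]))
                      for b in range(max(j - 1, 0), min(j + 2, field.shape[1]))]
        best = max(neighbours, key=lambda p: field[p])
        if best == pos:          # no higher neighbour: local maximum reached
            return pos
        pos = best

# Toy scalar field with well-separated maxima; a "dissipation element" is the
# set of grid points whose ascending/descending trajectories share the same
# (maximum, minimum) pair.
x = np.linspace(0.0, 2.0 * np.pi, 64)
f = np.sin(x)[:, None] * np.sin(x)[None, :]
peak = ascend(f, (5, 5))         # converges to the maximum near (pi/2, pi/2)
```

Real implementations trace trajectories in the continuous field along interpolated gradients; the grid-neighbour walk above is the simplest discrete analogue.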
Influence of emphysema distribution on pulmonary function parameters in COPD patients
Bastos, Helder Novais e; Neves, Inês; Redondo, Margarida; Cunha, Rui; Pereira, José Miguel; Magalhães, Adriana; Fernandes, Gabriela
2015-01-01
ABSTRACT OBJECTIVE: To evaluate the impact that the distribution of emphysema has on clinical and functional severity in patients with COPD. METHODS: The distribution of the emphysema was analyzed in COPD patients, who were classified according to a 5-point visual classification system of lung CT findings. We assessed the influence of emphysema distribution type on the clinical and functional presentation of COPD. We also evaluated hypoxemia after the six-minute walk test (6MWT) and determined the six-minute walk distance (6MWD). RESULTS: Eighty-six patients were included. The mean age was 65.2 ± 12.2 years, 91.9% were male, and all but one were smokers (mean smoking history, 62.7 ± 38.4 pack-years). The emphysema distribution was categorized as obviously upper lung-predominant (type 1), in 36.0% of the patients; slightly upper lung-predominant (type 2), in 25.6%; homogeneous between the upper and lower lung (type 3), in 16.3%; and slightly lower lung-predominant (type 4), in 22.1%. Type 2 emphysema distribution was associated with lower FEV1, FVC, FEV1/FVC ratio, and DLCO. In comparison with the type 1 patients, the type 4 patients were more likely to have an FEV1 < 65% of the predicted value (OR = 6.91, 95% CI: 1.43-33.45; p = 0.016), a 6MWD < 350 m (OR = 6.36, 95% CI: 1.26-32.18; p = 0.025), and post-6MWT hypoxemia (OR = 32.66, 95% CI: 3.26-326.84; p = 0.003). The type 3 patients had a higher RV/TLC ratio, although the difference was not significant. CONCLUSIONS: The severity of COPD appears to be greater in type 4 patients, and type 3 patients tend to have greater hyperinflation. The distribution of emphysema could have a major impact on functional parameters and should be considered in the evaluation of COPD patients. PMID:26785956
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chase, Hilary M.; Chen, Shunli; Fu, Li
2017-09-01
Inferring molecular orientations from vibrational sum frequency generation (SFG) spectra is challenging in polarization combinations that result in low signal intensities, or when the local point group symmetry approximation fails. While combining experiments with density functional theory (DFT) could overcome this problem, the scope of the combined method has yet to be established. Here, we assess the feasibility of determining the distributions of molecular orientations for one monobasic ester, two epoxides and three alcohols at the vapor/fused silica interface. We find that molecular orientations of nonlocal vibrational modes cannot be determined using polarization-resolved SFG measurements alone.
1983-01-01
Simultaneous two-dimensional laser-induced-fluorescence measurements of argon ions.
Hansen, A K; Galante, Matthew; McCarren, Dustin; Sears, Stephanie; Scime, E E
2010-10-01
Recent laser upgrades on the Hot Helicon Experiment at West Virginia University have enabled multiplexed simultaneous measurements of the ion velocity distribution function at a single location, expanding our capabilities in laser-induced fluorescence diagnostics. The laser output is split into two beams, each modulated with an optical chopper and injected perpendicular and parallel to the magnetic field. Light from the crossing point of the beams is transported to a narrow-band photomultiplier tube filtered at the fluorescence wavelength and monitored by two lock-in amplifiers, each referenced to one of the two chopper frequencies.
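The two-reference lock-in scheme can be sketched numerically (all frequencies and amplitudes below are hypothetical, not the experiment's values): each channel is recovered by multiplying the shared detector signal by its own reference and averaging, which rejects the other chopper frequency and broadband noise.

```python
import numpy as np

# Hypothetical parameters: two chopped fluorescence channels on one detector.
fs, T = 50_000.0, 1.0                        # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
f1, f2 = 1300.0, 2100.0                      # chopper (reference) frequencies
a1, a2 = 0.7, 0.4                            # channel amplitudes to recover

rng = np.random.default_rng(5)
detector = (a1 * np.sin(2 * np.pi * f1 * t)
            + a2 * np.sin(2 * np.pi * f2 * t)
            + 0.2 * rng.normal(size=t.size))  # shared, noisy PMT signal

# Lock-in demodulation: mixing with each reference and averaging acts as a
# narrow low-pass filter around that reference frequency.
x1 = 2.0 * np.mean(detector * np.sin(2 * np.pi * f1 * t))   # ~ a1
x2 = 2.0 * np.mean(detector * np.sin(2 * np.pi * f2 * t))   # ~ a2
```

Over an integer number of reference cycles the cross terms average to zero exactly, which is why the two channels separate cleanly.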
NASA Astrophysics Data System (ADS)
Zhang, Yang; Wang, Hao; Tomar, Vikas
2018-04-01
This work presents direct measurements of stress and temperature distribution during the mesoscale microstructural deformation of Inconel-617 (IN-617) during 3-point bending tests as a function of temperature. A novel nanomechanical Raman spectroscopy (NMRS)-based measurement platform was designed for simultaneous in situ temperature and stress mapping as a function of microstructure during deformation. The temperature distribution was found to be directly correlated to stress distribution for the analyzed microstructures. Stress concentration locations are shown to be directly related to higher heat conduction and result in microstructural hot spots with significant local temperature variation.
Analysis of data from NASA B-57B gust gradient program
NASA Technical Reports Server (NTRS)
Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.
1985-01-01
Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included the calculation of average turbulence parameters, integral length scales, probability density functions, single-point autocorrelation coefficients, two-point autocorrelation coefficients, normalized autospectra, normalized two-point autospectra, and two-point cross spectra for gust velocities. The single-point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two-point spatial turbulence correlations.
NASA Astrophysics Data System (ADS)
Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.
2012-04-01
Data from a field survey of the 2011 tsunami in the Sanriku area of Japan is presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated using a theoretical log-normal curve [Choi et al, 2002]. The characteristics of the distribution functions derived from the runup-heights data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates the sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
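As a sketch with synthetic heights (hypothetical parameters, not the Sanriku survey data), fitting the log-normal is simply taking the mean and standard deviation of the log-heights, which makes it easy to study how the fitted distribution depends on the number of observation points:

```python
import numpy as np

def fit_lognormal(heights):
    """Maximum-likelihood log-normal fit: mean and std of log(heights)."""
    logs = np.log(heights)
    return logs.mean(), logs.std(ddof=1)

# Synthetic runup heights (illustrative parameters only).
rng = np.random.default_rng(2)
mu_true, sigma_true = np.log(5.0), 0.6       # median 5 m, dimensionless spread
heights = rng.lognormal(mu_true, sigma_true, size=5000)
mu_hat, sigma_hat = fit_lognormal(heights)
```

Refitting on random subsamples of `heights` shows the sensitivity to sample size that the abstract discusses: with thousands of points the estimates are tight, with a few dozen they scatter widely.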
Description of waves in inhomogeneous domains using Heun's equation
NASA Astrophysics Data System (ADS)
Bednarik, M.; Cervenka, M.
2018-04-01
There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. Because the interval for which the solution is sought includes two regular singular points, a method is proposed which resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.
NASA Astrophysics Data System (ADS)
Giraud, Olivier; Grabsch, Aurélien; Texier, Christophe
2018-05-01
We study statistical properties of N noninteracting identical bosons or fermions in the canonical ensemble. We derive several general representations for the p-point correlation function of occupation numbers, \overline{n_1 ⋯ n_p}. We demonstrate that it can be expressed as a ratio of two p × p determinants involving the (canonical) mean occupations \overline{n_1}, …, \overline{n_p}, which can themselves be conveniently expressed in terms of the k-body partition functions (with k ≤ N). We draw some connection with the theory of symmetric functions and obtain an expression of the correlation function in terms of Schur functions. Our findings are illustrated by revisiting the problem of Bose-Einstein condensation in a one-dimensional harmonic trap, for which we get analytical results. We get the moments of the occupation numbers and the correlation between ground-state and excited-state occupancies. In the temperature regime dominated by quantum correlations, the distribution of the ground-state occupancy is shown to be a truncated Gumbel law. The Gumbel law, describing extreme-value statistics, is obtained when the temperature is much smaller than the Bose-Einstein temperature.
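The k-body partition-function route can be sketched numerically with the standard recursion for ideal quantum gases (the 1-D harmonic-trap levels and parameter values below are illustrative assumptions, not the paper's calculations):

```python
import numpy as np

def canonical_Z(eps, N, beta, eta=+1):
    """Canonical partition functions Z_0..Z_N for ideal bosons (eta=+1) or
    fermions (eta=-1) via the standard recursion
    Z_n = (1/n) * sum_k eta**(k+1) * z(k) * Z_{n-k},  z(k) = sum_i exp(-k*beta*eps_i)."""
    z = [np.exp(-k * beta * eps).sum() for k in range(N + 1)]
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(eta**(k + 1) * z[k] * Z[n - k] for k in range(1, n + 1)) / n)
    return np.array(Z)

# Truncated 1-D harmonic-trap spectrum in units of hbar*omega (hypothetical).
eps = np.arange(50, dtype=float)
N, beta = 5, 1.0
Z = canonical_Z(eps, N, beta)

# Canonical mean ground-state occupation:
# n0 = sum_{k=1}^N exp(-k*beta*eps_0) * Z_{N-k} / Z_N   (bosons)
n0 = sum(np.exp(-k * beta * eps[0]) * Z[N - k] for k in range(1, N + 1)) / Z[N]
```

Summing the analogous expression over all levels returns exactly N, which is a useful internal consistency check on the recursion.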
Note: Precise radial distribution of charged particles in a magnetic guiding field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backe, H., E-mail: backe@kph.uni-mainz.de
2015-07-15
Current high precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, have resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate, which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be evaluated numerically with arbitrary precision.
On the Tracy-Widom β Distribution for β = 6
NASA Astrophysics Data System (ADS)
Grava, Tamara; Its, Alexander; Kapaev, Andrei; Mezzadri, Francesco
2016-11-01
We study the Tracy-Widom distribution function for Dyson's β-ensemble with β = 6. The starting point of our analysis is the recent work of I. Rumanov, where he produces a Lax-pair representation for the Bloemendal-Virág equation. The latter is a linear PDE which describes the Tracy-Widom functions corresponding to general values of β. Using his Lax pair, Rumanov derives an explicit formula for the Tracy-Widom β = 6 function in terms of the second Painlevé transcendent and the solution of an auxiliary ODE. Rumanov also shows that this formula allows him to derive formally the asymptotic expansion of the Tracy-Widom function. Our goal is to make Rumanov's approach, and hence the asymptotic analysis it provides, rigorous. In this paper, the first in a series, we show that Rumanov's Lax pair can be interpreted as a certain gauge transformation of the standard Lax pair for the second Painlevé equation. This gauge transformation, though, contains functional parameters which are defined via an auxiliary nonlinear ODE equivalent to the auxiliary ODE of Rumanov's formula. The gauge interpretation of Rumanov's Lax pair allows us to highlight the steps of Rumanov's original method which need rigorous justification in order to make the method complete. We provide a rigorous justification of one of these steps: namely, we prove that the Painlevé function involved in Rumanov's formula is indeed, as Rumanov suggested, the Hastings-McLeod solution of the second Painlevé equation. The key issue, which we also discuss and which is still open, is the question of the integrability of the auxiliary ODE in Rumanov's formula. We note that this question is crucial for the rigorous asymptotic analysis of the Tracy-Widom function. We also note that our work is a partial answer to one of the problems related to β-ensembles formulated by Percy Deift during the June 2015 Montreal Conference on integrable systems.
Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM
NASA Technical Reports Server (NTRS)
Crane, Robert G.; Hewitson, B. C.
1991-01-01
A new diagnostic tool is developed for examining relationships between the synoptic-scale circulation and regional temperature distributions in GCMs. The 4 x 5 deg GISS GCM is shown to produce accurate simulations of the variance in the synoptic-scale sea level pressure distribution over the U.S. An analysis of the observational data set from the National Meteorological Center (NMC) also shows a strong relationship between the synoptic circulation and grid point temperatures. This relationship is demonstrated by deriving transfer functions between a time series of circulation parameters and temperatures at individual grid points. The circulation parameters are derived using rotated principal components analysis, and the temperature transfer functions are based on multivariate polynomial regression models. The application of these transfer functions to the GCM circulation indicates that there is considerable spatial bias present in the GCM temperature distributions. The transfer functions are also used to indicate the possible changes in U.S. regional temperatures that could result from differences in synoptic-scale circulation between a 1×CO2 and a 2×CO2 climate, using a doubled-CO2 version of the same GISS GCM.
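The transfer-function construction can be sketched with entirely synthetic fields (all data, dimensions and the linear rather than polynomial regression here are simplifying assumptions): compress "pressure" fields with principal components, then regress a grid-point temperature on the leading scores.

```python
import numpy as np

# Synthetic daily "sea-level pressure" fields built from 3 circulation modes.
rng = np.random.default_rng(4)
days, ngrid = 300, 40
modes = rng.normal(size=(3, ngrid))
scores_true = rng.normal(size=(days, 3))
slp = scores_true @ modes + 0.1 * rng.normal(size=(days, ngrid))
# Grid-point temperature driven by the first two circulation modes.
temp = 2.0 * scores_true[:, 0] - 1.0 * scores_true[:, 1] + 0.1 * rng.normal(size=days)

# Principal components of the pressure fields via SVD of the centered matrix.
slp0 = slp - slp.mean(axis=0)
U, S, Vt = np.linalg.svd(slp0, full_matrices=False)
scores = U[:, :3] * S[:3]                 # leading circulation parameters

# Transfer function: regression of temperature on the circulation scores.
A = np.column_stack([np.ones(days), scores])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
pred = A @ coef
```

The fitted coefficients play the role of the paper's transfer functions: applied to circulation scores from a different model run, `pred` gives the circulation-implied temperatures whose bias can then be examined.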
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to the inherent shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles the problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of the 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity, and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
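The paper's LHC combines octree decomposition with hierarchical clustering; as a much simpler stand-in (illustrative only, not the authors' algorithm), a single-level voxel-grid clustering already uniformizes density by replacing each occupied cell with its centroid:

```python
import numpy as np

def voxel_uniformize(points, voxel):
    """Replace all points in each occupied voxel by their centroid --
    a one-level, fixed-grid stand-in for local hierarchical clustering."""
    keys = np.floor(points / voxel).astype(np.int64)
    order = np.lexsort(keys.T)                # make equal voxel keys contiguous
    keys, points = keys[order], points[order]
    _, starts = np.unique(keys, axis=0, return_index=True)
    groups = np.split(points, np.sort(starts)[1:])
    return np.array([g.mean(axis=0) for g in groups])

rng = np.random.default_rng(3)
dense = rng.normal(0.0, 0.05, size=(2000, 3))    # over-sampled patch
sparse = rng.uniform(-1.0, 1.0, size=(200, 3))   # sparsely scanned region
cloud = np.vstack([dense, sparse])
uniform = voxel_uniformize(cloud, voxel=0.1)     # fewer, more evenly spread points
```

The adaptive octree in the paper refines cells only where the local density demands it, which the fixed `voxel` size here deliberately ignores for brevity.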
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and of the interval between measurement points on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals between the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval between the measurement points and the spatial sensitivity profile of the source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval between measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval between measurement points. NIR topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
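A toy 1-D illustration of the reconstruction idea (the Gaussian sensitivity profiles and all numbers below are hypothetical, standing in for the Monte Carlo profiles of the study): with overlapping sensitivity profiles, a regularized linear inversion localizes a focal absorption change that direct mapping would blur.

```python
import numpy as np

# Hypothetical 1-D setup: measurement points every 12 mm, each with a broad
# Gaussian spatial sensitivity profile that blurs a focal absorption change.
grid = np.linspace(0.0, 60.0, 61)                     # mm along the surface
measure_pts = np.arange(6.0, 60.0, 12.0)              # 12 mm interval
A = np.exp(-0.5 * ((grid[None, :] - measure_pts[:, None]) / 8.0) ** 2)

x_true = np.exp(-0.5 * ((grid - 30.0) / 3.0) ** 2)    # focal absorption change
y = A @ x_true                                        # blurred measurements

# "Reconstruction method": Tikhonov-regularized least squares.
lam = 1e-2
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(grid.size), A.T @ y)
```

The mapping method would simply assign each `y` value to its measurement point, so its resolution is pinned to the 12 mm spacing; the inversion exploits the overlap between profiles to place the peak correctly between measurement points.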
The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis
NASA Astrophysics Data System (ADS)
Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.
2011-05-01
In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required, and the Radial Basis Function (RBF) used is the Multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the aims of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
NASA Technical Reports Server (NTRS)
Tschunko, H. F. A.
1983-01-01
Reference is made to a study by Tschunko (1979) in which it was discussed how apodization modifies the modulation transfer function for various central obstruction ratios. It is shown here how apodization, together with the central obstruction ratio, modifies the point spread function, which is the basic element for the comparison of imaging performance and for the derivation of energy integrals and other functions. At high apodization levels and lower central obstruction ratios (less than 0.1), new extended radial zones are formed in the outer part of the central ring groups. These transmutations of the image functions are of more than theoretical interest, especially if the irradiance levels in the outer ring zones are to be compared to the background irradiance levels. Attention is then given to the energy distribution in point images generated by annular apertures apodized by various transmission functions. The total energy functions are derived; partial energy integrals are determined; and background irradiance functions are discussed.
RipleyGUI: software for analyzing spatial patterns in 3D cell distributions
Hansson, Kristin; Jafari-Mamaghani, Mehrdad; Krieger, Patrik
2013-01-01
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. To facilitate the quantification of neuronal cell patterns we have developed RipleyGUI, MATLAB-based software that can be used to detect patterns in the 3D distribution of cells. RipleyGUI uses Ripley's K-function to analyze spatial distributions. In addition, the software contains statistical tools to determine quantitative statistical differences, and tools for spatial transformations that are useful for analyzing non-stationary point patterns. The software has a graphical user interface making it easy to use without programming experience, and an extensive user manual explaining the basic concepts underlying the different statistical tools used to analyze spatial point patterns. The described analysis tool can be used for determining the spatial organization of neurons that is important for a detailed study of structure-function relationships. For example, neocortex that can be subdivided into six layers based on cell density and cell types can also be analyzed in terms of organizational principles distinguishing the layers. PMID:23658544
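Ripley's K-function itself is straightforward to estimate. The sketch below is not RipleyGUI's MATLAB implementation; it is a minimal Python version of the naive 3D estimator, without the edge corrections and hypothesis tests the software provides:

```python
import numpy as np

def ripley_k_3d(points, radii, volume):
    """Naive 3D Ripley's K estimate (no edge correction).

    K(r) = V / (n*(n-1)) * number of ordered pairs with d_ij <= r;
    under complete spatial randomness K(r) ~ (4/3)*pi*r^3.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # all pairwise distances, then keep each unordered pair once
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(n, k=1)]
    # factor 2 converts unordered-pair counts to ordered-pair counts
    k = np.array([2.0 * np.sum(pair_d <= r) for r in radii])
    return volume * k / (n * (n - 1))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 3))   # CSR pattern in a unit cube
radii = np.array([0.05, 0.1])
k_hat = ripley_k_3d(pts, radii, volume=1.0)
csr = (4.0 / 3.0) * np.pi * radii ** 3       # theoretical CSR reference
```

For a random pattern the estimate tracks the CSR curve (biased slightly low near the boundary, since edge correction is omitted); clustering shows up as an excess above it.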
One-point functions in defect CFT and integrability
NASA Astrophysics Data System (ADS)
de Leeuw, Marius; Kristjansen, Charlotte; Zarembo, Konstantin
2015-08-01
We calculate planar tree level one-point functions of non-protected operators in the defect conformal field theory dual to the D3-D5 brane system with k units of the world volume flux. Working in the operator basis of Bethe eigenstates of the Heisenberg XXX 1/2 spin chain we express the one-point functions as overlaps of these eigenstates with a matrix product state. For k = 2 we obtain a closed expression of determinant form for any number of excitations, and in the case of half-filling we find a relation with the Néel state. In addition, we present a number of results for the limiting case k → ∞.
Self-Avoiding Walks on the Random Lattice and the Random Hopping Model on a Cayley Tree
NASA Astrophysics Data System (ADS)
Kim, Yup
Using a field-theoretic method based on the replica trick, it is proved that the three-parameter renormalization group for an n-vector model with quenched randomness reduces to a two-parameter one in the limit n → 0, which corresponds to self-avoiding walks (SAWs). This is also shown by explicit calculation of the renormalization-group recursion relations to second order in ε. From this reduction we find that SAWs on the random lattice are in the same universality class as SAWs on the regular lattice. By analogy with the case of the n-vector model with cubic anisotropy in the limit n → 1, the fixed-point structure of the n-vector model with randomness is analyzed in the SAW limit, so that a physical interpretation of the unphysical fixed point is given. Corrections to the previously published values of the critical exponents at the unphysical fixed point are also given. Next we formulate an integral equation and recursion relations for the configurationally averaged one-particle Green's function of the random hopping model on a Cayley tree of coordination number (σ + 1). This formalism is tested by applying it successfully to the nonrandom model. Using this scheme for 1 ≪ σ < ∞, we calculate the density of states of this model with a Gaussian distribution of hopping matrix elements in the range of energy E^2 > E_c^2, where E_c is a critical energy described below. The singularity in the Green's function which occurs at energy E_1^(0) for σ = ∞ is shifted to complex energy E_1 (on the unphysical sheet of energy E) for small σ^-1. This calculation shows that the density of states is a smooth function of energy E around the critical energy E_c = Re E_1, in accord with Wegner's theorem. In this formulation the density of states has no sharp phase transition on the real axis of E because E_1 has developed an imaginary part.
Using the Lifschitz argument, we calculate the density of states near the band edge for the model when the hopping matrix elements are governed by a bounded probability distribution. It is also shown within the dynamical system language that the density of states of the model with a bounded distribution never vanishes inside the band and we suggest a theoretical mechanism for the formation of energy bands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitagawa, Takuya; Pielawa, Susanne; Demler, Eugene
2010-06-25
We theoretically analyze Ramsey interference experiments in one-dimensional quasicondensates and obtain explicit expressions for the time evolution of full distribution functions of fringe contrast. We show that distribution functions contain unique signatures of the many-body mechanism of decoherence. We argue that Ramsey interference experiments provide a powerful tool for analyzing strongly correlated nature of 1D interacting systems.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-29
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of $2.2^{+0.7}_{-0.3}$ in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain $83^{+7}_{-13}$% ($81^{+52}_{-19}$%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
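The broken power law used for dN/dS in such analyses can be parametrized as below. The indices and break flux here are illustrative placeholders (the index below the break chosen near the 1.95–2.0 range quoted above), not the fitted Fermi-LAT values:

```python
import numpy as np

def broken_power_law(s, s_b, n1, n2, a=1.0):
    """Differential source counts dN/dS: index n1 above the break flux
    s_b and index n2 below it, continuous at s = s_b."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= s_b,
                    a * (s / s_b) ** (-n1),
                    a * (s / s_b) ** (-n2))

s_b = 1e-9  # break flux, arbitrary units (placeholder)
dnds = broken_power_law(np.array([0.5 * s_b, s_b, 2.0 * s_b]),
                        s_b, n1=2.2, n2=1.97)
```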
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This allows taking into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is demonstrated with specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
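The core of the propagation-of-distributions procedure can be sketched in a few lines: draw correlated resistances from a multivariate Gaussian and push each draw through the measurement model. All numbers below are illustrative placeholders (a toy two-fixed-point model with the resistance ratio as output), not ITS-90 calibration data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative resistances (ohms) at two defining fixed points, with
# standard uncertainties and a correlation between them.
mean = np.array([25.000, 34.500])
u = np.array([1.0e-4, 1.5e-4])
rho = 0.8
cov = np.array([[u[0] ** 2,        rho * u[0] * u[1]],
                [rho * u[0] * u[1], u[1] ** 2       ]])

# Propagation of distributions: sample correlated inputs, evaluate the
# output quantity (here the ratio W = R2/R1) for every draw.
draws = rng.multivariate_normal(mean, cov, size=200_000)
w = draws[:, 1] / draws[:, 0]

w_mean = w.mean()          # Monte Carlo estimate of the output
u_w = w.std(ddof=1)        # Monte Carlo standard uncertainty
```

Because the inputs are sampled jointly, the correlation between the fixed-point resistances is carried into the output uncertainty automatically, which is the point of the method.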
Atmospheric Teleconnections From Cumulants
NASA Astrophysics Data System (ADS)
Sabou, F.; Kaspi, Y.; Marston, B.; Schneider, T.
2011-12-01
Multi-point cumulants of fields such as vorticity provide a way to visualize atmospheric teleconnections, complementing other approaches such as the method of empirical orthogonal functions (EOFs). We calculate equal-time two-point cumulants of the vorticity from NCEP reanalysis data during the period 1980 -- 2010 and from direct numerical simulation (DNS) using an idealized dry general circulation model (GCM) (Schneider and Walker, 2006). Extratropical correlations seen in the NCEP data are qualitatively reproduced by the model. Three- and four-point cumulants accumulated from DNS quantify departures of the probability distribution function from a normal distribution, shedding light on the efficacy of direct statistical simulation (DSS) of atmosphere dynamics by cumulant expansions (Marston, Conover, and Schneider, 2008; Marston 2011). Lagged-time two-point cumulants between temperature gradients and eddy kinetic energy (EKE), accumulated by DNS of an idealized moist aquaplanet GCM (O'Gorman and Schneider, 2008), reveal dynamics of storm tracks. Regions of enhanced baroclinicity (as found along the eastern boundary of continents) lead to a local enhancement of EKE and a suppression of EKE further downstream as the storm track self-destructs (Kaspi and Schneider, 2011).
A grid spacing control technique for algebraic grid generation methods
NASA Technical Reports Server (NTRS)
Smith, R. E.; Kudlinski, R. A.; Everton, E. L.
1982-01-01
A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
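The control-function idea can be sketched as follows. The report fits smoothed cubic splines to the input control points; as a stand-in, this sketch uses SciPy's monotone PCHIP cubic interpolant, which guarantees the mapping stays monotone. The control-point values are invented for illustration:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Control points: uniform computational coordinate xi -> physical
# coordinate x, clustering grid points near x = 0 (illustrative values).
xi_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x_ctrl = np.array([0.0, 0.05, 0.2, 0.55, 1.0])

# Monotone cubic control function approximating the control points
control = PchipInterpolator(xi_ctrl, x_ctrl)

xi = np.linspace(0.0, 1.0, 21)   # uniformly distributed computational grid
x = control(xi)                  # stretched physical grid
spacing = np.diff(x)             # small near x = 0, large near x = 1
```

Changing a single control point reshapes the spacing everywhere downstream, which is what makes the interactive-graphics workflow described above effective.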
Wavelength-division multiplexed optical integrated circuit with vertical diffraction grating
NASA Technical Reports Server (NTRS)
Lang, Robert J. (Inventor); Forouhar, Siamak (Inventor)
1994-01-01
A semiconductor optical integrated circuit for wave division multiplexing has a semiconductor waveguide layer, a succession of diffraction grating points in the waveguide layer along a predetermined diffraction grating contour, a semiconductor diode array in the waveguide layer having plural optical ports facing the succession of diffraction grating points along a first direction, respective semiconductor diodes in the array corresponding to respective ones of a predetermined succession of wavelengths, an optical fiber having one end thereof terminated at the waveguide layer, the one end of the optical fiber facing the succession of diffraction grating points along a second direction, wherein the diffraction grating points are spatially distributed along the predetermined contour in such a manner that the succession of diffraction grating points diffracts light of respective ones of the succession of wavelengths between the one end of the optical fiber and corresponding ones of the optical ports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitru, Adrian; Skokov, Vladimir
The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/αs. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g^2 tr A^+(q) A^+(-q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution, up to several times its average and even to negative values.
Hexagonalization of correlation functions II: two-particle contributions
NASA Astrophysics Data System (ADS)
Fleury, Thiago; Komatsu, Shota
2018-02-01
In this work, we compute one-loop planar five-point functions in N=4 super-Yang-Mills using integrability. As in the previous work, we decompose the correlation functions into hexagon form factors and glue them using the weight factors which depend on the cross-ratios. The main new ingredient in the computation, as compared to the four-point functions studied in the previous paper, is the two-particle mirror contribution. We develop techniques to evaluate it and find agreement with the perturbative results in all the cases we analyzed. In addition, we consider next-to-extremal four-point functions, which are known to be protected, and show that the sum of one-particle and two-particle contributions at one loop adds up to zero as expected. The tools developed in this work would be useful for computing higher-particle contributions which would be relevant for more complicated quantities such as higher-loop corrections and non-planar correlators.
NASA Astrophysics Data System (ADS)
Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.
2016-06-01
The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests and mosaic-like structures are essential to biodiversity; various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, scale-independent methods are preferred in order to capture it in its entirety; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various stand ages. The point clouds were voxelized, and layers of voxels were treated as images for two-dimensional input. These images, computed for a certain vicinity of each reference point, served as input for the computation of lacunarity curves, providing a stack of lacunarity curves for each reference point. These sets of curves were compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. Logarithms of lacunarity functions show canopy-related variations, and we analysed these variations along transects. The spatial variation can be related to forest properties and ecology-specific aspects.
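Lacunarity for a binary layer is commonly computed with the gliding-box algorithm. The sketch below is a generic 2D version on synthetic patterns (not the authors' voxel pipeline): Λ(r) is the ratio of the second moment to the squared first moment of the box masses, so clumped, gappy patterns score higher than scattered ones:

```python
import numpy as np

def gliding_box_lacunarity(image, box_sizes):
    """Gliding-box lacunarity Lambda(r) = <M^2> / <M>^2, where M is the
    mass (number of occupied cells) in each r x r box."""
    img = np.asarray(image, dtype=float)
    # integral image: c[i, j] = sum of img[:i, :j]
    c = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    lac = []
    for r in box_sizes:
        # mass of every r x r box via the summed-area table
        m = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
        lac.append(np.mean(m ** 2) / np.mean(m) ** 2)
    return np.array(lac)

rng = np.random.default_rng(1)
random_img = (rng.random((64, 64)) < 0.3).astype(float)  # scattered pattern
clumped = np.zeros((64, 64))
clumped[:16, :16] = 1.0                                  # clumped patch

lac_random = gliding_box_lacunarity(random_img, [4, 8])
lac_clumped = gliding_box_lacunarity(clumped, [4, 8])
```

Evaluating Λ over a range of box sizes gives the lacunarity curve; stacking such curves layer by layer yields the per-reference-point profiles discussed above.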
NASA Technical Reports Server (NTRS)
Diederich, Franklin W; Zlotnick, Martin
1955-01-01
Spanwise lift distributions have been calculated for nineteen unswept wings with various aspect ratios and taper ratios and with a variety of angle-of-attack or twist distributions, including flap and aileron deflections, by means of the Weissinger method with eight control points on the semispan. Also calculated were aerodynamic influence coefficients which pertain to a certain definite set of stations along the span, and several methods are presented for calculating aerodynamic influence functions and coefficients for stations other than those stipulated. The information presented in this report can be used in the analysis of untwisted wings or wings with known twist distributions, as well as in aeroelastic calculations involving initially unknown twist distributions.
NASA Astrophysics Data System (ADS)
Sulyok, G.
2017-07-01
Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient independent from UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
A random wave model for the Aharonov-Bohm effect
NASA Astrophysics Data System (ADS)
Houston, Alexander J. H.; Gradhand, Martin; Dennis, Mark R.
2017-05-01
We study an ensemble of random waves subject to the Aharonov-Bohm effect. The introduction of a point with a magnetic flux of arbitrary strength into a random wave ensemble gives a family of wavefunctions whose distribution of vortices (complex zeros) is responsible for the topological phase associated with the Aharonov-Bohm effect. Analytical expressions are found for the vortex number and topological charge densities as functions of distance from the flux point. Comparison is made with the distribution of vortices in the isotropic random wave model. The results indicate that as the flux approaches half-integer values, a vortex with the same sign as the fractional part of the flux is attracted to the flux point, merging with it in the limit of half-integer flux. We construct a statistical model of the neighbourhood of the flux point to study how this vortex-flux merger occurs in more detail. Other features of the Aharonov-Bohm vortex distribution are also explored.
Conductance of three-terminal molecular bridge based on tight-binding theory
NASA Astrophysics Data System (ADS)
Wang, Li-Guang; Li, Yong; Yu, Ding-Wen; Katsunori, Tagami; Masaru, Tsukada
2005-05-01
The quantum transmission characteristics of a three-benzene-ring nano-molecular bridge are investigated theoretically by using the Green's function approach based on tight-binding theory with only a π orbital per carbon atom at each site. The transmission probabilities for electrons transported through the molecular bridge from one terminal to the other two terminals are obtained. The electronic current distributions inside the molecular bridge are calculated and shown graphically by the current density method based on the Fisher-Lee formula at the energy points E = ±0.42, ±1.06 and ±1.5, respectively, where the transmission spectra exhibit peaks. We find that the transmission spectra are strongly related to the incident electron energy and the molecular levels, and that the current distributions agree well with the Kirchhoff quantum current conservation law.
Skin dose mapping for non-uniform x-ray fields using a backscatter point spread function
NASA Astrophysics Data System (ADS)
Vijayan, Sarath; Xiong, Zhenyu; Shankar, Alok; Rudin, Stephen; Bednarek, Daniel R.
2017-03-01
Beam shaping devices like ROI attenuators and compensation filters modulate the intensity distribution of the x-ray beam incident on the patient. This results in a spatial variation of skin dose due to the variation of primary radiation and also a variation in backscattered radiation from the patient. To determine the backscatter component, backscatter point spread functions (PSF) are generated using EGS Monte-Carlo software. For this study, PSF's were determined by simulating a 1 mm beam incident on the lateral surface of an anthropomorphic head phantom and a 20 cm thick PMMA block phantom. The backscatter PSF's for the head phantom and PMMA phantom are curve fit with a Lorentzian function after being normalized to the primary dose intensity (PSFn). PSFn is convolved with the primary dose distribution to generate the scatter dose distribution, which is added to the primary to obtain the total dose distribution. The backscatter convolution technique is incorporated in the dose tracking system (DTS), which tracks skin dose during fluoroscopic procedures and provides a color map of the dose distribution on a 3D patient graphic model. A convolution technique is developed for the backscatter dose determination for the nonuniformly spaced graphic-model surface vertices. A Gafchromic film validation was performed for shaped x-ray beams generated with an ROI attenuator and with two compensation filters inserted into the field. The total dose distribution calculated by the backscatter convolution technique closely agreed with that measured with the film.
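The convolution step described above can be sketched as follows: normalize a Lorentzian backscatter kernel to the primary dose, convolve it with the primary distribution, and add the result back. The kernel width and scatter fraction below are placeholders, not the Monte Carlo fitted values:

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative 2D Lorentzian backscatter PSF normalized to the primary
# dose (PSFn); gamma and amp are placeholder parameters.
x = np.arange(-20, 21)
xx, yy = np.meshgrid(x, x)
gamma, amp = 5.0, 0.35                     # width (pixels) and scatter fraction
psf = gamma ** 2 / (xx ** 2 + yy ** 2 + gamma ** 2)
psf_n = amp * psf / psf.sum()              # normalized backscatter kernel

primary = np.zeros((101, 101))
primary[40:61, 40:61] = 1.0                # shaped (ROI) primary beam

scatter = fftconvolve(primary, psf_n, mode="same")  # backscatter dose
total = primary + scatter                  # total skin dose map
```

The scatter map extends beyond the collimated field edge, reproducing the characteristic dose "tails" outside a shaped beam.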
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
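The rectangular-region probability described above follows from the bivariate CDF by inclusion-exclusion. This sketch uses SciPy's multivariate normal (not the report's original routines); mean and covariance are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def rect_prob(a, b, mean, cov):
    """P(a1 < X < b1, a2 < Y < b2) for a bivariate normal, via
    inclusion-exclusion on the joint CDF."""
    mvn = multivariate_normal(mean=mean, cov=cov)
    (a1, a2), (b1, b2) = a, b
    return (mvn.cdf([b1, b2]) - mvn.cdf([a1, b2])
            - mvn.cdf([b1, a2]) + mvn.cdf([a1, a2]))

# Standard bivariate normal with correlation 0.5 (illustrative)
mean = [0.0, 0.0]
cov = [[1.0, 0.5], [0.5, 1.0]]
p = rect_prob((-1.0, -1.0), (1.0, 1.0), mean, cov)
```

With zero correlation the rectangle probability factors into the product of two univariate probabilities (≈ 0.6827² here); positive correlation raises it above that product.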
Wavefronts for a global reaction-diffusion population model with infinite distributed delay
NASA Astrophysics Data System (ADS)
Weng, Peixuan; Xu, Zhiting
2008-09-01
We consider a global reaction-diffusion population model with infinite distributed delay which includes models of Nicholson's blowflies and hematopoiesis derived by Gurney, Mackey and Glass, respectively. The existence of monotone wavefronts is derived by using the abstract settings of functional differential equations and Schauder fixed point theory.
Li, Zhijun; Ge, Shuzhi Sam; Liu, Sibang
2014-08-01
This paper investigates optimal distribution and control of feet forces for quadruped robots under external disturbance forces. First, we formulate the constrained dynamics of quadruped robots and derive a reduced-order dynamical model of motion/force. Considering an external wrench on the quadruped robot, the distribution of required forces and moments on the supporting legs is handled as a tip-point force distribution used to equilibrate the external wrench. Then, a gradient neural network is adopted to solve the optimization, formulated as minimizing a quadratic objective function subject to linear equality and inequality constraints. For the obtained optimized tip-point forces and the motion of the legs, we propose hybrid motion/force control based on an adaptive neural network to compensate for perturbations in the environment and to approximate the feedforward force and impedance of the leg joints. The proposed control can confront uncertainties including approximation error and external perturbation. The verification of the proposed control is conducted in simulation.
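The underlying optimization is a quadratic program: minimize a quadratic cost in the foot forces subject to linear wrench-balance equalities and contact inequalities. The paper solves it with a gradient neural network; as a stand-in, this sketch solves a toy planar three-foot version with SciPy's SLSQP solver (all numbers illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy planar version: three supporting feet along x must balance the
# body weight W and an external pitch moment M_ext (illustrative values).
W, M_ext = 200.0, 10.0
xs = np.array([-0.3, 0.0, 0.2])        # foot positions (m)

def cost(f):
    return 0.5 * np.dot(f, f)          # minimize squared foot forces

constraints = [
    {"type": "eq", "fun": lambda f: np.sum(f) - W},          # force balance
    {"type": "eq", "fun": lambda f: np.dot(xs, f) - M_ext},  # moment balance
]
bounds = [(0.0, None)] * 3             # unilateral contact: feet can only push

res = minimize(cost, x0=np.full(3, W / 3.0), method="SLSQP",
               bounds=bounds, constraints=constraints)
f_opt = res.x                          # optimized tip-point forces
```

When no inequality is active, the minimizer coincides with the minimum-norm (pseudoinverse) solution of the balance equations; the bounds matter once a foot would need to pull.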
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
A Facile and Eco-friendly Route to Fabricate Poly(Lactic Acid) Scaffolds with Graded Pore Size.
Scaffaro, Roberto; Lopresti, Francesco; Botta, Luigi; Maio, Andrea; Sutera, Fiorenza; Mistretta, Maria Chiara; La Mantia, Francesco Paolo
2016-10-17
Over recent years, functionally graded scaffolds (FGS) have gained a crucial role in the manufacturing of devices for tissue engineering. The importance of this new field of biomaterials research is due to the necessity of developing implants capable of mimicking the complex functionality of the various tissues, including a continuous change from one structure or composition to another. In this context, one topic of main interest concerns the design of appropriate scaffolds for the bone-cartilage interface tissue. In this study, three-layered scaffolds with graded pore size were achieved by melt mixing poly(lactic acid) (PLA), sodium chloride (NaCl) and polyethylene glycol (PEG). Pore size distributions were controlled by NaCl granulometry and PEG solvation. The scaffolds were characterized from a morphological and mechanical point of view. A correlation between the preparation method, the pore architecture and the compressive mechanical behavior was found. The interface adhesion strength was quantitatively evaluated by using a custom-designed interfacial strength test. Furthermore, in order to imitate human physiology, mechanical tests were also performed in phosphate buffered saline (PBS) solution at 37 °C. The method herein presented provides high control of porosity, pore size distribution and mechanical performance, thus offering the possibility to fabricate three-layered scaffolds with tailored properties by following a simple and eco-friendly route.
Resolution of Transverse Electron Beam Measurements using Optical Transition Radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ischebeck, Rasmus; Decker, Franz-Josef; Hogan, Mark
2005-06-22
In the plasma wakefield acceleration experiment E-167, optical transition radiation is used to measure the transverse profile of the electron bunches before and after the plasma acceleration. The distribution of the electric field from a single electron does not give a point-like distribution on the detector, but has a certain extension. Additionally, the resolution of the imaging system is affected by aberrations. The transverse profile of the bunch is thus convolved with a point spread function (PSF). Algorithms that deconvolve the image can help to improve the resolution. Imaged test patterns are used to determine the modulation transfer function of the lens. From this, the PSF can be reconstructed. The Lucy-Richardson algorithm is used to deconvolve this PSF from test images.
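As a sketch of the deconvolution step, the following minimal 1-D Lucy-Richardson loop restores a blurred two-spike "beam profile". This is illustrative only; the experiment applies the algorithm to 2-D OTR images with a PSF reconstructed from the measured modulation transfer function, and the spike positions and PSF width below are arbitrary choices.

```python
import numpy as np

# Minimal 1-D Lucy-Richardson deconvolution (illustrative sketch):
# iteratively refine an estimate of the true profile given the observed
# (blurred) profile and a known point spread function.
def richardson_lucy(observed, psf, n_iter=200):
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a hypothetical two-spike "beam profile" with a Gaussian PSF, then restore.
truth = np.zeros(64)
truth[20] = 1.0
truth[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf)
```

On noiseless data the iteration sharpens the blurred peaks back toward the original spikes; with real, noisy images the iteration count must be limited to avoid noise amplification.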
Pion and kaon valence-quark parton quasidistributions
NASA Astrophysics Data System (ADS)
Xu, Shu-Sheng; Chang, Lei; Roberts, Craig D.; Zong, Hong-Shi
2018-05-01
Algebraic Ansätze for the Poincaré-covariant Bethe-Salpeter wave functions of the pion and kaon are used to calculate their light-front wave functions, parton distribution amplitudes, parton quasidistribution amplitudes, valence parton distribution functions, and parton quasidistribution functions (PqDFs). The light-front wave functions are broad, concave functions, and the scale of flavor-symmetry violation in the kaon is roughly 15%, being set by the ratio of emergent masses in the s- and u-quark sectors. Parton quasidistribution amplitudes computed with longitudinal momentum Pz = 1.75 GeV provide a semiquantitatively accurate representation of the objective parton distribution amplitude, but even with Pz = 3 GeV, they cannot provide information about this amplitude's endpoint behavior. On the valence-quark domain, similar outcomes characterize PqDFs. In this connection, however, the ratio of kaon-to-pion u-quark PqDFs is found to provide a good approximation to the true parton distribution function ratio on 0.4 ≲ x ≲ 0.8, suggesting that with existing resources, computations of ratios of parton quasidistributions can yield results that support empirical comparison.
NASA Astrophysics Data System (ADS)
Garcia-Comas, C.; Chiba, S.; Sugisaki, H.; Hashioka, T.; Smith, S. L.
2016-02-01
Understanding how species coexist in rich communities and the role of biodiversity in ecosystem functioning is a long-standing challenge in ecology. Comparing functional diversity to species diversity may shed light on these questions. Here, we analyze copepod species data from the ODATE collection: 3142 samples collected over a period of 40 years within a 10° × 10° area of the Oyashio-Kuroshio Transition System, east of Japan (western North Pacific). The area hosts species characteristic of subarctic and subtropical communities. 163 copepod species were classified into five categorical functional traits (i.e., size, food, reproduction, thermal affinity and coastal-offshore habitat), following online databases and local taxonomic keys. We observe generally opposite hump-shaped relationships of species evenness (lower at mid-point) and functional diversity (Rao's Q; higher at mid-point) with species richness. Subtropical Kuroshio communities tend to be richer with higher species evenness, and yet subarctic and transition waters tend to host communities of higher functional diversity. The distribution of trait values within each functional trait was further examined in relation to the species abundance distribution (SAD). In subtropical communities, the distribution of trait values in the species ranking is homogeneous, mirroring the frequency of those trait values in the entire community. In contrast, in subarctic communities the distribution of trait values differs along the species rank, with dominant species having favorable trait values more often than expected by chance (i.e., based on the overall frequency of that trait value in the entire community). Our results suggest that subtropical communities may be niche-saturated towards the most adapted trait values, so that merely having the most adapted trait value confers no strong competitive advantage to a species.
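Rao's Q, the functional-diversity measure used above, can be computed directly from relative abundances and a matrix of pairwise trait distances; a minimal sketch with a hypothetical two-species community (not the paper's copepod data):

```python
import numpy as np

# Rao's quadratic entropy: the expected trait distance between two
# individuals drawn at random from the community.
def rao_q(abundances, trait_distances):
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()                       # relative abundances
    d = np.asarray(trait_distances, dtype=float)
    return float(p @ d @ p)               # sum_ij p_i * p_j * d_ij

# Hypothetical example: two equally abundant species at trait distance 1
# give Q = 2 * 0.5 * 0.5 * 1 = 0.5; skewing abundances lowers Q.
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
```

This makes explicit why evenness and trait dissimilarity both feed into Q, so it can move oppositely to species evenness as described in the abstract.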
ACOSS Eleven (Active Control of Space Structures). Volume 1
1983-12-01
Contents fragments: Influence Function; 3.4 Mirror Deformations; 3.5 Selection of Point Objects. Excerpted text: the influence function for edge actuators may differ from that for interior actuators. Each of the three mirrors has 37 actuators distributed on an equilateral triangular lattice, as shown in Figure 3-3.
Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping
2016-01-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, measuring the total absolute blood flow velocity (BFV) of major cerebral arteries remains difficult because it depends on vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field of view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach achieves automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement. PMID:26977365
Reilly, Jamie; Garcia, Amanda; Binney, Richard J.
2016-01-01
Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210
NASA Astrophysics Data System (ADS)
Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.
2018-04-01
The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
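The fitted shape can be written down as a broken power law using the indices and break flux quoted above. The sketch below only illustrates that functional form with an arbitrary normalization; it is not the catalog analysis or the efficiency-correction pipeline.

```python
import numpy as np

# Broken power-law differential source-count distribution dN/dS,
# continuous at the break flux S_b. Indices and break are taken from the
# abstract; the normalization is an arbitrary placeholder.
def dnds(S, S_b=3.5e-11, g_above=2.09, g_below=1.07, norm=1.0):
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    norm * (S / S_b) ** (-g_above),
                    norm * (S / S_b) ** (-g_below))
```

The hard low-flux index (≈1.07) is what makes the integral of S·dN/dS converge, so the blazar contribution to the background can be quoted as a finite fraction.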
Shin, Hyun Jin; Lee, Shin-Hyo; Shin, Kang-Jae; Koh, Ki-Seok; Song, Wu-Chul
2018-06-01
To elucidate the intramuscular distribution and branching patterns of the abducens nerve in the lateral rectus (LR) muscle, so as to provide anatomical confirmation of the presence of compartmentalization, including for use in clinical applications such as botulinum toxin injections. Thirty whole-mount human cadaver specimens were dissected and then Sihler's stain was applied. The basic dimensions of the LR and its intramuscular nerve distribution were investigated. The distances from the muscle insertion to the point at which the abducens nerve enters the LR and to the terminal nerve plexus were also measured. The LR was 46.0 mm long. The abducens nerve enters the muscle on the posterior one-third of the LR and then typically divides into a few branches (1.8 on average). This supports a segregated abducens nerve selectively innervating compartments of the LR. The intramuscular nerve distribution showed a Y-shaped ramification with root-like arborization. The intramuscular nerve course finished around the middle of the LR (24.8 mm posterior to the insertion point) to form the terminal nerve plexus. This region should be considered the optimal target site for botulinum toxin injections. We have also identified the presence of an overlapping zone and communicating nerve branches between neighboring LR compartments. Sihler's staining is a useful technique for visualizing the entire nerve network of the LR. Improving knowledge of the nerve distribution patterns is important not only for researchers but also for clinicians, in order to understand the functions of the LR and the diverse pathophysiology of strabismus.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: (a) dimensional analysis, and (b) a pulse-based stochastic model for simulating synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the establishment of design criteria based on large-scale geologic maps.
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
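The paper's general two-gamma construction is not reproduced here, but a closely related, well-known special case illustrates how gamma variables generate q-exponential-type tails: an exponential variable whose rate is itself gamma-distributed is marginally Lomax (Pareto type II), whose density ∝ (1 + x/β)^−(α+1) has the q-exponential form with q = (α + 2)/(α + 1). The parameters below are arbitrary choices for the demonstration.

```python
import numpy as np

# Gamma mixture of exponentials (illustrative special case): if the rate
# lam ~ Gamma(alpha, rate beta) and x | lam ~ Exp(lam), then x is
# Lomax(alpha, beta), a q-exponential-type variable with a power-law tail.
rng = np.random.default_rng(0)
alpha, beta = 3.0, 1.0
lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=200_000)
x = rng.exponential(scale=1.0 / lam)      # exponential with a random rate

# Lomax(alpha, beta) has mean beta / (alpha - 1) = 0.5 for these values.
```

The heavy tail arises purely from the mixing: each conditional sample is a light-tailed exponential, but averaging over gamma-distributed rates produces the power-law decay.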
NASA Technical Reports Server (NTRS)
Genovese, Christopher R.; Stark, Philip B.; Thompson, Michael J.
1995-01-01
Observed solar p-mode frequency splittings can be used to estimate angular velocity as a function of position in the solar interior. Formal uncertainties of such estimates depend on the method of estimation (e.g., least-squares), the distribution of errors in the observations, and the parameterization imposed on the angular velocity. We obtain lower bounds on the uncertainties that do not depend on the method of estimation; the bounds depend on an assumed parameterization, but the fact that they are lower bounds for the 'true' uncertainty does not. Ninety-five percent confidence intervals for estimates of the angular velocity from 1986 Big Bear Solar Observatory (BBSO) data, based on a 3659 element tensor-product cubic-spline parameterization, are everywhere wider than 120 nHz, and exceed 60,000 nHz near the core. When compared with estimates of the solar rotation, these bounds reveal that useful inferences based on pointwise estimates of the angular velocity using 1986 BBSO splitting data are not feasible over most of the Sun's volume. The discouraging size of the uncertainties is due principally to the fact that helioseismic measurements are insensitive to changes in the angular velocity at individual points, so estimates of point values based on splittings are extremely uncertain. Functionals that measure distributed 'smooth' properties are, in general, better constrained than estimates of the rotation at a point. For example, the uncertainties in estimated differences of average rotation between adjacent blocks of about 0.001 solar volumes across the base of the convective zone are much smaller, and one of several estimated differences we compute appears significant at the 95% level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory—e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states—and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
New Antenna Deployment, Pointing and Supporting Mechanism
NASA Technical Reports Server (NTRS)
Costabile, V.; Lumaca, F.; Marsili, P.; Noni, G.; Portelli, C.
1996-01-01
On ITALSAT Flight 2, the Italian telecommunications satellite, the two L-Ka antennas (Tx and Rx) use two large deployable reflectors (2000-mm diameter), whose deployment and fine pointing functions are accomplished by means of an innovative mechanism concept. The Antenna Deployment & Pointing Mechanism and Supporting Structure (ADPMSS) is based on a new configuration solution, in which the reflector and mechanisms are conceived as an integrated, self-contained assembly. This approach differs from the traditional configuration solution, in which a rigid arm is used to deploy and then support the reflector in the operating position, and an Antenna Pointing Mechanism (APM) is normally interposed between the reflector and the arm for steering operation. The main characteristics of the ADPMSS are: combined implementation of deployment, pointing, and reflector support; optimum integration of active components and interface matching with the satellite platform; structural link distribution to avoid hyperstatic connections; very light weight; and high performance in terms of deployment torque margin and pointing range/accuracy. After having successfully been subjected to all component-level qualification and system-level acceptance tests, two flight ADPMSS mechanisms (one for each antenna) are now integrated on ITALSAT F2 and are ready for launch. This paper deals with the design concept, development, and testing program performed to qualify the ADPMSS mechanism.
Discrete geometric analysis of message passing algorithm on graphs
NASA Astrophysics Data System (ADS)
Watanabe, Yusuke
2010-04-01
We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error-correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, exact computation is generally intractable, because its cost grows exponentially with the size of the graph. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e., has no cycles, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and may exhibit oscillatory and non-convergent behavior. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of these techniques to several problems, including the (non-)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
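The exact-on-trees versus approximate-on-cycles behavior can be demonstrated on the smallest loopy example: a binary pairwise model on a 3-cycle. This is an illustrative sketch, not the thesis's code; the coupling strength and the unary field on one node are arbitrary choices that break the symmetry so the discrepancy is visible.

```python
import itertools
import numpy as np

# Binary pairwise model on a 3-cycle: p(x) ∝ Π_i φ_i(x_i) Π_(i,j) ψ(x_i, x_j).
# Sum-product LBP is exact on trees; on this cycle its marginals are only
# approximate, which we check against brute-force enumeration.
edges = [(0, 1), (1, 2), (2, 0)]
psi = np.exp(0.4 * np.array([[1.0, -1.0], [-1.0, 1.0]]))  # pairwise factor
phi = np.array([[2.0, 1.0], [1.0, 1.0], [1.0, 1.0]])      # unary factors

def exact_marginals():
    p = np.zeros((3, 2))
    for x in itertools.product((0, 1), repeat=3):
        w = np.prod([phi[i, x[i]] for i in range(3)])
        w *= np.prod([psi[x[i], x[j]] for i, j in edges])
        for i in range(3):
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def lbp_marginals(n_iter=100):
    directed = edges + [(j, i) for i, j in edges]
    msgs = {e: np.ones(2) for e in directed}
    for _ in range(n_iter):
        new = {}
        for i, j in directed:
            incoming = np.ones(2)          # messages into i, excluding j
            for k, l in directed:
                if l == i and k != j:
                    incoming *= msgs[(k, l)]
            m = psi.T @ (phi[i] * incoming)  # sum over x_i
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = np.zeros((3, 2))
    for i in range(3):
        b = phi[i].copy()
        for k, l in directed:
            if l == i:
                b *= msgs[(k, l)]
        beliefs[i] = b / b.sum()
    return beliefs
```

With this moderate coupling the messages converge, and the LBP beliefs land close to, but measurably off, the exact marginals, which is precisely the cycle-induced error the thesis analyses via the graph zeta function.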
Morini, F; Knippenberg, S; Deleuze, M S; Hajgató, B
2010-04-01
The main purpose of the present work is to simulate from many-body quantum mechanical calculations the results of experimental studies of the valence electronic structure of n-hexane employing photoelectron spectroscopy (PES) and electron momentum spectroscopy (EMS). This study is based on calculations of the valence ionization spectra and spherically averaged (e, 2e) electron momentum distributions for each known conformer by means of one-particle Green's function [1p-GF] theory along with the third-order algebraic diagrammatic construction [ADC(3)] scheme and using Kohn-Sham orbitals derived from DFT calculations employing the Becke three-parameter Lee-Yang-Parr (B3LYP) functional as approximations to Dyson orbitals. A first thermostatistical analysis of these spectra and momentum distributions employs recent estimations at the W1h level of conformational energy differences, by Gruzman et al. [J. Phys. Chem. A 2009, 113, 11974], and of correspondingly obtained conformer weights using MP2 geometrical, vibrational, and rotational data in thermostatistical calculations of partition functions beyond the level of the rigid rotor-harmonic oscillator approximation. Comparison is made with the results of a focal point analysis of these energy differences using this time B3LYP geometries and the corresponding vibrational and rotational partition functions in the thermostatistical analysis. Large differences are observed between these two thermochemical models, especially because of strong variations in the contributions of hindered rotations to relative entropies. In contrast, the individual ionization spectra or momentum profiles are almost insensitive to the employed geometry. This study confirms the great sensitivity of valence ionization bands and (e, 2e) momentum distributions to the molecular conformation and sheds further light on spectral fingerprints of through-space methylenic hyperconjugation, in both PES and EMS experiments.
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect the sample values locally and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between the normalized correlation energies of the simulated and sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binary (i.e., ±1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, and in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
On dermatomes, meridians and points: results of a quasiexperimental study.
Sánchez-Araujo, Max; Luckert-Barela, Ana J; Sánchez, Nathalia; Torres, Juan; Conde, Jesus Eloy
2014-02-01
Traditional Chinese medicine (TCM) meridians and points run vertically, reflecting their function in the Zhang-Fu system (meridian pattern). However, the trunk's spinal nerves show a transverse orientation, or a 'horizontal pattern'. The aim of the present work was to evaluate, via a cognitive quasiexperiment, whether the clinical indications of the points on the trunk are associated with their meridian function or with their innervation and visceral-somatic connection. The points in each dermatome of the trunk were considered crosswise, regardless of their meridians. The clinical indications for each point were differentiated into two mutually exclusive categories: (a) vertical distribution effect (VDE), or 'meridian pattern', when the indications differed markedly from those of the other points on the dermatome; and (b) transverse distribution effect (TDE), or 'horizontal pattern', represented by mainly local or segmental indications, except for Shu-Mu points. After observing that the proportions between the two categories often exceeded 60% in pilot samples, 60% was adopted as the reference value. A total of 22 dermatomes accommodated 148 points with 809 indications, of which 189 indications (23.4%) exhibited VDE features, whereas 620 (76.6%) exhibited TDE features. A TDE/VDE ratio of 3:1 implies that the clinical indications for the points of any dermatome on the torso are similar, regardless of their meridians, and suggests that most of the indications for trunk points involve a 'horizontal pattern' due to their neurobiological nature. These findings may help in understanding acupuncture's neurobiology and clarify some confusing results of clinical research, for example, by excluding sham acupuncture as an inert intervention in future clinical trials.
The role of root distribution in eco-hydrological modeling in semi-arid regions
NASA Astrophysics Data System (ADS)
Sivandran, G.; Bras, R. L.
2010-12-01
In semi-arid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Niche separation, through rooting strategies, is one manner in which different species coexist. At present, land surface models prescribe rooting profiles as a function of only the plant functional type of interest, with no consideration for the soil texture or rainfall regime of the region being modeled. These models do not incorporate the ability of vegetation to dynamically alter its rooting strategy in response to transient changes in environmental forcings, and therefore tend to underestimate the resilience of many of these ecosystems. A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two vertical root distribution schemes: (i) static, a temporally invariant root distribution; and (ii) dynamic, a temporally variable allocation of assimilated carbon at any depth within the root zone in order to minimize the soil moisture-induced stress on the vegetation. The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semi-arid Walnut Gulch Experimental Watershed in Arizona. For the static root distribution scheme, a series of simulations were carried out varying the shape of the rooting profile. The optimal distribution for the simulation was defined as the root distribution with the maximum mean transpiration over a 200-year period. This optimal distribution was determined for five soil textures and two plant functional types, and the results varied from case to case. The dynamic rooting simulations allow vegetation the freedom to adjust the allocation of assimilated carbon to different rooting depths in response to changes in stress caused by the redistribution and uptake of soil moisture. The results obtained from these experiments elucidate the strong link between plant functional type, soil texture and climate, and highlight the potential errors in the modeling of hydrologic fluxes that arise from imposing a static root profile.
Effective structural descriptors for natural and engineered radioactive waste confinement barriers
NASA Astrophysics Data System (ADS)
Lemmens, Laurent; Rogiers, Bart; De Craen, Mieke; Laloy, Eric; Jacques, Diederik; Huysmans, Marijke; Swennen, Rudy; Urai, Janos L.; Desbois, Guillaume
2017-04-01
The microstructure of a radioactive waste confinement barrier strongly influences its flow and transport properties. Numerical flow and transport simulations for these porous media at the pore scale therefore require input data that describe the microstructure as accurately as possible. To date, no imaging method can resolve all heterogeneities within important radioactive waste confinement barrier materials as hardened cement paste and natural clays at the micro scale (nm-cm). Therefore, it is necessary to merge information from different 2D and 3D imaging methods using porous media reconstruction techniques. To qualitatively compare the results of different reconstruction techniques, visual inspection might suffice. To quantitatively compare training-image based algorithms, Tan et al. (2014) proposed an algorithm using an analysis of distance. However, the ranking of the algorithm depends on the choice of the structural descriptor, in their case multiple-point or cluster-based histograms. We present here preliminary work in which we will review different structural descriptors and test their effectiveness, for capturing the main structural characteristics of radioactive waste confinement barrier materials, to determine the descriptors to use in the analysis of distance. The investigated descriptors are particle size distributions, surface area distributions, two point probability functions, multiple point histograms, linear functions and two point cluster functions. The descriptor testing consists of stochastically generating realizations from a reference image using the simulated annealing optimization procedure introduced by Karsanina et al. (2015). This procedure basically minimizes the differences between pre-specified descriptor values associated with the training image and the image being produced. The most efficient descriptor set can therefore be identified by comparing the image generation quality among the tested descriptor combinations. 
The assessment of the quality of the simulations will be made by combining all considered descriptors. Once the set of the most efficient descriptors is determined, they can be used in the analysis of distance, to rank different reconstruction algorithms in a more objective way in future work. Karsanina MV, Gerke KM, Skvortsova EB, Mallants D (2015) Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure. PLoS ONE 10(5): e0126515. doi:10.1371/journal.pone.0126515 Tan, Xiaojin, Pejman Tahmasebi, and Jef Caers. "Comparing training-image based algorithms using an analysis of distance." Mathematical Geosciences 46.2 (2014): 149-169.
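The descriptor-matching annealing loop described in this record can be illustrated generically. The sketch below is not the Karsanina et al. implementation; it uses a single directional two-point probability function as the descriptor and pore/solid pixel swaps as the proposal move, purely to show the structure of the method:

```python
import numpy as np

def s2(img, max_lag):
    """Directional two-point probability S2(r): the chance that two
    pixels r apart along the x-axis are both pore (value 1).
    S2(0) equals the porosity."""
    vals = [np.mean(img * img)]
    for r in range(1, max_lag):
        vals.append(np.mean(img[:, :-r] * img[:, r:]))
    return np.array(vals)

def reconstruct(target, steps=2000, t0=1e-4, seed=0):
    """Simulated-annealing reconstruction: start from a random image with
    the target's porosity and swap pore/solid pixels, accepting moves
    that reduce the descriptor mismatch (or uphill moves with Boltzmann
    probability).  Returns the best image found and its error."""
    rng = np.random.default_rng(seed)
    ref = s2(target, 8)
    cur = rng.permuted(target.ravel()).reshape(target.shape)
    err = float(np.sum((s2(cur, 8) - ref) ** 2))
    best, best_err = cur.copy(), err
    for k in range(steps):
        temp = max(t0 * (1.0 - k / steps), 1e-12)  # linear cooling
        pore = np.argwhere(cur == 1)
        solid = np.argwhere(cur == 0)
        i = tuple(pore[rng.integers(len(pore))])
        j = tuple(solid[rng.integers(len(solid))])
        cur[i], cur[j] = 0, 1  # propose swapping one pore and one solid pixel
        new_err = float(np.sum((s2(cur, 8) - ref) ** 2))
        if new_err <= err or rng.random() < np.exp((err - new_err) / temp):
            err = new_err
            if err < best_err:
                best, best_err = cur.copy(), err
        else:
            cur[i], cur[j] = 1, 0  # revert the swap
    return best, best_err
```

In practice several descriptors (linear, cluster, multiple-point) are combined into one objective; testing which combination reconstructs best is exactly what this abstract proposes.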
Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing
NASA Astrophysics Data System (ADS)
Meng, X.
2012-07-01
Field Ground Truthing Data Collector is one of the four key components of the NASA-funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit offers comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., holding video-conference calls in the field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging in discussions with other learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collection tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained in the presentation. A pilot case study will also be demonstrated.
Conversion of woodlands changes soil-related ecosystem services in sub-Saharan Africa
NASA Astrophysics Data System (ADS)
Groengroeft, Alexander; Landschreiber, Lars; Luther-Mosebach, Jona; Masamba, Wellington; Zimmermann, Ibo; Eschenbach, Annette
2015-04-01
In remote areas of sub-Saharan Africa, growing population, changes in consumption patterns and increasing global influences are leading to strong pressure on the land resources. Smallholders convert woodlands by fire, grazing and clearing in different intensities, thus changing soil properties and their ecosystem functioning. As the extraction of ecosystem services forms the basis of local wellbeing for many communities, the role of soils in providing ecosystem services is of high importance. Since 2010, "The Future Okavango" project investigates the quantification of ecosystem functions and services at four core research sites along the Okavango river basin (Angola, Namibia, Botswana, see http://www.future-okavango.org/). These research sites have an extent of 100 km² each. Within our subproject the soil functions underlying ecosystem services are studied: the amount and spatial variation of soil nutrient reserves in woodland and their changes by land use activities, the water storage function as a basis for plant growth and its effect on groundwater recharge, and the carbon storage function. The scientific framework consists of four major parts including soil survey and mapping, lab analysis, field measurements and modeling approaches on different scales. A detailed soil survey leads to a measure of the spatial distribution, extent and heterogeneity of soil types for each research site. For generalization purposes, geomorphological and pedological characteristics are merged to derive landscape units. These landscape units have been overlaid by recent land use types to stratify the research site for subsequent soil sampling. On the basis of field and laboratory analysis, spatial distribution of soil properties as well as boundaries between neighboring landscape units are derived. 
The parameters analysed include grain size distribution, organic carbon content, saturated and unsaturated hydraulic conductivity, as well as pore space distribution. At nine selected sites, soil water contents and pressure heads are logged throughout the year with a 12 hour resolution at depths of 10 to 160 cm. This monitoring gives information about soil water dynamics at point scale, and the database is used to evaluate model outputs of soil water balances later on. To derive point scale soil water balances for each landscape unit, the one-dimensional and physically based model SWAP 3.2 is applied. The presentation will demonstrate the conceptual framework and exemplary results, and will discuss whether the ecosystem service approach can help to avoid future land degradation. Keywords: Okavango catchment, soil functions, conceptual approach
Chang, Qin; Brodsky, Stanley J.; Li, Xin-Qiang
2017-05-30
In this article the dynamical spin effects of the light-front holographic wave functions for light pseudoscalar mesons are studied. These improved wave functions are then confronted with a number of hadronic observables: the decay constants of the π and K mesons, their ξ-moments, the pion-to-photon transition form factor, and the pure annihilation B̄_s → π⁺π⁻ and B̄_d → K⁺K⁻ decays. Taking f_π, f_K, and their ratio f_K/f_π as constraints, we perform a χ² analysis for the holographic parameters, including the mass scale parameter √λ and the effective quark masses, and find that the fitted results are quite consistent with the ones obtained from the light-quark hadronic Regge trajectories. In addition, we also show that the end-point divergence appearing in the pure annihilation B̄_s → π⁺π⁻ and B̄_d → K⁺K⁻ decays can be controlled well by using these improved light-front holographic distribution amplitudes.
NASA Astrophysics Data System (ADS)
Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.
2018-06-01
In this paper, we analyze the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay system in which the FSO link is impaired by atmospheric turbulence-induced fading modeled by the α-μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also considers the pointing error effect on the FSO link. A novel and accurate mathematical expression for the probability density function of an FSO link experiencing α-μ distributed atmospheric turbulence in the presence of pointing errors is derived. Further, we present analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function. In addition, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme is derived in terms of the bivariate Fox H-function. Atmospheric turbulence, misalignment errors and various binary modulation schemes for intensity modulation on the optical wireless link are considered in the results. Finally, we analyze each of the three performance metrics at high SNR in order to express them in terms of elementary functions; the analytical results are supported by computer-based simulations.
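For reference, the α-μ fading envelope density used in such analyses is, in Yacoub's standard parameterization (an assumption, since the abstract does not spell it out; r̂ denotes the α-root mean value of the envelope R):

```latex
f_R(r) = \frac{\alpha\,\mu^{\mu}\, r^{\alpha\mu-1}}{\hat{r}^{\alpha\mu}\,\Gamma(\mu)}
         \exp\!\left(-\mu\,\frac{r^{\alpha}}{\hat{r}^{\alpha}}\right),
\qquad r \ge 0, \qquad \hat{r} = \sqrt[\alpha]{\mathbb{E}\!\left[R^{\alpha}\right]}.
```

Setting α = 2 recovers the Nakagami-m density (with μ playing the role of m), which is why the α-μ model is attractive as a unifying turbulence/fading description.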
NASA Astrophysics Data System (ADS)
Zolotaryuk, A. V.
2017-06-01
Several families of one-point interactions are derived from the system consisting of two and three δ-potentials which are regularized by piecewise constant functions. In physical terms such an approximating system represents two or three extremely thin layers separated by some distance. The two-scale squeezing of this heterostructure to one point, as both the width of the δ-approximating functions and the distance between them simultaneously tend to zero, is studied using a power parameterization in a squeezing parameter ε → 0: the intensity of each δ-potential is c_j = a_j ε^(1−μ), a_j ∈ ℝ, j = 1, 2, 3, the width of each layer is l = ε, and the distance between the layers is r = cε^τ, c > 0. It is shown that at some values of the intensities a₁, a₂ and a₃, the transmission across the limit point potentials is non-zero, whereas outside these (resonance) values the one-point interactions are opaque, splitting the system at the point of singularity into two independent subsystems. Within the interval 1 < μ < 2, the resonance sets consist of two curves on the (a₁, a₂)-plane and three surfaces in the (a₁, a₂, a₃)-space. As the parameter μ approaches the value μ = 2, three types of splitting of the one-point interactions into countable families are observed.
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function, indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
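The paper's PSF-extraction procedure is specific to brightfield Z-stacks, but the role a measured PSF plays in non-blind deconvolution can be illustrated with a generic 1-D Richardson-Lucy iteration. This is a standard textbook algorithm, not the author's exact scheme; the point it illustrates is that a better-matched PSF lets the iteration run longer before noise amplification dominates:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Non-blind Richardson-Lucy deconvolution (1-D, nonnegative data).
    Each pass re-blurs the current estimate with the PSF, compares it
    to the observation, and applies a multiplicative correction."""
    psf = psf / psf.sum()          # normalized forward kernel
    psf_flip = psf[::-1]           # adjoint (mirrored) kernel
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

The multiplicative update keeps the estimate nonnegative, which suits intensity data; with a mismatched PSF the error function reaches its nadir after fewer iterations, as the abstract observes.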
A new strategy for array optimization applied to Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Faria, C.; Stephany, S.; Sawant, H. S.
Radio interferometric arrays measure the Fourier transform of the sky brightness distribution in a finite set of points that are determined by the cross-correlation of different pairs of antennas of the array. The sky brightness distribution is reconstructed by the inverse Fourier transform of the sampled visibilities. The quality of the reconstructed images strongly depends on the array configuration, since it determines the sampling function and therefore the points in the Fourier plane. This work proposes a new optimization strategy for the array configuration that is based on the entropy of the distribution of the sample points in the Fourier plane. A stochastic optimizer, the Ant Colony Optimization, employs the entropy of the point distribution in the Fourier plane to iteratively refine the candidate solutions. The proposed strategy was developed for the Brazilian Decimetric Array (BDA), a radio interferometric array that is currently being developed for solar observations at the Brazilian Institute for Space Research. Configuration results corresponding to the Fourier plane coverage, synthesized beam and side lobe levels are shown for an optimized BDA configuration obtained with the proposed strategy, and compared to the results for a standard T array configuration that was originally proposed.
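A minimal form of the entropy objective described above bins the sampled (u, v) points and takes the Shannon entropy of the bin occupancy; the binning and names below are illustrative, not the BDA code:

```python
import numpy as np

def coverage_entropy(u, v, bins=16):
    """Shannon entropy of the (u, v) sampling-point distribution.
    Higher entropy means more uniform Fourier-plane coverage."""
    hist, _, _ = np.histogram2d(u, v, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                     # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))
```

An optimizer (the paper uses Ant Colony Optimization) then moves antenna positions to maximize this entropy, favoring configurations whose baselines spread the sample points uniformly over the Fourier plane.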
Verification of floating-point software
NASA Technical Reports Server (NTRS)
Hoover, Doug N.
1990-01-01
Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations are perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
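The bisection example itself is not reproduced in the abstract; a generic version (illustrative, not ORA's verified Ariel program) looks like this, and shows why "asymptotic correctness" is the right notion: in floating point the returned value is only approximately a zero, to within the tolerance and rounding of the arithmetic:

```python
def bisect(f, lo, hi, tol=1e-12, max_iter=200):
    """Find x in [lo, hi] with f(x) ~ 0, assuming f(lo) and f(hi)
    have opposite signs.  The interval halves each iteration, so the
    answer is correct only up to tol plus round-off."""
    flo = f(lo)
    if flo == 0.0:
        return lo
    mid = lo
    for _ in range(max_iter):
        mid = lo + (hi - lo) / 2.0   # avoids overflow of (lo + hi) / 2
        fmid = f(mid)
        if fmid == 0.0 or (hi - lo) / 2.0 < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid      # root lies in the upper half
        else:
            hi = mid                 # root lies in the lower half
    return mid
```

Note the midpoint formula: `lo + (hi - lo) / 2.0` is the floating-point-safe form, exactly the kind of detail a verification effort must decide whether to model.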
NASA Astrophysics Data System (ADS)
Malik, Zvi; Dishi, M.
1995-05-01
The subcellular localization of endogenous protoporphyrin (endo-PP) during photosensitization in B-16 melanoma cells was analyzed by a novel spectral imaging system, the SpectraCube 1000. The melanoma cells were incubated with 5-aminolevulinic acid (ALA), and then the fluorescence of endo-PP was recorded in individual living cells by three modes: conventional fluorescence imaging, multipixel point-by-point fluorescence spectroscopy, and image processing, by operating a function of spectral similarity mapping and reconstructing new images derived from spectral information. The fluorescence image of ALA-treated cells revealed vesicular distribution of endo-PP all over the cytosol, with mitochondrial, lysosomal, as well as endoplasmic reticulum cisternal accumulation. Two main spectral fluorescence peaks were demonstrated at 635 and 705 nm, with intensities that differed from one subcellular site to another. Photoirradiation of the cells induced point-specific subcellular fluorescence spectrum changes and demonstrated photoproduct formation. Spectral image reconstruction revealed the local distribution of a chosen spectrum in the photosensitized cells. On the other hand, B-16 cells treated with exogenous protoporphyrin (exo-PP) showed a dominant fluorescence peak at 670 nm and a minor peak at 630 nm. Fluorescence was localized at a perinuclear/Golgi region. Light exposure induced photobleaching and photoproduct spectral changes followed by relocalization. The new localization at subcellular compartments showed pH-dependent spectral shifts and photoproduct formation on a subcellular level.
Acupuncture for treating fibromyalgia
Deare, John C; Zheng, Zhen; Xue, Charlie CL; Liu, Jian Ping; Shang, Jingsheng; Scott, Sean W; Littlejohn, Geoff
2014-01-01
Background One in five fibromyalgia sufferers use acupuncture treatment within two years of diagnosis. Objectives To examine the benefits and safety of acupuncture treatment for fibromyalgia. Search methods We searched CENTRAL, PubMed, EMBASE, CINAHL, National Research Register, HSR Project and Current Contents, as well as the Chinese databases VIP and Wangfang to January 2012 with no language restrictions. Selection criteria Randomised and quasi-randomised studies evaluating any type of invasive acupuncture for fibromyalgia diagnosed according to the American College of Rheumatology (ACR) criteria, and reporting any main outcome: pain, physical function, fatigue, sleep, total well-being, stiffness and adverse events. Data collection and analysis Two author pairs selected trials, extracted data and assessed risk of bias. Treatment effects were reported as standardised mean differences (SMD) and 95% confidence intervals (CI) for continuous outcomes using different measurement tools (pain, physical function, fatigue, sleep, total well-being and stiffness) and risk ratio (RR) and 95% CI for dichotomous outcomes (adverse events). We pooled data using the random-effects model. Main results Nine trials (395 participants) were included. All studies except one were at low risk of selection bias; five were at risk of selective reporting bias (favouring either treatment group); two were subject to attrition bias (favouring acupuncture); three were subject to performance bias (favouring acupuncture) and one to detection bias (favouring acupuncture). Three studies utilised electro-acupuncture (EA) with the remainder using manual acupuncture (MA) without electrical stimulation. All studies used 'formula acupuncture' except for one, which used trigger points. Low quality evidence from one study (13 participants) showed EA improved symptoms with no adverse events at one month following treatment. 
Mean pain in the non-treatment control group was 70 points on a 100 point scale; EA reduced pain by a mean of 22 points (95% confidence interval (CI) 4 to 41), or 22% absolute improvement. Control group global well-being was 66.5 points on a 100 point scale; EA improved well-being by a mean of 15 points (95% CI 5 to 26 points). Control group stiffness was 4.8 points on a 0 to 10 point scale; EA reduced stiffness by a mean of 0.9 points (95% CI 0.1 to 2 points; absolute reduction 9%, 95% CI 4% to 16%). Fatigue was 4.5 points (10 point scale) without treatment; EA reduced fatigue by a mean of 1 point (95% CI 0.22 to 2 points), absolute reduction 11% (2% to 20%). There was no difference in sleep quality (MD 0.4 points, 95% CI −1 to 0.21 points, 10 point scale), and physical function was not reported. Moderate quality evidence from six studies (286 participants) indicated that acupuncture (EA or MA) was no better than sham acupuncture, except for less stiffness at one month. Subgroup analysis of two studies (104 participants) indicated benefits of EA. Mean pain was 70 points on a 0 to 100 point scale with sham treatment; EA reduced pain by 13% (5% to 22%); (SMD −0.63, 95% CI −1.02 to −0.23). Global well-being was 5.2 points on a 10 point scale with sham treatment; EA improved well-being: SMD 0.65, 95% CI 0.26 to 1.05; absolute improvement 11% (4% to 17%). EA improved sleep, from 3 points on a 0 to 10 point scale in the sham group: SMD 0.40 (95% CI 0.01 to 0.79); absolute improvement 8% (0.2% to 16%). Low-quality evidence from one study suggested that the MA group had poorer physical function: mean function in the sham group was 28 points (100 point scale); treatment worsened function by a mean of 6 points (95% CI −10.9 to −0.7). Low-quality evidence from three trials (289 participants) suggested no difference in adverse events between real (9%) and sham acupuncture (35%); RR 0.44 (95% CI 0.12 to 1.63). 
Moderate quality evidence from one study (58 participants) found that compared with standard therapy alone (antidepressants and exercise), adjunct acupuncture therapy reduced pain at one month after treatment: mean pain was 8 points on a 0 to 10 point scale in the standard therapy group; treatment reduced pain by 3 points (95% CI −3.9 to −2.1), an absolute reduction of 30% (21% to 39%). Two people treated with acupuncture reported adverse events; there were none in the control group (RR 3.57; 95% CI 0.18 to 71.21). Global well-being, sleep, fatigue and stiffness were not reported. Physical function data were not usable. Low quality evidence from one study (38 participants) showed a short-term benefit of acupuncture over antidepressants in pain relief: mean pain was 29 points (0 to 100 point scale) in the antidepressant group; acupuncture reduced pain by 17 points (95% CI −24.1 to −10.5). Other outcomes or adverse events were not reported. Moderate-quality evidence from one study (41 participants) indicated that deep needling with or without deqi did not differ in pain, fatigue, function or adverse events. Other outcomes were not reported. Four studies reported no differences between acupuncture and control or other treatments described at six to seven months follow-up. No serious adverse events were reported, but there were insufficient adverse events to be certain of the risks. Authors’ conclusions There is low to moderate-level evidence that compared with no treatment and standard therapy, acupuncture improves pain and stiffness in people with fibromyalgia. There is moderate-level evidence that the effect of acupuncture does not differ from sham acupuncture in reducing pain or fatigue, or improving sleep or global well-being. EA is probably better than MA for pain and stiffness reduction and improvement of global well-being, sleep and fatigue. The effect lasts up to one month, but is not maintained at six months follow-up. 
MA probably does not improve pain or physical functioning. Acupuncture appears safe. People with fibromyalgia may consider using EA alone or with exercise and medication. The small sample size, scarcity of studies for each comparison, and lack of an ideal sham acupuncture weaken the level of evidence and its clinical implications. Larger studies are warranted. PMID:23728665
Sekine, Ryojun; Aoki, Hiroyuki; Ito, Shinzaburo
2009-10-01
The chain end distribution of a block copolymer in a two-dimensional microphase-separated structure was studied by scanning near-field optical microscopy (SNOM). In the monolayer of poly(octadecyl methacrylate)-block-poly(isobutyl methacrylate) (PODMA-b-PiBMA), the free end of the PiBMA subchain was directly observed by SNOM, and the spatial distributions of the whole block and the chain end were examined and compared with the convolution of the point spread function of the microscope and the distribution function of the model structures. It was found that the chain end distribution of the block copolymer confined in two dimensions has a peak near the domain center, being concentrated in a narrower region as compared with three-dimensional systems.
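The comparison step described above, blurring a model end-distribution with the instrument's point spread function before matching it against the measured SNOM profile, can be sketched generically. A 1-D Gaussian PSF is assumed here purely for illustration; the paper's PSF and model distributions are not reproduced:

```python
import numpy as np

def observed_profile(model, psf):
    """What the microscope would record: the model distribution
    blurred by the instrument's point spread function."""
    psf = psf / psf.sum()                 # conserve total signal
    return np.convolve(model, psf, mode="same")
```

Model parameters (e.g. the width of the chain-end peak) can then be chosen by minimizing the squared difference between `observed_profile(model, psf)` and the measured profile.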
Role of length polydispersity in the phase behavior of freely rotating hard-rectangle fluids
NASA Astrophysics Data System (ADS)
Díaz-De Armas, Ariel; Martínez-Ratón, Yuri
2017-05-01
We use the density-functional formalism, in particular the scaled-particle theory, applied to a length-polydisperse hard-rectangle fluid to study its phase behavior as a function of the mean particle aspect ratio κ0 and polydispersity Δ0. The numerical solutions of the coexistence equations are calculated by transforming the original problem, with an infinite number of degrees of freedom, to a finite set of equations for the amplitudes of the Fourier expansion of the moments of the density profiles. We divide the study into two parts. The first one is devoted to the calculation of the phase diagrams in the packing fraction η0-κ0 plane for a fixed Δ0, selecting parent distribution functions with exponential (the Schulz distribution) or Gaussian decays. In the second part we study the phase behavior in the η0-Δ0 plane for fixed κ0 while varying Δ0. We characterize in detail the orientational ordering of particles and the fractionation of different species between the coexisting phases. We also study the character (second vs first order) of the isotropic-nematic phase transition as a function of polydispersity. We particularly focus on the stability of the tetratic phase as a function of κ0 and Δ0. The isotropic-nematic transition becomes strongly first order when polydispersity is increased: the coexistence gap widens and the location of the tricritical point moves to higher values of κ0, while the tetratic phase is slightly destabilized with respect to the nematic one. The results obtained here can be tested in experiments on shaken monolayers of granular rods.
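For concreteness, the Schulz (Schulz-Zimm) length distribution referred to above can be written, in one common normalization with mean length L̄ and width parameter z (the abstract does not spell out its parameterization, so this form is an assumption; the squared polydispersity is then 1/(z+1)):

```latex
f(L) = \frac{1}{\Gamma(z+1)}\left(\frac{z+1}{\bar{L}}\right)^{z+1}
       L^{z}\,\exp\!\left(-\frac{(z+1)\,L}{\bar{L}}\right),
\qquad \langle L \rangle = \bar{L}, \qquad
\frac{\langle L^2 \rangle - \bar{L}^2}{\bar{L}^2} = \frac{1}{z+1}.
```

This is a gamma density with shape z+1 and rate (z+1)/L̄, which is the "exponential decay" the abstract contrasts with the Gaussian parent distribution.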
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
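The dipole segment potential can be written in closed form, since a homogeneous rod admits the standard logarithmic potential in the distances to its endpoints. The sketch below is illustrative (the mass split between segment and tip masses, and their values, are free model parameters, not taken from the paper):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def dipole_segment_potential(p, m_seg, m_tip, half_len):
    """Gravitational potential of a dipole segment: a homogeneous rod
    of mass m_seg lying on the x-axis between -half_len and +half_len,
    plus a point mass m_tip at each extremity.  r1, r2 are the field
    point's distances to the two endpoints."""
    p = np.asarray(p, dtype=float)
    r1 = np.linalg.norm(p - np.array([-half_len, 0.0, 0.0]))
    r2 = np.linalg.norm(p - np.array([+half_len, 0.0, 0.0]))
    L = 2.0 * half_len
    # Standard closed-form potential of a homogeneous segment.
    u_rod = -(G * m_seg / L) * np.log((r1 + r2 + L) / (r1 + r2 - L))
    u_tips = -G * m_tip * (1.0 / r1 + 1.0 / r2)
    return u_rod + u_tips
```

Far from the body the expression tends to the point-mass potential of the total mass, which is the basic sanity check for any such simplified model; near the body the segment and tip terms reproduce the elongation that a single point mass cannot.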
Deducing Electron Properties from Hard X-Ray Observations
NASA Technical Reports Server (NTRS)
Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kasparova, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.;
2011-01-01
X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. This unprecedented quality of data allows for the first time inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Additional attempts to measure hard X-ray polarization were not sufficient to put constraints on the degree of anisotropy of electrons, but point to the importance of obtaining good quality polarization data in the future.
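Schematically, the convolution referred to above is (a sketch of the standard thin-target form, with F̄(E) the mean electron flux spectrum, n̄ the mean plasma density, V the source volume, R the Sun-observer distance and Q(ε, E) the bremsstrahlung cross-section; the notation is assumed, not quoted from this review):

```latex
I(\epsilon) \;=\; \frac{\bar{n}\,V}{4\pi R^{2}}
\int_{\epsilon}^{\infty} \bar{F}(E)\, Q(\epsilon, E)\, \mathrm{d}E .
```

Inferring F̄(E) from the observed I(ε) is the deconvolution problem the review discusses; it is ill-posed because Q smooths over a broad range of electron energies.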
Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure
Karsanina, Marina V.; Gerke, Kirill M.; Skvortsova, Elena B.; Mallants, Dirk
2015-01-01
Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as a part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. 
Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of spatial dynamics of soil microbial activity. PMID:26010779
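Of the three descriptors listed in this record, the two-point cluster function is the one sensitive to connectivity, which is why adding it is suggested for the dual-porosity soils. A minimal directional version (illustrative names; connected components found with SciPy's default 4-connectivity labeling, an assumption) can be written as:

```python
import numpy as np
from scipy import ndimage

def cluster_function(img, max_lag):
    """Two-point cluster function C2(r) along x: the probability that
    two pore pixels a distance r apart belong to the same connected
    cluster.  Unlike the plain two-point probability function S2,
    this drops pairs that sit in disconnected pores."""
    labels, _ = ndimage.label(img)       # label connected pore clusters
    vals = []
    for r in range(1, max_lag):
        a = labels[:, :-r]
        b = labels[:, r:]
        vals.append(np.mean((a == b) & (a > 0)))
    return np.array(vals)
```

Because C2 vanishes for pairs in disconnected pores, reconstructions constrained by it preserve pore connectivity much better than S2-only reconstructions, at the cost of a more expensive objective evaluation.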
Universal spatial correlation functions for describing and reconstructing soil microstructure.
Karsanina, Marina V; Gerke, Kirill M; Skvortsova, Elena B; Mallants, Dirk
2015-01-01
Structural features of porous materials such as soil define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm2) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as a part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. 
Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of spatial dynamics of soil microbial activity.
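As a concrete illustration of the first of these statistics, a directional two-point probability function on a binary pore/solid image can be computed by shifting the image against itself. The function below is a minimal sketch of that idea (our own, not the authors' code), with the x-axis direction chosen arbitrarily:

```python
import numpy as np

def two_point_probability_x(img, max_r):
    # S2(r) along x: probability that two pixels a distance r apart
    # (in pixels) both belong to the pore phase (value 1).
    img = np.asarray(img, dtype=float)
    s2 = np.empty(max_r + 1)
    s2[0] = img.mean()  # S2(0) equals the porosity
    for r in range(1, max_r + 1):
        s2[r] = (img[:, :-r] * img[:, r:]).mean()
    return s2
```

A simulated annealing reconstruction would then perturb pixels of a trial image to minimize the misfit between its S2 curve and the target image's curve.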
NASA Astrophysics Data System (ADS)
Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.
Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
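The ART step at the heart of this comparison is a Kaczmarz-style projection onto each ray's measurement hyperplane. The sketch below assumes a discretized system A x = b (rows are ray integrals, x the pixel concentrations) with a relaxation factor and a non-negativity clip; the parameter names are ours, not from the study:

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
    # Algebraic reconstruction technique (Kaczmarz iterations): each
    # ray-integral measurement b[i] = A[i] @ x projects the current
    # estimate onto its hyperplane, with under-relaxation.
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            norm2 = ai @ ai
            if norm2 > 0:
                x += relax * (b[i] - ai @ x) / norm2 * ai
    x[x < 0] = 0.0  # concentrations cannot be negative
    return x
```

SBFM replaces this pixel basis with a handful of bivariate Gaussians whose parameters are fit to the ray integrals by simulated annealing, which is what smooths out the artifacts ART leaves behind.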
Lando, Asiyanthi Tabran; Nakayama, Hirofumi; Shimaoka, Takayuki
2017-01-01
Methane from landfills contributes to global warming and can pose an explosion hazard. To minimize these effects, emissions must be monitored. This study proposed the application of a portable gas detector (PGD) in point and scanning measurements to estimate the spatial distribution of methane emissions in landfills. The aims of this study were to identify the advantages and disadvantages of point and scanning methods in measuring methane concentrations, map the spatial distribution of methane emissions, determine the correlation between ambient methane concentration and methane flux, and estimate methane flux and emissions in landfills. This study was carried out in the Tamangapa landfill, Makassar city, Indonesia. Measurement areas were divided into a basic and an expanded area. In the point method, the PGD was held one meter above the landfill surface, whereas the scanning method used a PGD with a data logger mounted on a wire drawn between two poles. The point method was efficient in time, needing only one person and eight minutes to measure a 400 m² area, whereas the scanning method could capture many hot-spot locations and needed 20 min. The results from the basic area showed that ambient methane concentration and flux had a significant (p<0.01) positive correlation with R² = 0.7109 and y = 0.1544x. This correlation equation was used to describe the spatial distribution of methane emissions in the expanded area by using the Kriging method. The average estimated flux from the scanning method, 71.2 g m⁻² d⁻¹, was higher than the 38.3 g m⁻² d⁻¹ from the point method. Further, the scanning method could capture both lower and higher values, which could be useful to evaluate and estimate the possible effects of uncontrolled emissions in a landfill.
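The reported relation y = 0.1544x is a zero-intercept regression of flux on ambient concentration; assuming ordinary least squares with no intercept (our reading, not stated in the abstract), the slope has a one-line closed form:

```python
import numpy as np

def fit_through_origin(conc, flux):
    # Least-squares slope for flux = a * concentration (no intercept),
    # mirroring the reported relation y = 0.1544 x.
    conc = np.asarray(conc, float)
    flux = np.asarray(flux, float)
    return float((conc * flux).sum() / (conc * conc).sum())
```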
NASA Astrophysics Data System (ADS)
Marsh, K. A.; Whitworth, A. P.; Lomax, O.
2015-12-01
We present point process mapping (
General relativistic magnetohydrodynamical κ-jet models for Sagittarius A*
NASA Astrophysics Data System (ADS)
Davelaar, J.; Mościbrodzka, M.; Bronzwaer, T.; Falcke, H.
2018-04-01
Context: The observed spectral energy distribution of an accreting supermassive black hole typically forms a power-law spectrum at near-infrared (NIR) and optical wavelengths, which may be interpreted as a signature of electrons accelerated along the jet. However, the details of the acceleration remain uncertain. Aims: In this paper, we study the radiative properties of jets produced in axisymmetric general relativistic magnetohydrodynamics (GRMHD) simulations of hot accretion flows onto underluminous supermassive black holes, both numerically and semi-analytically, with the aim of investigating the differences between models with and without accelerated electrons inside the jet. Methods: We assume that electrons are accelerated in the jet regions of our GRMHD simulation. To model them, we modify the electrons' distribution function in the jet regions from a purely relativistic thermal distribution to a combination of a relativistic thermal distribution and the κ-distribution function (the κ-distribution function is itself a combination of a relativistic thermal and a non-thermal power-law distribution, and thus it describes accelerated electrons). Inside the disk, we assume a thermal distribution for the electrons. In order to resolve the particle acceleration regions in the GRMHD simulations, we use a coordinate grid that is optimized for modeling jets. We calculate jet spectra and synchrotron maps by using the ray tracing code RAPTOR, and compare the synthetic observations to observations of Sgr A*. Finally, we compare numerical models of jets to semi-analytical ones. Results: We find that in the κ-jet models, the radio-emitting region size, radio flux, and spectral index in the NIR/optical bands increase for decreasing values of the κ parameter, which corresponds to a larger amount of accelerated electrons. This is in agreement with analytical predictions.
In our models, the size of the emission region depends roughly linearly on the observed wavelength λ, independently of the assumed distribution function. The model with κ = 3.5, ηacc = 5-10% (the percentage of electrons that are accelerated), and observing angle i = 30° fits the observed Sgr A* emission in the flaring state from the radio to the NIR/optical regimes, while κ = 3.5, ηacc < 1%, and observing angle i = 30° fit the upper limits in quiescence. At this point, our models (including the purely thermal ones) cannot reproduce the observed source sizes accurately, which is probably due to the assumption of axisymmetry in our GRMHD simulations. The κ-jet models naturally recover the observed nearly-flat radio spectrum of Sgr A* without invoking the somewhat artificial isothermal jet model that was suggested earlier. Conclusions: From our model fits we conclude that between 5% and 10% of the electrons inside the jet of Sgr A* are accelerated into a κ distribution function when Sgr A* is flaring. In quiescence, we match the NIR upper limits when this percentage is <1%.
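For orientation, the κ-distribution's defining property (thermal core, power-law tail, Maxwellian limit as κ → ∞) is easiest to see in its non-relativistic isotropic form; the paper itself uses the relativistic version. The normalization below is the standard textbook one, not taken from the paper:

```python
import numpy as np
from math import gamma, pi

def kappa_pdf(v, theta=1.0, kappa=3.5):
    # Speed distribution 4*pi*v^2 * f_kappa(v) for an isotropic,
    # non-relativistic kappa distribution: thermal core of width theta,
    # power-law tail f ~ v^(-2(kappa+1)); Maxwellian as kappa -> inf.
    norm = gamma(kappa + 1.0) / (gamma(kappa - 0.5)
                                 * (pi * kappa) ** 1.5 * theta ** 3)
    return (4.0 * np.pi * v ** 2 * norm
            * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0)))
```

Smaller κ puts more weight in the tail, mirroring the paper's finding that decreasing κ (a larger amount of accelerated electrons) raises the NIR/optical flux and spectral index.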
Wave turbulence in shallow water models.
Clark di Leoni, P; Cobelli, P J; Mininni, P D
2014-06-01
We study wave turbulence in shallow water flows in numerical simulations using two different approximations: the shallow water model and the Boussinesq model with weak dispersion. The equations for both models were solved using periodic grids with up to 2048^2 points. In all simulations, the Froude number varies between 0.015 and 0.05, while the Reynolds number and level of dispersion are varied in a broader range to span different regimes. In all cases, most of the energy in the system remains in the waves, even after integrating the system for very long times. For shallow flows, nonlinear waves are nondispersive and the spectrum of potential energy is compatible with ∼k^{-2} scaling. For deeper (Boussinesq) flows, the nonlinear dispersion relation as directly measured from the wave and frequency spectrum (calculated independently) shows signatures of dispersion, and the spectrum of potential energy is compatible with predictions of weak turbulence theory, ∼k^{-4/3}. In this latter case, the nonlinear dispersion relation differs from the linear one and has two branches, which we explain with a simple qualitative argument. Finally, we study probability density functions of the surface height and find that in all cases the distributions are asymmetric. The probability density function can be approximated by a skewed normal distribution as well as by a Tayfun distribution.
Dusty Pair Plasma—Wave Propagation and Diffusive Transition of Oscillations
NASA Astrophysics Data System (ADS)
Atamaniuk, Barbara; Turski, Andrzej J.
2011-11-01
The crucial point of the paper is the relation between equilibrium distributions of plasma species and the type of propagation or diffusive transition of the plasma response to a disturbance. The paper contains a unified treatment of disturbance propagation (transport) in linearized Vlasov electron-positron and fullerene pair plasmas containing charged dust impurities, based on space-time convolution integral equations. Electron-positron-dust/ion (e-p-d/i) plasmas are rather widespread in nature. Space-time responses of multi-component linearized Vlasov plasmas are invoked on the basis of multiple integral equations. An initial-value problem for the Vlasov-Poisson/Ampère equations is reduced to a single multiple integral equation, and the solution is expressed in terms of the forcing function and its space-time convolution with the resolvent kernel. The forcing function is responsible for the initial disturbance, and the resolvent is responsible for the equilibrium velocity distributions of plasma species. By use of the resolvent equations, time-reversibility, space-reflexivity and other symmetries are revealed. The symmetries carry over into physical properties of Vlasov pair plasmas, e.g., conservation laws. By properly choosing equilibrium distributions for dusty pair plasmas, we can reduce the resolvent equation to: (i) undamped dispersive wave equations, and (ii) diffusive transport equations of oscillations.
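The solution structure described above, a forcing function plus its space-time convolution with the resolvent kernel, can be written schematically as (the symbols φ, F and R are our labels, not necessarily the paper's):

```latex
\varphi(x,t) \,=\, F(x,t) \,+\, \int_{0}^{t}\!\int_{-\infty}^{\infty} R(x-x',\,t-t')\,F(x',t')\,\mathrm{d}x'\,\mathrm{d}t'
```

Here F carries the initial disturbance, while R is fixed by the equilibrium velocity distributions; the symmetries of R are what carry the time-reversibility and conservation properties mentioned above.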
NASA Astrophysics Data System (ADS)
Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying
2017-02-01
Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to the traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that performs a tunable linear-optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that we can achieve optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.
Anticipatory control of xenon in a pressurized water reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Impink, A.J. Jr.
1987-02-10
A method is described for automatically dampening xenon-135 spatial transients in the core of a pressurized water reactor having control rods which regulate reactor power level, comprising the steps of: measuring the neutron flux in the reactor core at a plurality of axially spaced locations on a real-time, on-line basis; repetitively generating from the neutron flux measurements, on a point-by-point basis, signals representative of the current axial distribution of xenon-135, and signals representative of the current rate of change of the axial distribution of xenon-135; generating from the xenon-135 distribution signals and the rate-of-change signals control signals for reducing the xenon transients; and positioning the control rods as a function of the control signals to dampen the xenon-135 spatial transients.
Performance prediction evaluation of ceramic materials in point-focusing solar receivers
NASA Technical Reports Server (NTRS)
Ewing, J.; Zwissler, J.
1979-01-01
A performance prediction methodology was adapted to evaluate the use of ceramic materials in solar receivers for point-focusing distributed applications. System requirements were determined, including the receiver operating environment and system operating parameters for various engine types. Preliminary receiver designs were evolved from these system requirements. Specific receiver designs were then evaluated to determine material functional requirements.
Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang
2018-04-05
We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
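The flavor of the estimation problem can be conveyed with a simpler stand-in: a profile-likelihood grid search for a change point between two constant hazard rates. The paper's actual method (stump regression on interval-wise p values, with a decreasing pre-change hazard and censoring) is more refined; the sketch below assumes complete observations and constant rates on both sides:

```python
import numpy as np

def hazard_change_point(times, grid):
    # Profile-likelihood change-point estimate for a piecewise-constant
    # hazard: rate lam1 before tau, lam2 after. For each candidate tau,
    # plug in the segment MLEs lam_hat = deaths / exposure and keep the
    # tau with the highest profile log-likelihood.
    times = np.asarray(times, float)
    best_tau, best_ll = None, -np.inf
    for tau in grid:
        d1 = np.sum(times <= tau)
        d2 = times.size - d1
        t1 = np.sum(np.minimum(times, tau))        # exposure before tau
        t2 = np.sum(np.maximum(times - tau, 0.0))  # exposure after tau
        if d1 == 0 or d2 == 0:
            continue
        ll = d1 * np.log(d1 / t1) - d1 + d2 * np.log(d2 / t2) - d2
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```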
NASA Astrophysics Data System (ADS)
Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2018-02-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method to monitor cerebral hemodynamics through the optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. The Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application in exploring complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, greatly improving the spatial resolution. Instead of presenting one spatially distributed image indicating the changes of the absorption coefficients at each time point during the recording process, a single image, updated in real time by the Kalman estimator, is provided. Each of its voxels represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method in simulation experiments, demonstrating that it obtains images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
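Per voxel, the Kalman machinery referred to here reduces to a scalar filter tracking a slowly varying HRF amplitude. The sketch below assumes a random-walk state and a known HRF regressor h_t; the variances q and r are illustrative choices, not from the paper:

```python
import numpy as np

def kalman_amplitude(y, h, q=1e-4, r=1e-2):
    # Scalar Kalman estimator of an HRF amplitude a_t for one voxel:
    # state model     a_t = a_{t-1} + w,   w ~ N(0, q)  (random walk)
    # observation     y_t = h_t * a_t + v, v ~ N(0, r)
    a, p = 0.0, 1.0
    est = []
    for yt, ht in zip(y, h):
        p = p + q                        # predict
        k = p * ht / (ht * ht * p + r)   # Kalman gain
        a = a + k * (yt - ht * a)        # update with innovation
        p = (1.0 - k * ht) * p
        est.append(a)
    return np.asarray(est)
```

Running one such filter per voxel is what lets the DOT image be refreshed at every time point instead of being re-fit from scratch.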
Finley, B; Paustenbach, D
1994-02-01
Probabilistic risk assessments are enjoying increasing popularity as a tool to characterize the health hazards associated with exposure to chemicals in the environment. Because probabilistic analyses provide much more information to the risk manager than standard "point" risk estimates, this approach has generally been heralded as one which could significantly improve the conduct of health risk assessments. The primary obstacles to replacing point estimates with probabilistic techniques include a general lack of familiarity with the approach and a lack of regulatory policy and guidance. This paper discusses some of the advantages and disadvantages of the point estimate vs. probabilistic approach. Three case studies are presented which contrast and compare the results of each. The first addresses the risks associated with household exposure to volatile chemicals in tapwater. The second evaluates airborne dioxin emissions which can enter the food-chain. The third illustrates how to derive health-based cleanup levels for dioxin in soil. It is shown that, based on the results of Monte Carlo analyses of probability density functions (PDFs), the point estimate approach required by most regulatory agencies will nearly always overpredict the risk for the 95th percentile person by a factor of up to 5. When the assessment requires consideration of 10 or more exposure variables, the point estimate approach will often predict risks representative of the 99.9th percentile person rather than the 50th or 95th percentile person. This paper recommends a number of data distributions for various exposure variables that we believe are now sufficiently well understood to be used with confidence in most exposure assessments. A list of exposure variables that may require additional research before adequate data distributions can be developed is also discussed.
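The compounding effect described, point estimates drifting toward extreme percentiles as exposure variables multiply, can be reproduced in a few lines of Monte Carlo. The lognormal factors and parameters below are our own illustrative choices, not the paper's data:

```python
import numpy as np

def compare_point_vs_mc(n_vars=10, n_draws=100_000, seed=1):
    # Risk modeled as a product of independent lognormal exposure
    # factors. The "point estimate" multiplies each variable's 95th
    # percentile; Monte Carlo shows where that product really falls.
    rng = np.random.default_rng(seed)
    draws = rng.lognormal(mean=0.0, sigma=0.5, size=(n_draws, n_vars))
    product = draws.prod(axis=1)
    point = np.exp(0.5 * 1.645) ** n_vars    # stacked per-variable 95th percentiles
    frac_below = (product < point).mean()    # percentile of the point estimate
    return point, frac_below
```

With ten variables, essentially all simulated individuals fall below the stacked point estimate, i.e. it sits far beyond the 95th percentile, consistent with the 99.9th-percentile behavior the paper reports.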
USDA-ARS?s Scientific Manuscript database
Thirty one years of spatially distributed air temperature, relative humidity, dew point temperature, precipitation amount, and precipitation phase data are presented for the Reynolds Creek Experimental Watershed. The data are spatially distributed over a 10m Lidar-derived digital elevation model at ...
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy, and to use retry for fault characterization, is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations so as to minimize the total task completion time, was derived. A combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was then carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used in the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions; these are standard distributions commonly used for fault modeling and analysis.
Lévy flights in the presence of a point sink of finite strength
NASA Astrophysics Data System (ADS)
Janakiraman, Deepika
2017-01-01
In this paper, the absorption of a particle undergoing Lévy flight in the presence of a point sink of arbitrary strength and position is studied. The motion of such a particle is given by a modified Fokker-Planck equation whose exact solution in the Laplace domain can be described in terms of the Laplace transform of the unperturbed (absence of the sink) Green's function. This solution for the Green's function is a well-studied, generic result which applies to both fractional and usual Fokker-Planck equations alike. Using this result, the propagator and the absorption-time distribution are obtained for free Lévy flight and Lévy flight in linear and harmonic potentials in the presence of a delta function sink, and their dependence on the sink strength is analyzed. Analytical results are presented for the long-time behavior of the absorption-time distribution in all three above-mentioned potentials. Simulation results are found to agree closely with the analytical results.
Dhamoon, Mandip S; Cheung, Ying-Kuen; Bagci, Ahmet; Alperin, Noam; Sacco, Ralph L; Elkind, Mitchell S V; Wright, Clinton B
2017-01-01
Asymmetry of brain dysfunction may disrupt brain network efficiency. We hypothesized that greater left-right white matter hyperintensity volume (WMHV) asymmetry was associated with worse functional trajectories. Methods: In the Northern Manhattan Study, participants underwent brain MRI with axial T1, T2, and fluid attenuated inversion recovery sequences, with baseline interview and examination. Volumetric WMHV distribution across 14 brain regions was determined separately by combining bimodal image intensity distribution and atlas-based methods. Participants had annual functional assessments with the Barthel index (BI, range 0-100) over a mean of 7.3 years. Generalized estimating equations (GEE) models estimated associations of regional WMHV and regional left-right asymmetry with baseline BI and change over time, adjusted for baseline medical risk factors, sociodemographics, and cognition, and stroke and myocardial infarction during follow-up. Results: Among 1,195 participants, greater WMHV asymmetry in the parietal lobes (-8.46 BI points per unit greater WMHV on the right compared to left, 95% CI -3.07, -13.86) and temporal lobes (-2.48 BI points, 95% CI -1.04, -3.93) was associated with lower overall function. Greater WMHV asymmetry in the parietal lobes (-1.09 additional BI points per year per unit greater WMHV on the left compared to right, 95% CI -1.89, -0.28) was independently associated with accelerated functional decline. Conclusions: In this large population-based study with long-term repeated measures of function, greater regional WMHV asymmetry was associated with lower function and functional decline. In addition to global WMHV, WMHV asymmetry may be an important predictor of long-term functional status.
NASA Technical Reports Server (NTRS)
Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodriguez-Iturbe, I.
1986-01-01
The parameters of the conceptual model are evaluated from the analysis of eight years of summer rainstorm data from the dense raingage network in the Walnut Gulch catchment near Tucson, Arizona. The occurrence of measurable rain at any one of the 93 gages during a noon to noon day defined a storm. The total rainfall at each of the gages during a storm day constituted the data set for a single storm. The data are interpolated onto a fine grid and analyzed to obtain: an isohyetal plot at 2 mm intervals, the first three moments of point storm depth, the spatial correlation function, the spatial variance function, and the spatial distribution of the total storm depth. The description of the data analysis and the computer programs necessary to read the associated data tapes are presented.
Lepage, Chris; Smith, Andra M; Moreau, Jeremy; Barlow-Krelina, Emily; Wallis, Nancy; Collins, Barbara; MacKenzie, Joyce; Scherling, Carole
2014-01-01
Subsequent to chemotherapy treatment, breast cancer patients often report a decline in cognitive functioning that can adversely impact many aspects of their lives. Evidence has mounted in recent years indicating that a portion of breast cancer survivors who have undergone chemotherapy display reduced performance on objective measures of cognitive functioning relative to comparison groups. Neurophysiological support for chemotherapy-related cognitive impairment has been accumulating due to an increase in neuroimaging studies in this field; however, longitudinal studies are limited and have not examined the relationship between structural grey matter alterations and neuropsychological performance. The aim of this study was to extend the cancer-cognition literature by investigating the association between grey matter attenuation and objectively measured cognitive functioning in chemotherapy-treated breast cancer patients. Female breast cancer patients (n = 19) underwent magnetic resonance imaging after surgery but before commencing chemotherapy, one month following treatment, and one year after treatment completion. Individually matched controls (n = 19) underwent imaging at similar intervals. All participants underwent a comprehensive neuropsychological battery comprising four cognitive domains at these same time points. Longitudinal grey matter changes were investigated using voxel-based morphometry. One month following chemotherapy, patients had distributed grey matter volume reductions. One year after treatment, a partial recovery was observed with alterations persisting predominantly in frontal and temporal regions. This course was not observed in the healthy comparison group. Processing speed followed a similar trajectory within the patient group, with poorest scores obtained one month following treatment and some improvement evident one year post-treatment. 
This study provides further credence to patient claims of altered cognitive functioning subsequent to chemotherapy treatment.
Measurements for assessing the exposure from 3G femtocells.
Boursianis, Achilles; Vanias, Pantelis; Samaras, Theodoros
2012-06-01
Femtocells are low-power access points, which combine mobile and broadband technologies. The main operation of a femtocell is to function as a miniature base station unit in an indoor environment and to connect to the operator's network through a broadband phone line or a coaxial cable line. This study provides the first experimental measurements and results in Greece for the assessment of exposure to a femtocell access point (FAP) indoors. Using a mobile handset with the appropriate software, power level measurements of the transmitted (Tx) and the received by the mobile handset signal were performed in two different and typical (home and office) environments. Moreover, radiofrequency electric field strength and frequency selective measurements with a radiation meter (SRM-3000) were carried out in the proximity of the FAP installation point. The cumulative distribution functions of the Tx power in most cases (all but one) show that in 90% of all points the power of the mobile phone was lower by at least 7 dB during FAP operation. At a distance of ∼1 m from the FAP (in its main beam), power flux density measurements show that there is very little difference between the two situations (FAP ON and OFF). As a conclusion, the use of femtocells indoors improves reception quality, reduces the Tx power of the user's mobile terminal and results in an indiscernible increase of the electromagnetic field in front of the unit, at values that are extremely low compared with reference levels of exposure guidelines.
Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen
2016-08-18
The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
Communication Needs Assessment for Distributed Turbine Engine Control
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Behbahani, Alireza R.
2008-01-01
Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and shows how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed, to fully distributed control systems.
Location Distribution Optimization of Photographing Sites for Indoor Panorama Modeling
NASA Astrophysics Data System (ADS)
Zhang, S.; Wu, J.; Zhang, Y.; Zhang, X.; Xin, Z.; Liu, J.
2017-09-01
Generally, panoramic image modeling is costly and time-consuming because photographs must be captured continuously along the routes, especially in complicated indoor environments. This difficulty hinders wider business application of panoramic image modeling. A feasible arrangement of panorama site locations is therefore indispensable, because the locations influence the clarity, coverage and number of panoramic images obtainable with a given device. This paper aims to propose a standard procedure for generating the specific locations and total number of panorama sites in indoor panorama modeling. First, we establish the functional relationship between one panorama site and its objectives, and then apply that relationship to the network of panorama sites. We propose the distance-clarity functions (FC and Fe), expressing the mathematical relationship between panoramas and objective distance or obstacle distance, and the distance-buffer function (FB), modified from the traditional buffer method, to generate the coverage of a panorama site. Second, we traverse every point in the possible area to locate candidate panorama sites and calculate clarity and coverage jointly. Finally, we select as few points as possible, satisfying the clarity requirement first and then the coverage requirement. In the experiments, detailed parameters of the camera lens are given. Still, more experimental parameters need to be tried out, given that the relationship between clarity and distance is device dependent. In short, through the functions FC, Fe and FB, the locations of panorama sites can be generated automatically and accurately.
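The final selection step, as few sites as possible while meeting clarity before coverage, is structurally a greedy set-cover pass. The sketch below is our reading of that step; the boolean predicates clear and covers stand in for the paper's FC/Fe clarity tests and FB buffer:

```python
def select_sites(candidates, targets, clear, covers):
    # Greedy choice of panorama sites: repeatedly pick the candidate
    # that newly serves the most targets, requiring each target to be
    # both clearly imaged and inside the coverage buffer of its site.
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(candidates,
                   key=lambda c: sum(1 for t in uncovered
                                     if clear(c, t) and covers(c, t)))
        gained = {t for t in uncovered if clear(best, t) and covers(best, t)}
        if not gained:
            break  # remaining targets are unreachable with these candidates
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered
```

Greedy set cover is not guaranteed optimal, but it is the standard practical heuristic for "as few sites as possible" formulations like this one.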
Far-infrared data for symbiotic stars. I - The IRAS pointed observations
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.; Fernandez-Castro, Telmo; Stencel, Robert E.
1986-01-01
In the present IRAS-pointed observations of eight symbiotic stars, five S-type ones have IR energy distributions that are similar to those of normal M giants, and free-free emission may furnish a fraction of the observed 12- and 25-micron flux in three of them. Three D-type symbiotics have IR energy distributions consistent with those of Mira variables only if the giants are heavily reddened. The binaries' hot components appear to lie outside the dust shell enshrouding the Mira companions.
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code DOT and the first-collision source code GRTUNCL in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV, for various source cosine intervals, and for various source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose-equivalent estimates for any known source energy-angle distribution.
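The double-integral structure described above, discretized over the abstract's energy range and cosine intervals, can be sketched as follows; the toy source and importance functions are hypothetical placeholders, not DOT/GRTUNCL output:

```python
import math

ENERGY_BINS = [1.0, 10.0, 100.0, 400.0]          # MeV bin edges (assumed)
COSINE_BINS = [(1.0, 0.8), (0.8, 0.6), (0.6, 0.4), (0.4, 0.2), (0.2, 0.0)]

def dose_equivalent(source, importance, r):
    """Discretized double integral: sum the source strength S(E, mu) against
    the tabulated importance I(E, mu, r) over energy and cosine bins."""
    return sum(source(ei, ci) * importance(ei, ci, r)
               for ei in range(len(ENERGY_BINS))
               for ci in range(len(COSINE_BINS)))

# Toy inputs: a flat source and an importance that falls off with distance
# (hypothetical inverse-square times air attenuation).
toy_source = lambda ei, ci: 1.0
toy_importance = lambda ei, ci, r: math.exp(-r / 500.0) / r**2
```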
NASA Astrophysics Data System (ADS)
Rodríguez-Torres, Sergio A.; Chuang, Chia-Hsun; Prada, Francisco; Guo, Hong; Klypin, Anatoly; Behroozi, Peter; Hahn, Chang Hoon; Comparat, Johan; Yepes, Gustavo; Montero-Dorta, Antonio D.; Brownstein, Joel R.; Maraston, Claudia; McBride, Cameron K.; Tinker, Jeremy; Gottlöber, Stefan; Favole, Ginevra; Shu, Yiping; Kitaura, Francisco-Shu; Bolton, Adam; Scoccimarro, Román; Samushia, Lado; Schlegel, David; Schneider, Donald P.; Thomas, Daniel
2016-08-01
We present a study of the clustering and halo occupation distribution of Baryon Oscillation Spectroscopic Survey (BOSS) CMASS galaxies in the redshift range 0.43 < z < 0.7 drawn from the Final SDSS-III Data Release. We compare the BOSS results with the predictions of a halo abundance matching (HAM) clustering model that assigns galaxies to dark matter haloes selected from the large BigMultiDark N-body simulation of a flat Λ cold dark matter Planck cosmology. We compare the observational data with the simulated ones on a light cone constructed from 20 subsequent outputs of the simulation. Observational effects such as incompleteness, geometry, veto masks and fibre collisions are included in the model, which reproduces within 1σ errors the observed monopole of the two-point correlation function at all relevant scales: from the smallest scales, 0.5 h-1 Mpc, up to scales beyond the baryon acoustic oscillation feature. This model also agrees remarkably well with the BOSS galaxy power spectrum (up to k ˜ 1 h Mpc-1), and the three-point correlation function. The quadrupole of the correlation function presents some tensions with observations. We discuss possible causes that can explain this disagreement, including target selection effects. Overall, the standard HAM model describes remarkably well the clustering statistics of the CMASS sample. We compare the stellar-to-halo mass relation for the CMASS sample measured using weak lensing in the Canada-France-Hawaii Telescope Stripe 82 Survey with the prediction of our clustering model, and find a good agreement within 1σ. The BigMD-BOSS light cone including properties of BOSS galaxies and halo properties is made publicly available.
NASA Astrophysics Data System (ADS)
Zaldívar Huerta, Ignacio E.; Pérez Montaña, Diego F.; Nava, Pablo Hernández; Juárez, Alejandro García; Asomoza, Jorge Rodríguez; Leal Cruz, Ana L.
2013-12-01
We experimentally demonstrate the use of an electro-optical transmission system for the distribution of video over long-haul point-to-point optical links using a microwave photonic filter in the frequency range of 0.01-10 GHz. The frequency response of the microwave photonic filter consists of four band-pass windows centered at frequencies that can be tailored as a function of the free spectral range of the optical source, the chromatic dispersion parameter of the optical fiber used, and the length of the optical link. In particular, the filtering effect is obtained by the interaction of an externally modulated multimode laser diode emitting at 1.5 μm with the length of a dispersive optical fiber. The filtered microwave signals are used as electrical carriers to transmit TV signals over long-haul point-to-point optical links. Transmission of TV signals coded on the microwave band-pass windows located at 4.62, 6.86, 4.0 and 6.0 GHz is achieved over optical links of 25.25 km and 28.25 km, respectively. Practical applications of this approach lie in the field of FTTH access networks for the distribution of services such as video, voice, and data.
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that enhance the performance of many adaptation methods.
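The exponential growth of point spacing away from a source can be illustrated with a one-dimensional sketch; this is a toy geometric-growth model, not VGRID's actual growth function:

```python
def grow_points(s0, g, length):
    """Place points from 0 outward: the n-th interval has width s0 * g**n,
    so spacing grows exponentially with distance from the source point."""
    points, x, spacing = [0.0], 0.0, s0
    while x + spacing <= length:
        x += spacing
        points.append(x)
        spacing *= g        # exponential (geometric) growth of the spacing
    return points
```

With a growth factor slightly above 1, this clusters points near the source (where resolution is needed) and coarsens smoothly into the field.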
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz cellular automaton using five-bit demons near the infinite-lattice critical temperature with linear dimensions L=4,6,8,10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
García, Saínza; Alberich, Susana; Martínez-Cengotitabengoa, Mónica; Arango, Celso; Castro-Fornieles, Josefina; Parellada, Mara; Baeza, Inmaculada; Moreno, Carmen; Micó, Juan Antonio; Berrocoso, Esther; Graell, Montserrat; Otero, Soraya; Simal, Tatiana
2018-01-01
Oxidative stress is a pathophysiological mechanism potentially involved in psychiatric disorders. The objective of this study was to assess the relationship between total antioxidant status (TAS) and the functional status of patients with a first episode of psychosis at the onset of the disease. For this purpose, a sample of 70 patients aged between 9 and 17 years with a first episode of psychosis were followed up for a period of two years. Blood samples were drawn to measure TAS levels at three time points: at baseline, at one year, and at two years. Clinical symptoms and functioning were also assessed at the same time points using various scales. Linear regression analysis was performed to investigate the relationship between TAS and clinical status at each assessment, adjusting for potential confounding factors. The distribution of clinical variables was grouped into percentiles to assess the dose-response relationship between clinical variables and TAS. At baseline, patients' scores on the Children's Global Assessment Scale (CGAS) were directly and significantly associated with TAS, with a monotonic increase across percentiles; surprisingly, this association was reversed after one and two years of follow-up, with a monotonic decrease. In summary, at the onset of the illness TAS is positively related to clinical status, whereas as the illness progresses this correlation is reversed and becomes negative. This may be the result of an adaptive response. PMID:29698400
A Kinematically Consistent Two-Point Correlation Function
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
A simple kinematically consistent expression for the longitudinal two-point correlation function, related to both the integral length scale and the Taylor microscale, is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is set by the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allows an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the devised expression, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.
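The two scales such an expression must reproduce can be checked numerically for any candidate correlation function: the integral length scale from the area under f(r), and the Taylor microscale from the curvature at the origin via f(r) ≈ 1 - r²/λ². The Gaussian below is a toy stand-in, not the paper's expression:

```python
import math

def integral_scale(f, r_max=50.0, n=100000):
    """Integral length scale L = ∫ f(r) dr, by midpoint rule."""
    dr = r_max / n
    return sum(f((i + 0.5) * dr) for i in range(n)) * dr

def taylor_microscale(f, h=1e-4):
    """Taylor microscale from the curvature at the origin: f''(0) = -2/λ²."""
    curv = (f(h) - 2.0 * f(0.0) + f(-h)) / h**2   # central difference
    return math.sqrt(-2.0 / curv)

f = lambda r: math.exp(-r**2 / 4.0)   # toy model with λ = 2 exactly
```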
Testing methods of pressure distribution of bra cups on breasts soft tissue
NASA Astrophysics Data System (ADS)
Musilova, B.; Nemcokova, R.; Svoboda, M.
2017-10-01
Objective of this study is to evaluate testing methods of pressure distribution of bra cups on breasts soft tissue, the system which do not affect the space between the wearer's body surface and bra cups and thus do not influence the geometry of the measured body surface and thus investigate the functional performance of brassieres. Two measuring systems were used for the pressure comfort evaluating: 1) The pressure distribution of a wearing bra during 20 minutes on women's breasts has been directly measured using pressure sensor, a dielectricum which is elastic polyurethane foam bra cups. Twelve points were measured in bra cups. 2) Simultaneously the change of temperature in the same points bra was tested with the help of noncontact system the thermal imager. The results indicate that both of those systems can identify different pressure distribution at different points. The same size of bra designing features bra cups made from the same material and which is define by the help of same standardised body dimensions (bust and underbust) can cause different value of a compression on different shape of a woman´s breast soft tissue.
Magnetic tracing of material from a point source in a river system
NASA Astrophysics Data System (ADS)
Appel, Erwin; Liu, Zhao; Müller, Christina; Frančišković-Bilinski, Stanislav; Rösler, Wolfgang; Zhang, Qi
2017-04-01
In fluvial environments, the mechanisms of transport, distribution, and fate of contaminants, and the resulting distribution patterns, are complex but little studied. A case in Croatia, where highly magnetic coal slag was dumped into a river for more than a century (1884-1994), offers an ideal target for studying how to capture the magnetic record of environmental pollution originating from a well-defined point source in a river system. Downstream transport of the coal slag can be roughly recognized by simple sampling of river sediments, but this approach has limited significance because of the extremely variable magnetic properties caused by hydrodynamic sorting. We suggest applying variogram analyses along river traverses to obtain more reliable values of magnetic concentration, and combining these results with modeling of river-bottom magnetic anomalies in order to estimate the amount of coal slag at given positions. A major focus of this presentation is the translocation of coal slag material to the riverbanks by flooding, i.e. the possible identification of flood-affected areas and the discrimination of different flood events. Surface magnetic susceptibility (MS) mapping clearly outlines the extent of flooded areas, and repeated measurements after one year reveal the reach of two recent smaller floods within this period through the spatial delineation of strong positive and negative changes in MS values. To identify older flood signatures, dense grids of vertical MS profiles were analyzed at two riverbank areas in two different ways: first, by determining differences between depth horizons at the measurement points, and second, by contouring the vertical MS profiles as a function of distance to the river (area with flat riverbank topography) and as a function of terrain elevation (area with oblique riverbank).
Single flood events cannot be discriminated, but the second approach allows the extent of major historical floods, interrupted by longer periods of less intensive flooding, to be approximately identified. The results obtained so far suggest that a more detailed magnetic study of this 'Croatian case' can contribute to a better understanding of material displacement in a river system and of how to perform meaningful sampling of river sediments.
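The variogram analysis suggested above rests on the empirical semivariogram, γ(h) = mean of (z_i - z_j)²/2 over station pairs at lag h. A minimal sketch for a single traverse (station layout and lag tolerance are hypothetical):

```python
def semivariogram(positions, values, lags, tol):
    """Empirical semivariogram of measurements `values` taken at 1-D
    `positions`: gamma(h) averages 0.5*(z_i - z_j)^2 over all station pairs
    whose separation is within `tol` of the lag h."""
    gam, n = {}, len(positions)
    for h in lags:
        diffs = [0.5 * (values[i] - values[j]) ** 2
                 for i in range(n) for j in range(i + 1, n)
                 if abs(abs(positions[i] - positions[j]) - h) <= tol]
        gam[h] = sum(diffs) / len(diffs) if diffs else None
    return gam
```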
Hysteresis of Soil Point Water Retention Functions Determined by Neutron Radiography
NASA Astrophysics Data System (ADS)
Perfect, E.; Kang, M.; Bilheux, H.; Willis, K. J.; Horita, J.; Warren, J.; Cheng, C.
2010-12-01
Soil point water retention functions are needed for modeling flow and transport in partially-saturated porous media. Such functions are usually determined by inverse modeling of average water retention data measured experimentally on columns of finite length. However, the resulting functions are subject to the appropriateness of the chosen model, as well as the initial and boundary condition assumptions employed. Soil point water retention functions are rarely measured directly and when they are the focus is invariably on the main drying branch. Previous direct measurement methods include time domain reflectometry and gamma beam attenuation. Here we report direct measurements of the main wetting and drying branches of the point water retention function using neutron radiography. The measurements were performed on a coarse sand (Flint #13) packed into 2.6 cm diameter x 4 cm long aluminum cylinders at the NIST BT-2 (50 μm resolution) and ORNL-HFIR CG1D (70 μm resolution) imaging beamlines. The sand columns were saturated with water and then drained and rewetted under quasi-equilibrium conditions using a hanging water column setup. 2048 x 2048 pixel images of the transmitted flux of neutrons through the column were acquired at each imposed suction (~10-15 suction values per experiment). Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert’s law in conjunction with beam hardening and geometric corrections. The pixel rows were averaged and combined with information on the known distribution of suctions within the column to give 2048 point drying and wetting functions for each experiment. The point functions exhibited pronounced hysteresis and varied with column height, possibly due to differences in porosity caused by the packing procedure employed. 
Predicted point functions, extracted from the hanging water column volumetric data using the TrueCell inverse modeling procedure, showed very good agreement with the range of point functions measured within the column using neutron radiography. Extension of these experiments to 3-dimensions using neutron tomography is planned.
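The pixel-wise Beer-Lambert step can be sketched as follows, omitting the beam-hardening and geometric corrections mentioned above; the attenuation coefficient below is an assumed placeholder, not the calibrated beamline value:

```python
import math

# Assumed effective neutron attenuation coefficient of water, in 1/cm.
MU_W = 3.5

def water_content(i_wet, i_dry, column_thickness_cm, mu_w=MU_W):
    """Volumetric water content of one pixel from wet and dry (reference)
    transmitted intensities: Beer-Lambert gives the water thickness along
    the beam, x_w = -ln(I_wet / I_dry) / mu_w, which is then normalized by
    the column thickness."""
    x_w = -math.log(i_wet / i_dry) / mu_w    # cm of water along the beam
    return x_w / column_thickness_cm
```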
Searching for minimum in dependence of squared speed-of-sound on collision energy
Liu, Fu-Hu; Gao, Li-Na; Lacey, Roy A.
2016-01-01
Experimental results for the rapidity distributions of negatively charged pions produced in proton-proton (p-p) and beryllium-beryllium (Be-Be) collisions at different beam momenta, measured by the NA61/SHINE Collaboration at the Super Proton Synchrotron (SPS), are described by a revised (three-source) Landau hydrodynamic model. The squared speed-of-sound parameter c_s^2 is then extracted from the width of the rapidity distribution. There is a local minimum (knee point), indicating a softest point in the equation of state (EoS), at about 40A GeV/c (or 8.8 GeV) in the c_s^2 excitation function (the dependence of c_s^2 on incident beam momentum or center-of-mass energy). This knee point should be relevant to the search for the onset of quark deconfinement and the critical point of the quark-gluon plasma (QGP) phase transition.
Calculation of nanodrop profile from fluid density distribution.
Berim, Gersh O; Ruckenstein, Eli
2016-05-01
Two approaches are examined, which can be used to determine the drop profile from the fluid density distributions (FDDs) obtained on the basis of microscopic theories. For simplicity, only two-dimensional (cylindrical, or axisymmetrical) distributions are examined and it is assumed that the fluid is either in contact with a smooth solid or separated from the smooth solid by a lubricating liquid film. The first approach is based on the sharp-kink interface approximation in which the density of the liquid inside and the density of the vapor outside the drop are constant with the exception of the surface layer of the drop where the density is different from the above ones. In this case, the drop profile was calculated by minimizing the total potential energy of the system. The second approach is based on a nonuniform FDD obtained either by the density functional theory or molecular dynamics simulations. To determine the drop profile from such an FDD, which does not contain sharp interfaces, three procedures can be used. In the first two procedures, P1 and P2, the one-dimensional FDDs along straight lines which are parallel to the surface of the solid are extracted from the two-dimensional FDD. Each of those one-dimensional FDDs has a vapor-liquid interface at which the fluid density changes from vapor-like to liquid-like values. Procedure P1 uses the locations of the equimolar dividing surfaces for the one-dimensional FDDs as points of the drop profile. Procedure P2 is based on the assumption that the fluid density is constant on the surface of the drop, that density being selected either arbitrarily or as a fluid density at the location of the equimolar dividing surface for one of the one-dimensional FDDs employed in procedure P1. In the third procedure, P3, which is suggested for the first time in this paper, the one-dimensional FDDs are taken along the straight lines passing through a selected point inside the drop (radial line). 
Then, the drop profile is calculated as in procedure P1. It is shown that procedure P3 provides a more reasonable drop profile than the other procedures. The relationship of the discussed procedures to those used in image analysis is briefly discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
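Procedure P1's key operation, locating the equimolar dividing surface of a one-dimensional density profile, can be sketched as follows; the tanh profile is a toy FDD, not a density-functional or molecular-dynamics result:

```python
import math

def equimolar_surface(z, rho, rho_l, rho_v):
    """Locate z_e such that the 'missing liquid' on the liquid side,
    ∫(rho_l - rho) dz for z < z_e, balances the 'excess mass' on the vapor
    side, ∫(rho - rho_v) dz for z > z_e (simple Riemann sums)."""
    dz = z[1] - z[0]
    best, best_err = z[0], float("inf")
    for k in range(len(z)):
        left = sum(rho_l - r for r in rho[:k]) * dz
        right = sum(r - rho_v for r in rho[k:]) * dz
        if abs(left - right) < best_err:
            best, best_err = z[k], abs(left - right)
    return best

z = [i * 0.01 for i in range(1001)]                            # 0 .. 10
rho = [0.5 * (1.0 - math.tanh((x - 5.0) / 0.5)) for x in z]    # liquid -> vapor
```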
Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J
2015-12-10
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives with a caution of a minor inflation of the type I error rate when the sample size is small or the number of observed events is small. The survival data from a recent cancer comparative study are utilized for illustrating the implementation of the process. Copyright © 2015 John Wiley & Sons, Ltd.
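The weighting idea can be illustrated schematically: estimate each arm's Kaplan-Meier curve and accumulate survival differences with weights proportional to the observed difference itself (a crude constant stands in for a proper Greenwood standard error; this is not the authors' exact statistic):

```python
def kaplan_meier(times, events):
    """Return {t: S(t)} at each distinct event time, for right-censored data
    (events[i] is 1 for an observed event, 0 for censoring)."""
    s, curve = 1.0, {}
    for t in sorted({t for t, e in zip(times, events) if e}):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, e in zip(times, events) if e and ti == t)
        s *= 1.0 - deaths / at_risk
        curve[t] = s
    return curve

def weighted_km_difference(t1, e1, t2, e2, se=0.1):
    """Sum (S1-S2) weighted by (S1-S2)/se over event times, so larger observed
    separations are up-weighted; `se` is a crude constant stand-in."""
    km1, km2 = kaplan_meier(t1, e1), kaplan_meier(t2, e2)
    def step(km, t):  # last KM value at or before t (1.0 before first event)
        vals = [v for u, v in sorted(km.items()) if u <= t]
        return vals[-1] if vals else 1.0
    grid = sorted(set(km1) | set(km2))
    return sum((step(km1, t) - step(km2, t)) ** 2 / se for t in grid)
```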
Meng, Fengqun; Cao, Rui; Yang, Dongmei; Niklas, Karl J; Sun, Shucun
2013-07-01
In theory, plants can alter the distribution of leaves along the lengths of their twigs (i.e., within-twig leaf distribution patterns) to optimize light interception in the context of the architectures of their leaves, branches and canopies. We hypothesized that (i) among canopy tree species sharing similar light environments, deciduous trees will have more evenly spaced within-twig leaf distribution patterns than evergreen trees (because deciduous species tend to have higher metabolic demands than evergreen species and hence require more light), and that (ii) shade-adapted evergreen species will have more evenly spaced patterns than sun-adapted evergreen ones (because shade-adapted species are generally light-limited). We tested these hypotheses by measuring morphological traits (i.e., internode length; leaf area; lamina mass per area, LMA; and leaf and twig inclination angles to the horizontal) and physiological traits (i.e., light-saturated net photosynthetic rates, Amax; light saturation points, LSP; and light compensation points, LCP), and calculated the 'evenness' of within-twig leaf distribution patterns as the coefficient of variation (CV; the higher the CV, the less evenly spaced the leaves) of within-twig internode length for 9 deciduous canopy tree species, 15 evergreen canopy tree species, 8 shade-adapted evergreen shrub species and 12 sun-adapted evergreen shrub species in a subtropical broad-leaved rainforest in eastern China. The CV was positively correlated with large LMA and large leaf and twig inclination angles, which collectively specify a typical trait combination adaptive to low light interception, as indicated by both ordinary regression and phylogenetic generalized least squares analyses. These relationships were also valid within the evergreen tree species group (which had the largest sample size).
Consistent with our hypothesis, in the canopy layer, deciduous species (which were characterized by high LCP, LSP and Amax) had more even leaf distribution patterns than evergreen species (which had low LCP, LSP and Amax); shade-adapted evergreen species had more even leaf distribution patterns than sun-adapted evergreen species. We propose that the leaf distribution pattern (i.e., 'evenness' CV, which is an easily measured functional trait) can be used to distinguish among life-forms in communities similar to the one examined in this study.
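The 'evenness' metric itself is simple to compute; a sketch of the CV of within-twig internode lengths (population standard deviation over the mean):

```python
import math

def internode_cv(lengths):
    """Coefficient of variation of internode lengths along one twig:
    larger CV means less evenly spaced leaves."""
    mean = sum(lengths) / len(lengths)
    var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
    return math.sqrt(var) / mean
```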
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.
Analytic modeling of aerosol size distributions
NASA Technical Reports Server (NTRS)
Deepak, A.; Box, G. P.
1979-01-01
Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
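As one concrete instance of the catalogued functions, a lognormal size distribution can be fitted by the method of moments on the log radii (the paper also treats other analytic models and fitting procedures; this is only an illustrative example):

```python
import math

def lognormal_pdf(r, mu, sigma):
    """Lognormal number-size distribution n(r)."""
    return math.exp(-(math.log(r) - mu) ** 2 / (2 * sigma ** 2)) / (
        r * sigma * math.sqrt(2 * math.pi))

def fit_lognormal(radii):
    """Method-of-moments fit: mu and sigma are the mean and (population)
    standard deviation of the log radii."""
    logs = [math.log(r) for r in radii]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return mu, sigma
```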
Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.
2016-01-01
Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
Korn, Akiva; Kirschner, Adi; Perry, Daniella; Hendler, Talma; Ram, Zvi
2017-01-01
Direct cortical stimulation (DCS) is considered the gold-standard for functional cortical mapping during awake surgery for brain tumor resection. DCS is performed by stimulating one local cortical area at a time. We present a feasibility study using an intra-operative technique aimed at improving our ability to map brain functions which rely on activity in distributed cortical regions. Following standard DCS, Multi-Site Stimulation (MSS) was performed in 15 patients by applying simultaneous cortical stimulations at multiple locations. Language functioning was chosen as a case-cognitive domain due to its relatively well-known cortical organization. MSS, performed at sites that did not produce disruption when applied in a single stimulation point, revealed additional language dysfunction in 73% of the patients. Functional regions identified by this technique were presumed to be significant to language circuitry and were spared during surgery. No new neurological deficits were observed in any of the patients following surgery. Though the neuro-electrical effects of MSS need further investigation, this feasibility study may provide a first step towards sophistication of intra-operative cortical mapping. PMID:28700619
A NEW METHOD FOR DERIVING THE STELLAR BIRTH FUNCTION OF RESOLVED STELLAR POPULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennaro, M.; Brown, T. M.; Gordon, K. D.
We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Our method avoids binning stars in the color–magnitude diagram and uses the whole likelihood function for each data point; combining the individual likelihoods allows the computation of the posterior probability for the population's SBF. Within the proposed framework it is possible to include nuisance parameters, such as distance and extinction, by specifying their prior distributions and marginalizing over them. The aim of this paper is to assess the validity of this new approach under a range of assumptions, using only simulated data. Forthcoming work will show applications to real data. Although it has a broad scope of possible applications, we have developed this method to study multi-band Hubble Space Telescope observations of the Milky Way Bulge. Therefore we will focus on simulations with characteristics similar to those of the Galactic Bulge.
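The PPP machinery reduces, for an observed point pattern {x_i} and a model intensity λ_θ, to the unbinned log-likelihood log L = Σ_i log λ_θ(x_i) − ∫ λ_θ(x) dx. A one-dimensional sketch of that likelihood (the paper's measurement space is multi-band photometric space, and it further folds in errors and incompleteness, which are omitted here):

```python
import math

def ppp_loglike(points, intensity, lo, hi, n=10000):
    """Unbinned Poisson point process log-likelihood on [lo, hi]:
    sum of log-intensities at the observed points minus the integrated
    intensity (midpoint rule)."""
    dx = (hi - lo) / n
    integral = sum(intensity(lo + (k + 0.5) * dx) for k in range(n)) * dx
    return sum(math.log(intensity(x)) for x in points) - integral
```

Maximizing this over the parameters of `intensity` is the binning-free analogue of fitting a binned color-magnitude diagram.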
Archer, Steven M.
2007-01-01
Purpose: Ordinary spherocylindrical refractive errors have been recognized as a cause of monocular diplopia for over a century, yet explanation of this phenomenon using geometrical optics has remained problematic. This study tests the hypothesis that a diffraction-theory treatment of refractive errors provides a more satisfactory explanation of monocular diplopia. Methods: Diffraction-theory calculations were carried out for modulation transfer functions, point spread functions, and line spread functions under conditions of defocus, astigmatism, and mixed spherocylindrical refractive errors. Defocused photographs of inked and projected black lines were made to demonstrate the predicted consequences of the theoretical calculations. Results: For certain amounts of defocus, line spread functions resulting from spherical defocus are predicted to have a bimodal intensity distribution that could provide the basis for diplopia with line targets. Multimodal intensity distributions are predicted in point spread functions and provide a basis for diplopia or polyopia of point targets under conditions of astigmatism. The predicted doubling effect is evident in defocused photographs of black lines, but the effect is not as robust as the subjective experience of monocular diplopia. Conclusions: Monocular diplopia due to ordinary refractive errors can be predicted from diffraction theory. Higher-order aberrations, such as spherical aberration, are not necessary but may, under some circumstances, enhance the features of monocular diplopia. The physical basis for monocular diplopia is relatively subtle, and enhancement by neural processing is probably needed to account for the robustness of the percept. PMID:18427616
The large-scale gravitational bias from the quasi-linear regime.
NASA Astrophysics Data System (ADS)
Bernardeau, F.
1996-08-01
It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients $C_{p,q}$ defined by $\langle \delta^p(\vec{x}_1)\,\delta^q(\vec{x}_2)\rangle_c = C_{p,q}\,\langle \delta^2(\vec{x})\rangle^{p+q-2}\,\langle \delta(\vec{x}_1)\,\delta(\vec{x}_2)\rangle$, where $\delta(\vec{x})$ is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large-separation approximation in an explicit numerical Monte Carlo integration of the $C_{2,1}$ parameter as a function of $|\vec{x}_1-\vec{x}_2|$. A straightforward application is the calculation of the large-scale "bias" properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.
Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...
2018-03-29
Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indices of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
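The final step, integrating the source-count distribution, can be sketched numerically. Only the break flux and the two slopes come from the abstract; the normalization A and the integration limits below are hypothetical.

```python
import math

Sb = 3.5e-11              # break flux [ph cm^-2 s^-1], from the abstract
g_lo, g_hi = 1.07, 2.09   # power-law indices below/above the break
A = 1.0                   # hypothetical normalization of dN/dS at the break

def dNdS(S):
    # Broken power-law differential source counts
    g = g_hi if S >= Sb else g_lo
    return A * (S / Sb) ** (-g)

def flux_from_sources(Smin, Smax, n=50000):
    # Trapezoidal integration of S * dN/dS on a log grid
    # (substituting u = ln S makes the integrand S^2 * dN/dS)
    lo, hi = math.log(Smin), math.log(Smax)
    h = (hi - lo) / n
    xs = [math.exp(lo + i * h) for i in range(n + 1)]
    ys = [S * S * dNdS(S) for S in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Because the index is < 2 below the break and > 2 above it, the total flux
# converges at both ends and is dominated by sources near the break.
total = flux_from_sources(7.5e-12, 1e-8)
resolved = flux_from_sources(1e-10, 1e-8)   # hypothetical detection threshold
print(resolved / total)
```

Dividing the flux above a detection threshold by an independently measured background level is the kind of step that leads to a "fraction of the background from blazars" number.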
AUTO_DERIV: Tool for automatic differentiation of a Fortran code
NASA Astrophysics Data System (ADS)
Stamatiadis, S.; Farantos, S. C.
2010-10-01
AUTO_DERIV is a module comprising a set of Fortran 95 procedures that calculate the first and second partial derivatives (mixed or not) of any continuous function of many independent variables. The mathematical function should be expressed as one or more FORTRAN 77/90/95 procedures. A new type of variable is defined, and the overloading mechanism for functions and operators provided by the Fortran 95 language is used extensively to define the differentiation rules. Proper (standard-complying) handling of floating-point exceptions is provided by the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in Fortran 2003).
New version program summary
Program title: AUTO_DERIV
Catalogue identifier: ADLS_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2963
No. of bytes in distributed program, including test data, etc.: 10 314
Distribution format: tar.gz
Programming language: Fortran 95 + (optionally) TR-15580 (floating-point exception handling)
Computer: all platforms with a Fortran 95 compiler
Operating system: Linux, Windows, MacOS
Classification: 4.12, 6.2
Catalogue identifier of previous version: ADLS_v1_0
Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343
Does the new version supersede the previous version?: Yes
Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluating them by computer, automatically and to machine precision, is via user-defined types and operator overloading. AUTO_DERIV is a Fortran 95 implementation of this approach, designed to evaluate the first and second derivatives of a function of many variables.
Solution method: The mathematical rules for differentiating sums, products, quotients, and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variable is defined, and the overloading mechanism for functions and operators provided by the Fortran 95 language is used extensively to implement the differentiation rules.
Reasons for new version: The new version supports Fortran 95, handles floating-point exceptions properly, and is faster due to internal reorganization. All discovered bugs are fixed.
Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, a major internal reorganization of the code resulted in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed: the code did not handle the overloading of ** in a**λ correctly when a = 0. Division by zero and discontinuity of the function at the requested point are indicated by the standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively). If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behaviour of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (probably through certain flags) to detect them.
Restrictions: None imposed by the program. Certain limitations may appear, mostly due to the specific implementation chosen in the user code. They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The usual restrictions of available memory and the capabilities of the compiler are the same as for the original version.
Additional comments: The program has been tested with the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95.
Running time: The running time depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than evaluating the analytical ('by hand') function value and derivatives, when these are available.
References: [1] S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
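The operator-overloading idea behind AUTO_DERIV can be illustrated outside Fortran. Below is a minimal first-derivative-only sketch in Python using dual numbers, applying the same sum, product, quotient, and chain rules; AUTO_DERIV itself also computes second derivatives and handles many independent variables.

```python
import math

class Dual:
    """Forward-mode AD value: f is the function value, d the first derivative."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d

    @staticmethod
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, o):
        o = Dual._lift(o)
        return Dual(self.f + o.f, self.d + o.d)
    __radd__ = __add__

    def __sub__(self, o):
        o = Dual._lift(o)
        return Dual(self.f - o.f, self.d - o.d)

    def __rsub__(self, o):
        return Dual._lift(o) - self

    def __mul__(self, o):
        o = Dual._lift(o)
        # product rule
        return Dual(self.f * o.f, self.f * o.d + self.d * o.f)
    __rmul__ = __mul__

    def __truediv__(self, o):
        o = Dual._lift(o)
        # quotient rule
        return Dual(self.f / o.f, (self.d * o.f - self.f * o.d) / (o.f * o.f))

    def __rtruediv__(self, o):
        return Dual._lift(o) / self

    def sin(self):
        # chain rule for an elementary function
        return Dual(math.sin(self.f), math.cos(self.f) * self.d)

def derivative(fn, x0):
    # Seed d=1 on the independent variable and read the derivative off the result
    return fn(Dual(x0, 1.0)).d

print(derivative(lambda x: x * x + 3 * x, 2.0))   # 7.0
print(derivative(lambda x: 1.0 / x, 2.0))         # -0.25
```

Every arithmetic operation propagates the derivative alongside the value, which is why the user's Fortran procedures need only a type change, not a rewrite.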
The Effect of Distributed Practice in Undergraduate Statistics Homework Sets: A Randomized Trial
ERIC Educational Resources Information Center
Crissinger, Bryan R.
2015-01-01
Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…
NASA Astrophysics Data System (ADS)
Fauzi, A. F.; Aditianata, A.
2018-02-01
The street as a place for diverse human activities has become an important issue. In recent decades, cars and motorcycles have come to dominate streets in cities around the world, yet human activity on the street is a key determinant of a city's livability. Previous research has pointed out that a street full of human activity makes a city interesting, whereas a street without activity makes it boring. Accordingly, many cities are now developing the concept of livable streets, which are characterized by the diversity of human activities in the street's pedestrian space. In Yogyakarta, one street exhibiting such diversity is Jalan Kemasan. This study seeks to determine, through spatial analysis, the physical factors of pedestrian space that affect livability on Jalan Kemasan, Yogyakarta. The spatial analysis was performed by overlaying a distribution map of livable points (activity diversity) with distribution maps of the candidate variables. The physical pedestrian-space variables examined include shading, street vendors, building setback, seating location, the divider between the street and the pedestrian way, and mixed-use building function. The more diverse the activity associated with a variable, the stronger that variable's influence relative to the others. The overlay results were then corroborated by field observation to qualitatively confirm the inference. In the end, this research provides valuable input for planning streets and pedestrian spaces that are comfortable for human activities.
Anderson, Julie A; Tschumper, Gregory S
2006-06-08
Ten stationary points on the water dimer potential energy surface have been examined with ten density functional methods (X3LYP, B3LYP, B971, B98, MPWLYP, PBE1PBE, PBE, MPW1K, B3P86, and BHandHLYP). Geometry optimizations and vibrational frequency calculations were carried out with the TZ2P(f,d)+dif basis set. All ten of the density functionals correctly describe the relative energies of the ten stationary points. However, correctly describing the curvature of the potential energy surface is far more difficult. Only one functional (BHandHLYP) reproduces the number of imaginary frequencies from CCSD(T) calculations. The other nine density functionals fail to correctly characterize the nature of at least one of the ten (H₂O)₂ stationary points studied here.
Large Scale Ice Water Path and 3-D Ice Water Content
Liu, Guosheng
2008-01-15
Cloud ice water concentration is one of the most important, yet most poorly observed, cloud properties. Developing physical parameterizations for general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor, and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets of 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, the study takes advantage of the high-quality cloud measurements at the ARM site: the cloud characteristics derived from the point measurement guide and constrain the satellite retrieval, and the satellite algorithm then derives the cloud ice water distributions within a 10 deg x 10 deg area centered on the ARM site.
Higher-order phase transitions on financial markets
NASA Astrophysics Data System (ADS)
Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.
2010-08-01
Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market, where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (additionally using the saddle-point approximation) the scaling, or power-dependent, form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name anomalous), while for positive powers it scales with the general exponent τ(q'). This exponent is a nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central quantity; it takes the form of a convolution (or superstatistics, used e.g. for describing turbulence as well as financial markets). Its integral kernel is given by the stretched-exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for the description of the transient photocurrent measured in amorphous glassy material) and the Gaussian one sometimes used in this context (e.g. for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between the phase where high-frequency trading is most visible and the phase defined by low-frequency trading.
The specific order of the phase transition directly depends upon the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors was formulated.
Energy distribution functions of kilovolt ions in a modified Penning discharge.
NASA Technical Reports Server (NTRS)
Roth, J. R.
1973-01-01
The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge-exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space, and result in an isotropic energy distribution.
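The Maxwellian and isotropy statements are easy to illustrate numerically: for a Maxwellian velocity distribution, the parallel and perpendicular kinetic temperatures agree and the mean energy is (3/2)kT. This is a sketch in arbitrary units, not the experimental analysis itself.

```python
import math, random

random.seed(7)
kT = 1.0     # temperature in energy units (arbitrary)
N = 100000

# Maxwellian velocities: each Cartesian component is Gaussian with variance kT
# (unit ion mass), so the kinetic energy is E = (vx^2 + vy^2 + vz^2) / 2.
v = [[random.gauss(0.0, math.sqrt(kT)) for _ in range(3)] for _ in range(N)]

T_par  = sum(c[2] * c[2] for c in v) / N                    # from v_parallel
T_perp = sum((c[0] * c[0] + c[1] * c[1]) / 2 for c in v) / N  # from v_perp
mean_E = sum((c[0] * c[0] + c[1] * c[1] + c[2] * c[2]) / 2 for c in v) / N

print(round(T_par, 2), round(T_perp, 2), round(mean_E, 2))
```

Equality of T_par and T_perp (within sampling error) is exactly the isotropy check the abstract reports between the retarding-potential and charge-exchange measurements.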
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
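The Poisson-regression side of the equivalence can be sketched directly: treat gridded cell counts as Poisson with a log-linear intensity in an environmental covariate and maximize the likelihood. The covariate, grid size, and coefficients below are hypothetical.

```python
import math, random

random.seed(1)

def rpois(lam):
    # Knuth's multiplication method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Hypothetical gridded data: one environmental covariate z per cell, with
# counts ~ Poisson(exp(b0 + b1*z)) -- the point-process intensity integrated
# over a cell of unit area.
z = [random.uniform(-1.0, 1.0) for _ in range(500)]
b0_true, b1_true = 0.2, 1.0
y = [rpois(math.exp(b0_true + b1_true * zi)) for zi in z]

# Fit (b0, b1) by gradient ascent on the Poisson log-likelihood
b0, b1 = 0.0, 0.0
for _ in range(800):
    resid = [yi - math.exp(b0 + b1 * zi) for yi, zi in zip(y, z)]
    b0 += 5e-4 * sum(resid)
    b1 += 5e-4 * sum(r * zi for r, zi in zip(resid, z))

print(round(b0, 2), round(b1, 2))
```

Per the abstract's result, MAXENT would recover the same slope b1 and differ only in the (scale-dependent) intercept b0.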
NASA Astrophysics Data System (ADS)
Charco, María; González, Pablo J.; Galán del Sastre, Pedro
2017-04-01
The Kilauea volcano (Hawaii, USA) is one of the most active volcanoes worldwide and therefore one of the best monitored. Its complex system provides a unique opportunity to investigate the dynamics of magma transport and supply. Geodetic techniques, such as Interferometric Synthetic Aperture Radar (InSAR), are being used extensively to monitor ground deformation in volcanic areas. The quantitative interpretation of such surface deformation measurements requires both physical modelling to simulate the observed signals and inversion approaches to estimate the magmatic source parameters. Here, we use synthetic aperture radar data from the Sentinel-1 radar interferometry satellite mission to image volcano deformation sources during the inflation along Kilauea's Southwest Rift Zone in April-May 2015. We propose a Finite Element Model (FEM) for the calculation of Green functions in a mechanically heterogeneous domain. The key aspect of the methodology lies in applying the reciprocity relationship of the Green functions between the station and the source for efficient numerical inversions. The search for the best-fitting magmatic (point) source(s) is generally conducted over an array of 3-D locations extending below a predefined volume region; our approach, however, reduces the total number of Green functions to the number of observation points by exploiting this reciprocity relationship. The new methodology accurately represents magmatic processes using physical models capable of simulating volcano deformation in domains with non-uniform material property distributions, which will eventually lead to a better description of the status of the volcano.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.
1991-01-01
In order to generate good-quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
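A one-dimensional stretching function, one of the grid-point-distribution controls mentioned, can be sketched as follows; the tanh form and the clustering parameter are a generic textbook choice, not necessarily GRID3D's own.

```python
import math

def stretched_grid(n, beta=2.0):
    # tanh stretching on [0, 1]: points cluster near x = 0, and beta controls
    # how strong the clustering is (a generic algebraic stretching function)
    return [1.0 + math.tanh(beta * (i / n - 1.0)) / math.tanh(beta)
            for i in range(n + 1)]

x = stretched_grid(10)
dx = [b - a for a, b in zip(x, x[1:])]
print(x[0], x[-1])        # endpoints 0.0 and 1.0
print(min(dx), max(dx))   # spacing grows monotonically away from x = 0
```

In a boundary-method generator, such a function redistributes points along each coordinate before the interpolation between boundary surfaces, concentrating resolution where gradients (e.g. near a blade wall) are expected.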
Filter Function for Wavefront Sensing Over a Field of View
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.
Lagrangian statistics in weakly forced two-dimensional turbulence.
Rivera, Michael K; Ecke, Robert E
2016-01-01
Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.
Renormalization-group theory for finite-size scaling in extreme statistics
NASA Astrophysics Data System (ADS)
Györgyi, G.; Moloney, N. R.; Ozogány, K.; Rácz, Z.; Droz, M.
2010-04-01
We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.
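The Fisher-Tippett-Gumbel fixed point is easy to probe numerically: maxima of n i.i.d. exponential variables, shifted by ln n, already lie close to the limiting Gumbel law. This is a minimal illustration of the universality statement, not the paper's RG machinery.

```python
import math, random

random.seed(0)
n, trials = 500, 2000

# Maxima of n i.i.d. Exp(1) variables, shifted by ln n, approach the
# Gumbel (Fisher-Tippett-Gumbel) fixed point as n grows.
maxima = [max(random.expovariate(1.0) for _ in range(n)) - math.log(n)
          for _ in range(trials)]

def gumbel_cdf(x):
    return math.exp(-math.exp(-x))

# Compare the empirical CDF with the limit law at a few points
for x in (-1.0, 0.0, 1.0, 2.0):
    emp = sum(m <= x for m in maxima) / trials
    print(round(emp, 3), round(gumbel_cdf(x), 3))
```

The residual finite-size difference, here O(1/n) for the exponential parent, is precisely the kind of shape correction the RG linearization quantifies.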
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux; inherent to its effectiveness is therefore the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both the areal and the vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparing temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L
2018-04-17
The goals of the study are to characterize imaging properties in 2D PET images reconstructed with the iterative ordered-subset expectation maximization (OSEM) algorithm and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through the standard deviation, the autocorrelation function, and the noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from ¹⁸F uniform distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through the point spread function (PSF). The NPS and PSF provide the baseline for the proposed simulation method: convolution with the PSF as kernel, plus noise addition from the NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter the noise texture but does modify its magnitude. Finally, synthetic images of two phantoms, one of them an anatomical brain, are quantitatively compared with experimental images, showing good agreement in terms of pixel values and pixel correlations. Thus, the contrast-to-noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM PET images can be described by the NPS and PSF functions, and synthetic images, even anatomical ones, are successfully generated by the proposed method based on them. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Publicado por Elsevier España, S.L.U. All rights reserved.
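The proposed synthesis recipe (convolve a noise-free object with the PSF, then add noise) can be sketched with a Gaussian PSF and white Gaussian noise standing in for the measured PSF and NPS; the phantom and all parameter values below are hypothetical.

```python
import math, random

random.seed(42)

def gaussian_psf(sigma, radius):
    # Normalized 2-D Gaussian kernel, standing in for a measured PET PSF
    k = [[math.exp(-(i * i + j * j) / (2 * sigma * sigma))
          for j in range(-radius, radius + 1)]
         for i in range(-radius, radius + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def synthesize(truth, psf, noise_sigma):
    # Blur the noise-free activity map with the PSF, then add noise; in the
    # paper's method the noise magnitude and texture come from the measured
    # NPS, while here plain white Gaussian noise is used for brevity.
    n, r = len(truth), len(psf) // 2
    out = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < n and 0 <= xx < n:
                        acc += truth[yy][xx] * psf[dy + r][dx + r]
            out[y][x] = acc + random.gauss(0.0, noise_sigma)
    return out

# Hypothetical phantom: a hot 4x4 insert on a uniform background
truth = [[10.0 if 6 <= y < 10 and 6 <= x < 10 else 1.0 for x in range(16)]
         for y in range(16)]
img = synthesize(truth, gaussian_psf(1.0, 3), noise_sigma=0.2)
print(round(img[8][8], 1), round(img[0][0], 1))  # hot insert vs background
```

The blur reduces the recovered peak of the small hot insert below its true value of 10, which is the partial-volume effect that the recovery coefficients in the abstract quantify.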
Bounds on the conductivity of a suspension of random impenetrable spheres
NASA Astrophysics Data System (ADS)
Beasley, J. D.; Torquato, S.
1986-11-01
We compare the general Beran bounds on the effective electrical conductivity of a two-phase composite to the bounds derived by Torquato for the specific model of spheres distributed throughout a matrix phase. For the case of impenetrable spheres, these bounds are shown to be identical and to depend on the microstructure through the sphere volume fraction φ₂ and a three-point parameter ζ₂, which is an integral over a three-point correlation function. We evaluate ζ₂ exactly through third order in φ₂ for distributions of impenetrable spheres. This expansion is compared to the analogous results of Felderhof and of Torquato and Lado, all of whom employed the superposition approximation for the three-particle distribution function involved in ζ₂. The results indicate that the exact ζ₂ will be greater than the value calculated under the superposition approximation. For reasons of mathematical analogy, the results obtained here apply as well to the determination of the thermal conductivity, dielectric constant, and magnetic permeability of composite media and the diffusion coefficient of porous media.
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion has been widely applied in groundwater simulation. Compared with traditional forward modelling, inverse modelling leaves more room for study. Zonation and cell-by-cell inversion are the conventional approaches, and the pilot-point method lies between them. The traditional inverse approach typically uses software to divide the model into several zones, so only a few parameters need to be inverted; however, the resulting parameter distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion can, in theory, recover the most realistic parameter distribution, but it greatly increases computational complexity and requires large quantities of survey data for geostatistical simulation of the area. Between these methods, the pilot-point approach distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by kriging, preserving parameter heterogeneity within geological units. This reduces the geostatistical data requirements of the simulation area and bridges the gap between the two methods above. The pilot-point method not only saves computation time and improves the goodness of fit, but also reduces the numerical instability caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field whose structural heterogeneity and hydraulic parameters were unknown, and compare the inversion results of the zonation and pilot-point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modelling. First, the modeller generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6.
Second, kriging is defined to obtain the value of the field functions over the model domain from their values at measurement and pilot-point locations (hydraulic conductivity); we then assign pilot points to the interpolated field, which has been divided into four zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity. Third, after the inversion calculation (with PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modelling, the following major conclusions can be drawn: (1) In a field with heterogeneous structural formation, the pilot-point method gives more realistic results: better parameter fits and a more stable numerical simulation (stable residual distribution). Compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, guaranteeing the relative independence and authenticity of the parameter estimates. However, it takes more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
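The kriging step that spreads pilot-point values over the model cells can be sketched as ordinary kriging. The Gaussian covariance model, the correlation length, and the log-conductivity parametrization below are illustrative assumptions, not Groundwater Vistas or PEST settings:

```python
import numpy as np

def krige(pilot_xy, pilot_logk, grid_xy, corr_len=50.0):
    """Ordinary-kriging sketch of the pilot-point step: interpolate
    log-conductivity from pilot points onto model cells. The Gaussian
    covariance and corr_len are illustrative assumptions."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-(d / corr_len) ** 2)
    n = len(pilot_xy)
    # Ordinary kriging system with a Lagrange multiplier enforcing
    # unbiasedness (weights sum to one).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(pilot_xy, pilot_xy)
    A[n, n] = 0.0
    rhs = np.ones((n + 1, len(grid_xy)))
    rhs[:n] = cov(pilot_xy, grid_xy)
    w = np.linalg.solve(A, rhs)      # kriging weights + multiplier
    return w[:n].T @ pilot_logk
```

Because kriging is an exact interpolator, the estimated field honours the pilot-point values themselves while varying smoothly in between, which is what preserves heterogeneity within the zones.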
Mapping of bird distributions from point count surveys
Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.
Cancer-associated lysosomal changes: friends or foes?
Kallunki, T; Olsen, O D; Jäättelä, M
2013-04-18
Rapidly dividing and invasive cancer cells are strongly dependent on effective lysosomal function. Accordingly, transformation and cancer progression are characterized by dramatic changes in lysosomal volume, composition and cellular distribution. Depending on one's point of view, the cancer-associated changes in the lysosomal compartment can be regarded as friends or foes. Most of them are clearly transforming as they promote invasive growth, angiogenesis and drug resistance. The same changes can, however, strongly sensitize cells to lysosomal membrane permeabilization and thereby to lysosome-targeting anti-cancer drugs. In this review we compile our current knowledge on cancer-associated changes in lysosomal composition and discuss the consequences of these alterations to cancer progression and the possibilities they can bring to cancer therapy.
Thomas, Bex George; Elasser, Ahmed; Bollapragada, Srinivas; Galbraith, Anthony William; Agamy, Mohammed; Garifullin, Maxim Valeryevich
2016-03-29
A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components consider irradiation, temperature, spectral profiles, cost, and other practical constraints to achieve the lowest levelized cost of electricity, maximum output, and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string, or combiner level.
Finite Element Analysis for Turbine Blades with Contact Problems
NASA Astrophysics Data System (ADS)
Yang, Yuan-Jian; Yang, Liang; Wang, Hai-Kun; Zhu, Shun-Peng; Huang, Hong-Zhong
2016-12-01
Turbine blades are key components of a typical turbofan engine and play an important role in flight safety. In this paper, we establish a three-dimensional finite element model of the turbine blades and analyse the strength of the blade in complicated conditions under the combined action of temperature load, centrifugal load, and aerodynamic load. Furthermore, a contact analysis of the blade tenon and dovetail slot is carried out to study the stress using contact elements. Finally, the Von Mises stress and strain distributions are obtained to identify the dangerous points and the maximum Von Mises stress, providing a basis for life prediction of turbine blades.
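The "dangerous point" search reduces to evaluating the Von Mises equivalent stress at each node of the finite element model. A minimal implementation from a 3×3 Cauchy stress tensor (the example values in the test are illustrative, not blade results):

```python
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor:
    sqrt(3/2 * s:s), where s is the deviatoric part of sigma."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric stress
    return np.sqrt(1.5 * np.sum(s * s))
```

For a uniaxial stress state the formula returns the applied stress itself, and for a purely hydrostatic state it returns zero, which is the sanity check usually applied.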
Virtuality and transverse momentum dependence of the pion distribution amplitude
Radyushkin, Anatoly V.
2016-03-08
We describe the basics of a new approach to transverse momentum dependence in hard exclusive processes. We develop it in application to the transition process γ*γ → π0 at the handbag level. Our starting point is the coordinate representation for matrix elements of operators (in the simplest case, the bilocal O(0,z)) describing a hadron with momentum p. Treated as functions of (pz) and z², they are parametrized through virtuality distribution amplitudes (VDAs) Φ(x,σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z². For intervals with z+ = 0, we introduce the transverse momentum distribution amplitude (TMDA) ψ(x, k), and write it in terms of the VDA Φ(x,σ). The results of covariant calculations, written in terms of Φ(x,σ), are converted into expressions involving ψ(x, k). Starting with scalar toy models, we extend the analysis to the case of spin-1/2 quarks and QCD. We propose simple models for soft VDAs/TMDAs, and use them to compare handbag results with experimental (BaBar and BELLE) data on the pion transition form factor. Furthermore, we discuss how one can generate high-k tails from primordial soft distributions.
Thermalization, Freeze-out, and Noise: Deciphering Experimental Quantum Annealers
NASA Astrophysics Data System (ADS)
Marshall, Jeffrey; Rieffel, Eleanor G.; Hen, Itay
2017-12-01
By contrasting the performance of two quantum annealers operating at different temperatures, we address recent questions related to the role of temperature in these devices and their function as "Boltzmann samplers." Using a method to reliably calculate the degeneracies of the energy levels of large-scale spin-glass instances, we are able to estimate the instance-dependent effective temperature from the output of annealing runs. Our results corroborate the "freeze-out" picture, which posits two regimes: one in which the final state corresponds to a Boltzmann distribution of the final Hamiltonian with a well-defined "effective temperature" determined at a freeze-out point late in the annealing schedule, and another regime in which such a distribution is not necessarily expected. We find that the output distributions of the annealers do not, in general, correspond to a classical Boltzmann distribution for the final Hamiltonian. We also find that the effective temperatures at different programming cycles fluctuate greatly, with the effect worsening with problem size. We discuss the implications of our results for the design of future quantum annealers to act as more-effective Boltzmann samplers and for the programming of such annealers.
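The Boltzmann-sampling picture can be made concrete with a toy fit: if level energies, degeneracies, and observed populations are known, an effective inverse temperature follows from ln(p_i/g_i) = const − βE_i. The least-squares version below is an illustration of the idea, not the authors' exact estimation procedure:

```python
import numpy as np

def effective_beta(energies, degeneracies, counts):
    """Fit ln(p_i/g_i) = const - beta*E_i to observed level populations
    and return the effective inverse temperature beta. An illustrative
    stand-in for the paper's instance-dependent estimate."""
    p = np.asarray(counts, float) / np.sum(counts)
    y = np.log(p / np.asarray(degeneracies, float))
    slope, _ = np.polyfit(np.asarray(energies, float), y, 1)
    return -slope
```

Fed samples drawn from an exact Boltzmann distribution, the fit recovers the true β; deviations of real annealer output from this line are precisely what signals a non-Boltzmann regime.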
From Head to Sword: The Clustering Properties of Stars in Orion
NASA Astrophysics Data System (ADS)
Gomez, Mercedes; Lada, Charles J.
1998-04-01
We investigate the structure in the spatial distributions of optically selected samples of young stars in the Head (lambda Orionis) and in the Sword (Orion A) regions of the constellation of Orion with the aid of stellar surface density maps and the two-point angular correlation function. The distributions of young stars in both regions are found to be nonrandom and highly clustered. Stellar surface density maps reveal three distinct clusters in the lambda Ori region. The two-point correlation function displays significant features at angular scales that correspond to the radii and separations of the three clusters identified in the surface density maps. Most young stars in the lambda Ori region (~80%) are presently found within these three clusters, consistent with the idea that the majority of young stars in this region were formed in dense protostellar clusters that have significantly expanded since their formation. Over a scale of ~0.05d-0.5d the correlation function is well described by a single power law that increases smoothly with decreasing angular scale. This suggests that, within the clusters, the stars either are themselves hierarchically clustered or have a volume density distribution that falls steeply with radius. The relative lack of Hα emission-line stars in the one cluster in this region that contains OB stars suggests a timescale for emission-line activity of less than 4 Myr around late-type stars in the cluster and may indicate that the lifetimes of protoplanetary disks around young stellar objects are reduced in clusters containing O stars. The spatial distribution of young stars in the Orion A region is considerably more complex. The angular correlation function of the OB stars (which are mostly foreground to the Orion A molecular cloud) is very similar to that of the Hα stars (which are located mostly within the molecular cloud) and significantly different from that of the young stars in the lambda Ori region. 
This suggests that, although spatially separated, both populations in the Orion A region may have originated from a similar fragmentation process. Stellar surface density maps and modeling of the angular correlation function suggest that somewhat less than half of the OB and Hα stars in the Orion A cloud are presently within well-defined stellar clusters. Although all the OB stars could have originated in rich clusters, a significant fraction of the Hα stars appear to have formed outside such clusters in a more spatially dispersed manner. The close similarity of the angular correlation functions of the OB and Hα stars toward the molecular cloud, in conjunction with the earlier indications of a relatively high star formation rate and high gas pressure in this cloud, is consistent with the idea that older, foreground OB stars triggered the current episode of star formation in the Orion A cloud. One of the OB clusters (Upper Sword) that is foreground to the cloud does not appear to be associated with any of the clusterings of emission-line stars, again suggesting a timescale (<4 Myr) for emission-line activity and disk lifetimes around late-type stars born in OB clusters.
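As an aside on method, the two-point angular correlation function used throughout this analysis can be estimated in a flat-sky toy setting by simple pair counting. The Peebles-Hauser estimator w = DD/RR − 1 below is one common choice, not necessarily the authors':

```python
import numpy as np

def two_point_w(points, bins, box=1.0, n_random=1500, seed=0):
    """Flat-sky two-point correlation via the simple Peebles-Hauser
    estimator w = DD/RR - 1. points: (N, 2) positions in a square box.
    An illustrative estimator, not the paper's exact pipeline."""
    rng = np.random.default_rng(seed)
    def pair_hist(p):
        d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
        d = d[np.triu_indices(len(p), k=1)]       # unique pairs
        h, _ = np.histogram(d, bins=bins)
        return h.astype(float)
    rand = rng.uniform(0.0, box, size=(n_random, 2))
    dd = pair_hist(points) / (len(points) * (len(points) - 1) / 2)
    rr = pair_hist(rand) / (n_random * (n_random - 1) / 2)
    return dd / np.maximum(rr, 1e-12) - 1.0
```

For an unclustered (Poisson) point set the estimator fluctuates around zero, while clustered distributions such as the young-star samples above yield w > 0 on small scales.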
Polarized structure functions in a constituent quark scenario
NASA Astrophysics Data System (ADS)
Scopetta, Sergio; Vento, Vicente; Traini, Marco
1998-12-01
Using a simple picture of the constituent quark as a composite system of point-like partons, we construct the polarized parton distributions by a convolution between constituent quark momentum distributions and constituent quark structure functions. Using unpolarized data to fix the parameters, we achieve good agreement with the polarization experiments for the proton, but not for the neutron. By relaxing our assumptions for the sea distributions, we define new quark functions for the polarized case, which reproduce the proton data well and agree better with the neutron data. When our results are compared with similar calculations using non-composite constituent quarks, the agreement of the present scheme with experiment is impressive. We conclude that, also in the polarized case, DIS data are consistent with a low-energy scenario dominated by composite constituents of the nucleon.
One-dimensional gravity in infinite point distributions.
Gabrielli, A; Joyce, M; Sicard, F
2009-10-01
The dynamics of infinite asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three-dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition explicitly requires the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by the "Jeans swindle" for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well-defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N-body simulations. For identical particles the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from "shuffled lattice" initial conditions. These show qualitative properties of the evolution (notably its "self-similarity") like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.
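The screened-force prescription is easy to state in code: the force on each particle is the sum of attractive pair forces with an exponential cutoff, and the well-defined dynamics is recovered in the limit κ → 0. A minimal sketch for a finite set of particles (unit masses and coupling are arbitrary choices):

```python
import numpy as np

def screened_force(x, m=1.0, g=1.0, kappa=1e-3):
    """Force on each particle in a 1D self-gravitating system, defined as
    the small-kappa limit of the exponentially screened pair force
    F_ij = g*m^2*sign(x_j - x_i)*exp(-kappa*|x_j - x_i|)."""
    dx = x[None, :] - x[:, None]           # dx[i, j] = x_j - x_i
    f = g * m * m * np.sign(dx) * np.exp(-kappa * np.abs(dx))
    return f.sum(axis=1)                   # sign(0) = 0 removes self-force
```

The pair force is antisymmetric, so total momentum is conserved, and in 1D its unscreened magnitude is independent of separation, which is why the screening is needed to regulate sums over an infinite distribution.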
NASA Astrophysics Data System (ADS)
Croke, Edward; Borselli, Matthew; Kiselev, Andrey; Deelman, Peter; Milosavljevic, Ivan; Alvarado-Rodriguez, Ivan; Ross, Richard; Schmitz, Adele; Gyure, Mark; Hunter, Andrew
2011-03-01
We report measurements of the spin-relaxation lifetime (T1) as a function of magnetic field in a strained-Si, accumulation-mode quantum dot. An integrated quantum-point-contact (QPC) charge sensor was used to detect changes in dot occupancy as a function of bias applied to a single gate electrode. The addition spectra we obtained are consistent with theoretical predictions starting at N=0. The conductance of the charge sensor was measured by applying an AC voltage across the QPC and a 3 kΩ resistor. Lifetime measurements were conducted using a three-pulse technique consisting of a load, read, and flush sequence. T1 was measured by observing the decay of the spin bump amplitude as a function of the load pulse length. We measured decay times ranging from approximately 75 msec at 2 T to 12 msec at 3 T, consistent with previous reports and theoretical predictions. Sponsored by United States Department of Defense. Approved for Public Release, Distribution Unlimited.
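The T1 extraction step described above amounts to fitting an exponential decay of the spin-bump amplitude versus load-pulse length. A minimal log-linear version, assuming a clean single-exponential signal (real data would need noise-aware fitting):

```python
import numpy as np

def fit_t1(t_load, amplitude):
    """Extract T1 by a log-linear least-squares fit of
    A(t) = A0 * exp(-t / T1) to amplitude-vs-load-pulse-length data.
    A simple stand-in for the paper's decay analysis."""
    slope, _ = np.polyfit(np.asarray(t_load, float),
                          np.log(np.asarray(amplitude, float)), 1)
    return -1.0 / slope
```

Applied to synthetic data with a known lifetime, the fit returns that lifetime, which is the standard self-check before fitting measured decays.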
do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi
2015-01-01
Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions of the acrylic support used for positioning the film but in a source‐to‐detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertically, that is, perpendicular to the phantom. To validate this procedure, first of all a Monte Carlo simulation using PENELOPE code was done to evaluate the differences between the dose distributions measured by the film in a SDD of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distribution acquired in the two SDDs were 99.92%±0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function were 99.85%±0.26% and the mean percentage differences in the normalization point doses were −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS number: 87.55.Qr, 87.55.km, 87.55.N‐ PMID:26699306
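The comparison of measured and calculated dose distributions rests on the gamma function. A bare-bones 1D global gamma analysis (3%/3 mm criteria by default) looks like the sketch below; the interpolation, resolution handling, and low-dose thresholds of clinical tools are omitted:

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing=1.0, dd=0.03, dta=3.0):
    """Global 1D gamma analysis: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over all
    evaluated points. dd is fractional (of max dose), dta in mm."""
    x = np.arange(len(dose_ref)) * spacing
    dmax = dose_ref.max()
    ddose = (dose_eval[None, :] - dose_ref[:, None]) / (dd * dmax)
    dist = (x[None, :] - x[:, None]) / dta
    return np.sqrt(ddose**2 + dist**2).min(axis=1)

# pass rate = fraction of points with gamma <= 1
```

Identical distributions give gamma = 0 everywhere (100% pass rate); the percentages of points approved in the gamma function quoted in the abstract are exactly this pass rate in 2D.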
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
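The moment-matching requirement can be checked directly. Assuming the eight discrete points are the cube corners (±c, ±c, ±c) with equal weights 1/8, which lie on the unit sphere when c = 1/√3 (one realization consistent with the description; the paper's construction may differ), the weighted sums reproduce the isotropic low-order moments:

```python
import numpy as np

# Assumed D3Q8 realization: cube-corner velocities with equal weights.
c = 1.0 / np.sqrt(3.0)  # each point then has unit norm, i.e. lies on the sphere
e = c * np.array([[sx, sy, sz] for sx in (-1, 1)
                  for sy in (-1, 1) for sz in (-1, 1)])
w = np.full(8, 1.0 / 8.0)

zeroth = w.sum()                              # density normalization
first = (w[:, None] * e).sum(axis=0)          # mean velocity (zero vector)
second = np.einsum("q,qi,qj->ij", w, e, e)    # isotropic: (c^2) * identity
```

The zeroth, first, and second moments come out as 1, the zero vector, and (1/3)·I respectively, which is the "exact match of the integral quadrature" property the scheme relies on.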
Streamline integration as a method for two-dimensional elliptic grid generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.
We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.
NASA Astrophysics Data System (ADS)
Sinaga, A. T.; Wangsaputra, R.
2018-03-01
The development of technology causes the needs for products and services to become increasingly complex, diverse, and fluctuating, which increases the level of inter-company dependence within production chains. To be able to compete, efficiency improvements need to be made collaboratively across the production chain network. One effort to increase efficiency is to harmonize production and distribution activities in the production chain network. This paper describes the harmonization of production and distribution activities by applying a push-pull system and a supply hub in the production chain between two companies. The research methodology begins with empirical and literature studies, followed by formulating research questions, developing mathematical models, conducting trials and analyses, and drawing conclusions. The relationship between the two companies is described by an MINLP mathematical model with the total production-chain cost as the objective function. The decisions generated by the mathematical model are the production lot size, delivery lot size, number of kanban, delivery frequency, and the number of understock and overstock lots.
Coello, Yann; Quesque, François; Gigliotti, Maria-Francesca; Ott, Laurent; Bruyelle, Jean-Luc
2018-01-01
Peripersonal space is a multisensory representation of the environment around the body in relation to the motor system, underlying the interactions with the physical and social world. Although changing body properties and social context have been shown to alter the functional processing of space, little is known about how changing the value of objects influences the representation of peripersonal space. In two experiments, we tested the effect of modifying the spatial distribution of reward-yielding targets on manual reaching actions and peripersonal space representation. Before and after performing a target-selection task consisting of manually selecting a set of targets on a touch-screen table, participants performed a two-alternative forced-choice reachability-judgment task. In the target-selection task, half of the targets were associated with a reward (change of colour from grey to green, providing 1 point), the other half being associated with no reward (change of colour from grey to red, providing no point). In Experiment 1, the target-selection task was performed individually with the aim of maximizing the point count, and the distribution of the reward-yielding targets was either 50%, 25% or 75% in the proximal and distal spaces. In Experiment 2, the target-selection task was performed in a social context involving cooperation between two participants to maximize the point count, and the distribution of the reward-yielding targets was 50% in the proximal and distal spaces. Results showed that changing the distribution of the reward-yielding targets or introducing the social context modified concurrently the amplitude of self-generated manual reaching actions and the representation of peripersonal space. Moreover, a decrease of the amplitude of manual reaching actions caused a reduction of peripersonal space when resulting from the distribution of reward-yielding targets, while this effect was not observed in a social interaction context. 
In that case, the decreased amplitude of manual reaching actions was accompanied by an increase of peripersonal space representation, which was not due to the mere presence of a confederate (control experiment). We conclude that reward-dependent modulation of object values in the environment modifies the representation of peripersonal space, when resulting from either self-generated motor actions or observation of motor actions performed by a confederate. PMID:29771982
Statistical procedures for evaluating daily and monthly hydrologic model predictions
Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.
2004-01-01
The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
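The Nash-Sutcliffe efficiency quoted above has a one-line definition worth spelling out; a minimal implementation (the robust CD* and EF* variants are not shown):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / (variance of observations
    about their mean). 1 is a perfect fit; 0 means the model predicts
    no better than the observed mean."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

The fixed reference values make interpretation easy: a perfect simulation scores exactly 1, and simply predicting the observed mean scores exactly 0, which is why the paper favours such coefficients over open-ended error measures.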
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ(x), whose probability distribution P[ρ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model-independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
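Given a sampled density field, the eccentricities discussed here can be computed with the participant definition ε_n = |⟨z^n⟩|/⟨|z|^n⟩, where z = (x − x_c) + i(y − y_c) and the averages are ρ-weighted about the centre of mass. The abstract does not spell out its convention, so this is an assumed (though standard) choice:

```python
import numpy as np

def eccentricity(rho, x, y, n=2):
    """Participant eccentricity eps_n of a transverse density rho on a
    rectangular grid: |<z^n>| / <|z|^n>, rho-weighted, measured about
    the centre of mass. Standard definition, assumed here."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    w = rho / rho.sum()
    xc, yc = (w * X).sum(), (w * Y).sum()
    z = (X - xc) + 1j * (Y - yc)
    return np.abs((w * z**n).sum()) / (w * np.abs(z)**n).sum()
```

For an azimuthally symmetric profile ε_2 vanishes, while an elliptic Gaussian with variances σ_x², σ_y² gives ε_2 = |σ_x² − σ_y²|/(σ_x² + σ_y²), a useful closed-form check.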
Statistical dynamics of regional populations and economies
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Hao, Rui; Wang, Peng
Quantitative analysis of human behavior and social development is becoming a hot spot in interdisciplinary studies. A statistical analysis of the population and GDP of 150 cities in China from 1990 to 2013 is conducted. The results indicate that the cumulative probability distributions of both the populations and the GDPs obey a shifted power law. To understand these characteristics, a generalized Langevin equation describing the variation of population is proposed, based on the correlations between population and GDP as well as the random fluctuations of the related factors. The equation is transformed into a Fokker-Planck equation to express the evolution of the population distribution. The general solution demonstrates a transition of the distribution from a Gaussian to a shifted power law, which suggests a critical point in time at which the transition takes place. The shifted power-law distribution in the supercritical situation agrees qualitatively with the empirical result. The distribution of the GDPs is derived from the well-known Cobb-Douglas production function. The result presents a change, in the supercritical situation, from a shifted power law to a Gaussian distribution. This is a surprising result: the regional GDP distribution of our world would one day become Gaussian. Discussions based on the changing trend of economic growth suggest this will be the case. These theoretical attempts may thus draw a historical picture of our society in the aspects of population and economy.
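A shifted power law of the kind described has density p(x) ∝ (x + x0)^(−α). As a self-contained illustration (not the authors' fitting procedure, and with made-up parameter values), the sketch below draws samples from such a law by inverse-CDF sampling and recovers the exponent with the closed-form maximum-likelihood estimator for a known shift x0:

```python
import math
import random

def sample_shifted_power_law(alpha, x0, n, rng):
    """Draw x >= 0 with density p(x) = (alpha-1) x0^(alpha-1) (x+x0)^(-alpha);
    equivalently y = x + x0 is Pareto with scale x0 and shape alpha - 1,
    so inverse-CDF sampling gives x = x0 * ((1-u)^(-1/(alpha-1)) - 1)."""
    return [x0 * ((1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) - 1.0)
            for _ in range(n)]

def fit_alpha(xs, x0):
    """Maximum-likelihood exponent for known shift x0:
    alpha_hat = 1 + n / sum(log(1 + x/x0))."""
    return 1.0 + len(xs) / sum(math.log(1.0 + x / x0) for x in xs)

rng = random.Random(42)
xs = sample_shifted_power_law(alpha=2.5, x0=3.0, n=50000, rng=rng)
print(round(fit_alpha(xs, 3.0), 2))  # recovers a value near 2.5
```

The estimator follows from setting the derivative of the log-likelihood with respect to α to zero; its sampling error scales like (α − 1)/√n.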
NASA Astrophysics Data System (ADS)
Bao, Yi; Cain, John; Chen, Yizheng; Huang, Ying; Chen, Genda; Palek, Leonard
2015-04-01
Thin concrete panels reinforced with alloy polymer macro-synthetic fibers have recently been introduced to rapidly and cost-effectively improve the driving condition of existing roadways by laying down a fabric sheet on the roadway, casting a thin layer of concrete, and then cutting the layer into panels. This study aims to understand the strain distribution and potential crack development in concrete panels under three-point loading. To this end, six full-size 6 ft × 6 ft × 3 in concrete panels were tested to failure in the laboratory. They were instrumented with three types of single-mode optical fiber sensors, whose performance and ability to measure the strain distribution and detect cracks were compared. Each optical fiber sensor was spliced, calibrated, and then attached to a fabric sheet using adhesive. A thin layer of mortar (0.25-0.5 in thick) was cast on the fabric sheet. The three types of distributed sensors were bare SM-28e+ fiber, SM-28e+ fiber with a tight buffer, and concrete crack cable. The concrete crack cable consisted of one SM-28e+ optical fiber with a tight buffer, one SM-28e+ optical fiber with a loose buffer for temperature compensation, and an outer protective tight sheath. Distributed strains were collected from the three optical fiber sensors with pre-pulse-pump Brillouin optical time domain analysis at room temperature. Among the three sensors, the bare fiber was observed to be the most fragile during construction and operation, but the most sensitive to strain changes or micro-cracks. The concrete crack cable was the most rugged, but less sensitive to micro-cracks and less robust in micro-crack measurement than the bare fiber. The ruggedness and sensitivity of the fiber with a tight buffer fell between those of the bare fiber and the concrete crack cable. The strain distributions obtained from the three optical sensors are in good agreement and can be used to locate cracks in the concrete panels.
It was observed that the three types of fibers remained functional until the concrete panels had experienced inelastic deformation, making distributed strain sensing technology promising for real applications in pavement engineering.
Queueing analysis of a canonical model of real-time multiprocessors
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.
1983-01-01
A logical classification of multiprocessor structures from the point of view of control applications is presented. A computation of the response time distribution for a canonical model of a real time multiprocessor is presented. The multiprocessor is approximated by a blocking model. Two separate models are derived: one created from the system's point of view, and the other from the point of view of an incoming task.
NASA Astrophysics Data System (ADS)
Doty, Constance; Cerkoney, Daniel; Gramajo, Ashley; Campbell, Tyler; Reid, Candy; Morales, Manuel; Delfanazari, Kaveh; Yamamoto, Takashi; Tsujimoto, Manabu; Kashiwagi, Takanari; Watanabe, Chiharu; Minami, Hidetoshi; Kadowaki, Kazuo; Klemm, Richard
We study the transverse magnetic (TM) electromagnetic cavity mode wave functions for an ideal equilateral triangular microstrip antenna exhibiting C3v point group symmetry, which restricts the allowed TM(n,m) modes to |m - n| = 3p, where the integer p > 0 for the modes odd and even about the three mirror planes, while p = 0 can also occur for the even modes. We calculate the wave functions and the power distribution patterns arising from the uniform Josephson current source and from the excitation of one of these cavity modes, and fit data on an early equilateral triangular Bi2Sr2CaCu2O8+δ mesa, for which the C3v symmetry was apparently broken. Work supported in part by the UCF RAMP, a JSPS Fellowship, CREST-JST, and WPI-MANA.
Aureole radiance field about a source in a scattering-absorbing medium.
Zachor, A S
1978-06-15
A technique is described for computing the aureole radiance field about a point source in a medium that absorbs and scatters according to an arbitrary phase function. When applied to an isotropic source in a homogeneous medium, the method uses a double-integral transform which is evaluated recursively to obtain the aureole radiances contributed by successive scattering orders, as in the Neumann solution of the radiative transfer equation. The normalized total radiance field distribution and the variation of flux with field of view and range are given for three wavelengths in the uv and one in the visible, for a sea-level model atmosphere assumed to scatter according to a composite of the Rayleigh and modified Henyey-Greenstein phase functions. These results have application to the detection and measurement of uncollimated uv and visible sources at short ranges in the lower atmosphere.
NASA Astrophysics Data System (ADS)
Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander
2008-02-01
Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and tissue progressing to cancer makes it possible to see early cancers and precancerous lesions often missed under white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, and during endoscopy, cystoscopy and bronchoscopy using Xillix ONCOLIFE. Methods: The image function f(x,y): R^2 -> R^3 was transformed from the original RGB color space to a space in which a vector of 46 values is assigned to every point with given xy-coordinates, f(x,y): R^2 -> R^46. By means of a Fisher discriminant, the attribute vector of each analyzed image point was reduced according to two defined classes: pathologic areas (foreground) and healthy areas (background). The four highest Fisher coefficients, giving the greatest separation between points of pathologic (foreground) and healthy (background) areas, were chosen. In this way a new function f(x,y): R^2 -> R^4 was created in which each point (x,y) corresponds to the vector (Y, H, a*, c_II). In the second step, a classifier was constructed using Gaussian mixtures and expectation-maximization. This classifier estimates the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The resulting map of the probability distribution was presented in pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
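The Fisher-coefficient ranking used in the Methods section can be sketched in a few lines. The following is a minimal illustration of the two-class Fisher criterion for scoring individual features, with entirely synthetic "pixel" data (the 46-dimensional color features of the paper are not reproduced here):

```python
import random

def fisher_score(foreground, background):
    """Two-class Fisher criterion J = (m1 - m2)^2 / (s1^2 + s2^2)
    for a single scalar feature (one list of values per class).
    Larger J means better separation between the classes."""
    def mean_var(vals):
        m = sum(vals) / len(vals)
        v = sum((x - m) ** 2 for x in vals) / len(vals)
        return m, v
    m1, v1 = mean_var(foreground)
    m2, v2 = mean_var(background)
    return (m1 - m2) ** 2 / (v1 + v2)

rng = random.Random(0)
# Hypothetical 3-feature pixels: feature 0 separates the classes,
# features 1 and 2 are pure noise.
fg = [[rng.gauss(2.0, 1.0), rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)]
bg = [[rng.gauss(-2.0, 1.0), rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)]

scores = [fisher_score([p[i] for p in fg], [p[i] for p in bg]) for i in range(3)]
best = max(range(3), key=lambda i: scores[i])
print(best)  # index of the most discriminative feature
```

In the paper this ranking is what reduces the 46-component attribute vector to the four best-separating components before the Gaussian-mixture classifier is trained.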
Voronoi Tessellation for reducing the processing time of correlation functions
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio
2018-01-01
The increase of data volume in cosmology is motivating the search for new solutions to the difficulties associated with long processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions, whose processing time has grown critically with the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can delimit the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof of concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi tessellation.
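The pruning idea behind the paper's approach can be shown with a simpler partition than a Voronoi tessellation: a uniform grid with cell side equal to the maximum pair separation, so that only neighbouring cells need to be compared. This sketch (the grid stand-in and all sizes are assumptions for illustration; the paper itself uses Voronoi cells) gives the same pair counts as brute force:

```python
import itertools
import math
import random
from collections import defaultdict

def pair_count_grid(points, r_max):
    """Count pairs with separation < r_max by bucketing 2-D points into
    cells of side r_max and comparing only the 3x3 neighbourhood of each
    cell -- the same pruning idea as a tessellation-based partition."""
    cell = defaultdict(list)
    for p in points:
        cell[(int(p[0] // r_max), int(p[1] // r_max))].append(p)
    count = 0
    for (cx, cy), members in cell.items():
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            others = cell.get((cx + dx, cy + dy), [])
            for a in members:
                for b in others:
                    if a < b and math.dist(a, b) < r_max:  # a < b dedupes pairs
                        count += 1
    return count

def pair_count_brute(points, r_max):
    return sum(1 for a, b in itertools.combinations(points, 2)
               if math.dist(a, b) < r_max)

rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(800)]
assert pair_count_grid(pts, 0.05) == pair_count_brute(pts, 0.05)
```

The grid version inspects only O(N × neighbours) candidate pairs instead of all N(N−1)/2, which is where the processing-time reduction comes from.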
Gradients estimation from random points with volumetric tensor in turbulence
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2017-12-01
We present an estimation method for fully resolved and coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered a low-pass-filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as the velocity vector in incompressible flows, especially when the number of points is small. The volumetric tensor approximation with 4 points estimates the gradient poorly because of the anisotropic distribution of the points; increasing the number of points beyond 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method captures turbulence characteristics well, such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
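A linear least-squares gradient from scattered points leads naturally to a 3 × 3 normal-equation matrix of the kind the abstract calls the volumetric tensor. The sketch below is a minimal illustration of that construction (my reading of the linear approximation, not the authors' exact formulation): minimising Σ(g·dx_i − df_i)² gives V g = m with V = Σ dx dxᵀ and m = Σ df dx.

```python
import random

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def gradient_from_points(center, f_center, points, f_values):
    """Least-squares gradient at `center` from scattered samples:
    solve V g = m with volumetric tensor V = sum dx dx^T
    and moment vector m = sum df dx."""
    V = [[0.0] * 3 for _ in range(3)]
    m = [0.0] * 3
    for p, fv in zip(points, f_values):
        dx = [p[k] - center[k] for k in range(3)]
        df = fv - f_center
        for i in range(3):
            m[i] += df * dx[i]
            for j in range(3):
                V[i][j] += dx[i] * dx[j]
    return solve3(V, m)

# A linear field f = 2x - y + 3z is recovered exactly (up to round-off).
rng = random.Random(3)
pts = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
       for _ in range(10)]
f = lambda x, y, z: 2 * x - y + 3 * z
g = gradient_from_points((0, 0, 0), f(0, 0, 0), pts, [f(*p) for p in pts])
print([round(v, 6) for v in g])  # close to [2, -1, 3]
```

For a nonlinear field the same solve returns a coarse-grained gradient whose effective filter scale is set by the spread of the points, consistent with the eigenvalue-based cutoff estimate described above.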
Hadron Spectra in p+p Collisions at RHIC and LHC Energies
NASA Astrophysics Data System (ADS)
Khandai, P. K.; Sett, P.; Shukla, P.; Singh, V.
2013-06-01
We present a systematic analysis of transverse momentum (pT) spectra of identified hadrons in p+p collisions at Relativistic Heavy Ion Collider (RHIC) energies (√s = 62.4 and 200 GeV) and at Large Hadron Collider (LHC) energies (√s = 0.9, 2.76 and 7.0 TeV) using phenomenological fit functions. We review various forms of the Hagedorn and Tsallis distributions and show their equivalence. We use the Tsallis distribution, which successfully describes the spectra in p+p collisions using two parameters: the Tsallis temperature T, which governs the soft bulk spectra, and the power n, which determines the initial production in partonic collisions. We obtain these parameters for pions, kaons and protons as a function of center-of-mass energy (√s). The parameter T shows a weak but decreasing trend with increasing √s. The parameter n decreases with increasing √s, which shows that the production of hadrons at higher energies is increasingly dominated by point-like qq scatterings. Another important observation is that with increasing √s, the separation between the powers for protons and pions narrows, hinting that baryons and mesons are governed by the same production process as one moves to the highest LHC energy.
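The interplay of the two Tsallis parameters can be made concrete. The sketch below uses one common form of the Tsallis pT spectrum (conventions vary between papers; the normalization and the parameter values here are illustrative assumptions, not fitted results) and shows the two limits the abstract discusses: large n recovers the exponential Boltzmann spectrum, while small n gives a hard power-law tail.

```python
import math

def tsallis(pt, T, n, A=1.0):
    """One common Tsallis-type pT spectrum, dN/dpt = A * pt *
    (1 + pt/(n*T))**(-n): T sets the soft scale, the power n the
    hard-scattering tail."""
    return A * pt * (1.0 + pt / (n * T)) ** (-n)

def boltzmann(pt, T, A=1.0):
    """Exponential (Boltzmann) limit of the Tsallis form as n -> infinity."""
    return A * pt * math.exp(-pt / T)

T = 0.12  # GeV, an illustrative soft scale, not a fitted value
# Large n recovers the exponential spectrum ...
print(tsallis(1.0, T, 1e5) / boltzmann(1.0, T))   # ratio close to 1
# ... while a small n produces a much harder power-law tail.
print(tsallis(2.0, T, 7.0) / boltzmann(2.0, T))   # ratio >> 1
```

This is why a decreasing fitted n with increasing √s signals growing dominance of point-like partonic scatterings over thermal-like production.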
NASA Astrophysics Data System (ADS)
Dargent, J.; Aunai, N.; Belmont, G.; Dorville, N.; Lavraud, B.; Hesse, M.
2016-06-01
Tangential current sheets are ubiquitous in space plasmas and yet hard to describe with a kinetic equilibrium. In this paper, we use a semi-analytical model, the BAS model, which provides a steady ion distribution function for a tangential asymmetric current sheet, and we prove that an ion kinetic equilibrium produced by this model remains steady in a fully kinetic particle-in-cell simulation even if the electron distribution function does not satisfy the time-independent Vlasov equation. We then apply this equilibrium to look at the dependence of magnetic reconnection simulations on their initial conditions. We show that, as the current sheet evolves from a symmetric to an asymmetric upstream plasma, the reconnection rate is impacted, and the X line and the electron flow stagnation point separate from one another and start to drift. For the simulated systems, we investigate the overall evolution of the reconnection process via the classical signatures discussed in the literature and searched for in the Magnetospheric MultiScale data. We show that they seem robust and do not depend on the specific details of the internal structure of the initial current sheet.
Global exponential stability analysis on impulsive BAM neural networks with distributed delays
NASA Astrophysics Data System (ADS)
Li, Yao-Tang; Yang, Chang-Bo
2006-12-01
Using M-matrix theory and a topological degree tool, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays and subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.
Removing the Impact of Correlated PSF Uncertainties in Weak Lensing
NASA Astrophysics Data System (ADS)
Lu, Tianhuan; Zhang, Jun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui
2018-05-01
Accurate reconstruction of the spatial distributions of the point-spread function (PSF) is crucial for high precision cosmic shear measurements. Nevertheless, current methods are not good at recovering the PSF fluctuations of high spatial frequencies. In general, the residual PSF fluctuations are spatially correlated, and therefore can significantly contaminate the correlation functions of the weak lensing signals. We propose a method to correct for this contamination statistically, without any assumptions on the PSF and galaxy morphologies or their spatial distribution. We demonstrate our idea with the data from the W2 field of CFHTLenS.
NASA Astrophysics Data System (ADS)
Moreto, Jose; Liu, Xiaofeng
2017-11-01
The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence (JHTDB), with homogeneously distributed random noise added to the entire field of the DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function rand, has a magnitude varying randomly within ±40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, obtained by using different random number seeds, are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure, normalized by the DNS pressure variation range, is 0.15 ± 0.07 for the Poisson equation approach, 0.028 ± 0.003 for the Circular Virtual Boundary method, and 0.027 ± 0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
Miller, William H.; Cotton, Stephen J.
2016-08-28
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory - e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states - and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Program summary
Program title: LambertW
Catalogue identifier: AENC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 1335
No. of bytes in distributed program, including test data, etc.: 25 283
Distribution format: tar.gz
Programming language: C++ (with suitable wrappers it can be called from C, Fortran, etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl, etc.
Computer: All systems with a C++ compiler.
Operating system: All Unix flavors, Windows. It might work with others.
RAM: Small memory footprint, less than 1 MB
Classification: 1.1, 4.7, 11.3, 11.9.
Nature of problem: Find a fast and accurate numerical implementation of the Lambert W function.
Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Additional comments: The distribution file contains the command-line utility lambert-w; Doxygen comments are included in the source files, along with a Makefile.
Running time: The tests provided take only a few seconds to run.
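The Halley iteration at the core of the program is compact enough to sketch. The following is a minimal Python illustration of the principal branch only (the published C++ code additionally uses Fritsch's iteration and much more careful initial approximations; the crude starting guess here is an assumption of this sketch):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0 of w * exp(w) = x for x >= -1/e, via Halley's
    iteration on f(w) = w*exp(w) - x, with f' = e^w (w+1), f'' = e^w (w+2)."""
    if x < -1.0 / math.e:
        raise ValueError("x below the branch point -1/e")
    w = math.log(1.0 + x)  # crude starting point (valid since x >= -1/e > -1)
    for _ in range(100):
        e = math.exp(w)
        f = w * e - x
        # Halley step: f / (f' - f*f''/(2*f'))
        step = f / (e * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        w -= step
        if abs(step) < tol * (1.0 + abs(w)):
            return w
    return w

# Sanity checks: w*exp(w) inverts back to x, and W(e) = 1 exactly.
for x in (0.5, 1.0, math.e, 10.0):
    w = lambert_w(x)
    assert abs(w * math.exp(w) - x) < 1e-10
```

Halley's method converges cubically near the root, which is why a handful of iterations suffices even from a rough initial guess.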
A study on the application of Fourier series in IMRT treatment planning.
Almeida-Trinidad, R; Garnica-Garza, H M
2007-12-01
In intensity-modulated radiotherapy, a set of x-ray fluence profiles is iteratively adjusted until a desired absorbed dose distribution is obtained. The purpose of this article is to present a method that allows the optimization of fluence profiles based on the Fourier series decomposition of an initial approximation to the profile. The method has the advantage that a new fluence profile can be obtained in a precise and controlled way by tuning only two parameters, namely the phases of the sine and cosine terms of one of the Fourier components, in contrast to point-by-point tuning of the profile. Also, because the method uses analytical functions, the resulting profiles do not exhibit numerical artifacts. A test case consisting of a mathematical phantom with a target wrapped around a critical structure is discussed to illustrate the algorithm. It is shown that the degree of conformality of the absorbed dose distribution can be tailored by varying the number of Fourier terms made available to the optimization algorithm. For the test case discussed here, the number of Fourier terms to be modified depends on the number of radiation beams incident on the target, but it is generally on the order of 10 terms.
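How the number of retained Fourier terms controls how sharply a profile can be shaped is easy to demonstrate. The sketch below (a generic illustration with a top-hat target, not the paper's phantom or optimization) evaluates a truncated Fourier series and shows the approximation tightening as terms are added:

```python
import math

def fourier_profile(coeffs, x, period=1.0):
    """Evaluate a truncated Fourier series
    f(x) = a0/2 + sum_k [a_k cos(2*pi*k*x/period) + b_k sin(2*pi*k*x/period)]."""
    a0, terms = coeffs
    val = a0 / 2.0
    for k, (a, b) in enumerate(terms, start=1):
        arg = 2.0 * math.pi * k * x / period
        val += a * math.cos(arg) + b * math.sin(arg)
    return val

def tophat_coeffs(n_terms, width=0.5):
    """Analytic coefficients of a centred top-hat (value 1 for |x| < width/2)
    on the unit period: a0 = 2*width, a_k = 2*sin(pi*k*width)/(pi*k), b_k = 0."""
    terms = [(2.0 * math.sin(math.pi * k * width) / (math.pi * k), 0.0)
             for k in range(1, n_terms + 1)]
    return 2.0 * width, terms

# The value at the plateau centre approaches the target value 1 as the
# number of available Fourier terms grows.
for n in (2, 10, 40):
    print(n, round(fourier_profile(tophat_coeffs(n), 0.0), 3))
```

With few terms only smooth, low-conformality profiles are representable; adding terms sharpens edges, at the cost of Gibbs ringing near discontinuities.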
Spiga, D
2018-01-01
X-ray mirrors with high focusing performances are commonly used in different sectors of science, such as X-ray astronomy, medical imaging and synchrotron/free-electron laser beamlines. While deformations of the mirror profile may cause degradation of the focus sharpness, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront propagation code or, sometimes, predicted using geometric optics. In the latter case and for the special class of profile deformations with monotonically increasing derivative, i.e. concave upwards, the point spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has so far been limited to geometric optics, which entailed some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism in the framework of physical optics is reviewed, in the limit of small light wavelengths and in the case of Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming at turning a Gaussian intensity distribution into a top-hat one, and the shaping performance is checked by computing the at-wavelength PSF by means of the WISE code.
Distributed Computation of the knn Graph for Large High-Dimensional Point Sets
Plaku, Erion; Kavraki, Lydia E.
2009-01-01
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k-nearest-neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding the resources available to a single machine. In this work we efficiently distribute the computation of knn graphs on clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
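The basic knn-graph construction being distributed here is simple to state. The sketch below is a serial brute-force version for illustration (the paper's contribution is farming the outer loop out over processors with message passing, which this toy does not reproduce):

```python
import heapq
import math
import random

def knn_graph(points, k):
    """Brute-force k-nearest-neighbour graph: for each point, the indices
    of its k closest other points. Each iteration of the outer loop is
    independent, so chunks of it are the natural unit of work that a
    distributed implementation hands to separate processors."""
    graph = []
    for i, p in enumerate(points):
        dists = ((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        graph.append([j for _, j in heapq.nsmallest(k, dists)])
    return graph

rng = random.Random(5)
pts = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
g = knn_graph(pts, k=4)
assert all(len(nbrs) == 4 for nbrs in g)         # every node gets k edges
assert all(i not in nbrs for i, nbrs in enumerate(g))  # no self-loops
```

Because `math.dist` can be swapped for any metric, the same skeleton covers the arbitrary-distance-metric case mentioned in the abstract.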
Characteristics of Ion Distribution Functions in Dipolarizing FluxBundles: THEMIS Event Studies
NASA Astrophysics Data System (ADS)
Runov, A.; Artemyev, A.; Birn, J.; Pritchett, P. L.; Zhou, X.
2016-12-01
Taking advantage of multi-point observations from the repeating configuration of the Time History of Events and Macroscale Interactions during Substorms (THEMIS) fleet, with probe separations of 1 to 2 Earth radii (RE) along X, Y, and Z in the geocentric solar magnetospheric (GSM) system, we study ion distribution functions observed by the probes during three transient dipolarization events. Comparing observations by the multiple probes, we characterize changes in the ion distribution functions with respect to geocentric distance (X), cross-tail probe separation (Y), and levels of |Bx|, which characterize the distance from the neutral sheet. We examine 2-D and 1-D cuts of the 3-D velocity distribution functions in the (Vb, Vb×v) plane. The results indicate that the velocity distribution functions observed inside dipolarizing flux bundles (DFBs) close to the magnetic equator are often perpendicularly anisotropic for velocities Vth ≤ v ≤ 2Vth, where Vth is the ion thermal velocity. Ions of higher energies (v > 2Vth) are isotropic. Hence, the interaction of DFBs and ambient ions may result in perpendicular anisotropy of the injected energetic ions, which is an important factor for the excitation of plasma waves and instabilities and for further particle acceleration in the inner magnetosphere. We also compare the observations with the results of test-particle and PIC simulations.
Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.
2009-01-01
To estimate astronaut health risk due to space radiation, one must have the ability to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled and that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX. We further conclude that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
Gluon amplitudes as 2 d conformal correlators
NASA Astrophysics Data System (ADS)
Pasterski, Sabrina; Shao, Shu-Heng; Strominger, Andrew
2017-10-01
Recently, spin-one wave functions in four dimensions that are conformal primaries of the Lorentz group SL(2,C) were constructed. We compute low-point, tree-level gluon scattering amplitudes in the space of these conformal primary wave functions. The answers have the same conformal covariance as correlators of spin-one primaries in a 2d CFT. The Britto-Cachazo-Feng-Witten (BCFW) recursion relation between three- and four-point gluon amplitudes is recast into this conformal basis.
Stability and Optimal Harvesting of Modified Leslie-Gower Predator-Prey Model
NASA Astrophysics Data System (ADS)
Toaha, S.; Azis, M. I.
2018-03-01
This paper studies a modified Leslie-Gower predator-prey model. The model is stated as a system of first-order differential equations and consists of one predator and one prey. A Holling type II predation function is considered. The predator and prey populations are assumed to be beneficial, and the two populations are harvested with constant efforts. Existence and stability of the interior equilibrium point are analysed. The linearization method is used to obtain the linearized model, and the eigenvalues are used to justify the stability of the interior equilibrium point. From the analyses, we show that under a certain condition the interior equilibrium point exists and is locally asymptotically stable. For the model with constant harvesting efforts, cost, revenue, and profit functions are considered. The stable interior equilibrium point is then related to the maximum profit problem as well as to the net present value of revenues problem. We show that there exists a value of the efforts that maximizes the profit function and the net present value of revenues while the interior equilibrium point remains stable. This means that the populations can coexist for a long time and the benefit can be maximized even though the populations are harvested with constant efforts.
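The qualitative claim of a locally asymptotically stable interior equilibrium can be checked numerically on one common form of the modified Leslie-Gower model with Holling type II predation. The equations and all parameter values below are illustrative assumptions, not taken from the paper (harvesting terms are omitted); they are chosen so that the interior equilibrium works out to the clean point (5, 6):

```python
def modified_leslie_gower(state, r1=1.0, b1=0.1, a1=0.5, k1=1.0,
                          r2=0.5, a2=0.5, k2=1.0):
    """One common modified Leslie-Gower model with Holling type II predation:
      dx/dt = x (r1 - b1 x) - a1 x y / (x + k1)   (prey)
      dy/dt = y (r2 - a2 y / (x + k2))            (predator)"""
    x, y = state
    dx = x * (r1 - b1 * x) - a1 * x * y / (x + k1)
    dy = y * (r2 - a2 * y / (x + k2))
    return dx, dy

def rk4(f, state, dt, steps):
    """Classical 4th-order Runge-Kutta integration of a 2-D ODE system."""
    for _ in range(steps):
        x, y = state
        k1x, k1y = f((x, y))
        k2x, k2y = f((x + dt * k1x / 2, y + dt * k1y / 2))
        k3x, k3y = f((x + dt * k2x / 2, y + dt * k2y / 2))
        k4x, k4y = f((x + dt * k3x, y + dt * k3y))
        state = (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
                 y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)
    return state

# Interior equilibrium for these parameters: y* = r2 (x* + k2) / a2 = x* + 1
# and r1 - b1 x* - a1 y*/(x* + k1) = 0.5 - 0.1 x* = 0, so (x*, y*) = (5, 6).
x, y = rk4(modified_leslie_gower, (4.0, 5.0), dt=0.05, steps=4000)
print(round(x, 3), round(y, 3))  # trajectories settle near (5, 6)
```

The Jacobian at (5, 6) has negative trace and positive determinant for these parameters, so the convergence seen numerically matches the linearization argument used in the paper.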
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo-MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weight only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is established under a new model called the dependent masking and right-censoring model; the CMP model and the RPM model are special cases of it. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
NASA Astrophysics Data System (ADS)
Malkin, B. Z.; Abishev, N. M.; Baibekov, E. I.; Pytalev, D. S.; Boldyrev, K. N.; Popova, M. N.; Bettinelli, M.
2017-07-01
We construct a distribution function of the strain-tensor components induced by point defects in an elastically anisotropic continuum, which can be used to account quantitatively for many effects observed in different branches of condensed matter physics. Parameters of the derived six-dimensional generalized Lorentz distribution are expressed through the integrals computed over the array of strains. The distribution functions for the cubic diamond and elpasolite crystals and tetragonal crystals with the zircon and scheelite structures are presented. Our theoretical approach is supported by a successful modeling of specific line shapes of singlet-doublet transitions of the Tm3+ ions doped into ABO4 (A = Y, Lu; B = P, V) crystals with zircon structure, observed in high-resolution optical spectra. The values of the defect strengths of impurity Tm3+ ions in the oxygen surroundings, obtained as a result of this modeling, can be used in future studies of random strains in different rare-earth oxides.
Depth resolved investigations of boron implanted silicon
NASA Astrophysics Data System (ADS)
Sztucki, M.; Metzger, T. H.; Milita, S.; Berberich, F.; Schell, N.; Rouvière, J. L.; Patel, J.
2003-01-01
We have studied the depth distribution and structure of defects in boron implanted silicon (0 0 1). Silicon wafers were implanted with a boron dose of 6×10^15 ions/cm^2 at 32 keV and went through different annealing treatments. Using diffuse X-ray scattering at grazing incidence and exit angles we are able to distinguish between different kinds of defects (point defect clusters and extrinsic stacking faults on {1 1 1} planes) and to determine their depth distribution as a function of the thermal budget. Cross-section transmission electron microscopy was used to gain complementary information. In addition we have determined the strain distribution caused by the boron implantation as a function of depth from rocking curve measurements.
Time distribution of heavy rainfall events in south west of Iran
NASA Astrophysics Data System (ADS)
Ghassabi, Zahra; kamali, G. Ali; Meshkatee, Amir-Hussain; Hajam, Sohrab; Javaheri, Nasrolah
2016-07-01
Accurate knowledge of rainfall time distribution is a fundamental issue in many meteorological-hydrological studies, such as using surface-runoff information in the design of hydraulic structures, flood control and risk management, and river engineering studies. Since the main large dams of Iran are in the south-west of the country (i.e. the southern Zagros), this research investigates the temporal rainfall distribution based on an analytical numerical method to increase the accuracy of hydrological studies in Iran. The United States Soil Conservation Service (SCS) estimated the temporal rainfall distribution in various forms. Hydrology studies usually utilize the same distribution functions in other areas of the world, including Iran, due to the lack of sufficient observation data. However, in this research we first used the Weather Research and Forecasting (WRF) model to simulate the rainfall of the selected storms over the south-west of Iran. Then, a three-parameter logistic function was fitted to the rainfall data in order to compute the temporal rainfall distribution. The domain of the WRF model is 30.5N-34N and 47.5E-52.5E with a resolution of 0.08 degree in latitude and longitude. We selected 35 heavy storms based on the observed rainfall data set to simulate with the WRF model. Storm events were scrutinized independently of each other and the best analytical three-parameter logistic function was fitted for each grid point. The results show that the value of the coefficient a of the logistic function, which indicates rainfall intensity, varies from a minimum of 0.14 to a maximum of 0.7. Furthermore, the values of the coefficient b of the logistic function, which indicates the rain delay of grid points from the start time of rainfall, vary from 1.6 in the south-west and east to more than 8 in the north and central parts of the studied area.
In addition, the rainfall intensities over the south-west of Iran are lower than those observed or proposed by the SCS in the US.
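As an illustration of the fitting step, here is a minimal sketch that fits a three-parameter logistic to a synthetic cumulative rainfall mass curve with scipy; the exact parameterization used in the study is not given in the abstract, so f(t) = c / (1 + exp(-a(t - b))), with a as steepness (intensity), b as delay and c as the asymptote, is an assumption, and all data below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, a, b, c):
    """Cumulative rainfall fraction: steepness a, delay b, asymptote c (assumed form)."""
    return c / (1.0 + np.exp(-a * (t - b)))

t = np.linspace(0.0, 24.0, 49)               # half-hourly steps over a 24 h storm (synthetic)
rng = np.random.default_rng(0)
y_obs = logistic3(t, 0.5, 8.0, 1.0) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(logistic3, t, y_obs, p0=[0.3, 6.0, 1.0])
a_hat, b_hat, c_hat = popt
print(a_hat, b_hat, c_hat)    # recovers the generating parameters closely
```

The same fit, repeated per grid point of the model output, yields maps of the intensity and delay coefficients like those described above.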
Whistler Waves With Electron Temperature Anisotropy And Non-Maxwellian Distribution Functions
NASA Astrophysics Data System (ADS)
Masood, W.
2017-12-01
Low frequency waves (~100 Hz), popularly known as lion roars, are ubiquitously observed by satellites in the terrestrial magnetosheath. Using both wave and electron data from the Cluster spacecraft and employing linear kinetic theory for electromagnetic waves, Masood et al. (Ann. Geophysicae 24, 1725-1735 (2006)) examined the conjecture made by Thorne and Tsurutani (Nature 293, 384 (1981)) that whistler waves with electron temperature anisotropy are the progenitors of lion roars. It turned out that the study based upon the bi-Maxwellian distribution function did not provide a satisfactory explanation of certain disagreements between theory and data. In this paper, we revisit the problem using the generalized (r, q) distribution to carry out the linear stability analysis. It is shown that good qualitative and quantitative agreement between theory and data is found using this distribution. Whistler waves with electron temperature anisotropy are also investigated with other non-Maxwellian distribution functions; a general comparison is made at the end and the differences in each case are highlighted. The possible applications in space plasmas are also pointed out.
NASA Astrophysics Data System (ADS)
Ouyang, Bo; Shang, Weiwei
2016-03-01
The solution of tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.
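For context, the problem being accelerated can be written as a small optimization over cable tensions: minimize total tension subject to the wrench-balance equality and tension limits. The paper's own algorithm is geometric (polyhedron projection), but a linear-programming baseline on a hypothetical three-cable, 2-DOF toy makes the structure of the problem concrete:

```python
import numpy as np
from scipy.optimize import linprog

# Columns are unit cable directions for a point mass held by three cables
# (hypothetical geometry); rows are the x- and y-equilibrium equations.
A_eq = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],   # x-equilibrium
                 [1.0, -0.5,            -0.5          ]])  # y-equilibrium
b_eq = np.array([0.0, 9.81])            # wrench to balance: gravity on a 1 kg mass
c = np.ones(3)                          # objective: minimize total cable tension

# Lower bound keeps every cable taut; upper bound is the tension limit.
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(1.0, 50.0))
print(res.x)                            # optimal feasible tension distribution
```

An LP solver like this plays the role of the "standard simplex method" baseline that the paper's geometric algorithm outperforms on the full 6-DOF, eight-cable problem.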
The Spitzer-IRAC Point-source Catalog of the Vela-D Cloud
NASA Astrophysics Data System (ADS)
Strafella, F.; Elia, D.; Campeggio, L.; Giannini, T.; Lorenzetti, D.; Marengo, M.; Smith, H. A.; Fazio, G.; De Luca, M.; Massi, F.
2010-08-01
This paper presents the observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg2, has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources in a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, have been reported in the catalog. There were 8796 sources for which good quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of the spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances have been highlighted and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.
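A minimal sketch of the two-point correlation measurement mentioned at the end: pair counts DD(r) compared against the Poisson expectation RR(r) give the natural estimator ξ = DD/RR - 1. The example below uses synthetic uniform points in a unit box and no edge correction, so it is only indicative near the boundary:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
pts = rng.random((2000, 3))                  # synthetic uniform Poisson points in a unit box
tree = cKDTree(pts)

edges = np.array([0.02, 0.04, 0.06, 0.08, 0.10])    # radial bin edges
cum = tree.count_neighbors(tree, edges) - len(pts)  # ordered pairs within r, self-pairs removed
dd = np.diff(cum) / 2.0                             # distinct data-data pairs per shell

n_pairs = len(pts) * (len(pts) - 1) / 2.0
rr = n_pairs * (4.0 / 3.0) * np.pi * np.diff(edges**3)  # Poisson expectation (box volume 1)
xi = dd / rr - 1.0
print(xi)   # consistent with ~0 for an unclustered field, up to edge effects
```

For a genuinely clustered sample, like the Class I and flat-spectrum sources above, ξ would come out significantly positive on small scales.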
Du, Hang; Song, Ci; Li, Shengyi; Xu, Mingjin; Peng, Xiaoqiang
2017-05-20
In the process of computer controlled optical surfacing (CCOS), the uncontrollable rolled edge restricts further improvements of the machining accuracy and efficiency. Two reasons are responsible for the rolled edge problem during small tool polishing. One is that the edge areas cannot be processed because of the orbit movement. The other is that changes in the tool influence function (TIF) are difficult to compensate for in algorithms, since a pressure step appears in the local pressure distribution at the surface edge. In this paper, an acentric tool influence function (A-TIF) is designed to remove the rolled edge after CCOS polishing. The model of the A-TIF is analyzed theoretically, and a control point translation dwell time algorithm is used to verify that the full aperture of the workpiece can be covered by the peak removal point of the tool influence functions. Thus, surface residual error over the full aperture can be effectively corrected. Finally, experiments are carried out. Two fused silica glass samples of 100 mm × 100 mm are polished by traditional CCOS and the A-TIF method, respectively. The rolled edge was clearly produced in the sample polished by traditional CCOS, while residual errors do not show this problem in the sample polished by the A-TIF method. Therefore, the rolled edge caused by the traditional CCOS process is successfully suppressed by the A-TIF process. The ability of the designed A-TIF to suppress the rolled edge has been confirmed.
Approach to the origin of turbulence on the basis of two-point kinetic theory
NASA Technical Reports Server (NTRS)
Tsuge, S.
1974-01-01
Equations for the fluctuation correlation in an incompressible shear flow are derived on the basis of kinetic theory, utilizing the two-point distribution function which obeys the BBGKY hierarchy equation truncated with the hypothesis of 'ternary' molecular chaos. The step from the molecular to the hydrodynamic description is accomplished by a moment expansion which is a two-point version of the thirteen-moment method, and which leads to a series of correlation equations, viz., the two-point counterparts of the continuity equation, the Navier-Stokes equation, etc. For almost parallel shearing flows the two-point equation is separable and reduces to two Orr-Sommerfeld equations with different physical implications.
NASA Astrophysics Data System (ADS)
Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.
2016-12-01
Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
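The core effect can be reproduced with a few lines of synthetic numpy: stacking windowed cross-correlations of two receivers recording a source plus an exact time-lagged repeat of itself yields a main lobe at the inter-receiver travel time and side lobes offset by the repeat delay, the artifact type described above. The geometry (travel time d, repeat delay rep) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
nwin, nt = 100, 1024
d = 15           # inter-receiver travel time, in samples (hypothetical)
rep = 40         # delay of the time-lagged repeated source (hypothetical)

acc = np.zeros(2 * nt - 1)
for _ in range(nwin):
    s = rng.standard_normal(nt + 200)
    src = s[100:100 + nt] + s[100 - rep:100 - rep + nt]  # source plus exact time-lagged repeat
    a = src.copy()                   # receiver A sits at the source (zero travel time)
    b = np.zeros(nt)
    b[d:] = src[:-d]                 # receiver B records the same wavefield d samples later
    acc += np.correlate(a, b, mode="full")   # stack windowed cross-correlations

lags = np.arange(-(nt - 1), nt)
peak = lags[np.argmax(np.abs(acc))]
print(peak)   # main lobe at the travel time; side lobes offset by rep are the artifacts
```

As in the derivation summarized above, the artifact lobes stack coherently along with the physical arrival when the noise distribution is stationary over time.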
mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons
NASA Astrophysics Data System (ADS)
Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris
2018-02-01
mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to alternate parameterizations of the same form via the reparameterize module.
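For orientation, a hedged numpy sketch of the truncated generalized gamma distribution (TGGD) that mrpy builds on is given below. mrpy itself normalizes analytically via incomplete gamma functions, whereas this sketch normalizes numerically, and all parameter values are illustrative:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (explicit, for portability across numpy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tggd_pdf(m, hs=1e12, alpha=-1.8, beta=0.7, m_min=1e10):
    """pdf(m) ∝ (m/hs)^alpha * exp(-(m/hs)^beta) for m >= m_min, else 0.
    Parameter values are illustrative, not fitted; normalization is numeric."""
    m = np.asarray(m, dtype=float)
    x = m / hs
    shape = x**alpha * np.exp(-x**beta)
    grid = np.logspace(np.log10(m_min), np.log10(hs) + 4, 20000)
    gx = grid / hs
    norm = _trapz(gx**alpha * np.exp(-gx**beta), grid)
    return np.where(m >= m_min, shape / norm, 0.0)

grid = np.logspace(10, 16, 20000)
pdf = tggd_pdf(grid)
area = _trapz(pdf, grid)
print(area)   # ≈ 1 by construction
```

The truncation at m_min is what distinguishes the TGGD from a plain generalized gamma and is why mrpy tracks the lower mass limit as a parameter.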
Agamy, Mohammed; Elasser, Ahmed; Sabate, Juan Antonio; Galbraith, Anthony William; Harfman Todorovic, Maja
2014-09-09
A distributed photovoltaic (PV) power plant includes a plurality of distributed dc-dc converters. The dc-dc converters are configured to switch in coordination with one another such that at least one dc-dc converter transfers power to a common dc-bus based upon the total system power available from one or more corresponding strings of PV modules. Due to the coordinated switching of the dc-dc converters, each dc-dc converter transferring power to the common dc-bus continues to operate within its optimal efficiency range as well as to optimize the maximum power point tracking in order to increase the energy yield of the PV power plant.
NASA Astrophysics Data System (ADS)
Lin, Pei-Sheng; Rosset, Denis; Zhang, Yanbao; Bancal, Jean-Daniel; Liang, Yeong-Cherng
2018-03-01
The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable, and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.
NASA Astrophysics Data System (ADS)
Rana, Parvez; Vauhkonen, Jari; Junttila, Virpi; Hou, Zhengyang; Gautam, Basanta; Cawkwell, Fiona; Tokola, Timo
2017-12-01
Large-diameter trees (taking DBH > 30 cm to define large trees) dominate the dynamics, function and structure of a forest ecosystem. The aim here was to employ sparse airborne laser scanning (ALS) data with a mean point density of 0.8 m-2 and the non-parametric k-most similar neighbour (k-MSN) to predict tree diameter at breast height (DBH) distributions in a subtropical forest in southern Nepal. The specific objectives were: (1) to evaluate the accuracy of the large-tree fraction of the diameter distribution; and (2) to assess the effect of the number of training areas (sample size, n) on the accuracy of the predicted tree diameter distribution. Comparison of the predicted distributions with empirical ones indicated that the large tree diameter distribution can be derived in a mixed species forest with a RMSE% of 66% and a bias% of -1.33%. It was also feasible to downsize the sample size without losing the interpretability capacity of the model. For large-diameter trees, even a reduction of half of the training plots (n = 250), giving a marginal increase in the RMSE% (1.12-1.97%) was reported compared with the original training plots (n = 500). To be consistent with these outcomes, the sample areas should capture the entire range of spatial and feature variability in order to reduce the occurrence of error.
The Blume-Capel model on hierarchical lattices: Exact local properties
NASA Astrophysics Data System (ADS)
Rocha-Neto, Mário J. G.; Camelo-Neto, G.; Nogueira, E., Jr.; Coutinho, S.
2018-03-01
The local properties of the spin one ferromagnetic Blume-Capel model defined on hierarchical lattices with dimension two and three are obtained by a numerical recursion procedure and studied as functions of the temperature and the reduced crystal-field parameter. The magnetization and the density of sites in the configuration S = 0 state are carefully investigated at low temperature in the region of the phase diagram that presents the phenomenon of phase reentrance. Both order parameters undergo transitions from the ferromagnetic to the ordered paramagnetic phase with abrupt discontinuities that decrease along the phase boundary at low temperatures. The distribution of magnetization in a typical profile was determined on the transition line presenting a broad multifractal spectrum that narrows towards the fractal limit (single point) as the discontinuities of the order parameters grow towards a maximum. The amplitude of the order-parameter discontinuities and the narrowing of the multifractal spectra were used to delimit the low temperature interval for the possible locus of the tricritical point.
Critical tipping point distinguishing two types of transitions in modular network structures
NASA Astrophysics Data System (ADS)
Shai, Saray; Kenett, Dror Y.; Kenett, Yoed N.; Faust, Miriam; Dobson, Simon; Havlin, Shlomo
2015-12-01
Modularity is a key organizing principle in real-world large-scale complex networks. The relatively sparse interactions between modules are critical to the functionality of the system and are often the first to fail. We model such failures as site percolation targeting interconnected nodes, those connecting between modules. We find, using percolation theory and simulations, that they lead to a "tipping point" between two distinct regimes. In one regime, removal of interconnected nodes fragments the modules internally and causes the system to collapse. In contrast, in the other regime, while only attacking a small fraction of nodes, the modules remain but become disconnected, breaking the entire system. We show that networks with broader degree distribution might be highly vulnerable to such attacks since only few nodes are needed to interconnect the modules, consequently putting the entire system at high risk. Our model has the potential to shed light on many real-world phenomena, and we briefly consider its implications on recent advances in the understanding of several neurocognitive processes and diseases.
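The second regime described above can be reproduced with a self-contained toy: two sparse random modules joined only through a few interconnected nodes. Removing just those nodes leaves each module internally intact but disconnects the network. Module sizes and link probability are arbitrary choices:

```python
import random

random.seed(0)
n = 100                                  # nodes per module (arbitrary)
adj = {v: set() for v in range(2 * n)}

def link(u, v):
    adj[u].add(v)
    adj[v].add(u)

for base in (0, n):                      # two sparse random (Erdos-Renyi-like) modules
    for u in range(base, base + n):
        for v in range(u + 1, base + n):
            if random.random() < 0.08:
                link(u, v)

bridges = [0, 1, 2]                      # the "interconnected" nodes in module 1
for u in bridges:
    link(u, n + u)                       # each also links into module 2

def components(nodes):
    """Sizes of connected components of the subgraph induced by `nodes`."""
    seen, sizes = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w in nodes and w not in seen:
                    seen.add(w)
                    stack.append(w)
        sizes.append(size)
    return sorted(sizes, reverse=True)

before = components(set(adj))
after = components(set(adj) - set(bridges))   # targeted removal of interconnected nodes
print(before[0], after[:2])   # one giant component before; two intact but separate modules after
```

Only three of 200 nodes were attacked, yet the network splits, which is the high-risk scenario the text associates with broad degree distributions.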
Age differences in search of web pages: the effects of link size, link number, and clutter.
Grahame, Michael; Laberge, Jason; Scialfa, Charles T
2004-01-01
Reaction time, eye movements, and errors were measured during visual search of Web pages to determine age-related differences in performance as a function of link size, link number, link location, and clutter. Participants (15 young adults, M = 23 years; 14 older adults, M = 57 years) searched Web pages for target links that varied from trial to trial. During one half of the trials, links were enlarged from 10-point to 12-point font. Target location was distributed among the left, center, and bottom portions of the screen. Clutter was manipulated according to the percentage of used space, including graphics and text, and the number of potentially distracting nontarget links was varied. Increased link size improved performance, whereas increased clutter and links hampered search, especially for older adults. Results also showed that links located in the left region of the page were found most easily. Actual or potential applications of this research include Web site design to increase usability, particularly for older adults.
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function: LL(x) = ln[p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Heisenberg scaling with weak measurement: a quantum state discrimination point of view
2015-03-18
The Heisenberg scaling of the photon number for the precision of the interaction parameter between...coherent light and a spin one-half particle (or pseudo-spin) has a simple interpretation in terms of the interaction rotating the quantum state to an...
SU-E-I-16: Scan Length Dependency of the Radial Dose Distribution in a Long Polyethylene Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakalyar, D; McKenney, S; Feng, W
Purpose: The area-averaged dose in the central plane of a long cylinder following a CT scan depends upon the radial dose distribution and the length of the scan. The ICRU/TG200 phantom, a polyethylene cylinder 30 cm in diameter and 60 cm long, was the subject of this study. The purpose was to develop an analytic function that could determine the dose for a scan length L at any point in the central plane of this phantom. Methods: Monte Carlo calculations were performed on a simulated ICRU/TG200 phantom under cylindrically symmetric irradiation conditions. Thus, the radial dose distribution function must be an even function that accounts for two competing effects: the direct beam makes its weakest contribution at the center, while the scatter begins abruptly at the outer radius and grows as the center is approached. The scatter contribution also increases with scan length, with the increase approaching its limiting value at the periphery faster than along the central axis. An analytic function was developed that fit the data and possessed these features. Results: Symmetry and continuity dictate a local extremum at the center, which is a minimum for the ICRU/TG200 phantom. The relative depth of the minimum decreases as the scan length grows, and an absolute maximum can occur between the center and the outer edge of the cylinder. As the scan length grows, the relative dip in the center decreases, so that for very long scan lengths the dose profile is relatively flat. Conclusion: An analytic function characterizes the radial and scan-length dependency of dose for long cylindrical phantoms. The function can be integrated, with the results expressed in closed form. One use for this is to help determine the average dose distribution over the central cylinder plane for any scan length.
MinFinder v2.0: An improved version of MinFinder
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2008-10-01
A new version of the "MinFinder" program is presented that offers an augmented linking procedure for Fortran-77 subprograms, two additional stopping rules and a new start-point rejection mechanism that saves a significant portion of gradient and function evaluations. The method is applied on a set of standard test functions and the results are reported. New version program summary: Program title: MinFinder v2.0; Catalogue identifier: ADWU_v2_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWU_v2_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC Licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 14 150; No. of bytes in distributed program, including test data, etc.: 218 144; Distribution format: tar.gz; Programming languages used: GNU C++, GNU FORTRAN, GNU C; Computer: the program is designed to be portable to all systems running the GNU C++ compiler; Operating systems: Linux, Solaris, FreeBSD; RAM: 200 000 bytes; Classification: 4.9; Catalogue identifier of previous version: ADWU_v1_0; Journal reference of previous version: Computer Physics Communications 174 (2006) 166-179; Does the new version supersede the previous version?: Yes. Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, when solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero. Solution method: Using a uniform pdf, points are sampled from a rectangular domain.
A clustering technique, based on a typical distance and a gradient criterion, is used to decide from which points a local search should be started. Further searching is terminated when all the local minima inside the search domain are thought to be found. This is accomplished via three stopping rules: the "double-box" stopping rule, the "observables" stopping rule and the "expected minimizers" stopping rule. Reasons for the new version: The link procedure for source code in Fortran 77 is enhanced, two additional stopping rules are implemented and a new criterion for accepting start points, which economizes on function and gradient calls, is introduced. Summary of revisions: Addition of command line parameters to the utility program make_program; augmentation of the link process for Fortran 77 subprograms, by linking the final executable with the g2c library; addition of two probabilistic stopping rules; introduction of a rejection mechanism in the checking step of the original method, which reduces the number of gradient evaluations. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the objective function.
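A compact sketch of the multistart-with-rejection idea (not MinFinder's actual implementation) is shown below: uniform start points are sampled, points too close to already-found minima are rejected, which saves function and gradient calls, and local searches are run from the survivors. The test function and rejection radius are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Illustrative test function with four well-separated local minima near (±1, ±1)."""
    return (x[0]**2 - 1)**2 + (x[1]**2 - 1)**2 + 0.1 * x[0]

rng = np.random.default_rng(3)
minima = []
radius = 0.5                     # "typical distance" rejection radius (tunable)
for _ in range(200):
    x0 = rng.uniform(-2, 2, size=2)          # uniform sample over the search box
    if any(np.linalg.norm(x0 - m) < radius for m in minima):
        continue                             # rejected: this region is already explored
    res = minimize(f, x0, method="BFGS")     # local search from the accepted start point
    if res.success and all(np.linalg.norm(res.x - m) > 1e-3 for m in minima):
        minima.append(res.x)                 # record a newly found distinct minimum
print(len(minima), sorted(round(float(f(m)), 3) for m in minima))
```

MinFinder adds to this skeleton a gradient criterion for clustering and the probabilistic stopping rules that decide when all minima have likely been found.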
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerasimov, O. I.; Adamian, V. M.
The behavior of the theoretically predicted correlational "fine" energy-loss spectrum of inelastic electron scattering in disordered systems close to a single resonance is investigated near the critical point. Extending our earlier work, it is shown that the ratio of the statistical expression of the cross section of energy loss to the function which describes the line shape in an ideal gas asymptotically increases near the critical point as a power law. A "fracton" interpretation of the display of the localization of a single excitation in disordered systems in the resonance-line shape of the energy-loss spectrum is suggested. The possibility of direct determination of the pair distribution function (without Fourier transformation of the structure factor) using the method of charged-particle scattering is discussed.
NASA Astrophysics Data System (ADS)
Tarasov, V. F.
In the present paper exact formulae for the calculation of the zeros of Rnl(r) and 1F1(-a; c; z), where z = 2λr, a = n - l - 1 >= 0 and c = 2l + 2 >= 2, are presented. For a <= 4 the method due to Tartaglia and Cardano, and that due to L. Ferrari, L. Euler and J.-L. Lagrange, are used. In the other cases (a > 4) numerical methods are employed to obtain the results (to within 10^-15). For greater geometrical clarity of the irregular distribution (for a > 3) of the zeros xk = zk - (c + a - 1) on the axis y = 0, circular diagrams with radius Ra = (a - 1)√(c + a - 1) are presented for the first time. It is possible to notice some singularities of the distribution of these zeros and their images - the points Tk - on the circle. For a = 3 and 4 their exact "angle" asymptotics (as c --> ∞) are obtained. It is shown that on the basis of the L. Ferrari, L. Euler and J.-L. Lagrange methods, used for solving the equation 1F1(-4; c; z) = 0, one
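These zeros can be cross-checked numerically: since 1F1(-a; c; z) is proportional to the generalized Laguerre polynomial L_a^(c-1)(z), its zeros are the Gauss-Laguerre nodes, which scipy computes directly. A small sketch (the values a = 4, c = 6 are just an example):

```python
import numpy as np
from scipy.special import roots_genlaguerre, hyp1f1

a, c = 4, 6                              # example: n - l - 1 = 4, 2l + 2 = 6
nodes, _ = roots_genlaguerre(a, c - 1)   # Gauss-Laguerre nodes = zeros of L_4^(5)(z)
residuals = [abs(hyp1f1(-a, c, z)) for z in nodes]
print(nodes, max(residuals))             # residuals ~ 0: nodes are zeros of 1F1(-a; c; z)
```

This gives the z_k directly; the shifted zeros x_k = z_k - (c + a - 1) discussed above follow by subtraction.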
Random Matrix Theory and Elliptic Curves
2014-11-24
...points on that curve. Counting rational points on curves is a field with a rich ...deficiency of zeros near the origin of the histograms in Figure 1. While as d becomes large this discretization becomes smaller and has less and less effect...order of 30), the regular oscillations seen at the origin become dominated by fluctuations of an arithmetic origin, influenced by zeros of the Riemann
Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M
2005-01-01
Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, thus inhibiting antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution identified by limited radiolabeled tracer diffusion from the tumor surface to the tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated for different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influences of these absorbed dose distributions on solid tumor radionuclide therapy are also discussed. The absorbed dose distribution was calculated using the dose point kernel method, which provided for the application of a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 × 0.10 × 0.10 mm^3. A fast Fourier transform (FFT) and multiplication in the frequency domain followed by an inverse FFT (iFFT) were then used to effect this phase of the dose calculation process. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. Using the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated.
When the tumor diameter exceeds 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides, such as 188Re and 90Y, is more effective in delivering a higher absorbed dose to the tumor center for tumor diameters around 10 mm.
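The dose point kernel calculation described above (3D convolution carried out as FFT, frequency-domain multiplication, and inverse FFT) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, grid sizes, and kernel values are invented, and the sketch zero-pads to avoid circular wrap-around before cropping back to the activity grid.

```python
import numpy as np

def dose_by_fft(activity, kernel):
    """3D dose estimate: convolve an activity map with a dose-rate point
    kernel via FFT, zero-padding to avoid wrap-around artifacts."""
    shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
    full = np.fft.irfftn(np.fft.rfftn(activity, shape)
                         * np.fft.rfftn(kernel, shape), shape)
    # crop the full convolution back to the activity grid ('same' output)
    start = [(k - 1) // 2 for k in kernel.shape]
    return full[tuple(slice(s, s + n)
                      for s, n in zip(start, activity.shape))]
```

For a point source (a single hot voxel), the output reproduces the kernel centred on that voxel, which is a quick sanity check of the padding and cropping.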
NASA Technical Reports Server (NTRS)
Roth, R. J.
1973-01-01
The distribution function of ion energy parallel to the magnetic field of a modified Penning discharge has been measured with a retarding potential energy analyzer. These ions escaped through one of the throats of the magnetic mirror geometry. Simultaneous measurements of the ion energy distribution function perpendicular to the magnetic field have been made with a charge exchange neutral detector. The ion energy distribution functions are approximately Maxwellian, and the parallel and perpendicular kinetic temperatures are equal within experimental error. These results suggest that turbulent processes previously observed in this discharge Maxwellianize the velocity distribution along a radius in velocity space and cause an isotropic energy distribution. When the distributions depart from Maxwellian, they are enhanced above the Maxwellian tail.
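The isotropy test described above (equal parallel and perpendicular kinetic temperatures for a Maxwellian) can be illustrated numerically. This is a hedged sketch in arbitrary units with mass m = 1, not the experimental analysis: for a Maxwellian velocity distribution each velocity component is an independent Gaussian with variance kT/m, so the component variances recover kT and the mean energy recovers (3/2)kT.

```python
import numpy as np

rng = np.random.default_rng(42)
kT = 2.0        # temperature in arbitrary energy units (mass m = 1)
n = 200_000

# Maxwellian velocities: independent Gaussian components, variance kT/m
v = rng.normal(0.0, np.sqrt(kT), size=(n, 3))

T_par = np.var(v[:, 0])                              # parallel (one axis)
T_perp = 0.5 * (np.var(v[:, 1]) + np.var(v[:, 2]))   # perpendicular plane
E_mean = 0.5 * np.mean((v ** 2).sum(axis=1))         # <E> = (3/2) kT
```

Within sampling error, T_par and T_perp agree with each other and with kT, mirroring the equality of the measured parallel and perpendicular temperatures.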
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archambault, L; Papaconstadopoulos, P; Seuntjens, J
Purpose: To study Cherenkov light emission in plastic scintillation detectors (PSDs) from a theoretical point of view and identify situations in which the calibration coefficient obtained under one condition is not applicable to another. By identifying problematic situations, we hope to provide guidance on how to use PSDs with confidence. Methods: Cherenkov light emission in PSDs was modelled from basic physical principles. In particular, the change in refractive index as a function of wavelength was accounted for using the Sellmeier empirical equation. Both electron and photon beams were considered. For photons, realistic distributions of secondary charged particles were calculated using the Klein-Nishina formula. Cherenkov production and collection in PSDs were studied for a range of parameters including beam energy, charged particle momentum distribution, detector orientation and material composition. Finally, experimental validation was performed with a commercial plastic scintillation detector. Results: In specific situations, the Cherenkov spectrum coupled into the PSD can deviate from its expected behaviour (i.e. one over the square of the wavelength). In those cases where the model is realistic, it is possible to see a peak wavelength instead of a monotonically decreasing function. Consequences of this phenomenon are negligible when the momenta of the charged particles are distributed randomly, but in some clinically relevant cases, such as an electron beam at depths close to R50 or photon beams with a minimal scatter component, the value of the calibration coefficient can be altered. Experimental tests with electron beams showed changes of up to 2–3% in the Cherenkov light ratio, the parameter used in the calibration of PSDs, depending on the PSD orientation. Conclusion: This work is the first to provide a physical explanation for the apparent change in the PSD calibration coefficient.
With this new information at hand, it will be possible to better guide the clinical use of PSDs.
Fly eye radar or micro-radar sensor technology
NASA Astrophysics Data System (ADS)
Molchanov, Pavlo; Asmolova, Olga
2014-05-01
To compensate for its inability to point its eyes at a target, the fly relies on multiple angularly spaced sensors that give it the wide-area visual coverage it needs to detect and avoid the threats around it. Based on a similar concept, a revolutionary new micro-radar sensor technology is proposed for detecting and tracking ground and/or airborne low-profile, low-altitude targets in harsh urban environments. Distributed along a border or around a protected object (military facilities and buildings, camps, stadiums), small, low-power unattended radar sensors can be used for target detection and tracking, threat warning and pre-shot sniper protection, and provide effective support for homeland security. In addition, the technology can provide 3D recognition and target classification, because it delivers five orders of magnitude more pulses to each point in space than any scanning radar by using several points of view, diversity signals and intelligent processing. The application of an array of directional antennas eliminates the need for a mechanically scanned antenna or a phase processor. It radically decreases radar size and increases bearing accuracy severalfold. The proposed micro-radar sensors can be easily connected to one or several operators by invisible, protected point-to-point communication. The directional antennas have higher gain, can be multi-frequency and can be connected to a multi-functional network. Fly-eye micro-radars are inexpensive, can be expendable and will reduce the cost of defense.
NASA Astrophysics Data System (ADS)
Li, Xiao-Dong; Park, Changbom; Sabiu, Cristiano G.; Park, Hyunbae; Cheng, Cheng; Kim, Juhan; Hong, Sungwook E.
2017-08-01
We develop a methodology that uses the redshift dependence of the galaxy two-point correlation function (2pCF) across the line of sight, ξ(r⊥), as a probe of cosmological parameters. The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. This geometrical distortion can be observed as a redshift-dependent rescaling in the measured ξ(r⊥). We test this methodology using a sample of 1.75 billion mock galaxies at redshifts 0, 0.5, 1, 1.5, and 2, drawn from the Horizon Run 4 N-body simulation. The shape of ξ(r⊥) can exhibit a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. Other contributions, including the gravitational growth of structure, galaxy bias, and redshift space distortions, do not produce large redshift evolution in the shape. We show that one can make use of this geometrical distortion to constrain the values of the cosmological parameters governing the expansion history of the universe. This method could be applicable to future large-scale structure surveys, especially photometric surveys such as DES and LSST, to derive tight cosmological constraints. This work is a continuation of our previous works as a strategy for constraining cosmological parameters using redshift-invariant physical quantities.
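The geometrical distortion exploited above rests on the comoving-distance integral: assuming a wrong Omega_m rescales transverse separations by the ratio of comoving distances. A minimal numerical sketch for flat ΛCDM (trapezoidal integration; the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

C_KMS = 299792.458   # speed of light, km/s

def comoving_distance(z, omega_m, h=0.7, nstep=20_000):
    """Line-of-sight comoving distance (Mpc) in flat LCDM,
    D_C = (c/H0) * integral_0^z dz' / E(z'), by the trapezoidal rule."""
    zs = np.linspace(0.0, z, nstep)
    ez = np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m))
    f = 1.0 / ez
    integral = (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * (zs[1] - zs[0])
    return (C_KMS / (100.0 * h)) * integral

# Transverse scales dilate by the distance ratio when the assumed
# Omega_m differs from the true one; the ratio is redshift dependent.
ratio = comoving_distance(0.5, 0.25) / comoving_distance(0.5, 0.35)
```

Because the ratio varies with redshift, a wrong cosmology imprints a redshift-dependent rescaling on ξ(r⊥), which is the signal the method isolates.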
Pole-Like Road Furniture Detection in Sparse and Unevenly Distributed Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Lehtomäki, M.; Oude Elberink, S.; Vosselman, G.; Puttonen, E.; Kukko, A.; Hyyppä, J.
2018-05-01
Pole-like road furniture detection has received much attention in recent years due to its traffic functionality. In this paper, we develop a framework to detect pole-like road furniture in sparse mobile laser scanning data. The framework is carried out in four steps. The unorganised point cloud is first partitioned. After ground points are removed, the above-ground points are clustered and roughly classified. A slicing check in combination with cylinder masking is then proposed to extract pole-like road furniture candidates. Pole-like road furniture is obtained after occlusion analysis in the last stage. The average completeness and correctness of pole-like road furniture detection in sparse and unevenly distributed mobile laser scanning data were above 0.83. This is comparable to the state of the art in pole-like road furniture detection for mobile laser scanning data of good quality, and is potentially of practical use in the processing of point clouds collected by autonomous driving platforms.
Waypoints Following Guidance for Surface-to-Surface Missiles
NASA Astrophysics Data System (ADS)
Zhou, Hao; Khalil, Elsayed M.; Rahman, Tawfiqur; Chen, Wanchun
2018-04-01
This paper proposes a waypoint-following guidance law. In this method, an optimal trajectory is first generated and then represented by a set of waypoints, distributed from the starting point to the final target point using a polynomial. The guidance system then works by issuing the guidance commands needed to move from one waypoint to the next. Here the method is applied to a surface-to-surface missile. The results show that the method is feasible for on-board application.
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to characterize soil respiration accurately with an insufficient number of monitoring points, yet it is expensive and cumbersome to deploy many sensors. To address this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, with soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated the soil temperature data into the estimated spatial distribution. For comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with the auxiliary soil temperature data and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method, using auxiliary information, could reduce the number of sampling points required for studying the spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but also reduce the number of sampling points.
Grass shrimp are one of the more widely distributed estuarine benthic organisms along the Gulf of Mexico and Atlantic coasts, but they have been used infrequently in contaminated sediment assessments. Early life stages of the caridean grass shrimp, Palaemonetes pugio (Holthuis), ...
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. 
Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography
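The simplest setting surveyed above, the Laplace method at a single non-degenerate interior minimum, says that the integral I(n) = ∫ exp(-n f(x)) dx behaves like exp(-n f(x0)) sqrt(2π / (n f''(x0))) as n grows. A numerical check of this leading term, with an example function chosen purely for illustration:

```python
import numpy as np

def laplace_leading_term(n, f_min, f2_min):
    """Leading Laplace asymptotics for I(n) = integral of exp(-n f(x)) dx
    with a single non-degenerate interior minimum x0:
    f(x0) = f_min, f''(x0) = f2_min."""
    return np.exp(-n * f_min) * np.sqrt(2.0 * np.pi / (n * f2_min))

# Example: f(x) = cosh(x) - 1, minimum at x0 = 0 with f(x0)=0, f''(x0)=1.
n = 50
x = np.linspace(-6.0, 6.0, 240_001)
fx = np.exp(-n * (np.cosh(x) - 1.0))
numeric = (0.5 * (fx[0] + fx[-1]) + fx[1:-1].sum()) * (x[1] - x[0])
approx = laplace_leading_term(n, 0.0, 1.0)
```

For n = 50 the leading term already agrees with the numerical integral to a fraction of a percent; the relative error decays like 1/n, consistent with the next-order correction.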
Topology for Dominance for Network of Multi-Agent System
NASA Astrophysics Data System (ADS)
Szeto, K. Y.
2007-05-01
The resource allocation problem in evolving two-dimensional point patterns is investigated to find good strategies for constructing an initial configuration that leads to fast dominance of the pattern by a single species, which can be interpreted as market dominance by a company in the context of multi-agent systems in econophysics. For a hexagonal lattice, certain special topological arrangements of the resource in two dimensions, such as rings, lines and clusters, have a higher probability of dominance than a random pattern. For more complex networks, a systematic way to search for a stable and dominant resource allocation strategy in a changing environment is found by means of a genetic algorithm. Five typical features can be summarized from the distribution functions of the local neighborhood of friends and enemies and from the local clustering coefficients: (1) The winner has more triangles than the loser does. (2) Winners tend to form clusters, since a winner tends to connect with other winners rather than with losers, while a loser tends to connect with winners rather than with losers. (3) The distribution functions of friends and of enemies are broader for the winner than the corresponding distributions for the loser. (4) The connectivity at which the distribution of friends peaks is larger for the winner than for the loser, while the winner's peak value for friends is lower. (5) The connectivity at which the distribution of enemies peaks is smaller for the winner than for the loser, while the winner's peak value for enemies is lower. These five features appear to be general, at least in the context of two-dimensional hexagonal lattices of various sizes, hierarchical lattices, Voronoi diagrams, and high-dimensional random networks.
These general local topological properties of networks are relevant to strategists aiming at dominance in evolving patterns when the interaction between the agents is local.
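The triangle counting and clustering quantities used above (and the transitivity mentioned in the network-analysis context) can be computed directly from an adjacency structure. A minimal sketch, assuming an undirected graph stored as a dict mapping each vertex to its set of neighbours:

```python
from itertools import combinations

def transitivity(adj):
    """Global clustering coefficient of an undirected graph given as
    {vertex: set_of_neighbours}: closed triples / connected triples."""
    closed = 0
    triples = 0
    for v, nbrs in adj.items():
        for a, b in combinations(sorted(nbrs), 2):
            triples += 1              # path a - v - b centred on v
            if b in adj[a]:
                closed += 1           # the triple closes into a triangle
    # every triangle yields three closed triples (one per vertex), which
    # reproduces the usual 3 * triangles / triples definition
    return closed / triples if triples else 0.0
```

On a triangle the value is 1, on a path or star it is 0, so the measure directly captures the excess of triangles that distinguishes winners from losers above.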
[Good drug distribution practice and its implementation in drug distribution companies].
Draksiene, Gailute
2002-01-01
Good Distribution Practice is based on Council Directive 92/25/EEC of the European Community on the wholesale distribution of medicinal products for human use. The Directive states that the whole drug distribution channel is to be controlled, from the point of drug production or import down to supply to the end user. To reach this goal, a drug distribution company must create a quality assurance system and ensure its correct functioning. This requires development of the rules of Good Distribution Practice, which set the general requirements that distribution companies must follow. The article explains the main requirements postulated in the rules of Good Distribution Practice and their implementation in drug distribution companies.
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were derived. Their one-dimensional counterparts are widely used in probability theory. The generating functions of these multidimensional distributions were also obtained.
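For independent components, the multidimensional generating function factorizes into the product of the one-dimensional ones: G(s_1, ..., s_d) = prod_i G_i(s_i). An illustrative sketch using Poisson components (the specific distributions are an assumed example, not the ones in the paper):

```python
import math

def poisson_pgf(lam, s):
    """Probability generating function of Poisson(lam): E[s^X] = exp(lam*(s-1))."""
    return math.exp(lam * (s - 1.0))

def joint_pgf(lams, ss):
    """PGF of a vector of independent discrete components: the joint
    generating function is the product of the marginal PGFs."""
    out = 1.0
    for lam, s in zip(lams, ss):
        out *= poisson_pgf(lam, s)
    return out
```

Standard identities serve as checks: the PGF equals 1 at s = (1, ..., 1), and its derivative in one argument at that point recovers the corresponding mean.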
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste and for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass ( 240Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions from the calibration samples, which often biases the assay result. This paper presents a new multiplicity-sensitive neutron coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions, triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow easy discrimination between real and accidental coincidences and are intended to be measured with a PC-based fast time-interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu eq mass. The presented theory, which will be referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is theoretically much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
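A one-dimensional Rossi-alpha distribution can be sketched by simulating a pulse train of uniform accidental background plus correlated pairs and histogramming the time differences following each trigger pulse. All rates, counts, and the die-away time below are invented for illustration; real pulse trains would come from the detector electronics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pulse train: uniform accidental background plus correlated
# pulse pairs whose second member follows the first after an exponential
# delay (die-away time 1.0, i.e. Rossi-alpha decay constant alpha = 1).
T = 1000.0
t_bg = rng.uniform(0.0, T, 1000)
t_first = rng.uniform(0.0, T, 2000)
t_second = t_first + rng.exponential(1.0, 2000)
pulses = np.sort(np.concatenate([t_bg, t_first, t_second]))

def rossi_alpha(pulses, window=10.0, nbins=50):
    """One-dimensional Rossi-alpha histogram: time differences between
    each trigger pulse and all subsequent pulses inside the window."""
    edges = np.linspace(0.0, window, nbins + 1)
    hist = np.zeros(nbins)
    for i, t0 in enumerate(pulses):
        j = i + 1
        while j < len(pulses) and pulses[j] - t0 < window:
            hist[int((pulses[j] - t0) / window * nbins)] += 1
            j += 1
    return edges, hist

edges, hist = rossi_alpha(pulses)
```

The flat accidental floor with an exponentially decaying excess at short times is exactly the shape that lets real coincidences be separated from accidentals.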
Evaluation of Rock Surface Characterization by Means of Temperature Distribution
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.
2017-12-01
Rocks of many different types are formed over many years. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic source of the point cloud data that underlies the 3D model, offering analysts the possibility of automation. Due to the irregular and complex structure of rocks, representing their surfaces with a large number of points is more effective. Colour differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel-matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using the temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. A small rock measuring 3 m x 1 m, located at the ITU Ayazaga Campus, was selected as the study object. Two different methods were used. The first was the production of a choropleth map by interpolation, using temperature values at control points marked on the object that were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 coordinated control points marked on the object. The temperature at each control point was measured with an infrared thermometer and used as the basic data source for creating the choropleth map by interpolation. Temperature values ranged from 32 to 37.2 degrees. In the second method, a 3D object model was produced from terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created.
The temperature distributions in the two applications are almost identical in position. The areas on the rock surface where roughness values are higher than in the surroundings can be clearly identified. When the temperature distributions produced by the two methods are evaluated, it is observed that as the roughness of the surface increases, the temperature increases.
The melting point of lithium: an orbital-free first-principles molecular dynamics study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Mohan; Hung, Linda; Huang, Chen
2013-08-25
The melting point of lithium near zero pressure is studied with large-scale orbital-free first-principles molecular dynamics (OF-FPMD) in the isobaric-isothermal ensemble. We adopt the Wang-Govind-Carter (WGC) functional as the kinetic energy density functional (KEDF) and construct a bulk-derived local pseudopotential (BLPS) for Li. Our simulations employ both the 'heat-until-melts' method and the coexistence method. We predict 465 K as an upper bound on the melting point of Li from the 'heat-until-melts' method, and 434 K as the melting point from the coexistence method. These values compare well with the experimental melting point of 453 K at zero pressure. Furthermore, we calculate several important properties of liquid Li, including the diffusion coefficients, pair distribution functions, static structure factors, and compressibilities at 470 K and 725 K in the canonical ensemble. These theoretically obtained results show good agreement with known experimental results, suggesting that OF-FPMD with a non-local KEDF and a BLPS is capable of accurately describing liquid metals.
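The pair distribution function listed among the liquid properties can be estimated from particle positions using the minimum-image convention in a periodic box. A minimal sketch in which uniform random points stand in for an MD trajectory (so the result should be flat, g(r) ≈ 1); the box size, particle count, and binning are invented:

```python
import numpy as np

def pair_distribution(pos, box, nbins=50, rmax=None):
    """Radial pair distribution function g(r) for N points in a cubic
    periodic box, normalized so g(r) -> 1 for an ideal gas."""
    n = len(pos)
    if rmax is None:
        rmax = box / 2.0
    edges = np.linspace(0.0, rmax, nbins + 1)
    counts = np.zeros(nbins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        counts += np.histogram(r[r < rmax], bins=edges)[0]
    # expected pair counts per shell for an ideal (uncorrelated) gas
    shell = 4.0 * np.pi / 3.0 * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = shell * (n / box ** 3) * (n - 1) / 2.0
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, counts / ideal

rng = np.random.default_rng(3)
positions = rng.uniform(0.0, 10.0, size=(2000, 3))
r, g = pair_distribution(positions, 10.0)
```

For a real liquid the same estimator would show the characteristic first-neighbour peak and decaying oscillations instead of a flat profile.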
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel's redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
Cumulative Poisson Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
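Accumulating the terms in log space is one standard way to prevent the overflow and underflow the abstract mentions: for large λ, exp(-λ) underflows and λ^k overflows individually, while the log-space sum stays well scaled. A sketch in that spirit (in Python rather than the program's original C; this is not CUMPOIS itself):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space so that
    neither exp(-lam) underflows nor lam**k overflows for large lam."""
    if k < 0:
        return 0.0
    if lam == 0.0:
        return 1.0
    log_terms = []
    log_t = -lam                        # log of the j = 0 term, exp(-lam)
    for j in range(k + 1):
        log_terms.append(log_t)
        log_t += math.log(lam) - math.log(j + 1)   # term ratio lam/(j+1)
    m = max(log_terms)                  # log-sum-exp for a stable total
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))
```

As in the program's use, this cdf at k equals the upper regularized gamma function with shape k + 1 evaluated at λ, which is what links it to gamma and chi-squared cdfs.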
Selective structural source identification
NASA Astrophysics Data System (ADS)
Totaro, Nicolas
2018-04-01
In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two features stem from the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of a virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with spot-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution exhibits only the positions of the external applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution localizes both the forces due to the coupling between the structure and the stiffeners through the welded points and those due to the external forces. This is why the approach is considered here as a selective structural source identification method. It is demonstrated that this approach falls in the same framework as the Force Analysis Technique, the Virtual Fields Method and the 2D spatial Fourier transform. Even though it has much in common with the latter methods, it has some interesting particularities, such as its low sensitivity to measurement noise.
r.randomwalk v1.0, a multi-functional conceptual tool for mass movement routing
NASA Astrophysics Data System (ADS)
Mergili, M.; Krenn, J.; Chu, H.-J.
2015-09-01
We introduce r.randomwalk, a flexible and multi-functional open source tool for backward- and forward-analyses of mass movement propagation. r.randomwalk builds on GRASS GIS, the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are: (i) multiple break criteria can be combined to compute an impact indicator score, (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter settings, resulting in an impact indicator index in the range 0-1, (iii) built-in functions for validation and visualization of the results are provided, (iv) observed landslides can be back-analyzed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk (i) for a single event, the Acheron Rock Avalanche in New Zealand, (ii) for landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) for lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
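The constrained-random-walk routing idea can be sketched on a toy elevation grid: each mass point steps to a random lower neighbour until it reaches a sink or a break criterion. The break test below is a simplified angle-of-reach check (average slope from the release pixel), and all parameters and the grid are invented; r.randomwalk itself combines multiple criteria and randomized parameter sets:

```python
import math
import random

def random_walk(dem, start, tan_break=0.2, max_steps=10_000, seed=1):
    """Route one mass point through a DEM (list of lists of elevations) by
    a constrained random walk: each step moves to a random lower
    neighbour, stopping at a local sink or when the average slope from
    the release pixel drops below tan_break (angle-of-reach criterion)."""
    rng = random.Random(seed)
    (r, c), z0 = start, dem[start[0]][start[1]]
    path = [start]
    for _ in range(max_steps):
        lower = [(r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr or dc)
                 and 0 <= r + dr < len(dem) and 0 <= c + dc < len(dem[0])
                 and dem[r + dr][c + dc] < dem[r][c]]
        if not lower:
            break                        # local sink: deposition
        r, c = rng.choice(lower)         # constrained random step downslope
        path.append((r, c))
        dist = math.hypot(r - start[0], c - start[1])
        if dist > 0 and (z0 - dem[r][c]) / dist < tan_break:
            break                        # angle of reach reached
    return path

# Toy DEM: a uniform slope descending with the row index.
dem = [[-i for _ in range(10)] for i in range(10)]
path = random_walk(dem, (0, 5))
```

Running many such walks with randomized parameters and accumulating hits per pixel is what turns individual paths into the impact indicator maps described above.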
r.randomwalk v1, a multi-functional conceptual tool for mass movement routing
NASA Astrophysics Data System (ADS)
Mergili, M.; Krenn, J.; Chu, H.-J.
2015-12-01
We introduce r.randomwalk, a flexible and multi-functional open-source tool for backward and forward analyses of mass movement propagation. r.randomwalk builds on GRASS GIS (Geographic Resources Analysis Support System - Geographic Information System), the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are (i) multiple break criteria can be combined to compute an impact indicator score; (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter sets, resulting in an impact indicator index in the range 0-1; (iii) built-in functions for validation and visualization of the results are provided; (iv) observed landslides can be back analysed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk for (i) a single event, the Acheron rock avalanche in New Zealand; (ii) landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
Many-body perturbation theory using the density-functional concept: beyond the GW approximation.
Bruneval, Fabien; Sottile, Francesco; Olevano, Valerio; Del Sole, Rodolfo; Reining, Lucia
2005-05-13
We propose an alternative formulation of many-body perturbation theory that uses the density-functional concept. Instead of the usual four-point integral equation for the polarizability, we obtain a two-point one, which leads to excellent optical absorption and energy-loss spectra. The corresponding three-point vertex function and self-energy are then simply calculated via an integration, for any level of approximation. Moreover, we show the direct impact of this formulation on the time-dependent density-functional theory. Numerical results for the band gap of bulk silicon and solid argon illustrate corrections beyond the GW approximation for the self-energy.
Nonalgebraic integrability of one reversible dynamical system of the Cremona type
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
1998-05-01
A reversible dynamical system (RDS) and a system of nonlinear functional equations, defined by a certain rational quadratic Cremona mapping and arising from the static model of the dispersion approach in the theory of strong interactions [the Chew-Low-type equations with crossing-symmetry matrix A(l,1)], are considered. This RDS is split into one- and two-dimensional ones. An explicit Cremona transformation that completely determines the exact solution of the two-dimensional system is found. This solution depends on an odd function satisfying a nonlinear autonomous three-point functional equation. Nonalgebraic integrability of RDS under consideration is proved using the method of Poincaré normal forms and the Siegel theorem on biholomorphic linearization of a mapping at a nonresonant fixed point.
Obtaining the Gröbner Initialization for the Ground Flash Fraction Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Solakiewicz, R.; Attele, R.; Koshak, W.
2011-01-01
At optical wavelengths and from the vantage point of space, the multiple-scattering cloud medium obscures one's view and prevents one from easily determining which flashes strike the ground. However, recent investigations have made some progress on the (easier, but still difficult) problem of estimating the ground flash fraction in a set of N flashes observed from space. In the study by Koshak, a Bayesian inversion method was introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low Earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function of three variables (one of which is the ground flash fraction) was minimized by a numerical method. This method has formed the basis of a Ground Flash Fraction Retrieval Algorithm (GoFFRA) that is being tested as part of GOES-R GLM risk reduction.
A task-irrelevant stimulus attribute affects perception and short-term memory
Huang, Jie; Kahana, Michael J.; Sekuler, Robert
2010-01-01
Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making. PMID:19933454
Total recall in distributive associative memories
NASA Technical Reports Server (NTRS)
Danforth, Douglas G.
1991-01-01
Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.
An extension of the Laplace transform to Schwartz distributions
NASA Technical Reports Server (NTRS)
Price, D. R.
1974-01-01
A characterization of the Laplace transform is developed which extends the transform to the Schwartz distributions. The class of distributions includes the impulse functions and other singular functions which occur as solutions to ordinary and partial differential equations. The standard theorems on analyticity, uniqueness, and invertibility of the transform are proved by using the characterization as the definition of the Laplace transform. The definition uses sequences of linear transformations on the space of distributions which extends the Laplace transform to another class of generalized functions, the Mikusinski operators. It is shown that the sequential definition of the transform is equivalent to Schwartz' extension of the ordinary Laplace transform to distributions but, in contrast to Schwartz' definition, does not use the distributional Fourier transform. Several theorems concerning the particular linear transformations used to define the Laplace transforms are proved. All the results proved in one dimension are extended to the n-dimensional case, but proofs are presented only for those situations that require methods different from their one-dimensional analogs.
Application of Simulated Annealing and Related Algorithms to TWTA Design
NASA Technical Reports Server (NTRS)
Radke, Eric M.
2004-01-01
Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. A random starting point is chosen, and the objective function is evaluated at that point. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: ρ(ΔE) = exp(-ΔE/T), where T is the temperature for the current Markov chain.
The process then repeats for the remainder of the Markov chain, after which the temperature is decremented and the process repeats. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperature can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to the system being driven out of equilibrium as temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface. In reality this is not plausible due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. 
One remaining problem in SA and DSSA is the difficulty in determining when equilibrium has been reached so that the current Markov chain can be terminated. It has been suggested in recent literature that estimating the thermodynamic properties (specific heat, entropy, and internal energy) from the Boltzmann distribution can provide good indicators of having reached equilibrium at a given temperature. These properties are tested for their efficacy and implemented in SA and DSSA code with respect to TWTA collector optimization.
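The Metropolis acceptance rule described above can be sketched with a generic minimizer on a toy objective (this is not the TWTA collector code; the cooling schedule, step size, and test function are illustrative assumptions):

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, t_min=1e-3, alpha=0.9, chain_len=50, rng=None):
    """Minimize f by SA: at each temperature, run a Markov chain of
    perturbed points, accepting uphill moves with probability exp(-dE/T)."""
    rng = rng or random.Random(0)
    x, best = x0, x0
    t = t0
    while t > t_min:
        for _ in range(chain_len):
            x_new = x + rng.gauss(0.0, 0.5)   # stochastic perturbation
            d_e = f(x_new) - f(x)
            # Metropolis criterion: always accept downhill moves,
            # uphill moves with probability exp(-dE/T)
            if d_e < 0 or rng.random() < math.exp(-d_e / t):
                x = x_new
            if f(x) < f(best):
                best = x
        t *= alpha                            # monotonically decreasing temperature
    return best

def f(x):
    """Toy multimodal design surface with many local minima (global at x = 0)."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

best = simulated_annealing(f, x0=3.0)
```

The ability to accept uphill moves at high temperature is what lets the chain escape the local minima near the integer points of this toy surface.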
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing the homeomorphism theory, M-matrix theory and an elementary inequality (a ≥ 0, b_k ≥ 0, q_k > 0 with …, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
The structure of water around the compressibility minimum
L. B. Skinner; Benmore, C. J.; Parise, J.; ...
2014-12-03
Here we present diffraction data that yield the oxygen-oxygen pair distribution function, gOO(r), over the range 254.2–365.9 K. The running O-O coordination number, which represents the integral of the pair distribution function as a function of radial distance, is found to exhibit an isosbestic point at 3.30(5) Å. The probability of finding an oxygen atom surrounding another oxygen at this distance is therefore shown to be independent of temperature and corresponds to an O-O coordination number of 4.3(2). Moreover, the experimental data also show a continuous transition associated with the second peak position in gOO(r), concomitant with the compressibility minimum at 319 K.
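The running coordination number quoted above is the number-density-weighted integral of the pair distribution function, n(r) = 4πρ ∫₀ʳ g(r′) r′² dr′. A minimal numerical sketch with an illustrative step-function g(r) (not water's measured gOO):

```python
import numpy as np

def running_coordination(r, g, rho):
    """n(r) = 4*pi*rho * integral_0^r g(r') r'^2 dr', via cumulative trapezoid."""
    integrand = g * r**2
    cumulative = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r))))
    return 4 * np.pi * rho * cumulative

# illustrative: unit-step g(r) beyond a hard core at r = 1.0 (NOT water's gOO)
r = np.linspace(0.0, 5.0, 1001)
g = (r >= 1.0).astype(float)
n = running_coordination(r, g, rho=0.03)  # rho in atoms per unit volume
```

For this toy g(r) the integral has the closed form 4πρ(r³ − 1)/3 for r ≥ 1, which the cumulative trapezoid reproduces closely; with a measured gOO(r), reading n(r) at 3.30 Å gives the coordination number discussed in the abstract.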
Tunneling and reflection in unimolecular reaction kinetic energy release distributions
NASA Astrophysics Data System (ADS)
Hansen, K.
2018-02-01
The kinetic energy release distributions in unimolecular reactions are calculated with detailed balance theory, taking into account the tunneling and reflection coefficients in three different types of transition states: (i) a saddle point corresponding to a standard RRKM-type theory, (ii) an attachment Langevin cross section, and (iii) an absorbing sphere potential at short range, without long range interactions. Corrections are significant in the one-dimensional saddle point states. Very light and lightly bound absorbing systems will show measurable effects in decays from the absorbing sphere, whereas the Langevin cross section is essentially unchanged.
FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)
NASA Astrophysics Data System (ADS)
Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.
2017-02-01
This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system subject to satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example, failure rate, interruption duration and interruption duration per year at load points. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method: one component is selected for FOR allocation in each iteration, based on the magnitude of CIP. The developed algorithm is implemented on a sample radial distribution system.
Metastable Distributions of Markov Chains with Rare Transitions
NASA Astrophysics Data System (ADS)
Freidlin, M.; Koralov, L.
2017-06-01
In this paper we consider Markov chains $X^\varepsilon_t$ with transition rates that depend on a small parameter $\varepsilon$. We are interested in the long time behavior of $X^\varepsilon_t$ at various $\varepsilon$-dependent time scales $t = t(\varepsilon)$. The asymptotic behavior depends on how the point $(1/\varepsilon, t(\varepsilon))$ approaches infinity. We introduce a general notion of complete asymptotic regularity (a certain asymptotic relation between the ratios of transition rates), which ensures the existence of the metastable distribution for each initial point and a given time scale $t(\varepsilon)$. The technique of i-graphs allows one to describe the metastable distribution explicitly. The result may be viewed as a generalization of the ergodic theorem to the case of parameter-dependent Markov chains.
Mechanical analysis of the dry stone walls built by the Incas
NASA Astrophysics Data System (ADS)
Castro, Jaime; Vallejo, Luis E.; Estrada, Nicolas
2017-06-01
In this paper, the retaining walls in the agricultural terraces built by the Incas are analyzed from a mechanical point of view. To do so, ten different walls from the Lower Agricultural Sector of Machu Picchu, Perú, were selected using images from Google Street View and Google Earth Pro. These walls were then digitized and their mechanical stability was evaluated. Firstly, it was found that these retaining walls are characterized by two distinctive features: disorder and a block size distribution with a large size span, i.e., the particle size varies from blocks that can be carried by one person to large blocks weighing several tons. Secondly, it was found that, thanks to the large span of the block size distribution, the factor of safety of the Inca retaining walls is remarkably close to the values recommended in modern geotechnical design standards. This suggests that these structures were not only functional but also highly optimized, probably as a result of a careful trial and error procedure.
Learning stochastic reward distributions in a speeded pointing task.
Seydell, Anna; McCann, Brian C; Trommershäuser, Julia; Knill, David C
2008-04-23
Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.
Reconfiguration in Robust Distributed Real-Time Systems Based on Global Checkpoints
1991-12-01
achieved by utilizing distributed systems in which a single application program executes on multiple processors connected to a network. The distributed nature of such systems makes it possible to …resident at every node. However, the responsibility for execution of a particular function is assigned to only one node in this framework. This function
Comparison of calculated and observed integral magnitudes for the globular cluster M13
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerashchenko, A.N.; Kadla, Z.I.
On the basis of a study of the distribution of stars in the central region of the globular cluster M13 it is found that integral photoelectric observations cover stars down to about the point of turnoff from the main sequence. Here the distribution of giants and stars of the horizontal branch as a function of distance from the center of the cluster is the same within limits of 0
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. Throughout, an upper bound on the partition function decreases monotonically. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
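The two-factor iteration can be sketched in the sum-product semiring (a toy instance with two randomly chosen factors, not the paper's general semiring machinery): rescale both factors along their shared variable so their overlapping marginals coincide while their pointwise product, and hence the partition function, is unchanged.

```python
import numpy as np

def equalize_pair(f1, f2):
    """One update on factors f1(x, s) and f2(s, y) sharing variable s:
    multiply f1 by sqrt(m2/m1) and divide f2 by the same factor along s,
    so the product f1 * f2 is unchanged but the overlapping marginals
    onto s become equal."""
    m1 = f1.sum(axis=0)            # marginal of f1 onto shared variable s
    m2 = f2.sum(axis=1)            # marginal of f2 onto shared variable s
    scale = np.sqrt(m2 / m1)
    return f1 * scale, f2 / scale[:, None]

rng = np.random.default_rng(1)
f1 = rng.uniform(0.5, 2.0, (2, 3))     # f1(x, s), shared variable s has 3 states
f2 = rng.uniform(0.5, 2.0, (3, 4))     # f2(s, y)
Z = np.einsum('xs,sy->', f1, f2)       # sum-product partition function

g1, g2 = equalize_pair(f1, f2)
```

For a single pair one update equalizes the marginals exactly; in a model with many factors, the algorithm cycles this update over different pairs until every pair's overlapping marginals coincide (marginal consistency).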
Infrared divergences for free quantum fields in cosmological spacetimes
NASA Astrophysics Data System (ADS)
Higuchi, Atsushi; Rendell, Nicola
2018-06-01
We investigate the nature of infrared divergences for the free graviton and inflaton two-point functions in flat Friedman–Lemaître–Robertson–Walker spacetime. These divergences arise because the momentum integral for these two-point functions diverges in the infrared. It is straightforward to see that the power of the momentum in the integrand can be increased by 2 in the infrared using large gauge transformations, which are sufficient for rendering these two-point functions infrared finite for slow-roll inflation. In other words, if the integrand of the momentum integral for these two-point functions behaves like $p^{-1}$, where p is the momentum, in the infrared, then it can be made to behave like $p$ by large gauge transformations. On the other hand, it is known that, if one smears these two-point functions in a gauge-invariant manner, the power of the momentum in the integrand is changed from $p^{-1}$ to $p^{3}$. This fact suggests that the power of the momentum in the integrand for these two-point functions can be increased by 4 using large gauge transformations. In this paper we show that this is indeed the case. Thus, the two-point functions for the graviton and inflaton fields can be made finite by large gauge transformations for a large class of potentials and states in single-field inflation.
Scale-dependent cyclone-anticyclone asymmetry in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Gallet, B.; Campagne, A.; Cortet, P.-P.; Moisy, F.
2014-03-01
We characterize the statistical and geometrical properties of the cyclone-anticyclone asymmetry in a statistically steady forced rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously inject velocity fluctuations towards the center of a tank mounted on a rotating platform. We first characterize the cyclone-anticyclone asymmetry from conventional single-point vorticity statistics. We propose a phenomenological model to explain the emergence of the asymmetry in the experiment, from which we predict scaling laws for the root-mean-square velocity in good agreement with the experimental data. We further quantify the cyclone-anticyclone asymmetry using a set of third-order two-point velocity correlations. We focus on the correlations which are nonzero only if the cyclone-anticyclone symmetry is broken. They offer two advantages over single-point vorticity statistics: first, they are defined from velocity measurements only, so an accurate resolution of the Kolmogorov scale is not required; second, they provide information on the scale-dependence of the cyclone-anticyclone asymmetry. We compute these correlation functions analytically for a random distribution of independent identical vortices. These model correlations describe well the experimental ones, indicating that the cyclone-anticyclone asymmetry is dominated by the large-scale long-lived cyclones.
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2012-01-01
The potential development of large aperture ground-based "photon bucket" optical receivers for deep space communications has received considerable attention recently. One approach currently under investigation proposes to polish the aluminum reflector panels of 34-meter microwave antennas to high reflectance, and accept the relatively large spotsize generated by even state-of-the-art polished aluminum panels. Here we describe the experimental effort currently underway at the Deep Space Network (DSN) Goldstone Communications Complex in California, to test and verify these concepts in a realistic operational environment. A custom designed aluminum panel has been mounted on the 34 meter research antenna at Deep-Space Station 13 (DSS-13), and a remotely controlled CCD camera with a large CCD sensor in a weather-proof container has been installed next to the subreflector, pointed directly at the custom polished panel. Using the planet Jupiter as the optical point-source, the point-spread function (PSF) generated by the polished panel has been characterized, the array data processed to determine the center of the intensity distribution, and expected communications performance of the proposed polished panel optical receiver has been evaluated.
Single Crystal Diamond Needle as Point Electron Source.
Kleshch, Victor I; Purcell, Stephen T; Obraztsov, Alexander N
2016-10-12
Diamond has been considered to be one of the most attractive materials for cold-cathode applications during the past two decades. However, its real application is hampered by the necessity to provide an appropriate amount and transport of electrons to the emitter surface, which is usually achieved by using nanometer-size or highly defective crystallites having much lower physical characteristics than ideal diamond. Here, for the first time, the use of a single crystal diamond emitter with a high aspect ratio as a point electron source is reported. Single crystal diamond needles were obtained by selective oxidation of polycrystalline diamond films produced by plasma enhanced chemical vapor deposition. Field emission currents and total electron energy distributions were measured for individual diamond needles as functions of extraction voltage and temperature. The needles demonstrate a current saturation phenomenon and sensitivity of emission to temperature. The analysis of the voltage drops measured via an electron energy analyzer shows that the conduction is provided by the surface of the diamond needles and is governed by the Poole-Frenkel transport mechanism with a characteristic trap energy of 0.2-0.3 eV. The temperature-sensitive FE characteristics of the diamond needles are of great interest for the production of point electron beam sources and sensors for vacuum electronics.
Structure of Soot-Containing Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Mortazavi, S.; Sunderland, P. B.; Jurng, J.; Koylu, U. O.; Faeth, G. M.
1993-01-01
The structure and soot properties of nonbuoyant and weakly-buoyant round jet diffusion flames were studied, considering ethylene, propane and acetylene burning in air at pressures of 0.125-2.0 atm. Measurements of flame structure included radiative heat loss fractions, flame shape and temperature distributions in the fuel-lean (overfire) region. These measurements were used to evaluate flame structure predictions based on the conserved-scalar formalism in conjunction with the laminar flamelet concept, finding good agreement between predictions and measurements. Soot property measurements included laminar smoke points, soot volume fraction distributions using laser extinction, and soot structure using thermophoretic sampling and analysis by transmission electron microscopy. Nonbuoyant flames were found to exhibit laminar smoke points like buoyant flames, but their properties are very different; in particular, nonbuoyant flames have laminar smoke point flame lengths and residence times that are shorter and longer, respectively, than buoyant flames.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Hua-Sheng
2013-09-15
A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from the Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat top, triangular, κ or Lorentzian, slowing down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
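For the reference Maxwellian case, the plasma dispersion function has a closed form via the Faddeeva function, Z(ζ) = i√π w(ζ). A minimal sketch using SciPy (the paper's generalized treatment of non-Maxwellian distributions is not reproduced here):

```python
import numpy as np
from scipy.special import wofz

def plasma_dispersion(zeta):
    """Maxwellian plasma dispersion function Z(zeta) = i*sqrt(pi)*w(zeta),
    where w is the Faddeeva function; wofz itself provides the analytic
    continuation below the real axis."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def plasma_dispersion_deriv(zeta):
    """Z'(zeta) = -2 * (1 + zeta * Z(zeta)), from the defining relation."""
    return -2.0 * (1.0 + zeta * plasma_dispersion(zeta))
```

The known values Z(0) = i√π and Z'(0) = -2 give a quick sanity check of such an implementation.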
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
SU-F-T-336: A Quick Auto-Planning (QAP) Method for Patient Intensity Modulated Radiotherapy (IMRT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, J; Zhang, Z; Wang, J
2016-06-15
Purpose: The aim of this study is to develop a quick auto-planning system that permits fast patient IMRT planning with conformal dose to the target, without manual field alignment or time-consuming dose distribution optimization. Methods: The planning target volumes (PTVs) of the source and the target patient were projected onto the iso-center plane in certain beam's-eye-view directions to derive the 2D projected shapes. Assuming the target interior was isotropic, for each beam direction a boundary analysis in polar coordinates was performed to map the source shape boundary to the target shape boundary and derive the source-to-target shape mapping function. The derived shape mapping function was used to morph the source beam aperture to the target beam aperture over all segments in each beam direction. The target beam weights were re-calculated to deliver the same dose to the reference point (iso-center) as the source beam did in the source plan. The approach was tested on two rectum patients (one source patient and one target patient). Results: The IMRT planning time by QAP was 5 seconds on a laptop computer. The dose-volume histograms and the dose distribution showed that the target patient had similar PTV dose coverage and OAR dose sparing to the source patient. Conclusion: The QAP system can instantly and automatically finish IMRT planning without dose optimization.
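The shape-mapping step can be sketched as follows, under the simplifying assumption that both projected PTV shapes are star-shaped about the isocenter; the function names and the radial-ratio mapping are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

def radial_scaling(theta, boundary_pts):
    """Radius of a star-shaped boundary at polar angles theta, by interpolation
    over the boundary points (assumed star-shaped about the origin)."""
    ang = np.arctan2(boundary_pts[:, 1], boundary_pts[:, 0])
    rad = np.hypot(boundary_pts[:, 0], boundary_pts[:, 1])
    order = np.argsort(ang)
    return np.interp(theta, ang[order], rad[order], period=2 * np.pi)

def morph_aperture(aperture_pts, source_boundary, target_boundary):
    """Morph aperture points by the source-to-target radial ratio at each angle."""
    theta = np.arctan2(aperture_pts[:, 1], aperture_pts[:, 0])
    r = np.hypot(aperture_pts[:, 0], aperture_pts[:, 1])
    scale = (radial_scaling(theta, target_boundary)
             / radial_scaling(theta, source_boundary))
    r_new = r * scale
    return np.column_stack([r_new * np.cos(theta), r_new * np.sin(theta)])
```

For example, if the target boundary is everywhere twice as far from the isocenter as the source boundary, every aperture point is pushed radially outward by a factor of two.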
Particle Dynamics at and near the Electron and Ion Diffusion Regions as a Function of Guide Field
NASA Astrophysics Data System (ADS)
Giles, Barbara; Burch, James; Phan, Tai; Webster, James; Avanov, Levon; Torbert, Roy; Chen, Li-Jen; Chandler, Michael; Dorelli, John; Ergun, Robert; Fuselier, Stephen; Gershman, Daniel; Lavraud, Benoit; Moore, Thomas; Paterson, William; Pollock, Craig; Russell, Christopher; Saito, Yoshifumi; Strangeway, Robert; Wang, Shan
2017-04-01
At the dayside magnetopause, magnetic reconnection often occurs in thin sheets of plasma carrying electrical currents and rotating magnetic fields. Charged particles interact strongly with the magnetic field and simultaneously their motions modify the fields. Researchers are able to simulate the macroscopic interactions between the two plasma domains on both sides of the magnetopause and, for precise results, include individual particle motions to better describe the microscopic scales. Here, observed ion and electron distributions are compared for asymmetric reconnection events with weak-, moderate-, and strong-guide fields. Several of the structures noted have been demonstrated in simulations and others have not been predicted or explained to date. We report on these observations and their persistence. In particular, we highlight counterstreaming low-energy ion distributions that are seen to persist regardless of increasing guide field. Distributions of this type were first published by Burch and Phan [GRL, 2016] for an 8 Dec 2015 event and by Wang et al. [GRL, 2016] for a 16 Oct 2015 event. Wang et al. showed the distributions were produced by the reflection of magnetosheath ions by the normal electric field at the magnetopause. This report presents further results on the relationship between the counterstreaming ions with electron distributions, which show the ions traversing the magnetosheath, X-line, and in one case the electron stagnation point. We suggest the counterstreaming ions become the source of D-shaped distributions at points where the field line opening is indicated by the electron distributions. In addition, we suggest they become the source of ion crescent distributions that result from acceleration of ions by the reconnection electric field. Burch, J. L., and T. D. Phan (2016), Magnetic reconnection at the dayside magnetopause: Advances with MMS, Geophys. Res. Lett., 43, 8327-8338, doi:10.1002/2016GL069787. Wang, S., et al. (2016), Two-scale ion meandering caused by the polarization electric field during asymmetric reconnection, Geophys. Res. Lett., 43, 7831-7839, doi:10.1002/2016GL069842.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
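The smoothing step can be illustrated with SciPy's thin-plate-spline interpolator. In the sketch below, the smoothing parameter is chosen by a simple held-out error, which is only a crude stand-in for the paper's bootstrap test-error estimate, and a synthetic height field stands in for a scanned point set:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Noisy samples of a smooth height field z = f(x, y).
xy = rng.uniform(-1, 1, size=(200, 2))
z_true = np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])
z_noisy = z_true + rng.normal(0.0, 0.1, size=200)

def tps_denoise(xy, z, smoothing):
    """Project the noisy heights onto a smoothed thin-plate-spline surface."""
    tps = RBFInterpolator(xy, z, kernel='thin_plate_spline', smoothing=smoothing)
    return tps(xy)

# Choose the smoothing level by held-out error (stand-in for the bootstrap).
fit, hold = slice(0, 150), slice(150, 200)
errors = {}
for s in (1e-3, 1e-2, 1e-1, 1.0):
    pred = RBFInterpolator(xy[fit], z_noisy[fit],
                           kernel='thin_plate_spline', smoothing=s)(xy[hold])
    errors[s] = np.mean((pred - z_noisy[hold]) ** 2)
best = min(errors, key=errors.get)
z_denoised = tps_denoise(xy, z_noisy, best)
```

The paper's method additionally restricts the fit to a k-nearest-neighbour patch around each point before projecting, which keeps the spline systems small on large scans.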
NASA Technical Reports Server (NTRS)
Scudder, J. D.; Olbert, S.
1979-01-01
A kinetic theory for the velocity distribution of solar wind electrons which illustrates the global and local properties of the solar wind expansion is proposed. By means of the Boltzmann equation with the Krook collision operator accounting for Coulomb collisions, it is found that Coulomb collisions determine the population and shape of the electron distribution function in both the thermal and suprathermal energy regimes. For suprathermal electrons, the cumulative effects of Coulomb interactions are shown to take place on the scale of the heliosphere itself, whereas the Coulomb interactions of thermal electrons occur on a local scale near the point of observation (1 AU). The bifurcation of the electron distribution between thermal and suprathermal electrons is localized to the deep solar corona (1 to 10 solar radii).
A new method for analyzing IRAS data to determine the dust temperature distribution
NASA Technical Reports Server (NTRS)
Xie, Taoling; Goldsmith, Paul F.; Zhou, Weimin
1991-01-01
In attempting to analyze the four-band IRAS images of interstellar dust emission, it is found that an inversion theorem recently developed by Chen (1990) enables the distribution of the dust to be determined as a function of temperature, and thus the total dust column density, for each line of sight. The method and its application to a hypothetical IRAS data set created by assuming a power-law dust temperature distribution, which is characteristic of the actual IRAS data for the Monoceros R2 cloud, are reported. To use the method, the wavelength dependence of the dust emissivity is assumed and a simple function is fitted to the four intensity-wavelength data points. The method is shown to be very successful at retrieving the dust temperature distribution in this case and is expected to have wide applicability to astronomical problems of this type.
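The hypothetical-data test described above can be sketched directly: generate four band intensities from a power-law temperature distribution with an assumed emissivity law, then recover the power-law index by fitting a simple two-parameter model to the four points. The band wavelengths are the real IRAS bands; the grid limits, emissivity index, and power-law index below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, K = 6.626e-34, 2.998e8, 1.381e-23            # SI physical constants
BANDS = np.array([12e-6, 25e-6, 60e-6, 100e-6])    # IRAS band wavelengths (m)
TGRID = np.linspace(15.0, 150.0, 300)              # dust temperature grid (K)
DT = TGRID[1] - TGRID[0]

def planck(wav, T):
    """Planck function B_lambda(T) in SI units."""
    return 2 * H * C**2 / wav**5 / np.expm1(H * C / (wav * K * T))

def log_intensity(wav, log_norm, alpha, beta=1.5):
    """Log band intensity for a power-law temperature distribution
    dN/dT ~ T**-alpha with an assumed emissivity law ~ wav**-beta."""
    B = planck(wav[:, None], TGRID[None, :])
    I = (B * TGRID ** -alpha).sum(axis=1) * DT * wav ** -beta
    return log_norm + np.log(I)

# Hypothetical four-band data generated with alpha = 2, then recovered by a
# least-squares fit of the two-parameter model to the four points.
data = log_intensity(BANDS, 0.0, 2.0)
popt, _ = curve_fit(log_intensity, BANDS, data, p0=[0.5, 1.2])
log_norm_hat, alpha_hat = popt
```

Fitting in log intensity keeps the four bands, which span orders of magnitude, on an equal footing in the least-squares objective.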
Lagrangian statistics in weakly forced two-dimensional turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera, Michael K.; Ecke, Robert E.
2016-01-14
Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominantly to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Furthermore, implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.
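The real-space structure functions and moment ratios mentioned above can be illustrated on a synthetic one-dimensional velocity record; this is a generic sketch of the diagnostic, not the authors' analysis pipeline:

```python
import numpy as np

def structure_functions(v, max_sep, orders=(2, 4)):
    """S_p(r) = <|v(x + r) - v(x)|**p> for a one-dimensional record v."""
    seps = np.arange(1, max_sep)
    S = {p: np.empty(len(seps)) for p in orders}
    for i, r in enumerate(seps):
        dv = v[r:] - v[:-r]
        for p in orders:
            S[p][i] = np.mean(np.abs(dv) ** p)
    return seps, S

# Brownian record: its increments are exactly Gaussian, so the moment ratio
# (flatness) S_4 / S_2**2 should sit near the Gaussian value of 3.
rng = np.random.default_rng(1)
v = np.cumsum(rng.normal(size=4096))
seps, S = structure_functions(v, max_sep=64)
flatness = S[4] / S[2] ** 2
```

Departures of the flatness from 3 over a range of separations are the kind of short-range change in the probability distribution functions that the moment ratios in the abstract are designed to expose.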
Slater, P B
1985-08-01
Two distinct approaches to assessing the effect of geographic scale on spatial interactions are modeled. In the first, the question of whether a distance-deterrence function that explains interactions for one system of zones can also succeed at a more aggregate scale is examined. Only the two-parameter function, for which distances between macrozones are found to be weighted averages of distances between component zones, is satisfactory in this regard. Estimation of continuous (point-to-point) functions, in the form of quadrivariate cubic polynomials, for US interstate migration streams is then undertaken. Upon numerical integration, these higher-order surfaces yield predictions of interzonal and intrazonal movements at any scale of interest. Tests of spatial stationarity, isotropy, and symmetry of interstate migration are conducted in this framework.
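A distance-deterrence function of the kind discussed can be estimated by log-linear least squares; the two-parameter form below (power law times exponential) and the synthetic flow data are illustrative assumptions, not necessarily the function singled out by the paper:

```python
import numpy as np

def deterrence(d, alpha, beta):
    """Illustrative two-parameter deterrence f(d) = d**-alpha * exp(-beta*d)."""
    return d ** (-alpha) * np.exp(-beta * d)

# Synthetic flows from known parameters, then recovery by least squares:
# log f = log k - alpha*log d - beta*d is linear in (1, log d, d).
rng = np.random.default_rng(2)
d = rng.uniform(50, 2000, size=300)               # zone-to-zone distances (km)
flows = 1e6 * deterrence(d, 1.2, 1e-3) * rng.lognormal(0, 0.05, size=300)

X = np.column_stack([np.ones_like(d), np.log(d), d])
coef, *_ = np.linalg.lstsq(X, np.log(flows), rcond=None)
alpha_hat, beta_hat = -coef[1], -coef[2]
```

Whether such a fitted function transfers to a more aggregate zoning is exactly the question the first approach in the abstract addresses.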
NASA Technical Reports Server (NTRS)
Forman, M. A.; Jokipii, J. R.
1978-01-01
The distribution function of cosmic rays streaming perpendicular to the mean magnetic field in a turbulent medium is reexamined. Urch's (1977) discovery that, in quasi-linear theory, the flux is due to particles at 90-deg pitch angle is discussed and shown to be consistent with previous formulations of the theory. It is pointed out that this flux of particles at 90 deg cannot be arbitrarily set equal to zero, and hence the alternative theory which proceeds from this premise is dismissed. A further, basic inconsistency in Urch's transport equation is demonstrated, and the connection between quasi-linear theory and compound diffusion is discussed.
QCD-inspired spectra from Blue's functions
NASA Astrophysics Data System (ADS)
Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail
1996-02-01
We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models inspired by QCD, whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.
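The recipe "density from the discontinuity of a polynomial equation for the resolvent" can be sketched numerically. The quadratic (semicircle) case below stands in for the cubic (Cardano) and quartic (Ferrari) cases of the paper, which follow by passing more coefficients to `np.roots`:

```python
import numpy as np

def density_from_resolvent(lam, coeffs, eps=1e-9):
    """Spectral density rho(lam) = Im G(lam - i*eps) / pi, where the resolvent
    G solves the polynomial whose z-dependent coefficients coeffs(z) returns.

    The physical branch just below the real axis is the root with the
    largest imaginary part.
    """
    z = lam - 1j * eps
    roots = np.roots(coeffs(z))
    g = roots[np.argmax(roots.imag)]
    return g.imag / np.pi

# Gaussian check: the semicircle resolvent solves G**2 - z*G + 1 = 0,
# giving rho(0) = 1/pi and zero density outside the cut [-2, 2].
semicircle = lambda z: [1.0, -z, 1.0]
rho0 = density_from_resolvent(0.0, semicircle)
```

The same function evaluated at lam = 2.5, outside the support, returns essentially zero, since both roots are then real up to terms of order eps.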
Large-deviation properties of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-10-01
We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
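The occupation-time functional for Brownian motion with dry friction can be explored by direct simulation of the Langevin equation dv = -mu*sign(v) dt + dW; this Euler-Maruyama sketch is a numerical companion to, not a substitute for, the backward Fokker-Planck results of the paper:

```python
import numpy as np

def dry_friction_occupation(mu=1.0, T=100.0, dt=0.01, n_paths=500, seed=3):
    """Simulate dv = -mu*sign(v) dt + dW and return, for each path, the
    fraction of time spent on the half-line v > 0 (the occupation time)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.zeros(n_paths)
    occ = np.zeros(n_paths)
    for _ in range(n):
        v += -mu * np.sign(v) * dt + np.sqrt(dt) * rng.normal(size=n_paths)
        occ += (v > 0)
    return occ / n

frac = dry_friction_occupation()
```

By the symmetry of the dry-friction drift, the occupation fraction averages to one half, while its spread across paths carries the non-trivial distributional information that the paper computes exactly.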
Distribution of lifetimes for coronal soft X-ray bright points
NASA Technical Reports Server (NTRS)
Golub, L.; Krieger, A. S.; Vaiana, G. S.
1976-01-01
The lifetime 'spectrum' of X-ray bright points (XBPs) is measured for a sample of 300 such features using soft X-ray images obtained with the S-054 X-ray spectrographic telescope aboard Skylab. 'Spectrum' here is defined as a function which gives the relative number of XBPs having a specific lifetime as a function of lifetime. The results indicate that a two-lifetime exponential can be fit to the decay curves of XBPs, that the spectrum is heavily weighted toward short lifetimes, and that the number of features lasting 20 to 30 hr or more is greater than expected. A short-lived component with an average lifetime of about 8 hr and a long-lived 1.5-day component are consistently found along with a few features lasting 50 hr or more. An examination of differences among the components shows that features lasting 2 days or less have a broad heliocentric-latitude distribution while nearly all the longer-lived features are observed within 30 deg of the solar equator.
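A two-component exponential of the kind fitted to the XBP decay curves can be sketched with a standard nonlinear least-squares fit; the amplitudes, noise level, and binning below are illustrative, with only the ~8 h and ~1.5 day lifetimes taken from the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    """Two-component exponential lifetime spectrum."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic counts mimicking the short (~8 h) and long (~36 h) components.
rng = np.random.default_rng(4)
t = np.linspace(0, 72, 37)                     # lifetime bins (hours)
noisy = two_exp(t, 200, 8.0, 30, 36.0) + rng.normal(0, 2.0, size=t.size)

popt, _ = curve_fit(two_exp, t, noisy, p0=[150, 5, 20, 30])
(tau1, a1), (tau2, a2) = sorted([(popt[1], popt[0]), (popt[3], popt[2])])
```

Sorting the fitted components by lifetime labels the short- and long-lived populations unambiguously, regardless of the order the optimizer returns them in.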
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
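The skewness of noisy two-point function estimates is easy to reproduce: averaging |a|^2 over a finite number of Gaussian Fourier modes yields a scaled chi-squared (Gamma) variate whose median sits below its mean. This generic sketch demonstrates the mechanism, not the paper's hierarchical likelihood:

```python
import numpy as np

rng = np.random.default_rng(5)
nu, n_real = 10, 200000

# Complex Gaussian Fourier modes with true power C = 1; each band-power
# estimate averages |a|^2 over nu modes, giving a skewed Gamma variate.
a = (rng.normal(size=(n_real, nu))
     + 1j * rng.normal(size=(n_real, nu))) / np.sqrt(2)
C_hat = np.mean(np.abs(a) ** 2, axis=1)

mean, median = C_hat.mean(), np.median(C_hat)
skew = np.mean(((C_hat - mean) / C_hat.std()) ** 3)
```

The estimator is unbiased in the mean, yet its median falls below the true power: a typical single realization is "biased low" even though the data are sound, which is precisely the effect the abstract describes.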
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
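A minimal Monte Carlo sketch of such a calculation follows; the function name and the sample size are illustrative, not taken from the paper:

```python
import numpy as np

def tcm_delta_v_stats(sigmas, n=200000, q=(0.5, 0.9, 0.99), seed=6):
    """Monte Carlo statistics of |dv| for a zero-mean Gaussian 3-vector with
    per-axis standard deviations sigmas (possibly unequal).

    Returns the mean and standard deviation of the magnitude, plus the
    requested quantiles of its cumulative distribution.
    """
    rng = np.random.default_rng(seed)
    dv = rng.normal(0.0, sigmas, size=(n, 3))      # broadcast per-axis sigmas
    mag = np.linalg.norm(dv, axis=1)
    return mag.mean(), mag.std(), np.quantile(mag, q)

mean, std, quantiles = tcm_delta_v_stats(np.array([1.0, 1.0, 1.0]))
```

For equal standard deviations the magnitude follows a Maxwell distribution, so the simulated mean can be checked against the closed form 2*sigma*sqrt(2/pi); the unequal-sigma case, which has no simple closed form, is where the Monte Carlo approach earns its keep.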