Potential energy distribution function and its application to the problem of evaporation
NASA Astrophysics Data System (ADS)
Gerasimov, D. N.; Yurin, E. I.
2017-10-01
The distribution function of potential energy in a strongly correlated system can be calculated analytically. In an equilibrium system (for instance, in the bulk of a liquid) this distribution function depends only on the temperature and the mean potential energy, which can be found from the specific heat of vaporization. At the surface of the liquid the distribution function differs significantly, but its shape still obeys an analytical relation. The potential energy distribution function near the evaporating surface can be used in place of the work function of an atom of the liquid.
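The link between mean potential energy and the heat of vaporization mentioned in the abstract can be illustrated with a back-of-the-envelope calculation. The relation below, U ≈ -(ΔH_vap - RT)/N_A, is a standard textbook approximation (subtracting the p·V work term), not the authors' exact formula, and the water values are illustrative only.

```python
# Hedged sketch: estimating the mean potential energy per molecule in the
# liquid bulk from the molar heat of vaporization.
R = 8.314          # gas constant, J/(mol*K)
N_A = 6.022e23     # Avogadro's number, 1/mol

def mean_potential_energy(delta_h_vap, temperature):
    """Mean potential energy per molecule (J), from molar heat of vaporization (J/mol)."""
    return -(delta_h_vap - R * temperature) / N_A

# Water at 373 K: delta_h_vap ~ 40.7 kJ/mol (illustrative value)
u = mean_potential_energy(40.7e3, 373.0)
print(u)  # a negative binding energy on the order of -6e-20 J
```

The negative sign reflects that a molecule in the liquid sits in a potential well relative to the vapor.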
Characterizing short-term stability for Boolean networks over any distribution of transfer functions
Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...
2016-07-05
Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will occur. We provide a formal proof of this formula and show empirically that its predictions are accurate. Previous work applies only to special cases of balanced families, and it has been observed that those characterizations fail for unbalanced families, yet such families are widespread in real biological networks.
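Damage spreading as described above can be observed empirically in a small simulation: run two copies of the same random Boolean network from states differing in one bit and count how many nodes disagree after some steps. This sketch uses random truth tables with bias p; the classical balanced-family criterion (chaos when K·2p(1-p) > 1) is the result the paper generalizes, and the paper's own formula is not reproduced here.

```python
# Hedged sketch: empirical damage spreading in a random Boolean network
# with n nodes, each reading k random inputs through a random truth table
# whose outputs are 1 with probability p.
import random

def build_network(n, k, p, rng):
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[1 if rng.random() < p else 0 for _ in range(2 ** k)]
              for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for ins, tab in zip(inputs, tables):
        idx = 0
        for j in ins:
            idx = (idx << 1) | state[j]   # pack input bits into a table index
        new.append(tab[idx])
    return new

def damage(n=200, k=2, p=0.5, steps=30, seed=0):
    rng = random.Random(seed)
    inputs, tables = build_network(n, k, p, rng)
    a = [rng.randrange(2) for _ in range(n)]
    b = list(a)
    b[0] ^= 1                             # flip one bit: initial damage of size 1
    for _ in range(steps):
        a = step(a, inputs, tables)
        b = step(b, inputs, tables)
    return sum(x != y for x, y in zip(a, b))

print(damage(p=0.5))   # K*2p(1-p) = 1: marginal regime
print(damage(p=0.1))   # K*2p(1-p) = 0.36 < 1: damage tends to die out
```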
Lv, Yi; Cui, Jian; Jiang, Zuimin M; Yang, Xinju
2012-11-29
The nanoscale electrical properties of individual self-assembled GeSi quantum rings (QRs) were studied by scanning probe microscopy-based techniques. The surface potential distributions of individual GeSi QRs were obtained by scanning Kelvin microscopy (SKM). Ring-shaped work function distributions are observed, showing that the QRs' rim has a larger work function than the QRs' central hole. By combining the SKM results with those obtained by conductive atomic force microscopy and scanning capacitance microscopy, the correlations between the surface potential, conductance, and carrier density distributions are revealed, and a possible interpretation of the QRs' conductance distributions is suggested.
Working Memory and Decision-Making in a Frontoparietal Circuit Model
Murray, John D; Jaramillo, Jorge; Wang, Xiao-Jing
2017-12-13
Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models. SIGNIFICANCE STATEMENT Working memory and decision-making are fundamental "building blocks" of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia.
These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal cortex (PPC) at the core. It is not clear, however, what the contributions of PPC and PFC are in light of the computations that subserve working memory and decision-making. We constructed a biophysical model of a reciprocally connected frontoparietal circuit that revealed shared and distinct functions for the PFC and PPC across working memory and decision-making tasks. Our parsimonious model connects circuit-level properties to cognitive functions and suggests novel design principles beyond those of local circuits for cognitive processing in multiregional brain networks. PMID:29114071
Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function
NASA Astrophysics Data System (ADS)
Gao, Fei; Chang, Lei; Liu, Yu-xin
2017-07-01
We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function, as well as the corresponding light-front PDA, can be well determined. We confirm prior work on PDA computations, which was based on different methods.
General Path-Integral Successive-Collision Solution of the Bounded Dynamic Multi-Swarm Problem.
1983-09-23
coefficients (i.e., moments of the distribution functions), and/or (ii) finding the distribution functions themselves. The present work is concerned with the...collisions since their first appearance in the system. By definition, a swarm particle suffers a "generalized collision" either when it collides with a...studies and the present work have contributed towards making the path-integral successive-collision method a practicable tool of transport theory
2012-03-01
Vroom, V. H. (1964). Work and motivation. New York: Wiley. ... Distribution A: Approved for public release; distribution unlimited. 88 ABW cleared...sustained attention tasks. Theorists have attempted to explain vigilance decrements as a function of arousal/motivation (Vroom, 1964; Yerkes & Dodson...DISTRIBUTION STATEMENT. Allen J. Rowe, Gregory J. Barbato, Work Unit Manager
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work generalizes the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Owing to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of production processes.
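The experience-curve setup can be sampled numerically to see what the volatility of cost changes looks like. The sketch below uses Wright's law, cost_t = A·Q_t^(-ω)·exp(noise), with noisy growth of cumulative production Q_t; the paper derives the volatility pdf analytically for arbitrary noise, whereas here the noise is Gaussian and all parameter values are arbitrary illustrations.

```python
# Hedged sketch: Monte Carlo estimate of the volatility of log-cost changes
# under an experience curve (Wright's law) with noisy production growth.
import random, math

def simulate_log_cost_changes(omega=0.3, g=0.1, sigma_g=0.05, sigma_c=0.1,
                              steps=50, rng=None):
    rng = rng or random.Random(1)
    q = 1.0
    changes = []
    prev_log_cost = 0.0
    for _ in range(steps):
        q *= math.exp(g + rng.gauss(0.0, sigma_g))      # noisy production growth
        log_cost = -omega * math.log(q) + rng.gauss(0.0, sigma_c)
        changes.append(log_cost - prev_log_cost)
        prev_log_cost = log_cost
    return changes

samples = []
for seed in range(200):
    samples.extend(simulate_log_cost_changes(rng=random.Random(seed)))

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)
vol = var ** 0.5
print(vol)  # empirical volatility of log-cost changes
```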
Unstable density distribution associated with equatorial plasma bubble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.
2016-04-15
In this work, we present a simulation study of an equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into the EPB, both the density and the updraft are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into the EPB. This non-Maxwellian distribution is of a gentle-bump type, consistent with the distribution within EPBs recently reported from space-borne measurements, and it offers favorable conditions for small-scale kinetic instabilities.
NASA Astrophysics Data System (ADS)
Lasuik, J.; Shalchi, A.
2018-06-01
In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although cases are known for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and different turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient, depending on the turbulence configuration considered. Since these changes are small, we conclude that the assumed statistics are of minor significance in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.
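The statistical difference between Gaussian and kappa distributions can be seen in their tails. The sketch below assumes one common 1D kappa form that is equivalent to a rescaled Student-t with ν = 2κ - 1 degrees of freedom (an assumption of this sketch, not a statement from the paper) and compares excess kurtosis, the kind of statistics change that enters the transport theory.

```python
# Hedged sketch: heavier tails of a kappa distribution versus a Gaussian,
# measured by excess kurtosis on sampled data.
import random, math

def sample_student_t(nu, rng):
    """Student-t via a Gaussian divided by a chi-square (integer nu only)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    return z / math.sqrt(chi2 / nu)

def excess_kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

rng = random.Random(42)
gauss = [rng.gauss(0.0, 1.0) for _ in range(20000)]
kappa5 = [sample_student_t(9, rng) for _ in range(20000)]  # kappa = 5 -> nu = 9

print(excess_kurtosis(gauss))   # near 0 for Gaussian data
print(excess_kurtosis(kappa5))  # positive: heavier tails than Gaussian
```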
NASA Astrophysics Data System (ADS)
Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua
2017-03-01
The study of internal stress in quenched AISI 4140 medium-carbon steel is of engineering importance. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters, based on an exponent-modified (Ex-Modified) normalized function. The results indicate that FES based on the proposed Ex-Modified normalized function is more consistent with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos and Leblond, which is attributed to the Ex-Modified normalized function better describing transformation plasticity. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution, and the effect of the transformation plasticity function on the residual stress distribution are further discussed.
DOT National Transportation Integrated Search
2006-01-01
The project focuses on two major issues - the improvement of current work zone design practices and an analysis of vehicle interarrival time (IAT) and speed distributions for the development of a digital computer simulation model for queues and t...
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
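The two-point-measurement work statistics for a sudden quench to a random Hamiltonian can be sampled directly: work values are W = E_f(m) - E_i(n), weighted by the thermal population of the initial level n and the overlap |⟨m|n⟩|². The sketch below is for a two-level system with a fixed diagonal initial Hamiltonian; the GUE normalization used here is an illustrative choice, not the paper's convention.

```python
# Hedged sketch: sampling the work pdf for a sudden quench of a two-level
# system from a fixed initial Hamiltonian to a random GUE Hamiltonian.
import numpy as np

rng = np.random.default_rng(0)

def gue(n):
    """A random Hermitian matrix (illustrative GUE normalization)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

def work_samples(e_initial, beta, n_draws=2000):
    e_i = np.asarray(e_initial, dtype=float)
    p_n = np.exp(-beta * e_i)
    p_n /= p_n.sum()                      # thermal populations of initial levels
    works, weights = [], []
    for _ in range(n_draws):
        e_f, v = np.linalg.eigh(gue(len(e_i)))
        overlap = np.abs(v) ** 2          # |<m|n>|^2; columns of v are eigenvectors
        for n_idx in range(len(e_i)):
            for m_idx in range(len(e_i)):
                works.append(e_f[m_idx] - e_i[n_idx])
                weights.append(p_n[n_idx] * overlap[n_idx, m_idx])
    return np.array(works), np.array(weights)

w, p = work_samples([-0.5, 0.5], beta=1.0)
mean_work = np.sum(w * p) / np.sum(p)
print(mean_work)  # ensemble-averaged work over random quenches
```

Since GUE eigenvalues average to zero, the mean work is roughly minus the initial thermal energy.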
Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...
NASA Astrophysics Data System (ADS)
Stagner, L.; Heidbrink, W. W.
2017-10-01
Due to the complicated nature of the fast-ion distribution function, diagnostic velocity-space weight functions are used to analyze experimental data. In a technique known as velocity-space tomography (VST), velocity-space weight functions are combined with experimental measurements to create a system of linear equations that can be solved. However, VST (which by definition ignores spatial dependencies) is restricted both by the accuracy of its forward model and by the availability of spatially overlapping diagnostics. In this work we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e., orbit tomography. Examples of orbit weight functions for different diagnostics and reconstructions of fast-ion distributions are shown for DIII-D experiments. This work was supported by the U.S. Department of Energy under DE-AC02-09CH11466 and DE-FC02-04ER54698.
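At its core, weight-function tomography is a regularized linear inversion: weight functions form the rows of a matrix W, measurements are b = W f + noise, and the distribution f is recovered from the linear system. The sketch below is a generic stand-in for the paper's velocity-space/orbit tomography, with made-up Gaussian weight functions and Tikhonov regularization.

```python
# Hedged sketch: tomography as Tikhonov-regularized linear inversion
# with synthetic weight functions and a synthetic "true" distribution.
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_meas = 50, 30
x = np.linspace(0.0, 1.0, n_bins)

# Made-up diagnostic weight functions: Gaussian sensitivity bumps.
centers = rng.uniform(0.0, 1.0, n_meas)
W = np.exp(-((x[None, :] - centers[:, None]) / 0.1) ** 2)

f_true = np.exp(-((x - 0.4) / 0.15) ** 2)        # synthetic distribution
b = W @ f_true + rng.normal(0.0, 0.01, n_meas)   # noisy measurements

lam = 1e-2                                        # regularization strength
f_hat = np.linalg.solve(W.T @ W + lam * np.eye(n_bins), W.T @ b)

err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(err)  # relative reconstruction error
```

The regularization term lam·I is what makes the ill-posed inversion stable; its strength trades off noise amplification against smoothing.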
Soliton sustainable socio-economic distribution
NASA Astrophysics Data System (ADS)
Dresvyannikov, M. A.; Petrova, M. V.; Tshovrebov, A. M.
2017-11-01
In the work presented, from closely related standpoints, we consider: 1) the stability of socio-economic distributions; 2) a possible mechanism for the formation of fractional power-law dependences in the Cobb-Douglas production function; 3) the introduction of a fractional-order derivative for a general analysis of a fractional power function; 4) bringing the interest rate and the Cobb-Douglas production function into a state of mutual matching.
Solution of QCD⊗QED coupled DGLAP equations at NLO
NASA Astrophysics Data System (ADS)
Zarrin, S.; Boroun, G. R.
2017-09-01
In this work, we present an analytical solution of the QCD⊗QED coupled Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations at leading-order (LO) accuracy in QED and next-to-leading-order (NLO) accuracy in perturbative QCD using a double Laplace transform. This technique is applied to obtain the singlet, gluon and photon distribution functions and also the proton structure function. We also obtain the contribution of the photon in the proton at LO and NLO at high energy and successfully compare the proton structure function with HERA data [1] and APFEL results [2]. Comparisons have also been made for the singlet and gluon distribution functions with the MSTW results [3]. In addition, the photon distribution function inside the proton has been compared with the results of MRST [4] and with the sea quark distribution functions obtained by MSTW [3] and CTEQ6M [5].
Advanced Inverter Functions and Communication Protocols for Distribution Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Palmintier, Bryan; Baggu, Murali
2016-05-05
This paper aims at identifying the advanced features required by distribution management system (DMS) service providers to bring inverter-connected distributed energy resources into use as an intelligent grid resource. This work explores the standard functions needed in the future DMS for enterprise integration of distributed energy resources (DER). Important DMS functionalities such as DER management in aggregate groups, including the discovery of capabilities, status monitoring, and dispatch of real and reactive power, are addressed in this paper. It is intended to provide the industry with a point of reference for DER integration with other utility applications and to provide guidance to research and standards development organizations.
Energy and enthalpy distribution functions for a few physical systems.
Wu, K L; Wei, J H; Lai, S K; Okabe, Y
2007-08-02
The present work is devoted to extracting the energy or enthalpy distribution function of a physical system from the moments of the distribution using the maximum entropy method. This distribution theory has the salient traits that it utilizes only the experimental thermodynamic data. The calculated distribution functions provide invaluable insight into the state or phase behavior of the physical systems under study. As concrete evidence, we demonstrate the elegance of the distribution theory by studying first a test case of a two-dimensional six-state Potts model for which simulation results are available for comparison, then the biphasic behavior of the binary alloy Na-K whose excess heat capacity, experimentally observed to fall in a narrow temperature range, has yet to be clarified theoretically, and finally, the thermally induced state behavior of a collection of 16 proteins.
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
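The rank-to-sales translation described above can be sketched as a quantile mapping: rank r out of N products is mapped to the (N - r + 0.5)/N quantile of a log-normal distribution truncated below a minimum sales level. All parameter values below are illustrative placeholders, not the calibrated values from the paper.

```python
# Hedged sketch: mapping sales rank to a sales-volume proxy using a
# truncated log-normal model (stdlib only; quantile via bisection).
import math

def lognorm_cdf(s, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(s) - mu) / (sigma * math.sqrt(2.0))))

def lognorm_quantile(q, mu, sigma, lo=1e-9, hi=1e12):
    """Invert the log-normal cdf by geometric bisection."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if lognorm_cdf(mid, mu, sigma) < q:
            lo = mid
        else:
            hi = mid
    return mid

def rank_to_sales(rank, n_products, mu=5.0, sigma=1.5, s_min=10.0):
    """Sales proxy for a product of given rank (1 = best seller)."""
    q_min = lognorm_cdf(s_min, mu, sigma)          # truncate below s_min
    q = (n_products - rank + 0.5) / n_products     # plotting-position quantile
    q_trunc = q_min + q * (1.0 - q_min)
    return lognorm_quantile(q_trunc, mu, sigma)

sales = [rank_to_sales(r, 100) for r in range(1, 101)]
print(sales[0], sales[-1])  # best seller gets the largest proxy
```

Market averages (prices, attributes) are then computed with these proxies as weights.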
1979-12-01
Links between processes can be allocated strictly to control functions. In fact, the degree of separation of control and data is an important research is...delays or loss of control messages. Cognoscenti agree that message-passing IPC schemes are equivalent in "power" to schemes which employ shared...THEORETICAL WORK SECTION 6 THEORETICAL WORK 6.1 WORKING GROUP REPORT Structure of Discussion: Distributed system without central (or any) control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis the class of conjugate models allows exact posterior distributions to be obtained; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time consuming: for example, when heavy-tailed distributions are used, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required to choose efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided to illustrate the theory.
Transverse parton distribution functions at next-to-next-to-leading order: the quark-to-quark case.
Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin
2012-12-14
We present a calculation of the perturbative quark-to-quark transverse parton distribution function at next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate for the first time that such a definition works beyond the first nontrivial order. We extract from our calculation the coefficient functions relevant for a next-to-next-to-next-to-leading logarithmic Q(T) resummation in a large class of processes at hadron colliders.
Barium-Dispenser Thermionic Cathode
NASA Technical Reports Server (NTRS)
Wintucky, Edwin G.; Green, M.; Feinleib, M.
1989-01-01
Improved reservoir cathode serves as intense source of electrons required for high-frequency and often high-output-power, linear-beam tubes, for which long operating lifetime important consideration. High emission-current densities obtained through use of emitting surface of relatively-low effective work function and narrow work-function distribution, consisting of coat of W/Os deposited by sputtering. Lower operating temperatures and enhanced electron emission consequently possible.
Distributed analysis functional testing using GangaRobot in the ATLAS experiment
NASA Astrophysics Data System (ADS)
Legger, Federica; ATLAS Collaboration
2011-12-01
Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success and failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the work load on the available resources.
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposes a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error than traditional SEM.
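Gibbs sampling, the engine behind the Bayesian estimation above, alternates draws from each parameter's full conditional distribution. The sketch below applies it to a deliberately tiny model (normal data, normal prior on the mean, inverse-gamma prior on the variance), not to the paper's SEM; the data and prior values are illustrative.

```python
# Hedged sketch: a minimal Gibbs sampler for the mean and variance of
# normal data with conjugate full conditionals.
import random, math

data = [4.1, 3.8, 5.2, 4.7, 4.4, 3.9, 4.9, 4.3]   # illustrative observations
n = len(data)
mu0, tau2 = 0.0, 100.0        # vague normal prior on the mean
a0, b0 = 2.0, 2.0             # inverse-gamma prior on the variance

rng = random.Random(7)
mu, sigma2 = 0.0, 1.0
mu_draws = []
for it in range(5000):
    # mu | sigma2: normal full conditional
    prec = n / sigma2 + 1.0 / tau2
    mean = (sum(data) / sigma2 + mu0 / tau2) / prec
    mu = rng.gauss(mean, 1.0 / math.sqrt(prec))
    # sigma2 | mu: inverse-gamma(a0 + n/2, b0 + sum of squared residuals / 2)
    a = a0 + n / 2.0
    b = b0 + sum((x - mu) ** 2 for x in data) / 2.0
    sigma2 = b / rng.gammavariate(a, 1.0)   # inverse-gamma draw via 1/Gamma
    if it >= 1000:                           # discard burn-in
        mu_draws.append(mu)

post_mean = sum(mu_draws) / len(mu_draws)
print(post_mean)  # sits near the sample mean of the data (about 4.41)
```

The paper's SEM replaces these two conditionals with conditionals for factor loadings and structural coefficients, but the alternating-draw scheme is the same.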
First Renormalized Parton Distribution Functions from Lattice QCD
NASA Astrophysics Data System (ADS)
Lin, Huey-Wen; LP3 Collaboration
2017-09-01
We present the first lattice-QCD results on the nonperturbatively renormalized parton distribution functions (PDFs). Using X.D. Ji's large-momentum effective theory (LaMET) framework, lattice-QCD hadron structure calculations are able to overcome the longstanding problem of determining the Bjorken-x dependence of PDFs. This has led to numerous additional theoretical works and exciting progress. In this talk, we will address a recent development that implements a step missing from prior lattice-QCD calculations: renormalization, its effects on the nucleon matrix elements, and the resultant changes to the calculated distributions.
Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2016-07-27
Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, type and distribution of functional groups, by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the type, number, and distribution of oxygen functional groups. First, the carbonyl group induces a higher redox potential than the hydroxyl group. Second, more carbonyl groups result in a higher redox potential. Lastly, a locally concentrated distribution of carbonyl groups yields a higher redox potential than a uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete-time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data; here we find the phenomenon in model systems and provide a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
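The capital-dependent Parrondo's games are easy to sample by Monte Carlo: game A is a slightly losing coin flip, game B depends on whether the capital is divisible by 3, and randomly mixing the two losing games produces an upward drift. The paper computes the exact capital distribution; this sketch only samples the final capital, with the standard textbook game parameters and an illustrative bias ε.

```python
# Hedged sketch: Monte Carlo simulation of the capital-dependent
# Parrondo's games showing the paradox (two losing games, winning mix).
import random

def play(mode, rounds=500, trials=2000, eps=0.005, seed=11):
    """Average final capital after 'rounds' plays of game A, B, or a random mix."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        capital = 0
        for _ in range(rounds):
            game = mode if mode in ("A", "B") else rng.choice("AB")
            if game == "A":
                p = 0.5 - eps                      # slightly unfair coin
            else:                                  # game B: capital-dependent
                p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
            capital += 1 if rng.random() < p else -1
        total += capital
    return total / trials

a = play("A")
b = play("B")
m = play("mix")
print(a, b, m)  # A and B lose on average; the random mix tends to win
```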
NASA Astrophysics Data System (ADS)
Zhang, Yang; Wang, Hao; Tomar, Vikas
2018-04-01
This work presents direct measurements of stress and temperature distribution during the mesoscale microstructural deformation of Inconel-617 (IN-617) during 3-point bending tests as a function of temperature. A novel nanomechanical Raman spectroscopy (NMRS)-based measurement platform was designed for simultaneous in situ temperature and stress mapping as a function of microstructure during deformation. The temperature distribution was found to be directly correlated to stress distribution for the analyzed microstructures. Stress concentration locations are shown to be directly related to higher heat conduction and result in microstructural hot spots with significant local temperature variation.
Searching for the best thermoelectrics through the optimization of transport distribution function
NASA Astrophysics Data System (ADS)
Fan, Zheyong; Wang, Hui-Qiong; Zheng, Jin-Cheng
2011-04-01
The thermoelectric performance of materials depends on the interplay, or competition, among three key components, the electrical conductivity, thermopower, and thermal conductivity, which can be written as integrals of a single function, the transport distribution function (TDF). Mahan and Sofo [Proc. Natl. Acad. Sci. USA 93, 7436 (1996)] found that, mathematically, the thermoelectric properties could be maximized by a delta-shaped transport distribution, associated with a narrow distribution of the energy of the electrons participating in the transport process. In this work, we revisit the effect of the TDF shape on the thermoelectric figure of merit. It is confirmed both heuristically and numerically that among all normalized TDFs the Dirac delta function leads to the largest thermoelectric figure of merit. For a bounded TDF, however, a rectangular distribution is instead found to be the most favorable, and could be achieved through nanostructuring routes. Our results also indicate that a high thermoelectric figure of merit is associated with appropriate violations of the Wiedemann-Franz law.
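The transport integrals behind this argument can be sketched numerically. The block below computes an electronic-only figure of merit zT from a rectangular TDF of unit height in reduced units (k_B = e = 1, lattice thermal conductivity neglected); it illustrates the shape effect only and is not the paper's calculation. Narrower rectangles approach the delta-function limit, where the electronic thermal conductivity vanishes and zT grows without bound.

```python
import numpy as np

def zt_rectangular(center, width, T=1.0, mu=0.0, grid=20001, span=40.0):
    """Electronic figure of merit zT for a rectangular transport
    distribution function of unit height, in reduced units
    (k_B = e = 1); a sketch, not this paper's exact setup."""
    E = np.linspace(mu - span, mu + span, grid)
    dE = E[1] - E[0]
    x = (E - mu) / T
    w = 0.25 / (T * np.cosh(x / 2.0) ** 2)    # -df/dE, Fermi-Dirac window
    tdf = np.where(np.abs(E - center) <= width / 2.0, 1.0, 0.0)
    I0 = np.sum(tdf * w) * dE                 # conductivity integral
    I1 = np.sum(tdf * (E - mu) * w) * dE      # Seebeck integral
    I2 = np.sum(tdf * (E - mu) ** 2 * w) * dE # thermal integral
    sigma = I0                                # electrical conductivity
    S = I1 / (T * I0)                         # thermopower
    kappa_e = (I2 - I1 ** 2 / I0) / T         # electronic thermal conductivity
    return sigma * S ** 2 * T / kappa_e       # electronic-only zT
```

Comparing a narrow and a wide rectangle at the same center energy shows the trend directly: the narrow one gives a much larger electronic zT, while a bounded height prevents the true delta limit from being reached.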
NASA Astrophysics Data System (ADS)
Ochoa, Diego Alejandro; García, Jose Eduardo
2016-04-01
The Preisach model is a classical method for describing nonlinear behavior in hysteretic systems. According to this model, a hysteretic system contains a collection of simple bistable units which are characterized by an internal field and a coercive field. This set of bistable units exhibits a statistical distribution that depends on these fields as parameters. Thus, nonlinear response depends on the specific distribution function associated with the material. This model is satisfactorily used in this work to describe the temperature-dependent ferroelectric response in PZT- and KNN-based piezoceramics. A distribution function expanded in Maclaurin series considering only the first terms in the internal field and the coercive field is proposed. Changes in coefficient relations of a single distribution function allow us to explain the complex temperature dependence of hard piezoceramic behavior. A similar analysis based on the same form of the distribution function shows that the KNL-NTS properties soften around its orthorhombic to tetragonal phase transition.
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; ...
2017-04-13
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. As a result, this work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
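The way a size distribution enters such a PDF refinement can be illustrated with the sphere shape (damping) function: the PDF of a single sphere of diameter d is attenuated by its characteristic function, and a polydisperse sample averages that envelope over the diameter distribution. The sketch below uses a number-weighted lognormal with a d^3 pair weighting; the grid and parameter values are illustrative assumptions, not the refined values from this work.

```python
import numpy as np

def sphere_envelope(r, d):
    """PDF damping (characteristic function) of a sphere of diameter d:
    1 - 3r/(2d) + r^3/(2d^3) for r < d, zero beyond."""
    x = np.clip(r / d, 0.0, 1.0)
    return 1.0 - 1.5 * x + 0.5 * x ** 3

def lognormal_envelope(r, mu, sigma, n_d=400):
    """Average the sphere envelope over a number-weighted lognormal
    diameter distribution, weighting each diameter by d^3 (each
    particle contributes ~ d^3 atom pairs). A sketch of the
    polydisperse damping used in PDF refinements."""
    d = np.linspace(1e-3, np.exp(mu + 5 * sigma), n_d)
    number_pdf = np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) \
                 / (d * sigma * np.sqrt(2 * np.pi))
    wt = number_pdf * d ** 3
    wt /= wt.sum()
    r = np.atleast_1d(np.asarray(r, float))
    return sphere_envelope(r[:, None], d[None, :]) @ wt
```

Multiplying a bulk model PDF by this envelope and fitting mu and sigma is the essence of extracting a mean size and width, as the abstract describes for the STEM comparison.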
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-04-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
Statistics of work performed on a forced quantum oscillator.
Talkner, Peter; Burada, P Sekhar; Hänggi, Peter
2008-07-01
Various aspects of the statistics of work performed by an external classical force on a quantum mechanical system are elucidated for a driven harmonic oscillator. In this special case two parameters are introduced that are sufficient to completely characterize the force protocol. Explicit results for the characteristic function of work and the corresponding probability distribution are provided and discussed for three different types of initial states of the oscillator: microcanonical, canonical, and coherent states. Depending on the choice of the initial state the probability distributions of the performed work may greatly differ. This result in particular also holds true for identical force protocols. General fluctuation and work theorems holding for microcanonical and canonical initial states are confirmed.
Continuum-kinetic approach to sheath simulations
NASA Astrophysics Data System (ADS)
Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana
2016-10-01
Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
Recommended values of clean metal surface work functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derry, Gregory N., E-mail: gderry@loyola.edu; Kern, Megan E.; Worth, Eli H.
2015-11-15
A critical review of the experimental literature for measurements of the work functions of clean metal surfaces of single-crystals is presented. The tables presented include all results found for low-index crystal faces except cases that were known to be contaminated surfaces. These results are used to construct a recommended value of the work function for each surface examined, along with an uncertainty estimate for that value. The uncertainties are based in part on the error distribution for all measured work functions in the literature, which is included here. The metals included in this review are silver (Ag), aluminum (Al), gold (Au), copper (Cu), iron (Fe), iridium (Ir), molybdenum (Mo), niobium (Nb), nickel (Ni), palladium (Pd), platinum (Pt), rhodium (Rh), ruthenium (Ru), tantalum (Ta), and tungsten (W).
Neti, Prasad V.S.V.; Howell, Roger W.
2010-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
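The distortion that Poisson counting imposes on an underlying lognormal activity distribution is easy to demonstrate by simulation: draw each cell's activity from a lognormal, then draw its observed track count from a Poisson about that cell's mean yield. The parameters below (ln-mean, ln-sigma, detection efficiency) are illustrative assumptions, not the study's fitted values; the variance-to-mean ratio well above 1 shows the counts are overdispersed relative to a pure Poisson process, which is the sense in which counting statistics can mask the LN shape.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_track_counts(n_cells, ln_mean, ln_sigma, efficiency):
    """Cellular activities drawn from a lognormal distribution;
    observed alpha-track counts are Poisson about each cell's mean
    track yield (efficiency * activity). Illustrative parameters."""
    activity = rng.lognormal(mean=ln_mean, sigma=ln_sigma, size=n_cells)
    return rng.poisson(efficiency * activity)

counts = simulate_track_counts(200_000, ln_mean=1.0, ln_sigma=0.8,
                               efficiency=1.0)
# Overdispersion index: variance/mean > 1 signals that the underlying
# activity distribution is broader than a pure Poisson process.
dispersion = counts.var() / counts.mean()
```

Fitting these counts with Poisson, LN, and P-LN models, as the abstract describes, would single out the compound P-LN model at high activities while the underlying activity distribution remains lognormal.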
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods: The measured distributions of alpha-particle tracks per cell were subjected to statistical tests with Poisson (P), log-normal (LN), and Poisson-log-normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
Portraits of Principal Practice: Time Allocation and School Principal Work
ERIC Educational Resources Information Center
Sebastian, James; Camburn, Eric M.; Spillane, James P.
2018-01-01
Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…
Thermomechanical Fractional Model of TEMHD Rotational Flow
Hamza, F.; Abd El-Latief, A.; Khatan, W.
2017-01-01
In this work, the fractional mathematical model of an unsteady rotational flow of Xanthan gum (XG) between two cylinders in the presence of a transverse magnetic field has been studied. The model contains two fractional parameters, α and β, representing thermomechanical effects. The Laplace transform is used to obtain the numerical solutions. The influence of the fractional parameters on the field distributions (temperature, velocity, stress, and electric current) is discussed graphically. The effect of the rotation of both cylinders, together with the fractional parameters, on the field distributions is discussed for small and large values of time. PMID:28045941
An analytical solution to the one-dimensional heat conduction-convection equation in soil
USDA-ARS?s Scientific Manuscript database
Heat transfer in soil occurs by conduction and convection. Infiltrating water affects soil temperature distributions, and measuring soil temperature distributions below infiltrating water can provide a signal for the flux of water. In earlier work a sine wave function (hereinafter referred to as the...
Shao, J Y; Shu, C; Huang, H B; Chew, Y T
2014-03-01
A free-energy-based phase-field lattice Boltzmann method is proposed in this work to simulate multiphase flows with density contrast. The present method is to improve the Zheng-Shu-Chew (ZSC) model [Zheng, Shu, and Chew, J. Comput. Phys. 218, 353 (2006)] for correct consideration of density contrast in the momentum equation. The original ZSC model uses the particle distribution function in the lattice Boltzmann equation (LBE) for the mean density and momentum, which cannot properly consider the effect of local density variation in the momentum equation. To correctly consider it, the particle distribution function in the LBE must be for the local density and momentum. However, when the LBE of such distribution function is solved, it will encounter a severe numerical instability. To overcome this difficulty, a transformation, which is similar to the one used in the Lee-Lin (LL) model [Lee and Lin, J. Comput. Phys. 206, 16 (2005)] is introduced in this work to change the particle distribution function for the local density and momentum into that for the mean density and momentum. As a result, the present model still uses the particle distribution function for the mean density and momentum, and in the meantime, considers the effect of local density variation in the LBE as a forcing term. Numerical examples demonstrate that both the present model and the LL model can correctly simulate multiphase flows with density contrast, and the present model has an obvious improvement over the ZSC model in terms of solution accuracy. In terms of computational time, the present model is less efficient than the ZSC model, but is much more efficient than the LL model.
Whistler waves with electron temperature anisotropy and non-Maxwellian distribution functions
NASA Astrophysics Data System (ADS)
Malik, M. Usman; Masood, W.; Qureshi, M. N. S.; Mirza, Arshad M.
2018-05-01
Previous works on whistler waves with electron temperature anisotropy described the dependence on plasma parameters; however, they did not explore the reasons behind the observed differences. Moreover, a comparative analysis of whistler waves with different electron distributions has not been made to date. This paper addresses both issues in detail by making a thorough comparison of the dispersion relations and growth rates of whistler waves with electron temperature anisotropy for Maxwellian, Cairns, kappa, and generalized (r, q) distributions, varying the key plasma parameters for the problem under consideration. It is found that the growth rate of the whistler instability is maximum for the flat-topped distribution and minimum for the Maxwellian distribution. This work not only summarizes and complements previous work on whistler waves with electron temperature anisotropy but also provides a general framework for understanding the linear propagation of whistler waves with electron temperature anisotropy, applicable in all regions of space plasmas where satellite missions have indicated their presence.
Chord-length and free-path distribution functions for many-body systems
NASA Astrophysics Data System (ADS)
Lu, Binglin; Torquato, S.
1993-04-01
We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or ``phases.'' The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the ``mean intercept length'' or ``mean chord length.'' The chord-length distribution function is of importance in transport phenomena and problems involving ``discrete free paths'' of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the ``pore space'' is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the ``phases'' when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. 
We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.
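For the fully penetrable (spatially uncorrelated) sphere case evaluated exactly above, the void-phase chord lengths along a line can also be sampled directly by Monte Carlo, since the sphere-line intersections form a one-dimensional Boolean model whose vacant gaps are exponential with mean 1/(ρπR²). The box dimensions and densities below are illustrative assumptions chosen so edge effects are negligible.

```python
import numpy as np

rng = np.random.default_rng(0)

def void_chords(rho, R, L=2000.0, width=10.0):
    """Monte Carlo void-phase chord lengths along a line through fully
    penetrable spheres (Poisson centers at number density rho, radius R).
    For this model the void chords are exponential with mean
    1/(rho * pi * R^2). Illustrative box geometry."""
    n = rng.poisson(rho * L * width * width)
    centers = rng.uniform([0.0, -width / 2, -width / 2],
                          [L, width / 2, width / 2], size=(n, 3))
    # line along the x-axis at y = z = 0; a sphere covers an interval
    # iff its impact parameter b is below R
    b2 = centers[:, 1] ** 2 + centers[:, 2] ** 2
    hit = b2 < R * R
    half = np.sqrt(R * R - b2[hit])
    lo = centers[hit, 0] - half
    hi = centers[hit, 0] + half
    order = np.argsort(lo)
    lo, hi = lo[order], hi[order]
    # merge covered intervals, collecting the uncovered gaps between them
    gaps, end = [], 0.0
    for a, b in zip(lo, hi):
        if a > end:
            gaps.append(a - end)
        end = max(end, b)
    return np.array(gaps[1:])               # drop the truncated edge gap
```

Histogramming the returned gaps recovers the exponential chord-length distribution p(z) for the matrix phase of uncorrelated spheres; the correlated (impenetrable) case requires a hard-sphere configuration generator instead of Poisson centers.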
Wang, Pei; Xianlong, Gao; Li, Haibin
2013-08-01
It is demonstrated in many thermodynamics textbooks that the equivalence of the different ensembles is achieved in the thermodynamic limit. In the present work we discuss the inequivalence of microcanonical and canonical ensembles in a finite ultracold system at low energies. We calculate the microcanonical momentum distribution function (MDF) in a system of identical fermions (bosons). We find that the microcanonical MDF deviates from the canonical one, which is the Fermi-Dirac (Bose-Einstein) function, in a finite system at low energies where the single-particle density of states and its inverse are finite.
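The inequivalence can be made concrete with a toy enumeration: fix N fermions on a small set of single-particle levels, keep only the Fock states at one total energy, and average the occupations over that microcanonical shell. The resulting step-like occupation generally differs from the smooth Fermi-Dirac function of the canonical ensemble. The equally spaced level spectrum below is an illustrative assumption, not the system studied in the paper.

```python
import numpy as np
from itertools import combinations

def microcanonical_occupation(levels, n_particles, e_total, tol=1e-9):
    """Average single-level occupations over all fermionic Fock states
    with total energy e_total (a toy microcanonical shell), by brute-
    force enumeration; feasible only for small systems."""
    levels = np.asarray(levels, float)
    occ = np.zeros(len(levels))
    count = 0
    for config in combinations(range(len(levels)), n_particles):
        if abs(levels[list(config)].sum() - e_total) < tol:
            occ[list(config)] += 1.0
            count += 1
    return occ / count if count else occ
```

For 3 fermions on levels 0..9 with total energy 6, only the configurations {0,1,5}, {0,2,4}, and {1,2,3} contribute, giving a two-step occupation profile rather than a smooth Fermi-Dirac curve.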
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.
Yokoyama, Jun'ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique for identifying gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well for the highly non-Gaussian case.
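The role of heavier tails can be sketched with a toy log-likelihood-ratio statistic for a known template in i.i.d. noise, Gaussian versus Student's t. This is only a caricature of the locally optimal statistic (unit noise scale, independent samples, none of the Edgeworth or Gaussian-mapping machinery from the paper): the t-based statistic weights residuals sublinearly, so a single glitch-like outlier is downweighted rather than dominating the statistic.

```python
import numpy as np

def log_t(x, nu):
    """Unnormalized log density of Student's t with nu degrees of freedom."""
    return -0.5 * (nu + 1.0) * np.log1p(x * x / nu)

def detection_stat(data, template, nu=None):
    """Log-likelihood-ratio statistic for a known template in i.i.d.
    unit-scale noise: Gaussian when nu is None, Student's t otherwise.
    A toy sketch, not the paper's full construction."""
    if nu is None:
        resid = -0.5 * (data - template) ** 2    # Gaussian log density
        null = -0.5 * data ** 2
    else:
        resid = log_t(data - template, nu)       # heavy-tailed log density
        null = log_t(data, nu)
    return float(np.sum(resid - null))
```

A positive statistic favors the signal hypothesis; comparing the two branches on data containing a large outlier shows the Gaussian statistic swinging far more than the t statistic.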
A Concept for Measuring Electron Distribution Functions Using Collective Thomson Scattering
NASA Astrophysics Data System (ADS)
Milder, A. L.; Froula, D. H.
2017-10-01
A.B. Langdon proposed that stable non-Maxwellian distribution functions are realized in coronal inertial confinement fusion plasmas via inverse bremsstrahlung heating. For Z v_osc^2
Optimized Orthovoltage Stereotactic Radiosurgery
NASA Astrophysics Data System (ADS)
Fagerstrom, Jessica M.
Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4pi, isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field size and energy dependent, and values were found for prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a Genetic Algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones were able to achieve both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. 
Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner as well as a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.
Novel trends in pair distribution function approaches on bulk systems with nanoscale heterogeneities
Emil S. Bozin; Billinge, Simon J. L.
2016-07-29
Novel materials for high performance applications increasingly exhibit structural order on the nanometer length scale, a domain where crystallography, the basis of Rietveld refinement, fails [1]. In such instances the total scattering approach, which treats Bragg and diffuse scattering on an equal basis, is a powerful approach. In recent years, the analysis of total scattering data became an invaluable tool and the gold standard for studying nanocrystalline, nanoporous, and disordered crystalline materials. The data may be analyzed in reciprocal space directly, or Fourier transformed to the real-space atomic pair distribution function (PDF), and this intuitive function examined for local structural information. Here we give a number of illustrative examples, for convenience picked from our own work, of recent developments and applications of total scattering and PDF analysis to novel complex materials. There are many other wonderful examples from the work of others.
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
Ferrero, Alejandro; Rabal, Ana María; Campos, Joaquín; Pons, Alicia; Hernanz, María Luisa
2012-12-20
A study of the variation of the spectral bidirectional reflectance distribution function (BRDF) of four diffuse reflectance standards (matte ceramic, BaSO(4), Spectralon, and white Russian opal glass) is carried out in this work. Spectral BRDF measurements were performed and, using principal components analysis, the spectral and geometrical variation with respect to a reference geometry was assessed from the experimental data. Several descriptors were defined in order to compare the spectral BRDF variation of the four materials.
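The PCA step can be sketched generically: arrange the BRDF measurements as a (geometry x wavelength) matrix, subtract the mean spectrum, and take an SVD; the leading right-singular vectors are the spectral principal components and the projections are per-geometry scores. The synthetic data below stand in for the measurement set, and the spectral shapes are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def brdf_principal_components(brdf, n_components=3):
    """PCA of a (geometry x wavelength) BRDF matrix via SVD after
    removing the mean spectrum. A generic sketch of the analysis step,
    not this paper's measurement pipeline."""
    mean_spec = brdf.mean(axis=0)
    u, s, vt = np.linalg.svd(brdf - mean_spec, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    return mean_spec, vt[:n_components], scores

# synthetic stand-in: 50 geometries x 31 wavelengths, one spectral mode
wl = np.linspace(400.0, 700.0, 31)
base = 0.9 + 0.05 * np.sin(wl / 60.0)        # mean-like reflectance spectrum
mode = 0.02 * np.cos(wl / 40.0)              # single geometry-dependent mode
data = base + rng.normal(0.0, 1.0, (50, 1)) * mode
mean_spec, comps, scores = brdf_principal_components(data, 2)
# reconstruct using only the leading component
recon = mean_spec + scores[:, :1] @ comps[:1]
```

Because the synthetic data contain exactly one spectral mode about the mean, the first component reconstructs the matrix essentially exactly; descriptors such as the per-geometry scores then quantify how the BRDF departs from the reference geometry.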
NASA Astrophysics Data System (ADS)
Liu, Hongliang; Zhang, Xin; Xiao, Yixin; Zhang, Jiuxing
2018-03-01
Density functional theory has been used to calculate the electronic structures of binary and doped rare earth hexaborides (REB6), which exhibit a large density of states (DOS) near the Fermi level. The d-orbital electrons of the RE element contribute to the electronic states for electron emission near the Fermi level, which implies that REB6 (RE = La, Ce, Gd), with a wide distribution of high-density d-orbital electrons, could provide a lower work function and excellent emission properties. Doping RE elements into binary REB6 can adjust the DOS and the position of the Fermi energy level. The calculated work functions of the considered REB6 (100) surfaces show that REB6 (RE = La, Ce, Gd) have lower work functions, and that doping RE elements with active d-orbital electrons can significantly reduce the work function of binary REB6. The thermionic emission test results basically accord with the calculated values, demonstrating that first-principles calculation can provide good theoretical guidance for the study of the electron emission properties of REB6.
Quasi solution of radiation transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
There is uncertainty in experimental data as well as in the input data of theoretical calculations. The neutron distribution is obtained from a variational principle that takes into account both theoretical and experimental data, to increase the accuracy and speed of neutronic calculations. The neutron imbalance in mesh cells and the discrepancy between experimentally measured and calculated functionals of the neutron distribution are simultaneously minimized. A fast-working and simply programmed iteration method is developed to minimize the objective functional. The method can be used in the core monitoring and control system for (a) power distribution calculations, (b) in- and ex-core detector calibration, (c) macro-cross-section or isotope distribution correction by experimental data, and (d) core and detector diagnostics.
DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Annabelle; Veda, Santosh; Maitra, Arindam
Efficient and effective management of the electrical distribution system requires an integrated system approach for Distribution Management Systems (DMS), Distributed Energy Resources (DERs), Distributed Energy Resources Management Systems (DERMS), and microgrids to work in harmony. This paper highlights some of the outcomes from a U.S. Department of Energy (DOE), Office of Electricity (OE) project, including 1) the architecture of these integrated systems, and 2) expanded functions of two example DMS applications, Volt-VAR optimization (VVO) and Fault Location, Isolation and Service Restoration (FLISR), to accommodate DERs. For these two example applications, the relevant DER Group Functions necessary to support communication between the DMS and a Microgrid Controller (MC) in grid-tied mode are identified.
DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Annabelle; Veda, Santosh; Maitra, Arindam
Efficient and effective management of the electric distribution system requires an integrated approach to allow various systems to work in harmony, including distribution management systems (DMS), distributed energy resources (DERs), distributed energy resources management systems, and microgrids. This study highlights some outcomes from a recent project sponsored by the US Department of Energy, Office of Electricity Delivery and Energy Reliability, including information about (i) the architecture of these integrated systems and (ii) expanded functions of two example DMS applications to accommodate DERs: volt-var optimisation and fault location, isolation, and service restoration. In addition, the relevant DER group functions necessary to support communications between the DMS and a microgrid controller in grid-tied mode are identified.
INTERPRETING PHYSICAL AND BEHAVIORAL HEALTH SCORES FROM NEW WORK DISABILITY INSTRUMENTS
Marfeo, Elizabeth E.; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K.; McDonough, Christine M.; Brandt, Diane E.; Bogusz, Kara; Jette, Alan M.
2015-01-01
Objective: To develop a system to guide interpretation of scores generated from 2 new instruments measuring work-related physical and behavioral health functioning (Work Disability - Physical Function (WD-PF) and Work Disability - Behavioral Function (WD-BH)). Design: Cross-sectional, secondary data from 3 independent samples to develop and validate the functional levels for physical and behavioral health functioning. Subjects: Physical group: 999 general adult subjects, 1,017 disability applicants, and 497 work-disabled subjects. Behavioral health group: 1,000 general adult subjects, 1,015 disability applicants, and 476 work-disabled subjects. Methods: A three-phase analytic approach, including item mapping, a modified Delphi technique, and known-groups validation analysis, was used to develop and validate cut-points for functional levels within each of the WD-PF and WD-BH instruments' scales. Results: Four and 5 functional levels were developed for each of the scales in the WD-PF and WD-BH instruments. The distribution of the comparative samples was in the expected direction: the general adult samples consistently demonstrated scores at higher functional levels compared with the claimant and work-disabled samples. Conclusion: Using an item response theory-based methodology paired with a qualitative process appears to be a feasible and valid approach for translating the WD-BH and WD-PF scores into meaningful levels useful for interpreting a person's work-related physical and behavioral health functioning. PMID:25729901
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crooks, Gavin; Sivak, David
Many interesting divergence measures between conjugate ensembles of nonequilibrium trajectories can be experimentally determined from the work distribution of the process. Herein, we review the statistical and physical significance of several of these measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen-Shannon divergence (time-asymmetry), Chernoff divergence (work cumulant generating function), and Renyi divergence.
Graphene-based room-temperature implementation of a modified Deutsch-Jozsa quantum algorithm.
Dragoman, Daniela; Dragoman, Mircea
2015-12-04
We present an implementation of a one-qubit and two-qubit modified Deutsch-Jozsa quantum algorithm based on graphene ballistic devices working at room temperature. The modified Deutsch-Jozsa algorithm decides whether a function, equivalent to the effect of an energy potential distribution on the wave function of ballistic charge carriers, is constant or not, without measuring the output wave function. The function need not be Boolean. Simulations confirm that the algorithm works properly, opening the way toward quantum computing at room temperature based on the same clean-room technologies as those used for fabrication of very-large-scale integrated circuits.
LETTER TO THE EDITOR: Exact energy distribution function in a time-dependent harmonic oscillator
NASA Astrophysics Data System (ADS)
Robnik, Marko; Romanovski, Valery G.; Stöckmann, Hans-Jürgen
2006-09-01
Following a recent work by Robnik and Romanovski (2006 J. Phys. A: Math. Gen. 39 L35; 2006 Open Syst. Inf. Dyn. 13 197-222), we derive an explicit formula for the universal distribution function of the final energies in a time-dependent 1D harmonic oscillator, whose functional form does not depend on the details of the frequency ω(t) and is closely related to the conservation of the adiabatic invariant. The normalized distribution function is P(x) = \pi^{-1} (2\mu^2 - x^2)^{-1/2}, where x = E_1 - \bar{E}_1; here E_1 is the final energy, \bar{E}_1 is its average value, and \mu^2 is the variance of E_1. Both \bar{E}_1 and \mu^2 can be calculated exactly using the WKB approach to all orders.
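A quick numerical sanity check (ours, not part of the letter, with an arbitrary assumed value for the variance) confirms that the quoted arcsine-type density is normalized on its support and that μ² is indeed its variance:

```python
import numpy as np

# P(x) = 1/(pi*sqrt(2*mu2 - x^2)) on (-sqrt(2*mu2), +sqrt(2*mu2)):
# check that it integrates to 1 and that its second moment equals mu2.
mu2 = 0.7                                  # assumed variance of E1 (illustrative)
a = np.sqrt(2.0 * mu2)                     # half-width of the support
edges = np.linspace(-a, a, 2_000_001)
x = 0.5 * (edges[:-1] + edges[1:])         # midpoints avoid the endpoint singularities
dx = edges[1] - edges[0]
P = 1.0 / (np.pi * np.sqrt(2.0 * mu2 - x**2))

norm = float((P * dx).sum())               # ≈ 1
var = float((x**2 * P * dx).sum())         # ≈ mu2
```

Both integrals come out within a fraction of a percent of 1 and mu2, consistent with the stated normalization and variance.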
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
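The core fitting strategy described above (maximize the full log-likelihood, with censored observations contributing the survival function rather than the density) can be sketched outside R. The following Python snippet is our illustration of that idea for a Weibull model on synthetic right-censored data, not the flexsurv implementation itself:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
shape_true, scale_true = 1.5, 10.0                     # assumed true parameters
t_event = weibull_min.rvs(shape_true, scale=scale_true, size=500, random_state=rng)
c = rng.uniform(0.0, 20.0, size=500)                   # independent censoring times
t = np.minimum(t_event, c)
d = (t_event <= c).astype(float)                       # 1 = event observed, 0 = censored

def negloglik(p):
    shape, scale = np.exp(p)                           # log-parameters keep them positive
    logf = weibull_min.logpdf(t, shape, scale=scale)   # density for observed events
    logS = weibull_min.logsf(t, shape, scale=scale)    # survival for censored cases
    return -np.sum(d * logf + (1.0 - d) * logS)

fit = minimize(negloglik, x0=[0.0, 1.0], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
```

The recovered shape and scale land close to the generating values, mirroring what `flexsurvreg` does for any user-supplied density/hazard pair.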
NASA Astrophysics Data System (ADS)
Swerdlow, Josh; Yoo, Jongsoo; Kim, Eun-Hwa; Yamada, Masaaki; Ji, Hantao
2017-10-01
The generation of whistler waves during asymmetric reconnection is studied by analyzing data from an MMS (Magnetospheric Multiscale) event. In particular, the possible role of electron temperature anisotropy in the excitation of whistler waves on the magnetosphere side is discussed. The local electron distribution function is fitted to a sum of bi-Maxwellian distribution functions. Then the dispersion relation solver WHAMP (waves in homogeneous, anisotropic, multicomponent plasmas) is used to obtain the local dispersion relation and growth rate of the whistler waves. We compare the theoretical calculations with the measured dispersion relation. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466.
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic
YOKOYAMA, Jun’ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case. PMID:25504231
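To make the "larger tails" point concrete, this small check (our illustration, not from the paper) compares the mass beyond ±4 standard units under Student's t with 3 degrees of freedom against the standard Gaussian:

```python
from scipy.stats import norm, t

# Two-sided tail mass beyond +/- 4: Student's t (df = 3) vs standard Gaussian.
p_gauss = 2.0 * norm.sf(4.0)
p_t = 2.0 * t.sf(4.0, df=3)
ratio = p_t / p_gauss          # how much heavier the t tail is
```

The t tail carries hundreds of times more probability, which is why a detection statistic tuned to Gaussian noise badly misjudges loud outliers in strongly non-Gaussian noise.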
Zero-truncated negative binomial - Erlang distribution
NASA Astrophysics Data System (ADS)
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are presented. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real data, methamphetamine count data from Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial, and zero-truncated Poisson-Lindley distributions for these data.
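The zero-truncation step itself is simple to illustrate. The sketch below uses a plain zero-truncated negative binomial with assumed parameters (the paper's NB-Erlang mixture is more general, but truncation works the same way): the truncated pmf is the parent pmf renormalized over k ≥ 1.

```python
import numpy as np
from scipy.stats import nbinom

r, p = 2.0, 0.3                     # assumed negative binomial parameters
k = np.arange(1, 200)               # support of the truncated distribution
p0 = nbinom.pmf(0, r, p)            # mass the parent distribution puts on zero
pmf_zt = nbinom.pmf(k, r, p) / (1.0 - p0)   # zero-truncated pmf
mean_zt = float((k * pmf_zt).sum())
```

Because the zero class is removed and its mass redistributed, the truncated mean `mean_zt` always exceeds the parent mean r(1-p)/p.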
NASA Astrophysics Data System (ADS)
Wysocki, J. K.
1984-02-01
The idea of Young and Clark of independently evaluating the work function φ and the electric field strength F in FEM [R.D. Young and H.E. Clark, Phys. Rev. Letters 17 (1966) 351] has been extended to the energy region above the Fermi level. The estimation of the slowly varying elliptic functions necessary to compute φ and F, using only experimental data, is presented. Calculations for the W(111) plane have been performed using the field electron energy distribution and the dependence of the integral field-emission current on the retarding voltage.
Functional connectivity among multi-channel EEGs when working memory load reaches the capacity.
Zhang, Dan; Zhao, Huipo; Bai, Wenwen; Tian, Xin
2016-01-15
Evidence from behavioral studies suggests that working memory has a limited capacity. As the concept of functional connectivity has been introduced into neuroscience research in recent years, the aim of this study is to investigate functional connectivity in the brain when working memory load reaches that capacity. 32-channel electroencephalograms (EEGs) were recorded from 16 healthy subjects while they performed a visual working memory task with loads 1-6. Individual working memory capacity was calculated from the behavioral results. The short-time Fourier transform was used to determine the principal frequency band (theta band) related to working memory. Functional connectivity among the EEGs was measured by the directed transfer function (DTF) via spectral Granger causality analysis. The capacity calculated from the behavioral results was 4. The power was focused in the frontal midline region. The strongest connectivity of the EEG theta components from load 1 to 6 was distributed in the frontal midline region. The curve of DTF values versus load number showed that the DTF increased from load 1 to 4, peaked at load 4, then decreased. This study finds that the functional connectivity between EEGs, described quantitatively by the DTF, became less strong when working memory load exceeded the capacity.
DOT National Transportation Integrated Search
2006-01-01
Problem: Work zones on heavily traveled divided highways present problems to motorists in the form of traffic delays and increased accident risks due to sometimes reduced motorist guidance, dense traffic, and other driving difficulties. To minimize t...
NASA Astrophysics Data System (ADS)
Sessa, Francesco; D'Angelo, Paola; Migliorati, Valentina
2018-01-01
In this work we have developed an analytical procedure to identify metal ion coordination geometries in liquid media based on the calculation of Combined Distribution Functions (CDFs) starting from Molecular Dynamics (MD) simulations. CDFs provide a fingerprint which can be easily and unambiguously assigned to a reference polyhedron. The CDF analysis has been tested on five systems and has proven to reliably identify the correct geometries of several ion coordination complexes. This tool is simple and general and can be efficiently applied to different MD simulations of liquid systems.
Computer Vision Tracking Using Particle Filters for 3D Position Estimation
2014-03-27
The views expressed are those of the author and do not reflect the official policy or position of the United States Air Force, the Department of Defense, or the United States Government. This material is declared a work of the U.S. Government and is not subject to copyright protection.
Notation: p(x), probability distribution (unless otherwise noted); π, proposal distribution; ω, importance weight; i, index of normalized weights; δ, Dirac delta function.
The distribution p(x) is approximated by the particles and the importance weights, where δ is the Dirac delta function [2, p. 178]:
p(x) = \sum_{n=1}^{N} \omega_n \delta(x - x_n)   (2.14)
\omega_n \propto p(x) / \pi(x)   (2.15)
NASA Astrophysics Data System (ADS)
Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.
2017-09-01
Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.
The Intrinsic Eddington Ratio Distribution of Active Galactic Nuclei in Young Galaxies from SDSS
NASA Astrophysics Data System (ADS)
Jones, Mackenzie L.; Hickox, Ryan C.; Black, Christine; Hainline, Kevin Nicholas; DiPompeo, Michael A.
2016-04-01
An important question in extragalactic astronomy concerns the distribution of black hole accretion rates, i.e., the Eddington ratio distribution, of active galactic nuclei (AGN). Specifically, it is a matter of debate whether AGN follow a broad distribution in accretion rates, or whether the distribution is more strongly peaked at characteristic Eddington ratios. Using a sample of galaxies from SDSS DR7, we test whether an intrinsic Eddington ratio distribution that takes the form of a broad Schechter function is in fact consistent with previous work suggesting instead that young galaxies in optical surveys have a more strongly peaked lognormal Eddington ratio distribution. Furthermore, we present an improved method for extracting the AGN distribution using BPT diagnostics that allows us to probe over one order of magnitude lower in Eddington ratio, counteracting the effects of dilution by star formation. We conclude that the intrinsic Eddington ratio distribution of optically selected AGN is consistent with a power law with an exponential cutoff, as is observed in the X-rays. This work was supported in part by a NASA Jenkins Fellowship.
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded-rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
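The annealing idea above can be sketched in a few lines. This is our minimal illustration, not the paper's exact algorithm: two bounded-rational agents, each controlling one binary variable, hold independent distributions q1 and q2; each Boltzmann-responds to its expected cost under the other's distribution, and lowering the temperature T concentrates the joint distribution on the minimizer of a hypothetical objective G.

```python
import numpy as np

G = np.array([[1.0, 0.6],
              [0.0, 0.8]])          # hypothetical objective; minimum at (x1, x2) = (1, 0)
q1 = np.full(2, 0.5)                # agent 1's distribution over its two moves
q2 = np.full(2, 0.5)                # agent 2's distribution
for T in np.geomspace(1.0, 0.01, 200):   # anneal the temperature downward
    e1 = G @ q2                     # agent 1's expected cost for each of its moves
    q1 = np.exp(-e1 / T); q1 /= q1.sum()
    e2 = G.T @ q1                   # agent 2's expected cost, given q1
    q2 = np.exp(-e2 / T); q2 /= q2.sum()
```

At low temperature, both marginals collapse onto the optimal joint move (1, 0), which is the "automated annealing" behavior described in the abstract.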
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct use of continuous distributions for dry tropical forests. This work therefore aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions, by means of statistical tools, for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P, and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves with positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed both for the main species and for the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility in describing the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
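The model-selection step used above (maximum-likelihood fits ranked by AIC) is easy to reproduce on synthetic diameters. The data and candidate set below are our assumptions for illustration, not the paper's inventory data:

```python
import numpy as np
from scipy.stats import weibull_min, lognorm, gamma

# Synthetic "diameters" (cm) drawn from a Weibull, standing in for field data.
rng = np.random.default_rng(1)
d = weibull_min.rvs(1.3, scale=8.0, size=2000, random_state=rng)

candidates = {"weibull": weibull_min, "lognormal": lognorm, "gamma": gamma}
aics = {}
for name, dist in candidates.items():
    params = dist.fit(d, floc=0)                 # location fixed at zero: 2-parameter fits
    loglik = dist.logpdf(d, *params).sum()
    k = len(params) - 1                          # free parameters (loc is fixed)
    aics[name] = 2 * k - 2 * loglik              # AIC = 2k - 2 ln L

best = min(aics, key=aics.get)
```

With the generating model in the candidate set and a sample this large, AIC selects the Weibull, mirroring how the paper ranks log-normal, gamma, Weibull 2P, and Burr fits.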
Measurements of neutral and ion velocity distribution functions in a Hall thruster
NASA Astrophysics Data System (ADS)
Svarnas, Panagiotis; Romadanov, Ivan; Diallo, Ahmed; Raitses, Yevgeny
2015-11-01
The Hall thruster is a plasma device for space propulsion. It utilizes a cross-field discharge to generate a partially ionized, weakly collisional plasma with magnetized electrons and non-magnetized ions. The ions are accelerated by the electric field to produce the thrust. There is a relatively large number of studies devoted to the characterization of the accelerated ions, including measurements of the ion velocity distribution function using laser-induced fluorescence diagnostics. Interactions of these accelerated ions with neutral atoms in the thruster and the thruster plume are a subject of ongoing studies, which require combined monitoring of ion and neutral velocity distributions. Herein, the laser-induced fluorescence technique has been employed to study neutral and singly charged ion velocity distribution functions in a 200 W cylindrical Hall thruster operating with xenon propellant. An optical system is installed in the vacuum chamber, enabling spatially resolved axial velocity measurements. The fluorescence signals are well separated from the plasma background emission by modulating the laser beam and using lock-in detectors. Measured velocity distribution functions of neutral atoms and ions at different operating parameters of the thruster are reported and analyzed. This work was supported by DOE contract DE-AC02-09CH11466.
XBoard: A Framework for Integrating and Enhancing Collaborative Work Practices
NASA Technical Reports Server (NTRS)
Shab, Ted
2006-01-01
Teams typically collaborate in different modes, including face-to-face meetings; meetings that are synchronous (i.e., require parties to participate at the same time) but geographically distributed; and asynchronous work on common tasks at different times. The XBoard platform was designed to create an integrated environment for building applications that enhance collaborative work practices. Specifically, it takes large, touch-screen-enabled displays as the starting point for enhancing face-to-face meetings by providing common facilities such as whiteboarding/electronic flipcharts, laptop projection, web access, screen capture, and content distribution. These capabilities are built upon by making these functions inherently distributed, allowing sessions to be easily connected between two or more systems at different locations. Finally, an information repository is integrated into the functionality to support work practices that involve work being done at different times, such as reports that span different shifts. The XBoard is designed to be extensible, allowing customization of the general functionality and the addition of new functionality to the core facilities by means of a plugin architecture. This, in essence, makes it a collaborative framework for extending or integrating work practices for different mission scenarios. XBoard relies heavily on standards such as Web Services and SVG, and is built using predominantly Java and well-known open-source products such as Apache and Postgres. Increasingly, organizations are geographically dispersed and rely on "virtual teams" assembled from a pool of partner organizations. These organizations often have different infrastructures of applications and workflows.
The XBoard has been designed to be a good partner in these situations, providing the flexibility to integrate with typical legacy applications while providing a standards-based infrastructure that is readily accepted by most organizations. The XBoard has been used on the Mars Exploration Rovers mission at JPL, and is currently being used or considered for use in pilot projects at Johnson Space Center (JSC) Mission Control, the University of Arizona Lunar and Planetary Laboratory (Phoenix Mars Lander), and MBARI (Monterey Bay Aquarium Research Institute).
Working Memory: Maintenance, Updating, and the Realization of Intentions
Nyberg, Lars; Eriksson, Johan
2016-01-01
“Working memory” refers to a vast set of mnemonic processes and associated brain networks, relates to basic intellectual abilities, and underlies many real-world functions. Working-memory maintenance involves frontoparietal regions and distributed representational areas, and can be based on persistent activity in reentrant loops, synchronous oscillations, or changes in synaptic strength. Manipulation of the content of working memory depends on the dorsofrontal cortex, and updating is realized by a frontostriatal “gating” function. Goals and intentions are represented as cognitive and motivational contexts in the rostrofrontal cortex. Different working-memory networks are linked via associative reinforcement-learning mechanisms into a self-organizing system. Normal capacity variation, as well as working-memory deficits, can largely be accounted for by the effectiveness and integrity of the basal ganglia and dopaminergic neurotransmission. PMID:26637287
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides the implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A
2011-09-26
The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends both on the illumination-observation geometry and on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariate analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample.
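The PCA step described above can be sketched on synthetic data. In this toy version (our construction, not the measured BRDF), each geometry's spectrum is a mixture of two assumed "reflection process" spectra, so a couple of principal components should capture essentially all the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400.0, 700.0, 31)                    # wavelengths, nm
s1 = np.exp(-((wl - 450.0) / 60.0) ** 2)              # two assumed spectral processes
s2 = np.exp(-((wl - 620.0) / 80.0) ** 2)
weights = rng.uniform(0.2, 1.0, size=(50, 2))         # geometry-dependent mixing weights
brdf = weights @ np.vstack([s1, s2])                  # rows = geometries, cols = wavelengths
brdf += 1e-4 * rng.standard_normal(brdf.shape)        # small measurement noise

X = brdf - brdf.mean(axis=0)                          # center each wavelength
_, s, _ = np.linalg.svd(X, full_matrices=False)       # PCA via singular values
explained = s**2 / (s**2).sum()                       # explained-variance ratios
```

Here `explained[:2]` sums to nearly 1, which is the sense in which PCA links the data to a small number of underlying reflection processes and supports interpolation between measured geometries.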
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office, which manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.
The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division 's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990's. (USGS)
NASA Technical Reports Server (NTRS)
Smith, J. R.
1969-01-01
Electron work functions, surface potentials, and electron number density distributions and electric fields in the surface region of 26 metals were calculated from first principles within the free electron model. Calculation proceeded from an expression of the total energy as a functional of the electron number density, including exchange and correlation energies, as well as a first inhomogeneity term. The self-consistent solution was obtained via a variational procedure. Surface barriers were due principally to many-body effects; dipole barriers were small only for some alkali metals, becoming quite large for the transition metals. Surface energies were inadequately described by this model, which neglects atomistic effects. Reasonable results were obtained for electron work functions and surface potential characteristics, maximum electron densities varying by a factor of over 60.
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
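As an illustration of why the beta family is versatile enough for fractional cloud cover on [0, 1], the following sketch contrasts two of its shapes. The parameter values are ours, chosen for illustration, not the values fitted in the study.

```python
# Sketch: flexibility of the beta density for fractional cloud cover x in [0, 1].
# Parameter values here are illustrative, not those fitted in the study.
from scipy.stats import beta

# U-shaped case (a, b < 1): probability mass piles up near clear sky (0) and
# overcast (1), the bimodal pattern typical of observed cloud-cover data.
u = beta(0.5, 0.5)
# Bell-shaped case (a, b > 1): cover concentrated around intermediate values.
bell = beta(5.0, 5.0)

assert u.pdf(0.05) > u.pdf(0.5)        # U-shape: more density near the edges
assert bell.pdf(0.5) > bell.pdf(0.05)  # bell shape: more density in the middle
print(u.mean(), bell.mean())           # both symmetric cases have mean 0.5
```

The same two-parameter family also produces J-shaped and skewed densities, which is the versatility the abstract refers to.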
NASA Technical Reports Server (NTRS)
Holland, C.; Brodie, I.
1985-01-01
A test stand has been set up to measure the current fluctuation noise properties of B- and M-type dispenser cathodes in a typical TWT gun structure. Noise techniques were used to determine the work function distribution on the cathode surfaces. Significant differences between the B and M types, and significant changes in the work function distribution during activation and life, are found. In turn, knowledge of the expected work function can be used to accurately determine the cathode operating temperatures in a TWT structure. Noise measurements also demonstrate more sensitivity to space charge effects than the Miram method. Full automation of the measurements and computations is now required to speed up data acquisition and reduction. The complete set of equations for the space-charge-limited diode was programmed so that, given four of the five measurable variables (J, J0, T, d, and V), the fifth could be computed. Using this program, we estimated that an rms fluctuation in the diode spacing d, in the frequency range of 145 Hz to 20 kHz, of only about 10^-5 A would account for the observed noise in a space-charge-limited diode with 1 mm spacing.
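The "given four variables, compute the fifth" idea can be sketched with the planar space-charge-limited (Child-Langmuir) law alone, J = K V^(3/2) / d^2. This is only an illustration, not the thesis's full equation set, which also couples the temperature T and the saturation current density J0 through the cathode's emission physics.

```python
# Sketch of solving the space-charge-limited diode law for a missing variable.
# Planar Child-Langmuir law only; an illustration, not the report's full model.
from scipy.constants import epsilon_0, elementary_charge as e, electron_mass as m_e

K = (4.0 * epsilon_0 / 9.0) * (2.0 * e / m_e) ** 0.5  # ~2.33e-6 in SI units

def j_child(V, d):
    """Space-charge-limited current density (A/m^2) for gap voltage V (volts)
    and electrode spacing d (meters)."""
    return K * V ** 1.5 / d ** 2

def v_from_j(J, d):
    """Invert the law: gap voltage needed to draw current density J at spacing d."""
    return (J * d ** 2 / K) ** (2.0 / 3.0)

V, d = 100.0, 1e-3                      # 100 V across a 1 mm gap
J = j_child(V, d)
assert abs(v_from_j(J, d) - V) < 1e-9   # round trip recovers the voltage
```

The same algebraic inversion works for d given J and V, which is how a measured noise level can be translated into an equivalent rms spacing fluctuation.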
In Search of Joy in Practice: A Report of 23 High-Functioning Primary Care Practices
Sinsky, Christine A.; Willard-Grace, Rachel; Schutzbank, Andrew M.; Sinsky, Thomas A.; Margolius, David; Bodenheimer, Thomas
2013-01-01
We highlight primary care innovations gathered from high-functioning primary care practices, innovations we believe can facilitate joy in practice and mitigate physician burnout. To do so, we made site visits to 23 high-performing primary care practices and focused on how these practices distribute functions among the team, use technology to their advantage, improve outcomes with data, and make the job of primary care feasible and enjoyable as a life’s vocation. Innovations identified include (1) proactive planned care, with previsit planning and previsit laboratory tests; (2) sharing clinical care among a team, with expanded rooming protocols, standing orders, and panel management; (3) sharing clerical tasks with collaborative documentation (scribing), nonphysician order entry, and streamlined prescription management; (4) improving communication by verbal messaging and in-box management; and (5) improving team functioning through co-location, team meetings, and work flow mapping. Our observations suggest that a shift from a physician-centric model of work distribution and responsibility to a shared-care model, with a higher level of clinical support staff per physician and frequent forums for communication, can result in high-functioning teams, improved professional satisfaction, and greater joy in practice. PMID:23690328
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
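The U- and W-shaped occupation-time statistics mentioned above can be illustrated in the simplest setting, Lévy's arcsine law for an ordinary symmetric random walk. The sketch below is an illustration of that limiting shape only, not of the paper's continuous time random walk in a potential field.

```python
# Illustration: occupation-time fraction of a symmetric random walk follows the
# arcsine law, whose density diverges at 0 and 1 and has a minimum at 1/2.
import numpy as np

rng = np.random.default_rng(0)
n_walks, n_steps = 2000, 1000
steps = rng.choice([-1, 1], size=(n_walks, n_steps))
paths = np.cumsum(steps, axis=1)
# Fraction of time each walk spends on the positive side.
frac = (paths > 0).mean(axis=1)

# Arcsine-law signature: extreme occupation fractions are far more likely
# than balanced ones (arcsine CDF weight ~0.41 in the tails vs ~0.06 in the middle).
extremes = np.mean((frac < 0.1) | (frac > 0.9))
middle = np.mean((frac > 0.45) & (frac < 0.55))
assert extremes > 2 * middle
```

A walk is thus typically "stuck" on one side for most of its history, which is the intuition behind the U-shaped distributions in the nonergodic phase.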
USDA-ARS?s Scientific Manuscript database
Accurately predicting phenology in crop simulation models is critical for correctly simulating crop production. While extensive work in modeling phenology has focused on the temperature response function (resulting in robust phenology models), limited work on quantifying the phenological responses t...
Distributed environmental control
NASA Technical Reports Server (NTRS)
Cleveland, Gary A.
1992-01-01
We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).
Measurement of the Angular Distribution of the Electron from $$W \to e + \nu$$ Decays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos, Manuel Martin
1996-10-01
The goal of this thesis is to scan the extensive literature dealing with the properties of the W and Z bosons. It is clear that, besides the measurements confirming the weak interactions theory, no specific work related to the angular distributions of the particles emerging from the leptonic decay of the boson has been done. The aim of the work is to obtain experimentally the values of α2, as a function of the transverse momentum of the W, that appear in expression (0.3), and to compare the values obtained with the theoretical predictions.
Monotonic sequences related to zeros of Bessel functions
NASA Astrophysics Data System (ADS)
Lorch, Lee; Muldoon, Martin
2008-12-01
In the course of their work on Salem numbers and uniform distribution modulo 1, A. Akiyama and Y. Tanigawa proved some inequalities concerning the values of the Bessel function J_0 at multiples of π, i.e., at the zeros of J_{1/2}. This raises the question of inequalities and monotonicity properties for the sequences of values of one cylinder function at the zeros of another such function. Here we derive such results by differential equations methods.
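One simple instance of such monotonicity can be checked numerically: the values of J_0 at the multiples of π (the zeros of J_{1/2}) alternate in sign and decrease in magnitude. The specific sequence below is our illustrative example, not the paper's general result.

```python
# Numerical check: J_0 at the zeros of J_{1/2}, i.e. at multiples of pi.
import numpy as np
from scipy.special import j0

n = np.arange(1, 8)
vals = j0(n * np.pi)

# The values alternate in sign and decrease monotonically in magnitude,
# consistent with inequalities of the kind discussed above.
assert np.all(np.sign(vals) == (-1.0) ** n)
assert np.all(np.diff(np.abs(vals)) < 0)
print(np.abs(vals))  # slowly decaying, asymptotically ~ 1/(pi*sqrt(n))
```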
Diagnosing Autism Spectrum Disorder through Brain Functional Magnetic Resonance Imaging
2016-03-01
Diagnosing Autism Spectrum Disorder through Brain Functional Magnetic Resonance Imaging. Thesis, March 2016. Kyle A. Palko, Second Lieutenant, USAF (AFIT-ENC-MS-16-M-123). This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States. Approved for public release; distribution unlimited.
Low work function, stable compound clusters and generation process
Dinh, Long N.; Balooch, Mehdi; Schildbach, Marcus A.; Hamza, Alex V.; McLean, II, William
2000-01-01
Low work function, stable compound clusters are generated by co-evaporation of a solid semiconductor (e.g., Si) and an alkali metal (e.g., Cs) in an oxygen environment. The compound clusters are easily patterned during deposition on substrate surfaces using a conventional photoresist technique. The cluster size distribution is narrow, with a peak ranging from angstroms to nanometers depending on the oxygen pressure and the Si source temperature. Tests have shown that the compound clusters, when deposited on a carbon substrate, retain the desired low work function and are stable up to 600 °C. Using the patterned, cluster-containing plate as a cathode baseplate and a phosphor-covered faceplate as an anode, one can apply a positive bias to the faceplate to easily extract electrons and obtain illumination.
Nonequilibrium approach regarding metals from a linearised kappa distribution
NASA Astrophysics Data System (ADS)
Domenech-Garret, J. L.
2017-10-01
The widely used kappa distribution functions develop high-energy tails through an adjustable kappa parameter. The aim of this work is to show that such a parameter can itself be regarded as a function, which entangles information about the sources of disequilibrium. We first derive and analyse an expanded Fermi-Dirac kappa distribution. Later, we use this expanded form to obtain an explicit analytical expression for the kappa parameter of a heated metal on which an external electric field is applied. We show that such a kappa index causes departures from equilibrium depending on the physical magnitudes. Finally, we study the role of temperature and electric field on such a parameter, which characterises the electron population of a metal out of equilibrium.
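The role of the kappa index can be sketched with the standard kappa energy tail: for small κ the distribution develops a suprathermal power-law tail, and as κ grows it collapses onto the equilibrium exponential. The form below is the common textbook convention with normalization omitted, not the paper's expanded Fermi-Dirac kappa distribution.

```python
# Sketch: the standard (unnormalized) kappa energy tail and its Maxwellian limit.
# This is the common textbook form, not the paper's expanded Fermi-Dirac version.
import math

def kappa_tail(E, kappa):
    """Unnormalized kappa-distribution tail; E is energy in units of kT."""
    return (1.0 + E / kappa) ** (-(kappa + 1.0))

E = 10.0
# Small kappa: enhanced high-energy tail relative to equilibrium exp(-E).
assert kappa_tail(E, 3.0) > math.exp(-E)
# kappa -> infinity: the equilibrium (Maxwell-Boltzmann) exponential is recovered.
assert abs(kappa_tail(2.0, 1e6) - math.exp(-2.0)) < 1e-4
```

In this picture the paper's κ(T, field) dependence simply moves the system along this family, between the strongly non-equilibrium and Maxwellian limits.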
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Howlett, Cullan
2018-06-01
In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
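The analytic inversion rests on the Lambert W function. Below is our own sketch of inverse-transform sampling from the standard NFW enclosed-mass function μ(x) = ln(1+x) - x/(1+x); the variable names are ours, and the R and Python code released with the note should be preferred for production use.

```python
# Sketch: analytic inverse-CDF sampling of NFW radii via the Lambert W function.
# Our derivation of the standard inverse of mu(x); illustrative, not the released code.
import numpy as np
from scipy.special import lambertw

def mu(x):
    """Dimensionless NFW enclosed mass, mu(x) = ln(1+x) - x/(1+x)."""
    return np.log1p(x) - x / (1.0 + x)

def nfw_radius_quantile(p, c):
    """Radius x = r/r_s enclosing mass fraction p of an NFW halo truncated at
    concentration c. Writing t = 1/(1+x), mu = y gives t - ln t = y + 1,
    solved by the principal branch W0."""
    y = p * mu(c)
    t = -lambertw(-np.exp(-(y + 1.0))).real
    return 1.0 / t - 1.0

# Inverse-transform sampling: uniform p -> NFW-distributed radii.
rng = np.random.default_rng(1)
x = nfw_radius_quantile(rng.uniform(size=5), c=10.0)
assert np.all((x > 0) & (x < 10.0))

# Round-trip check: mu(x(p))/mu(c) must recover p.
p = 0.3
assert abs(mu(nfw_radius_quantile(p, 10.0)) / mu(10.0) - p) < 1e-10
```

Because the quantile is closed-form, no accept-reject loop or lookup table is needed, which is the speed and cleanliness advantage the note describes.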
The complete two-loop integrated jet thrust distribution in soft-collinear effective theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Manteuffel, Andreas; Schabinger, Robert M.; Zhu, Hua Xing
2014-03-01
In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e- annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small-r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that in many cases one can straightforwardly predict potentially large logarithmic contributions to the integrated jet thrust distribution at L loops by making use of analogous contributions to the simpler integrated hemisphere soft function.
On the latitudinal distribution of Titan's haze at the Voyager epoch
NASA Astrophysics Data System (ADS)
Negrao, A.; Roos-Serote, M.; Rannou, P.; Rages, K.; McKay, C.
2002-09-01
In this work, we re-analyse a total of 10 high-phase-angle images of Titan (2 from Voyager 1 and 8 from Voyager 2). The images were acquired in different filters of the Voyager Imaging Subsystem in 1980-1981. We apply a model, developed and used by Rannou et al. (1997) and Cabane et al. (1992), that calculates the vertical (1-D) distribution of haze particles and the I/F radial profiles as a function of a series of parameters. Two of these parameters, the haze particle production rate (P) and the imaginary refractive index (xk), are used to obtain fits to the observed I/F profiles at different latitudes. Different from previous studies, we consider all filters simultaneously, in an attempt to better fix the parameter values. We also include the filter response functions, not considered previously. The results show that P does not change significantly as a function of latitude, even though somewhat lower values are found at high northern latitudes. xk seems to increase towards southern latitudes. We will compare our results with GCM runs, which can give the haze distribution at the epoch of the observations. Work financed by the Portuguese Foundation for Science and Technology (FCT), contract ESO/PRO/40157/2000.
NASA Astrophysics Data System (ADS)
Hosseinkhani, H.; Modarres, M.; Olanj, N.
2017-07-01
Transverse momentum dependent (TMD) parton distributions, also referred to as unintegrated parton distribution functions (UPDFs), are produced via the Kimber-Martin-Ryskin (KMR) prescription. The GJR08 set of parton distribution functions (PDFs), which is based on valence-like distributions, is used, at the leading order (LO) and the next-to-leading order (NLO) approximations, as input to the KMR formalism. The general and the relative behaviors of the generated TMD PDFs at LO and NLO and their ratios in a wide range of transverse momentum values, i.e. kt^2 = 10, 10^2, 10^4 and 10^8 GeV^2, are investigated. It is shown that the properties of the parent valence-like PDFs are imprinted on the daughter TMD PDFs. Imposing the angular ordering constraint (AOC) leads to dynamical variable limits on the integrals, which in turn increase the contributions from the lower scales at lower kt^2. The results are compared with our previous studies based on the MSTW2008 input PDFs, and it is shown that the present calculation gives flatter TMD PDFs. Finally, a comparison of the longitudinal structure function (FL) is made by using the produced TMD PDFs and those that were generated through the MSTW2008-LO PDF from our previous work, together with the corresponding data from the H1 and ZEUS collaborations, and a reasonable agreement is found.
NASA Astrophysics Data System (ADS)
Diestra Cruz, Heberth Alexander
The Green's functions integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet's condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw tooth heat generation source has been modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions makes it possible to obtain the temperature distribution in the form of an integral that avoids the convergence problems of infinite series. For the infinite solid and the sphere, the temperature distribution is three-dimensional, and in the cases of the semi-infinite solid, infinite quadrant, and circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than the use of fully numerical methods.
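The method of images can be sketched for the simplest case: a steady point source inside a semi-infinite solid with the boundary plane held at zero temperature. Subtracting a mirror source reflected through the boundary enforces the Dirichlet condition exactly. Source strength q and conductivity k below are illustrative placeholders.

```python
# Minimal sketch of the method of images for a Dirichlet half-space (z > 0, T = 0
# on z = 0). Steady point source; q and k are illustrative, not from the thesis.
import numpy as np

def temperature(r, r_src, q=1.0, k=1.0):
    """Steady temperature rise at point r from a point source of strength q at
    r_src inside the half-space z > 0, with T = 0 on the plane z = 0."""
    r, r_src = np.asarray(r, float), np.asarray(r_src, float)
    r_img = r_src * np.array([1.0, 1.0, -1.0])   # image source below the plane
    d_real = np.linalg.norm(r - r_src)
    d_img = np.linalg.norm(r - r_img)
    # Free-space Green's function minus its image: G = 1/(4*pi*k*d).
    return q / (4.0 * np.pi * k) * (1.0 / d_real - 1.0 / d_img)

src = [0.0, 0.0, 1.0]
assert abs(temperature([2.0, 1.0, 0.0], src)) < 1e-14   # boundary plane: T = 0
assert temperature([0.0, 0.0, 0.5], src) > 0            # interior is heated
```

A distributed (e.g., saw tooth) source is then handled by superposing such point-source kernels, which is the integral form the abstract describes.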
The resilient hybrid fiber sensor network with self-healing function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shibo, E-mail: Shibo-Xu@tju.edu.cn; Liu, Tiegen; Ge, Chunfeng
This paper presents a novel resilient fiber sensor network (FSN) with a multi-ring architecture, which can interconnect various kinds of fiber sensors responsible for more than one measurand. We explain how the intelligent control system provides sensors with a self-healing function while the sensors are working properly, and how each fiber in the FSN is kept under real-time monitoring. We explain the software process and the emergency mechanism for responding to failures and other circumstances. To improve the efficiency in the use of limited spectrum resources in some situations, we use two different structures to distribute the light sources rationally. Then, we propose a hybrid sensor working in the FSN, which is a combination of a distributed sensor and an FBG (fiber Bragg grating) array fused in a common fiber, sensing temperature and vibrations simultaneously with negligible crosstalk between them. By introducing a failure into a working fiber in an experiment, the feasibility and effectiveness of the network with a hybrid sensor have been demonstrated: hybrid sensors can not only work as designed but also survive destructive failures with the help of the resilient network and smart, quick self-healing actions. The network has improved the viability of the fiber sensors and the diversity of measurands.
Characterization of the Electron Energy Distribution Function in a Penning Discharge
NASA Astrophysics Data System (ADS)
Skoutnev, Valentin; Dourbal, Paul; Raitses, Yevgeny
2017-10-01
Slow and fast sweeping Langmuir probe diagnostics were implemented to measure the electron energy distribution function (EEDF) in a cross-field Penning discharge undergoing the rotating-spoke phenomenon. The EEDF was measured using the Druyvesteyn method. The rotating spoke occurs in a variety of ExB devices and is characterized primarily by azimuthal light, density, and potential fluctuations on the order of a few kHz, but it is theoretically still not well understood. Characterization of a time-resolved EEDF of the spoke would be important for understanding the physical mechanisms responsible for the spoke and its effects on Penning discharges, Hall thrusters, sputtering magnetrons, and other ExB devices. In this work, preliminary results of EEDF measurements using slow and fast Langmuir probes that sweep below and above the fundamental spoke frequency will be discussed. This work was supported by the Air Force Office of Scientific Research (AFOSR).
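The Druyvesteyn method relates the EEDF to the second derivative of the probe I-V characteristic, f(ε) ∝ √V · d²I/dV². The sketch below applies it to a synthetic Maxwellian retarding-field trace and recovers the electron temperature; calibration prefactors (probe area, physical constants) are omitted, and the synthetic current model is ours.

```python
# Sketch of the Druyvesteyn method on a synthetic Maxwellian probe trace.
# Prefactors omitted; the current model I(V) ~ exp(-V/Te) is our assumption.
import numpy as np

Te_true = 2.0                             # electron temperature in volts
V = np.linspace(0.0, 10.0, 2001)          # retarding voltage grid
I = np.exp(-V / Te_true)                  # retarding-region electron current

# Druyvesteyn: EEDF proportional to sqrt(V) * d^2 I / dV^2.
d2I = np.gradient(np.gradient(I, V), V)
eedf = np.sqrt(V) * d2I                   # Maxwellian: ~ sqrt(E) exp(-E/Te)

# For a Maxwellian, log(d2I/dV2) is linear in V with slope -1/Te.
core = slice(10, -10)                     # avoid one-sided edge derivatives
slope = np.polyfit(V[core], np.log(d2I[core]), 1)[0]
Te_est = -1.0 / slope
assert abs(Te_est - Te_true) < 0.02
```

On real probe data the second derivative strongly amplifies noise, which is why sweep speed and filtering matter for the time-resolved measurements described above.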
Evolution of HI from Z=5 to the present
NASA Technical Reports Server (NTRS)
Storrie-Lombardi, L. J.
2002-01-01
Studies of damped Lyα systems provide us with a good measure of the evolution of the HI column density distribution function and the contribution to the comoving mass density in neutral gas out to redshifts of z = 5. The column density distribution function at high redshift steepens for the highest column density HI absorbers, though the contribution to the comoving mass density of neutral gas remains flat from 2 < z < 5. Studies at z < 2 are finding substantial numbers of damped absorbers identified from MgII absorption, compared to previous blind surveys. These results indicate that the contribution to the comoving mass density in neutral gas may be constant from z ≈ 0 to z ≈ 5. Details of recent work in the redshift range z < 2 are covered elsewhere in this volume (see D. Nestor). We review here recent results for the redshift range 2 < z < 5.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon counts statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
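The idea of recovering a source-count slope from fluxes can be sketched with a toy single power law. This is only an illustration: the 1pPDF method described above fits photon-count statistics directly, and the measured dN/dS is a broken, not single, power law. All numbers below are invented.

```python
# Toy source-count sketch: draw fluxes S from dN/dS ~ S^-gamma above S_min,
# then recover gamma by maximum likelihood. Illustration only, not the 1pPDF fit.
import numpy as np

rng = np.random.default_rng(42)
gamma, S_min, n = 2.0, 1.0, 20000

# Inverse-transform sampling from p(S) = (gamma-1) * S_min^(gamma-1) * S^-gamma.
u = rng.uniform(size=n)
S = S_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

# Maximum-likelihood slope estimate for a pure power law.
gamma_hat = 1.0 + n / np.sum(np.log(S / S_min))
assert abs(gamma_hat - gamma) < 0.05
```

The real analysis must additionally model the detector response and the diffuse components, which is what makes the probabilistic 1pPDF decomposition necessary.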
Paulesu, Eraldo; Shallice, Tim; Danelli, Laura; Sberna, Maurizio; Frackowiak, Richard S J; Frith, Chris D
2017-01-01
Cognitive skills are the emergent property of distributed neural networks. The distributed nature of these networks does not necessarily imply a lack of specialization of the individual brain structures involved. However, it remains questionable whether discrete aspects of high-level behavior might be the result of localized brain activity of individual nodes within such networks. The phonological loop of working memory, with its simplicity, seems ideally suited for testing this possibility. Central to the development of the phonological loop model has been the description of patients with focal lesions and specific deficits. As much as the detailed description of their behavior has served to refine the phonological loop model, a classical anatomoclinical correlation approach with such cases falls short in telling whether the observed behavior is based on the functions of a neural system resembling that seen in normal subjects challenged with phonological loop tasks or whether different systems have taken over. This is a crucial issue for the cross correlation of normal cognition, normal physiology, and cognitive neuropsychology. Here we describe the functional anatomical patterns of JB, a historical patient originally described by Warrington et al. (1971), a patient with a left temporo-parietal lesion and selective short phonological store deficit. JB was studied with the H2(15)O PET activation technique during a rhyming task, which primarily depends on the rehearsal system of the phonological loop. No residual function was observed in the left temporo-parietal junction, a region previously associated with the phonological buffer of working memory. However, Broca's area, the major counterpart of the rehearsal system, was the major site of activation during the rhyming task.
Specific and autonomous activation of Broca's area in the absence of afferent inputs from the other major anatomical component of the phonological loop shows that a certain degree of functional independence or modularity exists in this distributed anatomical-cognitive system. PMID:28567009
The Design of Distributed Micro Grid Energy Storage System
NASA Astrophysics Data System (ADS)
Liang, Ya-feng; Wang, Yan-ping
2018-03-01
Distributed micro-grids run in island mode, and the energy storage system is the core that maintains stable micro-grid operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and can easily introduce volatility into the micro-grid. To address these problems, this paper proposes an array-type energy storage structure and analyzes its system structure and working principle. Finally, a model of the array-type energy storage structure is established in MATLAB. The simulation results show that the array-type energy storage system has great flexibility: it can maximize the utilization of the energy storage system, guarantee the reliable operation of the distributed micro-grid, and achieve the function of peak clipping and valley filling.
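The "peak clipping and valley filling" role of storage can be sketched with a toy dispatch rule: the battery charges when load is below its daily mean and discharges when above, subject to power and energy limits. The load profile and battery parameters below are invented for the sketch and are unrelated to the paper's MATLAB model.

```python
# Toy peak-clipping / valley-filling dispatch. All numbers are invented.
load = [3.0, 2.0, 1.0, 2.0, 5.0, 8.0, 9.0, 6.0]   # kW over 8 one-hour steps
p_max, cap, soc = 2.0, 10.0, 3.0                   # kW limit, kWh capacity, initial kWh
target = sum(load) / len(load)                     # flatten toward the mean load

net = []
for L in load:
    p = max(-p_max, min(p_max, target - L))           # >0 charge, <0 discharge
    p = min(p, cap - soc) if p > 0 else max(p, -soc)  # respect stored energy
    soc += p                                          # 1 h steps: kW -> kWh
    net.append(L + p)                                 # load seen by the grid

assert max(net) < max(load)   # peak clipped
assert min(net) > min(load)   # valley filled
print(net)
```

A real controller would add efficiency losses, forecasting, and coordination across the storage array, but the flattening effect is the same.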
Olds, Daniel; Wang, Hsiu -Wen; Page, Katharine L.
2015-09-04
In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction-based pair distribution function (PDF) data from systems whose morphological features include distances on the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of an experimental finite minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small-angle scattering (SAS) data. In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape function for any atomistic model.
Schondelmeyer, Stephen W.; Hadsall, Ronald S.; Schommer, Jon C.
2008-01-01
Objectives To describe PharmD students' work experiences and activities; examine their attitudes towards their work; examine perceptions of preceptor pharmacists they worked with; and determine important issues associated with career preference. Methods A written survey was administered to third-year doctor of pharmacy (PharmD) students at 8 colleges and schools of pharmacy in the Midwest. Results Five hundred thirty-three students (response rate = 70.4%) completed the survey instrument. Nearly 100% of PharmD students reported working in a pharmacy by the time their advanced pharmacy practice experiences (APPEs) began. Seventy-eight percent reported working in a community pharmacy, and 67% had worked in a chain community pharmacy. For all practice settings, students reported spending 69% of their time on activities such as compounding, dispensing, and distribution of drug products. Conclusions Most students are working in community pharmacy (mainly chain) positions where their primary function is traditional drug product dispensing and distribution. Having a controllable work schedule was the variable most strongly associated with career choice for all students. PMID:18698391
Quantum nuclear effects in water using centroid molecular dynamics
NASA Astrophysics Data System (ADS)
Kondratyuk, N. D.; Norman, G. E.; Stegailov, V. V.
2018-01-01
The quantum nuclear effects in water are studied using the method of centroid molecular dynamics (CMD). The aim is the calibration of the CMD implementation in LAMMPS. The calculated intramolecular energy, atomic gyration radii, and radial distribution functions are shown in comparison with previous works. This work is intended as a step toward resolving the discrepancy between the simulation results and the experimental data for liquid n-alkane properties found in our previous works.
Characterization of mixing of suspension in a mechanically stirred precipitation system
NASA Astrophysics Data System (ADS)
Farkas, B.; Blickle, T.; Ulbert, Zs.; Hasznos-Nezdei, M.
1996-09-01
In the case of precipitational crystallization, the particle size distribution of the resulting product is greatly influenced by the mixing rate of the system. We have worked out a method of characterizing the mixing of precipitated suspensions by applying a function of mean residence time and particle size distribution. For the experiments a precipitated suspension of β-lactam-type antibiotic has been used in a mechanically stirred tank.
Transverse Momentum Distributions of Electron in Simulated QED Model
NASA Astrophysics Data System (ADS)
Kaur, Navdeep; Dahiya, Harleen
2018-05-01
In the present work, we have studied the transverse momentum distributions (TMDs) of the electron in a simulated QED model. We have used the overlap representation of light-front wave functions, in which the spin-1/2 relativistic composite system consists of a spin-1/2 fermion and a spin-1 vector boson. Results have been obtained for the T-even TMDs in the transverse momentum plane for a fixed value of the longitudinal momentum fraction x.
Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew
2016-01-01
EOSDIS epitomizes a system of systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers that manage the stewardship of data. The infrastructure consists of the underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.
NASA Astrophysics Data System (ADS)
Gsponer, Andre
2009-01-01
The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the self-energy of a point electric charge is worked out in detail: the Coulomb potential and field are defined as Colombeau generalized functions, and integrals of nonlinear expressions corresponding to products of distributions (such as the square of the Coulomb field and the square of the delta function) are calculated. Finally, the methods introduced in Gsponer (2007 Eur. J. Phys. 28 267, 2007 Eur. J. Phys. 28 1021 and 2007 Eur. J. Phys. 28 1241), to deal with point-like singularities in classical electrodynamics are confirmed.
Al-Awadhi, E A; Wolstencroft, S J; Blake, M
2006-01-01
To evaluate the service purchased from contracted orthodontic laboratories used by the HSE (SWA) regional orthodontic unit, St. James's Hospital, Dublin, and to identify deficiencies in the current service. A data collection questionnaire was designed and distributed to the departmental orthodontists for a period of three months (October-December 2004). Gold standards, drawn up from the authors' ideal requirements and published guidelines, were supplied to grade the work returned. During the study period 363 items of laboratory work were requested. Twenty percent of the laboratory work arrived late, and most of the delayed work was delayed by more than 24 hours. Most laboratory delays occurred with functional appliances, retainers and study models. Prior to fit, 20% of the appliances required adjustments lasting more than 30 seconds. Sixty-five percent of laboratory work returned to the department met all of the gold standards; 10% of appliances were considered unsatisfactory. Functional appliances were most often ill-fitting, accounting for almost half of the unsatisfactory laboratory work. The majority of the laboratory work returned to the department met our gold standards and arrived on time. Forty-six percent of the appliances required adjustments; functional appliances required the most, and one in five of all functional appliances ordered was considered unsatisfactory.
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2017-10-01
The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of objects in a virtual data center. These include: a level distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
Continued Development of Expert System Tools for NPSS Engine Diagnostics
NASA Technical Reports Server (NTRS)
Lewandowski, Henry
1996-01-01
The objectives of this grant were to work with previously developed NPSS (Numerical Propulsion System Simulation) tools and enhance their functionality; explore similar AI systems; and work with the High Performance Computing Communication (HPCC) K-12 program. Activities for this reporting period are briefly summarized and a paper addressing the implementation, monitoring and zooming in a distributed jet engine simulation is included as an attachment.
Bani, Farhad; Bodaghi, Ali; Dadkhah, Abbas; Movahedi, Soodabeh; Bodaghabadi, Narges; Sadeghizadeh, Majid; Adeli, Mohsen
2018-05-01
In this work, we report a facile method to produce a stable aqueous graphene dispersion through direct exfoliation of graphite by modified hyperbranched polyglycerol. The size of the graphene sheets was controlled by simultaneous exfoliation and sonication of graphite, and functionalized graphene sheets with a narrow size distribution were obtained. The polyglycerol-functionalized graphene sheets exhibited highly efficient cellular uptake and photothermal conversion, enabling them to serve as a photothermal agent for cancer therapy.
Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Malik, Pradeep; Swaminathan, A.
2010-11-01
In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have a weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by applying the Riemann-Liouville operator to them. Various properties, such as an explicit representation in terms of hypergeometric functions, differential equations and recurrence relations, are derived.
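For reference, the Riemann-Liouville operators underlying such a fractional-order generalization have the standard textbook definitions below (generic forms, not reproduced from the paper itself):

```latex
% Riemann-Liouville fractional integral of order \mu > 0:
(I^{\mu} f)(x) = \frac{1}{\Gamma(\mu)} \int_{0}^{x} (x - t)^{\mu - 1} f(t)\, \mathrm{d}t
% Riemann-Liouville fractional derivative of order \mu, with n = \lceil \mu \rceil:
(D^{\mu} f)(x) = \frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}} \left( I^{\,n - \mu} f \right)(x)
```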
Petri net controllers for distributed robotic systems
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, George N.
1992-01-01
Petri nets are a well established modelling technique for analyzing parallel systems. When coupled with an event-driven operating system, Petri nets can provide an effective means for integrating and controlling the functions of distributed robotic applications. Recent work has shown that Petri net graphs can also serve as remarkably intuitive operator interfaces. In this paper, the advantages of using Petri nets as high-level controllers to coordinate robotic functions are outlined, the considerations for designing Petri net controllers are discussed, and simple Petri net structures for implementing an interface for operator supervision are presented. A detailed example is presented which illustrates these concepts for a sensor-based assembly application.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model was proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
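The β-distribution closure mentioned above can be sketched in a few lines: the two shape parameters of the presumed PDF are recovered from the mean and variance of the mixture fraction, and the averaged source term is an integral against that PDF. This is a generic illustration of the technique, with an invented toy source term, not the paper's look-up-table code:

```python
import numpy as np
from scipy import stats

def beta_presumed_average(source_of_z, z_mean, z_var, n=2001):
    """Average a source term over a presumed beta PDF of mixture fraction Z.
    The beta shape parameters are recovered from the first two moments,
    the standard two-moment closure used in presumed-PDF modeling."""
    g = z_mean * (1.0 - z_mean) / z_var - 1.0  # must be > 0 for a valid beta PDF
    a, b = z_mean * g, (1.0 - z_mean) * g
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = stats.beta.pdf(z, a, b)
    return np.trapz(source_of_z(z) * pdf, z)

# Toy source term peaked near Z = 0.3 (illustrative only)
src = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)
avg = beta_presumed_average(src, z_mean=0.3, z_var=0.01)
```

As the abstract notes, such a two-moment closure works reasonably for unimodal PDFs but cannot represent multimodal shapes.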
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension-reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful for describing variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with Cluster Analysis (CA). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data; PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, to be a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
Free energy calculations, enhanced by a Gaussian ansatz, for the "chemical work" distribution.
Boulougouris, Georgios C
2014-05-15
The evaluation of the free energy is essential in molecular simulation because it is intimately related to the existence of multiphase equilibrium. Recently, it was demonstrated that it is possible to evaluate the Helmholtz free energy using a single statistical ensemble along an entire isotherm by accounting for the "chemical work" of transforming each molecule from an interacting one to an ideal gas. In this work, we show that it is possible to perform such a free energy perturbation over a liquid-vapor phase transition. Furthermore, we investigate the link between a general free energy perturbation scheme and the nonequilibrium theories of Crooks and Jarzynski. We find that for finite systems away from the thermodynamic limit the second law of thermodynamics will always be an inequality for isothermal free energy perturbations, always resulting in a dissipated work that may tend to zero only in the thermodynamic limit. The work, the heat, and the entropy produced during a thermodynamic free energy perturbation can be viewed in the context of the Crooks and Jarzynski formalism, revealing that, for a given value of the ensemble average of the "irreversible" work, the minimum entropy production corresponds to a Gaussian distribution for the histogram of the work. We propose evaluating the free energy difference in any free energy perturbation based scheme from the average irreversible "chemical work" minus the dissipated work, which can be calculated from the variance of the work histogram within the Gaussian approximation. As a consequence, using the Gaussian ansatz for the distribution of the "chemical work," accurate estimates of the chemical potential and the free energy of the system can be obtained from much shorter simulations, avoiding the need to sample the computationally costly tails of the "chemical work." For a more general free energy perturbation scheme for which the Gaussian ansatz may not be valid, the free energy calculation can be expressed in terms of the moment generating function of the "chemical work" distribution. Copyright © 2014 Wiley Periodicals, Inc.
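The Gaussian-ansatz estimator described above reduces to a simple formula: ΔF ≈ ⟨W⟩ − βσ²_W/2, i.e. the mean work minus the dissipated work inferred from the variance. A minimal numerical sketch (synthetic work samples, not the paper's simulation data) comparing it with the tail-sensitive direct Jarzynski average:

```python
import numpy as np

def free_energy_gaussian(work, beta):
    """Free-energy difference under a Gaussian work-distribution ansatz:
    dF = <W> - beta*var(W)/2, the mean work minus the dissipated work
    estimated from the variance of the work histogram."""
    w = np.asarray(work, float)
    return w.mean() - 0.5 * beta * w.var(ddof=1)

def free_energy_jarzynski(work, beta):
    """Direct Jarzynski estimator dF = -(1/beta) ln <exp(-beta W)>,
    which is dominated by the rare low-work tail."""
    w = np.asarray(work, float)
    m = (-beta * w).max()  # log-sum-exp trick for numerical stability
    return -(m + np.log(np.exp(-beta * w - m).mean())) / beta

# Synthetic Gaussian work samples W ~ N(2, 1) at beta = 1:
# analytically dF = mu - beta*sigma^2/2 = 1.5, so both estimators should agree.
rng = np.random.default_rng(0)
beta = 1.0
w = rng.normal(2.0, 1.0, 200_000)
```

For a genuinely Gaussian work distribution the two estimators coincide, but the Gaussian form converges with far fewer samples because it does not rely on the exponential tail.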
Learning that Prepares for More Learning: Symbolic Mathematics in Physical Chemistry
ERIC Educational Resources Information Center
Zielinski, Theresa Julia
2004-01-01
Well-crafted templates are useful for learning new concepts in chemistry. The templates focus on pressure-volume work, the Boltzmann distribution, the Gibbs free energy function, intermolecular potentials, the second virial coefficient, and quantum mechanical tunneling.
Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2005-01-01
In extensive-form noncooperative game theory, at each instant t, each agent i sets its state x(sub i) independently of the other agents by sampling an associated distribution, q(sub i)(x(sub i)). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q(sub i) of the agents are most likely to be if the system is to have a particular expected value of the objective function G(x(sub 1), x(sub 2), ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G) and the convergence of the q(sub i) to the most likely set of distributions consistent with that value; after this the target value for E(sub q)(G) is lowered, and the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x(sub i) all have a finite number of possible values and G does not extend to multiple instants in time. That work is also based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions, broadening its applicability to include continuous spaces and reinforcement learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.
Solvation of Na^+ in water from first-principles molecular dynamics
NASA Astrophysics Data System (ADS)
White, J. A.; Schwegler, E.; Galli, G.; Gygi, F.
2000-03-01
We have carried out ab initio molecular dynamics (MD) simulations of the Na^+ ion in water with an MD cell containing a single alkali ion and 53 water molecules. The electron-electron and electron-ion interactions were modeled by density functional theory with a generalized gradient approximation for the exchange-correlation functional. The computed radial distribution functions, coordination numbers, and angular distributions are consistent with available experimental data. The first solvation shell contains 5.2±0.6 water molecules, with some waters occasionally exchanging with those of the second shell. The computed Na^+ hydration number is larger than that from calculations for water clusters surrounding an Na^+ ion, but is consistent with that derived from x-ray measurements. Our results also indicate that the first hydration shell is better defined for Na^+ than for K^+ [1], as indicated by the first minimum in the Na-O pair distribution function. [1] L.M. Ramaniah, M. Bernasconi, and M. Parrinello, J. Chem. Phys. 111, 1587 (1999). This work was performed for DOE under contract W-7405-ENG-48.
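The radial distribution functions and coordination numbers cited above come from a standard post-processing step over MD trajectories. A minimal sketch of that analysis for a cubic periodic box (generic, not the authors' ab initio code; positions here are a random ideal-gas configuration used only as a sanity check):

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    """Radial (pair) distribution function g(r) for particles in a cubic
    periodic box -- the analysis behind coordination numbers and
    solvation shells."""
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                     # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    rho = n / box ** 3
    shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = hist / (0.5 * n * rho * shells)              # normalize by ideal-gas counts
    return 0.5 * (edges[:-1] + edges[1:]), g

# Sanity check on an ideal gas: uniform random positions should give g(r) ~ 1
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = radial_distribution(pos, box=10.0, r_max=4.0)
```

The coordination number reported in the abstract is the integral of 4πρ r² g(r) out to the first minimum of the pair distribution function.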
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and the variability of environmental conditions, uncertainty impacts their applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to a formal mathematical proof. New theorems on the multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of the principle of probabilistic transformation of continuous random vectors. Closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork for finding probabilistic models for raw scalar transmissibility functions, which are expounded in detail in Part II of this study.
Balamurugan, Kanagasabai; Baskar, Prathab; Kumar, Ravva Mahesh; Das, Sumitesh; Subramanian, Venkatesan
2014-11-28
The present work utilizes classical molecular dynamics simulations to investigate the covalent functionalization of carbon nanotubes (CNTs) and their interaction with ethylene glycol (EG) and water molecules. The MD simulations reveal the dispersion of functionalized carbon nanotubes and the prevention of aggregation in aqueous medium. Further, residue-wise radial distribution function (RRDF) and atomic radial distribution function (ARDF) calculations illustrate the extent of interaction of -OH and -COOH functionalized CNTs with water molecules and of the non-functionalized CNT surface with EG. As the number of functionalized nanotubes increases, an enhancement in the propensity for interaction with water molecules is observed, whereas the corresponding interaction with EG molecules decreases. In addition, ONIOM (M06-2X/6-31+G**:AM1) calculations have been carried out on model systems to quantitatively determine the interaction energy (IE). These calculations show that the relative enhancement in the interaction of water molecules with functionalized CNTs is highly favorable when compared to the interaction with EG.
Device-independent security of quantum cryptography against collective attacks.
Acín, Antonio; Brunner, Nicolas; Gisin, Nicolas; Massar, Serge; Pironio, Stefano; Scarani, Valerio
2007-06-08
We present the optimal collective attack on a quantum key distribution protocol in the "device-independent" security scenario, where no assumptions are made about the way the quantum key distribution devices work or on what quantum system they operate. Our main result is a tight bound on the Holevo information between one of the authorized parties and the eavesdropper, as a function of the amount of violation of a Bell-type inequality.
Correlated resistive/capacitive state variability in solid TiO2 based memory devices
NASA Astrophysics Data System (ADS)
Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis
2017-05-01
In this work, we experimentally demonstrate correlated resistive/capacitive switching and state variability in practical TiO2-based memory devices. Based on the filamentary switching mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption is verified via a current-percolation circuit model that takes into account the random distribution of defects and the coexistence of memristive and memcapacitive behavior.
Neuroergonomics - Analyzing Brain Function to Enhance Human Performance in Complex Systems
2008-12-02
Performing organization: George Mason University. Distribution/availability statement: Approved for public release, distribution unlimited.
Properties of field functionals and characterization of local functionals
NASA Astrophysics Data System (ADS)
Brouder, Christian; Dang, Nguyen Viet; Laurent-Gengoux, Camille; Rejzner, Kasia
2018-02-01
Functionals (i.e., functions of functions) are widely used in quantum field theory and solid-state physics. In this paper, functionals are given a rigorous mathematical framework and their main properties are described. The choice of the proper space of test functions (smooth functions) and of the relevant concept of differential (Bastiani differential) are discussed. The relation between the multiple derivatives of a functional and the corresponding distributions is described in detail. It is proved that, in a neighborhood of every test function, the support of a smooth functional is uniformly compactly supported and the order of the corresponding distribution is uniformly bounded. Relying on a recent work by Dabrowski, several spaces of functionals are furnished with a complete and nuclear topology. In view of physical applications, it is shown that most formal manipulations can be given a rigorous meaning. A new concept of local functionals is proposed and two characterizations of them are given: the first one uses the additivity (or Hammerstein) property, the second one is a variant of Peetre's theorem. Finally, the first step of a cohomological approach to quantum field theory is carried out by proving a global Poincaré lemma and defining multi-vector fields and graded functionals within our framework.
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1990-01-01
The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanently manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitoring and control activity. The parallel distributed problem solving approach to knowledge engineering is then introduced, followed by a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions, each of which is described in detail. Suggestions for refinements and embellishments in the design specifications are given.
A comparison of non-local electron transport models relevant to inertial confinement fusion
NASA Astrophysics Data System (ADS)
Sherlock, Mark; Brodrick, Jonathan; Ridgers, Christopher
2017-10-01
We compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a 1-dimensional hohlraum ablation problem. We find the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different to the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Work fluctuations for Bose particles in grand canonical initial states.
Yi, Juyeon; Kim, Yong Woon; Talkner, Peter
2012-05-01
We consider bosons in a harmonic trap and investigate the fluctuations of the work performed by an adiabatic change of the trap curvature. Depending on the reservoir conditions such as temperature and chemical potential that provide the initial equilibrium state, the exponentiated work average (EWA) defined in the context of the Crooks relation and the Jarzynski equality may diverge if the trap becomes wider. We investigate how the probability distribution function (PDF) of the work signals this divergence. It is shown that at low temperatures the PDF is highly asymmetric with a steep fall-off at one side and an exponential tail at the other side. For high temperatures it is closer to a symmetric distribution approaching a Gaussian form. These properties of the work PDF are discussed in relation to the convergence of the EWA and to the existence of the hypothetical equilibrium state to which those thermodynamic potential changes refer that enter both the Crooks relation and the Jarzynski equality.
Disentangling rotational velocity distribution of stars
NASA Astrophysics Data System (ADS)
Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra
2017-11-01
Rotational speed is an important physical parameter of stars: knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly; what is observed is the projected rotational velocity v sin(i), the product of the rotational speed and the sine of the inclination angle. The problem can be described via a Fredholm integral equation of the first kind. A new method (Curé et al. 2014) to deconvolve this inverse problem and obtain the cumulative distribution function of stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method to obtain the probability distribution function is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass-ratio distribution of extrasolar planets and brown dwarfs in binary systems (Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to modeling this anisotropy of rotational axes has been developed, with the possibility of ``disentangling'' simultaneously both the rotational speed distribution and the orientation of rotational axes.
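The projection effect at the heart of this inverse problem is easy to see by Monte Carlo: for isotropically oriented rotation axes, cos(i) is uniform on [0, 1], so the observed v sin(i) is systematically attenuated. A toy single-speed population (illustrative only, not the deconvolution method of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Isotropic axes: cos(i) uniform on [0, 1], so sin(i) has mean pi/4 ~ 0.785,
# the classical projection factor used since Chandrasekhar & Munch (1950).
cos_i = rng.uniform(0.0, 1.0, 1_000_000)
sin_i = np.sqrt(1.0 - cos_i ** 2)

v_true = 100.0            # km/s, a toy population of identical rotators
v_obs = v_true * sin_i    # what spectroscopy actually measures
```

Recovering the distribution of v_true from a sample of v_obs is exactly the Fredholm inverse problem the abstract describes; the Monte Carlo above only illustrates the forward direction.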
Code of Federal Regulations, 2010 CFR
2010-07-01
... a musical work; or (4) Performs the functions of marketing and authorizing the distribution of a... definition of “Service revenue,” and subject to U.S. Generally Accepted Accounting Principles, service... Accepted Accounting Principles, and including for the avoidance of doubt barter or nonmonetary...
Radial basis function and its application in tourism management
NASA Astrophysics Data System (ADS)
Hu, Shan-Feng; Zhu, Hong-Bin; Zhao, Lei
2018-05-01
In this work, several applications and the performance of the radial basis function (RBF) are briefly reviewed. After that, the binomial function combined with three different RBFs, including the multiquadric (MQ), inverse quadric (IQ) and inverse multiquadric (IMQ) distributions, is adopted to model the tourism data of Huangshan in China. Simulation results show that all the models match the sample data very well. Among the three models, the IMQ-RBF model is found to be the most suitable for forecasting the tourist flow.
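The three kernels named above differ only in the radial profile φ(r); a generic one-dimensional RBF interpolation sketch (this model class in general, not the authors' tourism-flow model, and the sine test data are invented):

```python
import numpy as np

def rbf_fit_predict(x, y, x_new, kind="imq", eps=1.0):
    """1-D RBF interpolation with multiquadric (mq), inverse quadric (iq)
    or inverse multiquadric (imq) kernels; eps is the shape parameter."""
    kernels = {
        "mq":  lambda r: np.sqrt(1.0 + (eps * r) ** 2),
        "iq":  lambda r: 1.0 / (1.0 + (eps * r) ** 2),
        "imq": lambda r: 1.0 / np.sqrt(1.0 + (eps * r) ** 2),
    }
    phi = kernels[kind]
    A = phi(np.abs(x[:, None] - x[None, :]))   # interpolation matrix
    w = np.linalg.solve(A, y)                  # kernel weights
    return phi(np.abs(x_new[:, None] - x[None, :])) @ w

# Fit a smooth 1-D "time series" and evaluate on a finer grid
x = np.linspace(0.0, 10.0, 12)
y = np.sin(x)
x_fine = np.linspace(0.0, 10.0, 101)
y_imq = rbf_fit_predict(x, y, x_fine, kind="imq")
```

All three kernels interpolate the sample data exactly at the nodes; their behavior between nodes, controlled by eps, is what distinguishes forecasting quality.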
NASA Astrophysics Data System (ADS)
Kitamura, Naoto; Vogel, Sven C.; Idemoto, Yasushi
2013-06-01
In this work, we focused on La0.95Ba0.05Ga0.8Mg0.2O3-δ with the perovskite structure, and investigated the local structure around the oxygen vacancy by pair distribution function (PDF) method and density functional theory (DFT) calculation. By comparing the G(r) simulated based on the DFT calculation and the experimentally-observed G(r), it was suggested that the oxygen vacancy was trapped by Ba2+ at the La3+ site at least at room temperature. Such a defect association may be one of the reasons why the La0.95Ba0.05Ga0.8Mg0.2O3-δ showed lower oxide-ion conductivity than (La,Sr)(Ga,Mg)O3-δ which was widely-used as an electrolyte of the solid oxide fuel cell.
NASA Astrophysics Data System (ADS)
Shao, Lin; Peng, Luohan
2009-12-01
Although multiple scattering theories have been well developed, numerical calculation is complicated and only tabulated values have been available, which has caused inconvenience in practical use. We have found that a Pearson VII distribution function can be used to fit Lugujjo and Mayer's probability curves in describing the dechanneling phenomenon in backscattering analysis, over a wide range of disorder levels. Differentiation of the obtained function gives another function to calculate angular dispersion of the beam in the frameworks by Sigmund and Winterbon. The present work provides an easy calculation of both dechanneling probability and angular dispersion for any arbitrary combination of beam and target having a reduced thickness ⩾0.6, which can be implemented in modeling of channeling spectra. Furthermore, we used a Monte Carlo simulation program to calculate the deflection probability and compared them with previously tabulated data. A good agreement was reached.
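Fitting a Pearson VII profile to a tabulated probability curve, as described above, is a standard least-squares task. A generic sketch with synthetic data (parameter names and the test curve are illustrative, not the fitted values of the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(x, amp, center, width, shape):
    """Pearson VII profile; shape -> 1 gives a Lorentzian, large shape
    approaches a Gaussian. width is the half-width at half-maximum."""
    return amp / (1.0 + ((x - center) / width) ** 2
                  * (2.0 ** (1.0 / shape) - 1.0)) ** shape

# Synthetic "probability curve" with noise, then recover the profile
x = np.linspace(-5.0, 5.0, 201)
true = (1.0, 0.3, 1.2, 2.0)
rng = np.random.default_rng(2)
y = pearson_vii(x, *true) + rng.normal(0.0, 0.005, x.size)
popt, _ = curve_fit(pearson_vii, x, y, p0=(0.8, 0.0, 1.0, 1.5))
```

Because the fitted function is analytic, its derivative with respect to x is available in closed form, which is the property the abstract exploits to obtain the angular dispersion.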
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions; instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles, and it has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution, with each distribution in that product controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, avoidance of trapping in false minima, and long-term optimization.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
NASA Astrophysics Data System (ADS)
Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.
The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), a member of the Malvaceae family, of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor s and for s a function of temperature.
The Distribution and Annihilation of Dark Matter Around Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy D.
2015-01-01
We use a Monte Carlo code to calculate the geodesic orbits of test particles around Kerr black holes, generating a distribution function of both bound and unbound populations of dark matter (DM) particles. From this distribution function, we calculate annihilation rates and observable gamma-ray spectra for a few simple DM models. The features of these spectra are sensitive to the black hole spin, observer inclination, and detailed properties of the DM annihilation cross-section and density profile. Confirming earlier analytic work, we find that for rapidly spinning black holes, the collisional Penrose process can reach efficiencies exceeding 600%, leading to a high-energy tail in the annihilation spectrum. The high particle density and large proper volume of the region immediately surrounding the horizon ensures that the observed flux from these extreme events is non-negligible.
Ayzner, Alexander L; Mei, Jianguo; Appleton, Anthony; DeLongchamp, Dean; Nardes, Alexandre; Benight, Stephanie; Kopidakis, Nikos; Toney, Michael F; Bao, Zhenan
2015-12-30
Conjugated polymers are widely used materials in organic photovoltaic devices. Owing to their extended electronic wave functions, they often form semicrystalline thin films. In this work, we aim to understand whether the distribution of crystallographic orientations affects exciton diffusion, using a low-band-gap polymer backbone motif that is representative of the donor/acceptor copolymer class. Exploiting the fact that the polymer side chain can tune the dominant crystallographic orientation in the thin film, we have measured the quenching of polymer photoluminescence, and thus the extent of exciton dissociation, as a function of crystal orientation with respect to a quenching substrate. We find that the crystallite orientation distribution has little effect on the average exciton diffusion length. We suggest several possibilities for the lack of correlation between crystallographic texture and exciton transport in semicrystalline conjugated polymer films.
Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service
NASA Astrophysics Data System (ADS)
Nonogaki, S.; Nemoto, T.
2014-12-01
Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. This map information has recently been distributed in accordance with Web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by geological WMTS was implemented with Free and Open Source Software. The tool mainly consists of two functions: a display function and a cutting function. The former was implemented using OpenLayers; the latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer on a web browser but also generating a geological map image of the intended area and zoom level. At this moment, available WMTS layers are limited to the ones distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF format and WebGL format. GeoTIFF is a georeferenced raster format that is available in many kinds of Geographical Information Systems. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study would contribute to better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file format.
NASA Astrophysics Data System (ADS)
Block, Martin M.; Durand, Loyal
2011-11-01
We recently derived a very accurate and fast new algorithm for numerically inverting the Laplace transforms needed to obtain gluon distributions from the proton structure function F_2^{γp}(x, Q^2). We numerically inverted the function g(s), s being the variable in Laplace space, to G(v), where v is the variable in ordinary space. We have since discovered that the algorithm does not work if g(s) → 0 less rapidly than 1/s as s → ∞, e.g., as 1/s^β for 0 < β < 1. In this note, we derive a new numerical algorithm for such cases, which holds for all positive and non-integer negative values of β. The new algorithm is exact if the original function G(v) is given by the product of a power v^{β-1} and a polynomial in v. We test the algorithm numerically for very small positive β, β = 10^{-6}, obtaining numerical results that imitate the Dirac delta function δ(v). We also devolve the published MSTW2008LO gluon distribution at virtuality Q^2 = 5 GeV^2 down to the lower virtuality Q^2 = 1.69 GeV^2. For devolution, β is negative, giving rise to inverse Laplace transforms that are distributions and not proper functions. This requires us to introduce the concept of Hadamard finite-part integrals, which we discuss in detail.
Phase space explorations in time dependent density functional theory
NASA Astrophysics Data System (ADS)
Rajam, Aruna K.
Time-dependent density functional theory (TDDFT) is a useful tool for studying the dynamic behavior of correlated electronic systems under the influence of external potentials. The success of this formally exact theory in practice relies on approximations for the exchange-correlation potential, which is a complicated functional of the coordinate density, non-local in space and time. Adiabatic approximations (such as ALDA), which are local in time, are most commonly used in the growing applications of the field. Going beyond ALDA has proved difficult, leading to mathematical inconsistencies. We explore the regions where the theory faces challenges, and try to answer some of them via insights from two-electron model systems. In this thesis work we propose a phase-space extension of TDDFT: we address the challenges the theory currently faces by exploring the one-body phase space. We give a general introduction to the theory and its mathematical background in the first chapter. In the second chapter, we carry out a detailed study of instantaneous phase-space densities and argue that functionals of distributions can be a better alternative given the non-locality of the exchange-correlation potentials. To this end we study in detail the interacting and non-interacting phase-space distributions for the Hooke's atom model. The applicability of ALDA-based TDDFT to dynamics in strong fields can become severely problematic due to the failure of the single-Slater-determinant picture. In the third chapter, we analyze how phase-space distributions can shed some light on this problem. We carry out a comparative study of Kohn-Sham and interacting phase-space and momentum distributions for single-ionization and double-ionization systems.
Using a simple model of two-electron systems, we have shown that the momentum distribution computed directly from the exact KS system contains spurious oscillations: a non-classical description of the essentially classical two-electron dynamics. In time-dependent density matrix functional theory (TDDMFT), the evolution scheme of the 1RDM (first-order reduced density matrix) contains the second-order reduced density matrix (2RDM), which has to be expressed in terms of 1RDMs. Any non-correlated approximation (Hartree-Fock) for the 2RDM fails to capture the natural occupations of the system. In our fourth chapter, we show that by applying quasi-classical and semi-classical approximations one can capture the natural occupations of excited systems; we study a time-dependent Moshinsky atom model for this. The fifth chapter contains a comparative study of the existing non-local exchange-correlation kernels that are based on the current-density-response framework and the co-moving framework. We show that the two approaches, though coinciding in the linear-response regime, actually turn out to be different in the non-linear regime.
Generalized formula for electron emission taking account of the polaron effect
NASA Astrophysics Data System (ADS)
Barengolts, Yu A.; Beril, S. I.; Barengolts, S. A.
2018-01-01
A generalized formula is derived for the electron emission current as a function of temperature, field, and electron work function in a metal-dielectric system that takes account of the quantum nature of the image forces. In deriving the formula, the Fermi-Dirac distribution for electrons in a metal and the quantum potential of the image obtained in the context of electron polaron theory are used.
Self-equilibration of the radius distribution in self-catalyzed GaAs nanowires
NASA Astrophysics Data System (ADS)
Leshchenko, E. D.; Turchina, M. A.; Dubrovskii, V. G.
2016-08-01
This work addresses the evolution of the radius distribution function in self-catalyzed vapor-liquid-solid growth of GaAs nanowires from Ga droplets. Different growth regimes are analyzed depending on the V/III flux ratio. In particular, we find a very unusual self-equilibration regime in which the radius distribution narrows toward a certain stationary radius regardless of the initial size distribution of the Ga droplets. This requires that the arsenic vapor flux be larger than the gallium one and that the V/III influx imbalance be compensated by a diffusion flux of gallium adatoms. An approximate analytical solution is compared to the numerical radius distribution obtained by solving the corresponding Fokker-Planck equation with an implicit difference scheme.
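The last step, solving a one-dimensional Fokker-Planck equation with an implicit difference scheme, can be sketched generically. The drift and diffusion coefficients below are invented placeholders with an attractor radius, not the paper's GaAs growth coefficients.

```python
import numpy as np

# Implicit-Euler (unconditionally stable) discretization of a 1D Fokker-Planck
# equation  df/dt = -d(A f)/dr + d^2(B f)/dr^2  on a uniform radius grid.
N, dr, dt = 200, 0.05, 0.01
r = dr * np.arange(1, N + 1)
A = 0.1 * (5.0 - r)              # illustrative drift with attractor at r = 5
B = np.full(N, 0.05)             # illustrative constant diffusion

L = np.zeros((N, N))             # central-difference generator: df/dt = L f
for i in range(1, N - 1):
    L[i, i - 1] = A[i - 1] / (2 * dr) + B[i - 1] / dr**2
    L[i, i]     = -2.0 * B[i] / dr**2
    L[i, i + 1] = -A[i + 1] / (2 * dr) + B[i + 1] / dr**2

step = np.linalg.inv(np.eye(N) - dt * L)     # (I - dt L)^{-1}, factored once
f = np.exp(-0.5 * ((r - 3.0) / 0.5) ** 2)    # broad initial radius distribution
f /= f.sum() * dr
for _ in range(2000):
    f = step @ f                 # one implicit time step per iteration

peak_radius = r[f.argmax()]      # distribution drifts toward the stationary radius
```

With the drift sign chosen as above, the peak of the distribution migrates from the initial radius toward the stationary one, which is the qualitative behaviour of the self-equilibration regime described in the abstract.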
NASA Astrophysics Data System (ADS)
Cui, Li; Wang, Wenjun; Ding, Guowen; Chen, Ke; Zhao, Junming; Jiang, Tian; Zhu, Bo; Feng, Yijun
2017-11-01
In this paper, we design a bi-functional metasurface with different spatial distributions of reflection phase responses depending on the incident polarization. The metasurface, with a thickness of only 0.067 λ0 (λ0 is the working wavelength), is constructed from unit cells composed of two orthogonal I-shaped metallic structures, and the reflection phase for x- and y-linearly polarized incidence can be independently controlled by the geometric parameters. The metasurface can work as a flat parabolic reflector antenna with a maximum gain reaching about 22 dBi around 9.5 GHz when it is illuminated by the x-polarized feed source of an offset open-ended waveguide antenna. Meanwhile, designed with randomly distributed reflection phases, the proposed metasurface can behave as an electromagnetic (EM) diffusion-like surface, capable of suppressing the backward scattering over a broad band from 8.5 GHz to 14 GHz for y-polarized incidence. By this strategy of EM functionality integration, a metasurface reflector antenna equipped with a stealth technique, achieving simultaneously high gain and low backward scattering, is obtained. Finally, experiments have been carried out to demonstrate this design principle, and they agree with the simulation results. The proposed metasurface could offer a promising route for designing EM devices with polarization-dependent multi-functionalities.
Method of simulation and visualization of FDG metabolism based on VHP image
NASA Astrophysics Data System (ADS)
Cui, Yunfeng; Bai, Jing
2005-04-01
FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for early diagnosis and treatment of malignant tumors and functional disease. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through the simulation and visualization of the dynamic 18F distribution process based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, corresponding values are assigned to the segmented VHP image; a set of dynamic images is thus derived to show the 18F distribution in the tissues of interest for the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, incorporating principal interaction functions. Compared with original PET images, our visualization result presents higher resolution, owing to the high resolution of the VHP image data, and shows the 18F distribution process dynamically. The results of this work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
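The assignment step (giving each segmented tissue its activity value at every sample time) can be sketched as a simple lookup. The labels, times, and curve values below are invented for illustration; they are not the actual VHP segmentation or measured TTACs.

```python
import numpy as np

# Toy segmentation: 0 = background, 1 = tissue A, 2 = tissue B
seg = np.array([[0, 1, 1],
                [0, 2, 2],
                [0, 0, 2]])

times = [0, 10, 20]                      # assumed sampling schedule (minutes)
ttac = {0: [0.0, 0.0, 0.0],              # background stays cold
        1: [0.2, 0.8, 0.5],              # invented tissue-A time-activity curve
        2: [0.1, 0.4, 0.9]}              # invented tissue-B time-activity curve

lut = np.array([ttac[label] for label in sorted(ttac)])  # label -> per-frame activity
frames = np.moveaxis(lut[seg], -1, 0)    # dynamic image series, shape (time, y, x)
```

Each `frames[i]` is one simulated image of the tracer distribution at `times[i]`; stacking them gives the dynamic series that is then rendered in 2D or 3D.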
High frequency fishbone driven by passing energetic ions in tokamak plasmas
NASA Astrophysics Data System (ADS)
Wang, Feng; Yu, L. M.; Fu, G. Y.; Shen, Wei
2017-05-01
High frequency fishbone instability driven by passing energetic ions was first reported in the Princeton beta experiment with tangential neutral-beam-injection (Heidbrink et al 1986 Phys. Rev. Lett. 57 835-8). It could play an important role for ITER-like burning plasmas, where α particles are mostly passing particles. In this work, a generalized energetic ion distribution function and the finite drift orbit width effect are considered to improve the theoretical model for the passing-particle-driven fishbone instability. For purely passing energetic ions with zero drift orbit width, the kinetic energy δW_k is derived analytically. The derived analytic expression is more accurate than the result of previous work (Wang 2001 Phys. Rev. Lett. 86 5286-8). For a generalized energetic ion distribution function, the fishbone dispersion relation is derived and solved numerically. Numerical results show that broad and off-axis beam density profiles can significantly increase the beam ion beta threshold β_c for instability and decrease the mode frequency.
Multistage degradation modeling for BLDC motor based on Wiener process
NASA Astrophysics Data System (ADS)
Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai
2018-05-01
Brushless DC motors are widely used, and their working temperatures, regarded here as the degradation signal, evolve nonlinearly and in multiple stages. It is therefore necessary to establish a nonlinear degradation model. Our study is based on accelerated degradation data, namely the motors' working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A Gaussian (normal weighted-average) filter was used to improve the parameter estimates, and the likelihood function was maximized numerically with the simplex method. The modeling results show that the degradation mechanism changes during the degradation of the high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model and by comparing Q-Q plots of the residuals. Finally, motor life predictions are obtained from the life distributions at different times calculated by the multistage model.
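A minimal simulation of the idea, assuming a logistic transition function that blends two drift stages of a Wiener process (the paper's actual transition function and fitted rates are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth_transition(t, tau, width):
    """Logistic transition function blending two degradation stages (assumed form)."""
    return 1.0 / (1.0 + np.exp(-(t - tau) / width))

# Two-stage Wiener degradation path: drift switches from mu1 to mu2 around t = tau.
mu1, mu2, sigma, tau, width = 0.05, 0.25, 0.05, 50.0, 5.0
dt = 1.0
t = np.arange(0.0, 100.0, dt)
drift = mu1 + (mu2 - mu1) * smooth_transition(t, tau, width)
path = np.cumsum(drift * dt + sigma * np.sqrt(dt) * rng.normal(size=t.size))
```

The simulated path rises slowly in the first stage and much faster after the transition, which is the kind of mechanism change the multistage model is built to capture; fitting would then estimate mu1, mu2, sigma, and the transition parameters from observed temperature paths.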
From Bethe–Salpeter Wave functions to Generalised Parton Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2016-06-06
We review recent works on the modelling of Generalised Parton Distributions (GPDs) within the Dyson-Schwinger formalism. We highlight how covariant computations using the impulse approximation allow one to fulfil most of the theoretical constraints on the GPDs. Specific attention is paid to chiral properties, especially the so-called soft pion theorem and its link with the axial-vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Beyond-impulse-approximation computations are reviewed in the forward case. Finally, we stress the advantages of the overlap of light-cone wave functions and possible ways to construct covariant GPD models within this framework, in a two-body approximation.
Mixtures of amino-acid based ionic liquids and water.
Chaban, Vitaly V; Fileti, Eudes Eterno
2015-09-01
New ionic liquids (ILs) involving increasing numbers of organic and inorganic ions are continuously being reported. We recently developed a new force field; in the present work, we applied that force field to investigate the structural properties of a few novel imidazolium-based ILs in aqueous mixtures via molecular dynamics (MD) simulations. Using cluster analysis, radial distribution functions, and spatial distribution functions, we argue that organic ions (imidazolium, deprotonated alanine, deprotonated methionine, deprotonated tryptophan) are well dispersed in aqueous media, irrespective of the IL content. Aqueous dispersions exhibit desirable properties for chemical engineering. The ILs exist as ion pairs in relatively dilute aqueous mixtures (10 mol%), while more concentrated mixtures feature a certain amount of larger ionic aggregates.
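One of the analyses mentioned above, the radial distribution function, can be sketched for a single frame with a cubic periodic box. The random coordinates below are an ideal-gas test configuration (for which g(r) ≈ 1), not an actual IL/water trajectory.

```python
import numpy as np

rng = np.random.default_rng(3)

def rdf(coords, box, r_max, n_bins):
    """Radial distribution function g(r) for one frame in a cubic periodic box."""
    n = len(coords)
    diffs = coords[None, :, :] - coords[:, None, :]
    diffs -= box * np.round(diffs / box)              # minimum-image convention
    d = np.sqrt((diffs ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r**2 * (edges[1] - edges[0])  # ideal shell volume per bin
    rho = n / box**3
    return r, hist / (shell * rho * n / 2.0)          # normalize by ideal-gas counts

coords = rng.uniform(0.0, 10.0, size=(500, 3))        # ideal-gas test configuration
r, g = rdf(coords, box=10.0, r_max=4.0, n_bins=40)
```

In an actual MD analysis the same function would be averaged over frames and computed for specific atom pairs (e.g. cation-anion or ion-water), where peaks in g(r) reveal the ion pairing and aggregation discussed in the abstract.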
The Unquiet State of Violent Relaxation
NASA Astrophysics Data System (ADS)
Henriksen, Richard
2005-08-01
In 1967 Lynden-Bell presented a statistical mechanical theory for the relaxation of collisionless systems. Since then this theory has been studied numerically and theoretically by many authors. Nakamura in 2000 gave an alternate theory that differed from that of Lynden-Bell by predicting a Gaussian equilibrium distribution function rather than Fermi-Dirac. More recently Henriksen in 2004 has used a coarse-graining technique on cosmological infall systems that also predicts a Gaussian equilibrium distribution function. These relaxed states are thought to occur from the centre of the system outwards. Simulations of cosmological cold dark-matter halos however persist in finding central density cusps (the NFW profile), which are inconsistent with the predicted distribution functions and perhaps with the observations of some galaxies. Some numerical studies (e.g. Merrall & Henriksen 2003) that attempt to measure the distribution function of dark matter do find Gaussian functions, provided that the initial asymmetry is not too great. Moreover, recent work at Queen's reported here by MacMillan suggests that it is the growth of asymmetry during the infall that produces the cusped behaviour. Put briefly, the essential physics of dark-matter relaxation remains "obscure", as does the validity of the theoretical predictions. "Violent virialization" occurs rapidly, well before subscale relaxation, but the scale at which the relaxation stops (and why) remains unclear. I will present some results that argue for wave-particle relaxation (Landau damping, as frequently suggested by Kandrup), and in addition I will suggest that the evolution of isolated systems is very different from that of systems constantly disturbed by infall. Isolated systems may become trapped in an unrelaxed state by the development or existence of multipolar internal structure. Nevertheless a suitable coarse-graining of the system may restore the predicted distribution functions.
NASA Astrophysics Data System (ADS)
Selakovic, S.; Cozzoli, F.; Leuven, J.; Van Braeckel, A.; Speybroeck, J.; Kleinhans, M. G.; Bouma, T.
2017-12-01
Interactions between organisms and landscape-forming processes play an important role in the evolution of coastal landscapes. In particular, biota have a strong potential to interact with important geomorphological processes such as sediment dynamics. Although many studies have worked towards quantifying the impact of different species groups on sediment dynamics, information has been gathered on an ad hoc basis. Depending on species' traits and distribution, functional groups of eco-engineering species may have differential effects on sediment deposition and erosion. We hypothesize that the spatial distributions of sediment-stabilizing and destabilizing species across the channel and along the whole salinity gradient of an estuary partly determine the planform shape and channel-shoal morphology of estuaries. To test this hypothesis, we analyze vegetation and macrobenthic data, taking the Scheldt river-estuarine continuum as a model ecosystem. We identify species traits with important effects on sediment dynamics and use them to form functional groups. Using linearized mixed modelling, we are able to accurately describe the distributions of the different functional groups. We observe a clear distinction of dominant ecosystem-engineering functional groups and their potential effects on the sediment in the river-estuarine continuum. First results for the longitudinal cross-section show the strongest effects of stabilizing plant species in the riverine part of the continuum and of sediment bioturbators in the weakly polyhaline part. The distribution of functional groups in transverse cross-sections shows a dominant stabilizing effect in the supratidal zone and a dominant destabilizing effect in the lower intertidal zone. This analysis offers a new and more general conceptualization of the distributions of sediment-stabilizing and destabilizing functional groups and their potential impacts on sediment dynamics, shoal patterns, and planform shapes in the river-estuarine continuum.
We intend to test this in future modelling and experiments.
Length distributions of nanowires: Effects of surface diffusion versus nucleation delay
NASA Astrophysics Data System (ADS)
Dubrovskii, Vladimir G.
2017-04-01
It is often thought that the ensembles of semiconductor nanowires are uniform in length due to the initial organization of the growth seeds such as lithographically defined droplets or holes in the substrate. However, several recent works have already demonstrated that most nanowire length distributions are broader than Poissonian. Herein, we consider theoretically the length distributions of non-interacting nanowires that grow by the material collection from the entire length of their sidewalls and with a delay of nucleation of the very first nanowire monolayer. The obtained analytic length distribution is controlled by two parameters that describe the strength of surface diffusion and the nanowire nucleation rate. We show how the distribution changes from the symmetrical Polya shape without the nucleation delay to a much broader and asymmetrical one for longer delays. In the continuum limit (for tall enough nanowires), the length distribution is given by a power law times an incomplete gamma-function. We discuss interesting scaling properties of this solution and give a recipe for analyzing and tailoring the experimental length histograms of nanowires which should work for a wide range of material systems and growth conditions.
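A toy Monte Carlo of the two ingredients discussed above (Poissonian monolayer attachment plus an exponential nucleation delay) illustrates why the delay broadens the length distribution beyond the Poisson limit. The rates are invented, and this sketch is not the paper's analytic solution.

```python
import numpy as np

rng = np.random.default_rng(4)
n_wires, t_total, growth_rate = 20000, 100.0, 1.0

def lengths(mean_delay):
    """Nanowire lengths (in monolayers) after t_total given a nucleation delay."""
    delay = (rng.exponential(mean_delay, n_wires) if mean_delay > 0
             else np.zeros(n_wires))
    grow_time = np.clip(t_total - delay, 0.0, None)
    # Poissonian monolayer attachment during the available growth time
    return rng.poisson(growth_rate * grow_time)

no_delay = lengths(0.0)      # Poisson-limited ensemble: std ~ sqrt(mean)
with_delay = lengths(20.0)   # long nucleation delay: markedly broader ensemble
```

Comparing the two histograms shows the transition from the narrow, symmetric Poisson-like shape to the broader, asymmetric distribution that the abstract attributes to nucleation delay.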
Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette
2011-09-01
In this work a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and water phases without separation. For validation of the method, the distribution coefficients at pH = 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work; for all substances, the distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also provide new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.
The Cluster Variation Method: A Primer for Neuroscientists.
Maren, Alianna J
2016-09-30
Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true for both continuous-variable and discrete-variable systems in odd prime dimensions, two cases treated on entirely the same footing. Noting that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function in separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
Towards a Full Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2015-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is met only under specific conditions, such as wavefield diffusivity and equipartitioning or zero attenuation, that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for the noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.
Particle detection and non-detection in a quantum time of arrival measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sombillo, Denny Lane B., E-mail: dsombillo@nip.upd.edu.ph; Galapon, Eric A.
2016-01-15
The standard time-of-arrival distribution cannot reproduce both the temporal and the spatial profile of the modulus squared of the time-evolved wave function for an arbitrary initial state. In particular, the time-of-arrival distribution gives a non-vanishing probability even if the wave function is zero at a given point for all values of time. This poses a problem in the standard formulation of quantum mechanics where one quantizes a classical observable and uses its spectral resolution to calculate the corresponding distribution. In this work, we show that the modulus squared of the time-evolved wave function is in fact contained in one of the degenerate eigenfunctions of the quantized time-of-arrival operator. This generalizes our understanding of the quantum arrival phenomenon where particle detection is not a necessary requirement, thereby providing a direct link between time-of-arrival quantization and the outcomes of the two-slit experiment. -- Highlights: •The time-evolved position density is contained in the standard TOA distribution. •Particle may quantum mechanically arrive at a given point without being detected. •The eigenstates of the standard TOA operator are linked to the two-slit experiment.
Measurement of argon neutral velocity distribution functions near an absorbing boundary in a plasma
NASA Astrophysics Data System (ADS)
Short, Zachary; Thompson, Derek; Good, Timothy; Scime, Earl
2016-10-01
Neutral particle distributions are critical to the study of plasma boundary interactions, where ion-neutral collisions, e.g. via charge exchange, may modify energetic particle populations impacting the boundary surface. Neutral particle behavior at absorbing boundaries thus underlies a number of important plasma physics issues, such as wall loading in fusion devices and anomalous erosion in Hall thruster channels. Neutral velocity distribution functions (NVDFs) are measured using laser-induced fluorescence (LIF). Our LIF scheme excites the 1s4 non-metastable state of neutral argon with 667.913 nm photons. The subsequent decay emission at 750.590 nm is recorded synchronously with injection laser frequency. Measurements are performed near a grounded boundary immersed in a cylindrical helicon plasma, with the boundary plate oriented at an oblique angle to the magnetic field. NVDFs are recorded in multiple velocity dimensions and in a three-dimensional volume, enabling point-to-point comparisons with NVDF predictions from particle-in-cell models as well as comparisons with ion velocity distribution function measurements obtained in the same regions through Ar-II LIF. This work is supported by US National Science Foundation Grant Number PHYS-1360278.
NASA Astrophysics Data System (ADS)
Sulyman, Alex; Chrystal, Colin; Haskey, Shaun; Burrell, Keith; Grierson, Brian
2017-10-01
The possible observation of non-Maxwellian ion distribution functions in the pedestal of DIII-D will be investigated with a synthetic diagnostic that accounts for the effect of finite neutral beam size. Ion distribution functions in tokamak plasmas are typically assumed to be Maxwellian; however, non-Gaussian features observed in impurity charge exchange spectra have challenged this assumption. Two possible explanations for these observations are spatial averaging over a finite beam size and a local ion distribution that is non-Maxwellian. Non-Maxwellian ion distribution functions could be driven by orbit-loss effects in the edge of the plasma, with implications for momentum transport and intrinsic rotation. To investigate the potential effect of finite beam size on the observed spectra, a synthetic diagnostic has been created that uses FIDAsim to model beam and halo neutral density. Finite beam size effects are investigated for vertical and tangential views in the core and pedestal region with varying gradient scale lengths. Work supported in part by US DoE under the Science Undergraduate Laboratory Internship (SULI) program, DE-FC02-04ER54698, and DE-AC02-09CH11466.
A comparison of decentralized, distributed, and centralized vibro-acoustic control.
Frampton, Kenneth D; Baumann, Oliver N; Gardonio, Paolo
2010-11-01
Direct velocity feedback control of structures is well known to increase structural damping and thus reduce vibration. In multi-channel systems the way in which the velocity signals are used to inform the actuators ranges from decentralized control, through distributed or clustered control to fully centralized control. The objective of distributed controllers is to exploit the anticipated performance advantage of the centralized control while maintaining the scalability, ease of implementation, and robustness of decentralized control. However, and in seeming contradiction, some investigations have concluded that decentralized control performs as well as distributed and centralized control, while other results have indicated that distributed control has significant performance advantages over decentralized control. The purpose of this work is to explain this seeming contradiction in results, to explore the effectiveness of decentralized, distributed, and centralized vibro-acoustic control, and to expand the concept of distributed control to include the distribution of the optimization process and the cost function employed.
Asymptotic behavior of the daily increment distribution of the IPC, the Mexican Stock Market Index
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-02-01
In this work, a statistical analysis of the distribution of daily fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of the IPC covering the 13-year period 04/19/1990 - 08/21/2003 was analyzed and the cumulative probability distribution of its daily logarithmic variations studied. Results showed that the cumulative distribution function for extreme variations can be described by a Pareto-Lévy model with shape parameters alpha = 3.634 ± 0.272 and alpha = 3.540 ± 0.278 for its positive and negative tails, respectively. This result is consistent with previous studies, where it has been found that 2.5 < alpha < 4 for other financial markets worldwide.
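A power-law tail exponent of this kind can be estimated with the classical Hill estimator, which uses the k largest order statistics. The sketch below is a generic illustration on synthetic Pareto data, not the estimation procedure used in this paper; the function name, the choice of k, and the sample parameters are assumptions.

```python
import numpy as np

def hill_estimator(returns, k=100):
    """Estimate the Pareto tail exponent alpha from the k largest |values|.

    Classical Hill estimator: alpha = k / sum(log(x_i / x_(k+1))),
    where x_1 >= ... >= x_k are the k largest order statistics.
    """
    x = np.sort(np.abs(np.asarray(returns, dtype=float)))[::-1]
    tail = x[:k]          # k largest values
    threshold = x[k]      # (k+1)-th largest value
    return k / np.sum(np.log(tail / threshold))

# Synthetic Pareto-distributed "fluctuations" with true alpha = 3.5
rng = np.random.default_rng(0)
sample = rng.pareto(3.5, size=20000) + 1.0  # classical Pareto, x_min = 1
alpha_hat = hill_estimator(sample, k=500)
print(f"estimated tail exponent: {alpha_hat:.2f}")
```

The estimate should land near the true value of 3.5, consistent with the 2.5 < alpha < 4 range quoted for financial markets; the choice of k trades bias against variance.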
NASA Astrophysics Data System (ADS)
Masago, Akira; Fukushima, Tetsuya; Sato, Kazunori; Katayama-Yoshida, Hiroshi
2015-03-01
Eu-doped GaN has attracted much attention because its red luminescence raises the prospect of monolithic full-color LEDs that share common substrates, electrodes, and operating bias voltages. Toward implementing multifunctional activity in these luminescent materials via spinodal nano-structures, we investigate atomic configurations and magnetic structures of GaN crystals codoped with Eu, Mg, Si, O, and/or vacancies using density functional theory (DFT) calculations. Our calculations show that clustered impurity distributions are energetically more favorable than the homogeneous distribution. Moreover, analyses of the formation energy and binding energy suggest that the clustered distributions form spontaneously by nano-spinodal decomposition. Although the host matrix has no magnetic moment, the clusters carry finite magnetic moments, with Zener's p-f exchange interaction acting between the Eu f-states and the nearby N p-states.
On the effect of velocity gradients on the depth of correlation in μPIV
NASA Astrophysics Data System (ADS)
Mustin, B.; Stoeber, B.
2016-03-01
The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used.
Li-Ion Localization and Energetics as a Function of Anode Structure.
McNutt, Nicholas W; McDonnell, Marshall; Rios, Orlando; Keffer, David J
2017-03-01
In this work, we study the effect of carbon composite anode structure on the localization and energetics of Li-ions. A computational molecular dynamics study is combined with experimental results from neutron scattering experiments to understand the effect of composite density, crystallite size, volume fraction of crystalline carbon, and ion loading on the nature of ion storage in novel, lignin-derived composite materials. In a recent work, we demonstrated that these carbon composites display a fundamentally different mechanism for Li-ion storage than traditional graphitic anodes. The edges of the crystalline and amorphous fragments of aromatic carbon that exist in these composites are terminated by hydrogen atoms, which play a crucial role in adsorption. In this work, we demonstrate how differences in composite structure due to changes in the processing conditions alter the type and extent of the interface between the amorphous and crystalline domains, thus impacting the nature of Li-ion storage. The effects of structural properties are evaluated using a suite of pair distribution functions as well as an original technique to extract archetypal structures, in the form of three-dimensional atomic density distributions, from highly disordered systems. The energetics of Li-ion binding are understood by relating changes in the energy and charge distributions to changes in structural properties. The distribution of Li-ion energies reveals that some structures lead to greater chemisorption, while others have greater physisorption. Carbon composites with a high volume fraction of small crystallites demonstrate the highest ion storage capacity because of the high interfacial area between the crystalline and amorphous domains. At these interfaces, stable H atoms, terminating the graphitic crystallites, provide favorable sites for reversible Li adsorption.
NASA Astrophysics Data System (ADS)
Huang, D.; Liu, Y.
2014-12-01
The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined using long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as the observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in autoconversion rate, whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with mean relative bias of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest, and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window size, suggesting the increasing importance of subgrid variability with increasing grid sizes.
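The upscaling step described here, averaging a nonlinear process rate over an assumed subgrid PDF matched to the observed mean and standard deviation, can be sketched as follows. The power-law exponent, the Gamma-distributed "truth", and all numerical values are illustrative assumptions, not the ARM retrievals or the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonlinear "autoconversion-like" rate: R(q) ~ q**2.47
# (exponent chosen for illustration only)
p = 2.47
rate = lambda q: q**p

# Assumed "true" subgrid liquid water field: Gamma with mean 0.3, std 0.2
mean, std = 0.3, 0.2
shape, scale = (mean / std) ** 2, std**2 / mean
q_true = rng.gamma(shape, scale, size=200_000)

grid_mean_rate = rate(q_true).mean()  # exact grid-average rate
plug_in_rate = rate(mean)             # rate of the mean (no subgrid PDF at all)

# Upscale with an assumed lognormal PDF matched to the same mean and std
sigma2 = np.log(1 + (std / mean) ** 2)       # lognormal moment matching
mu = np.log(mean) - sigma2 / 2
q_ln = rng.lognormal(mu, np.sqrt(sigma2), size=200_000)
lognormal_rate = rate(q_ln).mean()

print(plug_in_rate / grid_mean_rate)    # < 1: ignoring variability biases low
print(lognormal_rate / grid_mean_rate)  # near 1 when the assumed PDF fits well
```

Because the rate is convex, plugging in the grid mean underestimates the true grid-average rate (Jensen's inequality), while a moment-matched PDF recovers most of it; the residual bias is exactly the PDF-shape sensitivity the abstract quantifies.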
Exercise of the SSM/PMAD Breadboard. [Space Station Module/Power Management And Distribution
NASA Technical Reports Server (NTRS)
Walls, Bryan
1989-01-01
The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard is a test facility designed for advanced development of space power automation. Originally designed for 20-kHz power, the system is being converted to work with direct current (dc). Power levels are on a par with those expected for a Space Station module. Some of the strengths and weaknesses of the SSM/PMAD system in design and function are examined, and the future directions foreseen for the system are outlined.
Distribution and determination of cholinesterases in mammals
Holmstedt, Bo
1971-01-01
This paper reviews the distribution of cholinesterases in the central nervous system, the ganglia, the striated muscle, and the blood of mammals, and discusses the correlation between the histochemical localization and the function of neuronal cholinesterase. Different methods for the determination of cholinesterase levels are reviewed, with particular reference to their practical value for field work. The Warburg method and the Tintometer and Acholest colorimetric methods are compared on the basis of cholinesterase levels determined in normal persons and in those suffering from parathion intoxication. PMID:4999484
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2000-01-01
The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.
Magnetic Pumping as a Source of Particle Heating and Power-law Distributions in the Solar Wind
NASA Astrophysics Data System (ADS)
Lichko, E.; Egedal, J.; Daughton, W.; Kasper, J.
2017-12-01
Based on the rate of expansion of the solar wind, the plasma should cool rapidly as a function of distance to the Sun. Observations show this is not the case. In this work, a magnetic pumping model is developed as a possible explanation for the heating and the generation of power-law distribution functions observed in the solar wind plasma. Most previous studies in this area focus on the role that the dissipation of turbulent energy on microscopic kinetic scales plays in the overall heating of the plasma. With magnetic pumping, however, particles are energized by the largest-scale turbulent fluctuations, thus bypassing the energy cascade. In contrast to other models, we include the pressure anisotropy term, providing a channel for the large-scale fluctuations to heat the plasma directly. A complete set of coupled differential equations describing the evolution, and energization, of the distribution function is derived, as well as an approximate closed-form solution. Numerical simulations using the VPIC kinetic code are applied to verify the model's analytical predictions. The model is evaluated for a realistic solar wind scenario, in which thermal streaming of particles is important for generating a phase shift between the magnetic perturbations and the pressure anisotropy. In turn, averaged over a pump cycle, the phase shift permits mechanical work to be converted directly to heat in the plasma. The results of this scenario show that magnetic pumping may account for a significant portion of the solar wind energization.
NASA Astrophysics Data System (ADS)
Marsch, Eckart; Yao, Shuo; Tu, Chuanyi; Schwenn, Rainer
This work presents in-situ solar wind observations of three magnetic clouds containing cold, high-density material, observed by Helios 2 at 0.3 AU on 9 May 1979, at 0.5 AU on 30 March 1976, and at 0.7 AU on 24 December 1978. In the cold high-density regions embedded in the ICMEs we find that (1) the number density of protons is higher than in other regions inside the magnetic cloud (MC), (2) He+ is possibly present, (3) the thermal velocity distribution functions (VDFs) are more isotropic and appear to be colder than in the other regions of the MC, and the proton temperature is lower than that of the ambient plasma, and (4) the associated magnetic field configuration can for all three MC events be identified as a flux rope. This cold high-density region is located at the polarity inversion line in the center of the bipolar structure of the MC magnetic field (consistent with previous solar observations showing that a prominence lies over the neutral line of the related bipolar solar magnetic field). It is the first time that prominence ejecta are identified by both the plasma and magnetic field features inside 1 AU, and that thermal ion velocity distribution functions are used to investigate the microstate of the prominence material. Overall, our in situ observations are consistent with the three-part CME models.
Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan
2016-10-01
A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function, which includes the rotational velocity of the particle, is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity of the particle and that of the rotational velocity. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion: one relates to the translational velocity and the other to the rotational velocity. Accordingly, the distribution function is also decoupled, and the evolution equation splits into an evolution equation for the translational velocity and one for the rotational velocity, which evolve separately. Because the lattice Boltzmann models used in the proposed scheme are constructed via the Hermite expansion, it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree very well with the analytical solutions.
Assessing a Tornado Climatology from Global Tornado Intensity Distributions.
NASA Astrophysics Data System (ADS)
Feuerstein, Bernold; Dotzek, Nikolai; Grieser, Jürgen
2005-02-01
Recent work demonstrated that the shape of tornado intensity distributions from various regions worldwide is well described by Weibull functions. This statistical modeling revealed a strong correlation between the fit parameters c for shape and b for scale regardless of the data source. In the present work it is shown that the quality of the Weibull fits is optimized if only tornado reports of F1 and higher intensity are used and that the c-b correlation does indeed reflect a universal feature of the observed tornado intensity distributions. For regions with likely supercell tornado dominance, this feature is the number ratio of F4 to F3 tornado reports R(F4/F3) = 0.238. The c-b diagram for the Weibull shape and scale parameters is used as a climatological chart, which allows different types of tornado climatology to be distinguished, presumably arising from supercell versus nonsupercell tornadogenesis. Assuming temporal invariance of the climatology and using a detection efficiency function for tornado observations, a stationary climatological probability distribution from large tornado records (U.S. decadal data 1950-99) is extracted. This can be used for risk assessment, comparative studies on tornado intensity distributions worldwide, and estimates of the degree of underreporting for areas with poor databases. For the 1990s U.S. data, a likely tornado underreporting of the weak events (F0, F1) by a factor of 2 can be diagnosed, as well as asymptotic climatological c,b values of c = 1.79 and b = 2.13, to which a convergence in the 1950-99 U.S. decadal data is verified.
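The Weibull description of intensity distributions can be reproduced on synthetic data with standard tools. The sketch below assumes the asymptotic parameters quoted above (c = 1.79, b = 2.13) as "true" values, draws a synthetic sample, refits, and reports the F4-to-F3 ratio implied by the fitted distribution; it is a generic scipy illustration, not the authors' fitting procedure.

```python
from scipy.stats import weibull_min

# Illustrative only: synthetic "intensities" from a Weibull with the
# asymptotic US parameters reported in the abstract
c_true, b_true = 1.79, 2.13
sample = weibull_min.rvs(c_true, scale=b_true, size=50_000, random_state=2)

# Refit shape c and scale b, holding the location parameter at zero
c_fit, loc, b_fit = weibull_min.fit(sample, floc=0)
print(f"shape c = {c_fit:.2f}, scale b = {b_fit:.2f}")

# Ratio of F4 to F3 probability mass implied by the fitted Weibull:
# P(4 <= F < 5) / P(3 <= F < 4)
cdf = lambda x: weibull_min.cdf(x, c_fit, scale=b_fit)
ratio = (cdf(5) - cdf(4)) / (cdf(4) - cdf(3))
print(f"implied F4/F3 ratio: {ratio:.3f}")
```

Treating the discrete F-scale classes as unit intervals of a continuous intensity variable is itself a modeling assumption, which is why the implied ratio need not equal the empirical R(F4/F3) = 0.238 quoted for supercell-dominated regions.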
NASA Astrophysics Data System (ADS)
Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.
2017-12-01
Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the relating SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of random age sampling scheme might only hold in the saturated region of homogeneous catchments resulting in an exponential TTD. This assumption is however violated when the vadose zone is included as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. 
We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
Evolution of Linux operating system network
NASA Astrophysics Data System (ADS)
Xiao, Guanping; Zheng, Zheng; Wang, Haoqin
2017-01-01
Linux operating system (LOS) is a sophisticated man-made system and one of the most ubiquitous operating systems. However, there is little research on the structure and functionality evolution of LOS from the perspective of networks. In this paper, we investigate the evolution of the LOS network. 62 major releases of LOS ranging from versions 1.0 to 4.1 are modeled as directed networks in which functions are denoted by nodes and function calls are denoted by edges. It is found that the size of the LOS network grows almost linearly, while the clustering coefficient monotonically decays. The degree distributions are almost the same: the out-degree follows an exponential distribution while both the in-degree and the undirected degree follow power-law distributions. We further explore the functionality evolution of the LOS network. It is observed that the evolution of functional modules is shown as a sequence of seven events (changes) succeeding each other, including continuing, growth, contraction, birth, splitting, death and merging events. By means of a statistical analysis of these events in the top 4 largest components (i.e., arch, drivers, fs and net), it is shown that continuing, growth and contraction events account for more than 95% of all events. Our work contributes to a better understanding and description of the dynamics of LOS evolution.
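The network model used here, functions as nodes and calls as directed edges, can be sketched with networkx on a toy call list. The function names below are invented for illustration and are not taken from the Linux source tree.

```python
import networkx as nx

# Toy call graph (hypothetical function names): each pair (caller, callee)
# becomes a directed edge, as in the paper's network model
calls = [
    ("main", "parse_args"), ("main", "init"), ("main", "run"),
    ("run", "schedule"), ("run", "handle_irq"),
    ("schedule", "pick_next"), ("handle_irq", "ack_irq"),
    ("init", "alloc"), ("parse_args", "alloc"),
]
G = nx.DiGraph(calls)

print(G.number_of_nodes(), G.number_of_edges())
print(dict(G.in_degree()))   # how often each function is called
print(dict(G.out_degree()))  # how many distinct functions each one calls
print(nx.average_clustering(G.to_undirected()))
```

On the real kernel graphs one would histogram these degree sequences per release to see the exponential out-degree and power-law in-degree the abstract reports, and track the clustering coefficient across versions.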
Focusing on attention: the effects of working memory capacity and load on selective attention.
Ahmed, Lubna; de Fockert, Jan W
2012-01-01
Working memory (WM) is imperative for effective selective attention. Distractibility is greater under conditions of high (vs. low) concurrent working memory load (WML), and in individuals with low (vs. high) working memory capacity (WMC). In the current experiments, we recorded the flanker task performance of individuals with high and low WMC during low and high WML, to investigate the combined effect of WML and WMC on selective attention. In Experiment 1, distractibility from a distractor at a fixed distance from the target was greater when either WML was high or WMC was low, but surprisingly smaller when both WML was high and WMC low. Thus we observed an inverted-U relationship between reductions in WM resources and distractibility. In Experiment 2, we mapped the distribution of spatial attention as a function of WMC and WML, by recording distractibility across several target-to-distractor distances. The pattern of distractor effects across the target-to-distractor distances demonstrated that the distribution of the attentional window becomes dispersed as WM resources are limited. The attentional window was more spread out under high compared to low WML, and for low compared to high WMC individuals, and even more so when the two factors co-occurred (i.e., under high WML in low WMC individuals). The inverted-U pattern of distractibility effects in Experiment 1, replicated in Experiment 2, can thus be explained by differences in the spread of the attentional window as a function of WM resource availability. The current findings show that limitations in WM resources, due to either WML or individual differences in WMC, affect the spatial distribution of attention. The difference in attentional constraining between high and low WMC individuals demonstrated in the current experiments helps characterise the nature of previously established associations between WMC and controlled attention.
Heterogeneous mixture distributions for multi-source extreme rainfall
NASA Astrophysics Data System (ADS)
Ouarda, T.; Shin, J.; Lee, T. S.
2013-12-01
Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture to components of a single type; it might be beneficial to characterize the statistical behavior of hydro-meteorological variables using heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we assess the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of HTM distributions, a meta-heuristic algorithm (Genetic Algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) presents the best fit and shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. It is shown that extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics. Results indicate that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
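A heterogeneous mixture likelihood of this kind can be maximized numerically. The sketch below fits a Gamma-Gumbel (EV1) mixture to synthetic data with a plain Nelder-Mead optimizer rather than the genetic algorithm used in the paper; the component parameters, weights, and starting guesses are all illustrative assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Synthetic "annual maxima" drawn from a known Gamma + Gumbel (EV1) mixture
w_true = 0.4
data = np.concatenate([
    stats.gamma.rvs(a=4.0, scale=10.0, size=int(2000 * w_true), random_state=4),
    stats.gumbel_r.rvs(loc=120.0, scale=15.0,
                       size=int(2000 * (1 - w_true)), random_state=5),
])

def neg_loglik(theta):
    """Negative log-likelihood of the Gamma-EV1 heterogeneous mixture."""
    w, a, sc, loc, b = theta
    if not (0 < w < 1) or a <= 0 or sc <= 0 or b <= 0:
        return np.inf  # reject invalid parameter values
    pdf = w * stats.gamma.pdf(data, a=a, scale=sc) \
        + (1 - w) * stats.gumbel_r.pdf(data, loc=loc, scale=b)
    return -np.sum(np.log(pdf + 1e-300))

# Plain numerical MLE as a stand-in for the paper's genetic algorithm
res = minimize(neg_loglik, x0=[0.5, 3.0, 12.0, 100.0, 20.0],
               method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-6})
print(res.x)  # [w, gamma shape, gamma scale, gumbel loc, gumbel scale]
```

The fitted log-likelihood from such a run is what feeds the AIC and RMSE comparisons between the HTM, HOM, and single-distribution candidates.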
Valence-quark distribution functions in the kaon and pion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Chang, Lei; Roberts, Craig D.
2016-04-18
We describe expressions for pion and kaon dressed-quark distribution functions that incorporate contributions from gluons which bind quarks into these mesons and hence overcome a flaw of the commonly used handbag approximation. The distributions therewith obtained are purely valence in character, ensuring that dressed quarks carry all the meson's momentum at a characteristic hadronic scale and vanish as (1-x)^2 when Bjorken-x → 1. Comparing such distributions within the pion and kaon, it is apparent that the size of SU(3)-flavor symmetry breaking in meson parton distribution functions is modulated by the flavor dependence of dynamical chiral symmetry breaking. Corrections to these leading-order formulas may be divided into two classes, responsible for shifting dressed-quark momentum into glue and sea quarks. Working with available empirical information, we build an algebraic framework that is capable of expressing the principal impact of both classes of corrections. This enables a realistic comparison with experiment which allows us to identify and highlight basic features of measurable pion and kaon valence-quark distributions. We find that whereas roughly two thirds of the pion's light-front momentum is carried by valence dressed quarks at a characteristic hadronic scale, this fraction rises to 95% in the kaon; evolving distributions with these features to a scale typical of available Drell-Yan data produces a kaon-to-pion ratio of u-quark distributions that is in agreement with the single existing data set, and predicts a u-quark distribution within the pion that agrees with a modern reappraisal of πN Drell-Yan data. Precise new data are essential in order to validate this reappraisal and because a single modest-quality measurement of the kaon-to-pion ratio cannot be considered definitive.
NASA Astrophysics Data System (ADS)
Maciejewska, Beata; Błasiak, Sławomir; Piasecka, Magdalena
This work discusses a mathematical model for laminar-flow heat transfer in a minichannel. The boundary conditions, in the form of temperature distributions on the outer sides of the channel walls, were determined from experimental data. The data were collected from an experimental stand whose essential part is a vertical minichannel 1.7 mm deep, 16 mm wide and 180 mm long, asymmetrically heated by a Haynes-230 alloy plate. Infrared thermography was used to determine temperature changes on the outer side of the minichannel walls. The problem was analysed numerically both with ANSYS CFX software and with dedicated calculation procedures based on the Finite Element Method and Trefftz functions in the thermal boundary layer. The Trefftz functions were used to construct the basis functions, and solutions to the governing differential equations were approximated with a linear combination of these Trefftz-type basis functions. The unknown coefficients of the linear combination were calculated by minimising the functional. The results of the comparative analysis are presented in graphical form and discussed.
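The approximation strategy described above, expanding the solution in basis functions that satisfy the governing equation exactly and fixing the coefficients by minimizing a misfit functional, can be sketched as follows for a 2-D Laplace problem. The harmonic basis and the synthetic boundary data are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Trefftz-type basis for the 2-D Laplace equation: each function is harmonic,
# so the governing equation is satisfied exactly and only the boundary data
# must be fitted.
def basis(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * x - y * y, 2.0 * x * y])

rng = np.random.default_rng(0)
# Hypothetical boundary "measurements" from an exact harmonic field T = x^2 - y^2.
xb, yb = rng.uniform(0.0, 1.0, 200), rng.uniform(0.0, 1.0, 200)
t_meas = xb**2 - yb**2

# Minimizing the quadratic misfit functional reduces to linear least squares.
coef, *_ = np.linalg.lstsq(basis(xb, yb), t_meas, rcond=None)
residual = np.max(np.abs(basis(xb, yb) @ coef - t_meas))
```

Because the exact field lies in the span of the basis, the recovered coefficient of x^2 - y^2 is 1 and the boundary residual vanishes to machine precision.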
In order to ensure that the pumps are successful when installed for the community, working prototypes were tested, analyzed, and modified. The chief concerns of our functional analysis were the flow rate of the pump, the stability/durability of the system, total pumping head, ...
Collaborative Control of Media Playbacks in SCDNs
ERIC Educational Resources Information Center
Fortino, Giancarlo; Russo, Wilma; Palau, Carlos E.
2006-01-01
In this paper we present a CDN-based system, namely the COMODIN system, which is a media on-demand platform for synchronous cooperative work which supports an explicitly-formed cooperative group of distributed users with the following integrated functionalities: request of an archived multimedia session, sharing of its playback, and collaboration…
Nicholas J. Bouskill; Tana E. Wood; Richard Baran; Zhao Hao; Zaw Ye; Ben P. Bowen; Hsiao Chien Lim; Peter S. Nico; Hoi-Ying Holman; Benjamin Gilbert; Whendee L. Silver; Trent R. Northen; Eoin L. Brodie
2016-01-01
Climate model projections for tropical regions show clear perturbation of precipitation patterns leading to increased frequency and severity of drought in some regions. Previous work has shown declining soil moisture to be a strong driver of changes in microbial trait distribution, however...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezzacappa, Anthony; Endeve, Eirik; Hauck, Cory D.
We extend the positivity-preserving method of Zhang & Shu [49] to simulate the advection of neutral particles in phase space using curvilinear coordinates. The ability to utilize these coordinates is important for non-equilibrium transport problems in general relativity and also in science and engineering applications with specific geometries. The method achieves high-order accuracy using Discontinuous Galerkin (DG) discretization of phase space and strong stability-preserving Runge-Kutta (SSP-RK) time integration. Special care is taken to ensure that the method preserves strict bounds for the phase space distribution function f; i.e., f ∈ [0, 1]. The combination of suitable CFL conditions and the use of the high-order limiter proposed in [49] is sufficient to ensure positivity of the distribution function. However, to ensure that the distribution function satisfies the upper bound, the discretization must, in addition, preserve the divergence-free property of the phase space flow. Proofs that highlight the necessary conditions are presented for general curvilinear coordinates, and the details of these conditions are worked out for some commonly used coordinate systems (i.e., spherical polar spatial coordinates in spherical symmetry and cylindrical spatial coordinates in axial symmetry, both with spherical momentum coordinates). Results from numerical experiments - including one example in spherical symmetry adopting the Schwarzschild metric - demonstrate that the method achieves high-order accuracy and that the distribution function satisfies the maximum principle.
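The bound-enforcing step described above can be illustrated with a Zhang-Shu-type scaling limiter, which linearly shrinks the in-cell polynomial toward its cell average until all nodal values respect f ∈ [0, 1]. This is a generic sketch of the technique, not the paper's curvilinear-coordinate implementation.

```python
import numpy as np

def limit_cell(nodal_vals, weights, fmin=0.0, fmax=1.0):
    """Zhang-Shu-type scaling limiter: shrink the in-cell polynomial toward its
    cell average until every nodal value lies in [fmin, fmax]. Assumes the
    average itself is already in bounds (what the CFL condition guarantees)."""
    ubar = np.dot(weights, nodal_vals)   # cell average via quadrature weights
    theta = 1.0
    hi, lo = nodal_vals.max(), nodal_vals.min()
    if hi > fmax:
        theta = min(theta, (fmax - ubar) / (hi - ubar))
    if lo < fmin:
        theta = min(theta, (ubar - fmin) / (ubar - lo))
    return ubar + theta * (nodal_vals - ubar)

vals = np.array([-0.1, 0.4, 0.9, 1.2])   # overshoots both bounds
w = np.full(4, 0.25)                     # toy equal-weight quadrature
limited = limit_cell(vals, w)            # bounds restored, average preserved
```

The scaling is conservative by construction: the cell average is untouched, which is why the limiter does not degrade the scheme's order of accuracy for smooth solutions.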
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhmanov, E A; Suetin, S P
2013-09-30
The distribution of the zeros of the Hermite-Padé polynomials of the first kind for a pair of functions with an arbitrary even number of common branch points lying on the real axis is investigated under the assumption that this pair of functions forms a generalized complex Nikishin system. It is proved (Theorem 1) that the zeros have a limiting distribution, which coincides with the equilibrium measure of a certain compact set having the S-property in a harmonic external field. The existence problem for S-compact sets is solved in Theorem 2. The main idea of the proof of Theorem 1 consists in replacing a vector equilibrium problem in potential theory by a scalar problem with an external field and then using the general Gonchar-Rakhmanov method, which was worked out in the solution of the '1/9'-conjecture. The relation of the result obtained here to some results and conjectures due to Nuttall is discussed. Bibliography: 51 titles.
NASA Astrophysics Data System (ADS)
Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian
2017-04-01
Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA, by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistently across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general, this work showed the usefulness of time-variant transit times with conceptual models and confirmed the existence of the catchment age-mixing behaviors emerging from other similar studies.
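The wet/dry behavior reported above can be reproduced qualitatively with a toy age model: for a single well-mixed (random-sampling) store of fixed volume V, the mean age A of stored water obeys dA/dt = 1 - Q(t) A / V. All parameter values below are illustrative assumptions, not calibrated to the H.J. Andrews catchments.

```python
import numpy as np

V = 500.0                                  # storage volume (mm), illustrative
t = np.arange(0.0, 4.0 * 365.0, 1.0)       # daily steps over four years
Q = 2.0 + 1.8 * np.sin(2.0 * np.pi * t / 365.0)   # seasonal discharge (mm/day)

# Forward-Euler integration of dA/dt = 1 - Q(t) * A / V for the mean age A
# of water in a fully mixed store of constant volume V.
A = np.empty_like(t)
A[0] = V / Q[0]                            # start at the steady-state age
for i in range(1, len(t)):
    A[i] = A[i - 1] + 1.0 - Q[i - 1] * A[i - 1] / V

wet_age = A[t % 365.0 == 91.0][-1]         # near the discharge maximum
dry_age = A[t % 365.0 == 273.0][-1]        # near the discharge minimum
```

When the store turns over quickly (wet season) the mean age, and with it the mean transit time of outflow, dips; in dry periods it grows, mirroring the shallow-versus-deep storage contrast described in the abstract.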
Moosavi, Majid; Khashei, Fatemeh; Sedghamiz, Elaheh
2017-12-20
In this work, the structural and dynamical properties of two imidazolium-based geminal dicationic ionic liquids (GDILs), i.e. [Cn(mim)2][NTf2]2 with n = 3 and 5, have been studied to obtain a fundamental understanding of the molecular basis of the macroscopic and microscopic properties of the bulk liquid phase. To achieve this purpose, molecular dynamics (MD) simulation, density functional theory (DFT) and atoms-in-molecules (AIM) methods were used. Interaction energies, charge transfers and hydrogen bonds between the cation and anions of each studied GDIL were investigated by DFT calculations and also AIM. The mean square displacement (MSD), self-diffusion coefficient, and transference number of the cation and anions, and also the density, viscosity and electrical conductivity of the studied GDILs, were computed at 333.15 K and 1 atm. The simulated values were in good agreement with the experimental data. The effect of linkage alkyl chain length on the thermodynamic, transport and structural properties of these GDILs has been investigated. The structural features of these GDILs were characterized by calculating the partial site-site radial distribution functions (RDFs) and spatial distribution functions (SDFs). The heterogeneity order parameter (HOP) has been used to describe the spatial structures of these GDILs, and the distribution of the angles formed between the two cation heads and the middle carbon atom of the linkage alkyl chain was analyzed in these ILs. To investigate the temporal heterogeneity of the studied GDILs, the deviation of the self-part of the van Hove correlation function, G_s(r, t), from the Gaussian distribution of particle displacements, and also the second-order non-Gaussian parameter, α_2(t), were used.
Since the transport and interfacial properties and ionic characteristics of these GDILs were studied experimentally in our previous work as a function of linkage chain length and temperature, in this work we try to give a better perspective of the structure and dynamics of these systems at the molecular level.
Small- x asymptotics of the quark helicity distribution
Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.
2017-01-30
We construct a numerical solution of the small-x evolution equations derived in our recent work for the (anti)quark transverse momentum-dependent helicity distributions (TMDs) and parton distribution functions (PDFs), as well as the g1 structure function. We focus on the case of large Nc, where one finds a closed set of equations. Employing the extracted intercept, we are able to predict directly from theory the behavior of the quark helicity PDFs at small x, which should have important phenomenological consequences. Finally, we also give an estimate of how much of the proton's spin may be carried by quarks at small x and what impact this has on the spin puzzle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleta, M. E.; Eleotério, M.; Mesquita, A.
2017-07-29
This work reports the setting up of the X-ray diffraction and spectroscopy beamline at the Brazilian Synchrotron Light Laboratory for performing total scattering experiments to be analyzed by atomic pair distribution function (PDF) studies. The results of a PDF refinement for an Al2O3 standard are presented and compared with data acquired at a beamline of the Advanced Photon Source, where it is common to perform this type of experiment. A preliminary characterization of the Pb1-xLaxZr0.40Ti0.60O3 ferroelectric system, with x = 0.11, 0.12 and 0.15, is also shown.
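As a minimal illustration of what a pair distribution function measures, the sketch below histograms all interatomic distances in a slightly disordered 1-D chain; peaks appear at the coordination-shell distances. The toy data are an assumption for illustration only; a real total-scattering PDF is obtained by Fourier transforming diffraction intensities, which this sketch deliberately skips.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "pair distribution": all interatomic distances in a slightly disordered
# 1-D chain with ideal spacing a = 1 (illustrative stand-in for real data).
n_atoms, a = 400, 1.0
positions = np.arange(n_atoms) * a + rng.normal(0.0, 0.02, n_atoms)
dists = np.abs(positions[:, None] - positions[None, :])
dists = dists[np.triu_indices(n_atoms, k=1)]

near = dists[dists < 1.5]                      # first coordination shell only
hist, edges = np.histogram(near, bins=np.arange(0.0, 1.55, 0.1))
i = np.argmax(hist)
first_peak = 0.5 * (edges[i] + edges[i + 1])   # sits near the spacing a
```

The first histogram peak recovers the nearest-neighbor distance; in a refinement, such peak positions and widths are what the structural model is fitted against.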
Plasmablasts and plasma cells: reconsidering teleost immune system organization.
Ye, Jianmin; Kaattari, Ilsa; Kaattari, Stephen
2011-12-01
Comparative immunologists have expended extensive efforts in the characterization of early fish B cell development; however, analysis of the post-antigen induction stages of antibody secreting cell (ASC) differentiation has been limited. In contrast, work with murine ASCs has resolved three physically and functionally distinct cell types: plasmablasts, short-lived plasma cells, and long-lived plasma cells. Teleost ASCs are now known to also possess comparable subpopulations, which can greatly differ in such basic functions as lifespan, antigen sensitivity, antibody secretion rate, differentiative potential, and distribution within the body. Understanding the mechanisms by which these subpopulations are produced and distributed is essential for both basic understanding in comparative immunology and practical vaccine engineering. Copyright © 2011 Elsevier Ltd. All rights reserved.
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2016-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method, which allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: the capability of different misfit functionals to image wave speed anomalies and the source distribution, and possible source-structure trade-offs, especially the extent to which unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
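The basic premise, that the correlation of a common noise field recorded at two stations encodes the propagation delay (and hence, under the stated assumptions, the Green function), can be sketched in a few lines. The single white-noise source and the sample-unit delay are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lag_true = 2048, 30                      # record length; delay in samples
source = rng.standard_normal(n + lag_true)  # one hypothetical white-noise source

rec_a = source[lag_true:lag_true + n]       # station A's record
rec_b = source[:n]                          # station B hears the field 30 samples later

# Cross-correlate the two records; the peak lag is the inter-station travel
# time, the quantity interpreted via the Green function in noise tomography.
lags = np.arange(-(n - 1), n)
xcorr = np.correlate(rec_b, rec_a, mode="full")
recovered_lag = lags[np.argmax(xcorr)]
```

With realistic, anisotropic source distributions the correlation peak shifts and broadens, which is precisely the bias the joint source-structure inversion described above is designed to remove.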
NASA Astrophysics Data System (ADS)
Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping
2012-09-01
Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by just a single-peak function. For the purpose of representing the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and then used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.
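A minimal sketch of the fitting-and-judging loop described above: synthetic two-regime load data are histogrammed, a two-component Gaussian mixture is fitted, and the three judging criteria are computed. The component count, the Gaussian form, and all numbers are illustrative assumptions; the paper's method additionally automates selection of the best-fitting component functions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
# Synthetic two-regime load data standing in for field-test records.
load = np.concatenate([rng.normal(20.0, 3.0, 5000), rng.normal(45.0, 5.0, 5000)])

counts, edges = np.histogram(load, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def two_gauss(x, w, m1, s1, m2, s2):
    g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return w * g(m1, s1) + (1.0 - w) * g(m2, s2)

popt, _ = curve_fit(two_gauss, centers, counts, p0=[0.5, 15.0, 5.0, 50.0, 5.0])
fit = two_gauss(centers, *popt)

# The three judging criteria named in the abstract:
r_squared = 1.0 - np.sum((counts - fit) ** 2) / np.sum((counts - counts.mean()) ** 2)
mse = np.mean((counts - fit) ** 2)           # mean square error
max_dev = np.max(np.abs(counts - fit))       # maximum deviation
```

In a model-selection loop, candidate mixtures (different component counts or families) would be ranked on exactly these three criteria.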
NASA Astrophysics Data System (ADS)
Zerpa, L.; Gao, F.; Wang, S.
2017-12-01
There are two major types of natural gas hydrate distributions in porous media: pore filling and contact cementing. The difference between these two distribution types is related to hydrate nucleation and growth processes. In the pore filling distribution, hydrate nucleates from a gas-dissolved aqueous phase at the grain boundary and grows away from grain contacts and surfaces into the pore space. In the contact cementing distribution, hydrate nucleates and grows at the gas-water interface and at intergranular contacts. Previous attempts to correlate changes in porosity and permeability during hydrate formation/dissociation were based on the length difference between the pore body and pore throat, and only considered the contact cementing hydrate distribution. This work consists of a study of mathematical models of permeability and porosity as a function of gas hydrate saturation during formation and dissociation of gas hydrates in porous media. In this work, first we derive the permeability equation for pore filling hydrate deposition as a function of hydrate saturation. Then, a more comprehensive model considering both types of gas hydrate deposition is developed to represent changes in permeability and porosity during hydrate formation and dissociation. This resulted in a model that combines pore filling and contact cementing deposition types in the same reservoir. Finally, the TOUGH+Hydrate numerical reservoir simulator was modified to include these models to analyze the response of production and saturation during a depressurization process, considering different combinations of pore filling and contact cementing hydrate distributions. The empirical exponent used in the permeability adjustment factor model influences both production profile and saturation results. This empirical factor describes the permeability dependence on changes in porosity caused by solid phase formation in the porous medium.
The use of the permeability exponent decreases the permeability of the system for a given hydrate saturation, which affects the hydraulic performance of the system. However, from published experimental work, there is only a rough estimation of this permeability exponent. This factor could be represented with an empirical equation if more laboratory and field data becomes available.
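A common literature choice for the permeability adjustment discussed above is a Masuda-type power law, k/k0 = (1 - S_h)^N, with S_h the hydrate saturation and N the empirical permeability exponent. The functional form is an assumption for illustration; the paper's combined pore-filling/contact-cementing model is more elaborate.

```python
def permeability_ratio(s_h, n_exp):
    """Masuda-type adjustment k/k0 = (1 - S_h)**N: S_h is hydrate saturation,
    N is the empirical permeability exponent discussed in the text. The
    power-law form is a common literature assumption, not the paper's model."""
    return (1.0 - s_h) ** n_exp

# Larger N means a stronger permeability reduction at the same saturation,
# hence poorer hydraulic performance during depressurization.
k_n3 = permeability_ratio(0.5, 3)    # 0.125
k_n10 = permeability_ratio(0.5, 10)  # 0.0009765625
```

At 50% hydrate saturation, raising N from 3 to 10 cuts the permeability ratio by two further orders of magnitude, which is why the exponent's poorly constrained value matters so much for production forecasts.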
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
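The fixed-point structure described above can be illustrated with linear (normal-theory) conditional mean functions: the asymptotic means solve mu_x = E[X | Y = mu_y] and mu_y = E[Y | X = mu_x] simultaneously. The coefficients below are illustrative assumptions.

```python
# Linear (normal-theory) conditional mean functions with illustrative
# coefficients; the slopes' product must be below 1 for a contraction.
def e_x_given_y(y):
    return 1.0 + 0.5 * y

def e_y_given_x(x):
    return 2.0 + 0.25 * x

# Fixed-point iteration for the simultaneous solution (the asymptotic mean).
mu_x, mu_y = 0.0, 0.0
for _ in range(200):
    mu_x, mu_y = e_x_given_y(mu_y), e_y_given_x(mu_x)

# Closed form for comparison: mu_x = 16/7, mu_y = 18/7.
```

In this linear case the fixed point is available analytically, which is exactly the situation in which the paper's explicit covariance formulae apply; nonlinear conditional means would be iterated the same way.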
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, R.
1992-01-01
The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.
Charter for Systems Engineer Working Group
NASA Technical Reports Server (NTRS)
Suffredini, Michael T.; Grissom, Larry
2015-01-01
This charter establishes the International Space Station Program (ISSP) Mobile Servicing System (MSS) Systems Engineering Working Group (SEWG). The MSS SEWG is established to provide a mechanism for Systems Engineering for the end-to-end MSS function. The MSS end-to-end function includes the Space Station Remote Manipulator System (SSRMS), the Mobile Remote Servicer (MRS) Base System (MBS), Robotic Work Station (RWS), Special Purpose Dexterous Manipulator (SPDM), Video Signal Converters (VSC), and Operations Control Software (OCS), the Mobile Transporter (MT), and by interfaces between and among these elements, and United States On-Orbit Segment (USOS) distributed systems, and other International Space Station Elements and Payloads, (including the Power Data Grapple Fixtures (PDGFs), MSS Capture Attach System (MCAS) and the Mobile Transporter Capture Latch (MTCL)). This end-to-end function will be supported by the ISS and MSS ground segment facilities. This charter defines the scope and limits of the program authority and document control that is delegated to the SEWG and it also identifies the panel core membership and specific operating policies.
Two-component scattering model and the electron density spectrum
NASA Astrophysics Data System (ADS)
Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.
2010-02-01
In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in the pulsars PSR B2111+46 and PSR B0136+57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsars PSR B2111+46 and PSR B0136+57.
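The flux density structure function used above is simple to compute from a light curve: D(tau) = <(S(t + tau) - S(t))^2>. The random-walk test signal below is an illustrative assumption, chosen because its structure function has a known logarithmic slope of one.

```python
import numpy as np

def structure_function(flux, max_lag):
    """First-order structure function D(tau) = <(S(t + tau) - S(t))^2>."""
    return np.array([np.mean((flux[lag:] - flux[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
flux = np.cumsum(rng.standard_normal(20_000))   # random-walk "light curve"
d = structure_function(flux, 50)

# For a random walk D(tau) grows linearly in tau, so the log-log slope is ~1.
log_slope = np.log(d[-1] / d[0]) / np.log(50.0)
```

Measured on pulsar monitoring data, this logarithmic slope is the diagnostic the paper uses to constrain the thin-screen scattering strength.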
Eliciting the Functional Taxonomy from protein annotations and taxa
Falda, Marco; Lavezzo, Enrico; Fontana, Paolo; Bianco, Luca; Berselli, Michele; Formentin, Elide; Toppo, Stefano
2016-01-01
The advances of omics technologies have triggered the production of an enormous volume of data coming from thousands of species. Meanwhile, joint international efforts like the Gene Ontology (GO) consortium have worked to provide functional information for a vast amount of proteins. With these data available, we have developed FunTaxIS, a tool that is the first attempt to infer functional taxonomy (i.e. how functions are distributed over taxa) combining functional and taxonomic information. FunTaxIS is able to define a taxon specific functional space by exploiting annotation frequencies in order to establish if a function can or cannot be used to annotate a certain species. The tool generates constraints between GO terms and taxa and then propagates these relations over the taxonomic tree and the GO graph. Since these constraints nearly cover the whole taxonomy, it is possible to obtain the mapping of a function over the taxonomy. FunTaxIS can be used to make functional comparative analyses among taxa, to detect improper associations between taxa and functions, and to discover how functional knowledge is either distributed or missing. A benchmark test set based on six different model species has been devised to get useful insights on the generated taxonomic rules. PMID:27534507
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent, identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic family with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
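The stress-strength building block of such reliability models, R = P(strength > stress), can be estimated by Monte Carlo. The sketch below assumes one common parameterization of the generalized half-logistic law (survival S(x) = (2/(e^x + 1))^lambda for x >= 0) and a single component; the paper's N-M cold-standby structure and censoring scheme are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_ghl(lam, size):
    """Inverse-CDF sampling from a generalized half-logistic law with survival
    S(x) = (2 / (exp(x) + 1))**lam for x >= 0 (one common parameterization;
    the paper's exact form may differ)."""
    u = rng.uniform(size=size)
    return np.log(2.0 * (1.0 - u) ** (-1.0 / lam) - 1.0)

strength = sample_ghl(lam=1.0, size=200_000)  # smaller shape -> stochastically larger
stress = sample_ghl(lam=3.0, size=200_000)
r_hat = np.mean(strength > stress)

# For this proportional-survival construction the exact answer is
# R = lam_stress / (lam_strength + lam_stress) = 3/4, a handy check.
```

The closed form exists because both laws share one baseline survival function raised to different powers; the paper's estimators generalize this to the multicomponent, censored setting.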
Sawakuchi, Gabriel O; Yukihara, Eduardo G
2012-01-21
The objective of this work is to test analytical models to calculate the luminescence efficiency of Al2O3:C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental data on the relative luminescence efficiency of Al2O3:C OSLDs for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV µm⁻¹. In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.
Using hazard functions to assess changes in processing capacity in an attentional cuing paradigm.
Wenger, Michael J; Gibson, Bradley S
2004-08-01
Processing capacity, defined as the relative ability to perform mental work in a unit of time, is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered both by conceptual confusions and by the use of measures that are at best coarsely mapped to the construct. However, more than 25 years ago, J. T. Townsend and F. G. Ashby (1978) suggested that the hazard function on the response time (RT) distribution offers a number of conceptual advantages as a measure of capacity. The present study suggests that a set of statistical techniques, well known outside the cognitive and perceptual literatures, offers the ability to perform hypothesis tests on RT-distribution hazard functions. These techniques are introduced, and their use is illustrated in an application to data from the contingent attentional capture paradigm.
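A discrete empirical hazard estimate for RT data divides, bin by bin, the number of responses occurring in the bin by the number of trials still "at risk" at its left edge. This is a generic estimator, not the specific statistical techniques the article introduces; exponential RTs are used below because their flat hazard makes an easy sanity check.

```python
import numpy as np

def empirical_hazard(rts, edges):
    """Discrete hazard h(bin) = P(respond in bin | not yet responded):
    events in the bin over the count still at risk at its left edge."""
    rts = np.sort(np.asarray(rts))
    h = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = np.count_nonzero(rts >= lo)
        events = np.count_nonzero((rts >= lo) & (rts < hi))
        h.append(events / at_risk if at_risk else np.nan)
    return np.array(h)

rng = np.random.default_rng(5)
rts = rng.exponential(scale=0.4, size=100_000)  # flat-hazard RTs (sanity check)
edges = np.linspace(0.0, 1.2, 13)               # twelve 0.1-s bins
h = empirical_hazard(rts, edges)
# Every bin's hazard should sit near 1 - exp(-0.1 / 0.4).
```

Capacity analyses then compare such hazard functions across conditions, e.g. cued versus uncued trials, rather than comparing mean RTs.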
Effective equilibrium states in mixtures of active particles driven by colored noise
NASA Astrophysics Data System (ADS)
Wittmann, René; Brader, J. M.; Sharma, A.; Marconi, U. Marini Bettolo
2018-01-01
We consider the steady-state behavior of pairs of active particles having different persistence times and diffusivities. To this purpose we employ the active Ornstein-Uhlenbeck model, where the particles are driven by colored noises with exponential correlation functions whose intensities and correlation times vary from species to species. By extending Fox's theory to many components, we derive by functional calculus an approximate Fokker-Planck equation for the configurational distribution function of the system. After illustrating the predicted distribution in the solvable case of two particles interacting via a harmonic potential, we consider systems of particles repelling through inverse power-law potentials. We compare the analytic predictions to computer simulations for such soft-repulsive interactions in one dimension and show that at linear order in the persistence times the theory is satisfactory. This work provides the toolbox to qualitatively describe many-body phenomena, such as demixing and depletion, by means of effective pair potentials.
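The colored noise driving such models is the Ornstein-Uhlenbeck process, whose stationary autocorrelation is (D/tau) exp(-|s|/tau). The sketch below simulates it with the exact one-step update and checks the stationary variance D/tau; the parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)
tau, D, dt, n = 0.5, 1.0, 1e-3, 1_000_000   # correlation time, intensity, step, steps

# Exact one-step update of the Ornstein-Uhlenbeck process; the stationary
# autocorrelation is (D / tau) * exp(-|s| / tau), so the variance is D / tau.
alpha = np.exp(-dt / tau)
sigma = np.sqrt((D / tau) * (1.0 - alpha**2))
xi = rng.standard_normal(n)
v = np.empty(n)
v[0] = 0.0
for i in range(1, n):
    v[i] = alpha * v[i - 1] + sigma * xi[i]

var = v[10_000:].var()                         # expect ~ D / tau = 2.0
ac_tau = np.corrcoef(v[:-500], v[500:])[0, 1]  # lag tau: expect ~ exp(-1)
```

Varying tau and D per species is exactly the multi-component setup the abstract treats; the Fox-theory closure replaces this explicit noise with an effective Fokker-Planck description.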
Evidence for hubs in human functional brain networks
Power, Jonathan D; Schlaggar, Bradley L; Lessov-Schlaggar, Christina N; Petersen, Steven E
2013-01-01
Hubs integrate and distribute information in powerful ways due to the number and positioning of their contacts in a network. Several resting state functional connectivity MRI reports have implicated regions of the default mode system as brain hubs; we demonstrate that previous degree-based approaches to hub identification may have identified portions of large brain systems rather than critical nodes of brain networks. We utilize two methods to identify hub-like brain regions: 1) finding network nodes that participate in multiple sub-networks of the brain, and 2) finding spatial locations where several systems are represented within a small volume. These methods converge on a distributed set of regions that differ from previous reports on hubs. This work identifies regions that support multiple systems, leading to spatially constrained predictions about brain function that may be tested in terms of lesions, evoked responses, and dynamic patterns of activity. PMID:23972601
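Method 1 above, finding nodes that participate in multiple sub-networks, is commonly quantified with the participation coefficient P_i = 1 - sum_s (k_is/k_i)^2 (the Guimera-Amaral form). The toy graph below is a generic illustration, not the authors' pipeline.

```python
import numpy as np

def participation_coefficients(adj, communities):
    """Guimera-Amaral participation coefficient P_i = 1 - sum_s (k_is / k_i)^2:
    near 1 when a node's edges spread across many sub-networks, 0 when they
    stay inside a single one."""
    adj = np.asarray(adj, dtype=float)
    communities = np.asarray(communities)
    k = adj.sum(axis=1)
    p = np.ones(len(adj))
    for s in np.unique(communities):
        p -= (adj[:, communities == s].sum(axis=1) / k) ** 2
    return p

# Toy graph: node 0 bridges two modules; nodes 1-2 stay inside module 0.
adj = [[0, 1, 1, 1, 1],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [1, 0, 0, 0, 1],
       [1, 0, 0, 1, 0]]
comm = [0, 0, 0, 1, 1]
p = participation_coefficients(adj, comm)
```

Unlike raw degree, this measure is insensitive to how large a node's home system is, which is the core of the paper's critique of degree-based hub identification.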
NASA Astrophysics Data System (ADS)
Maxworth, A. S.; Golkowski, M.; Malaspina, D.; Jaynes, A. N.
2017-12-01
Whistler mode waves play a dominant role in the energy dynamics of the Earth's magnetosphere. The trajectory of whistler mode waves can be predicted by ray tracing, a numerical method that solves Haselgrove's equations at each time step, taking the background plasma parameters into account. The majority of previous ray tracing work was conducted assuming a cold (0 K) background magnetospheric plasma. Here we perform ray tracing in a finite-temperature plasma with background electron and ion temperatures of a few eV. When they encounter a high-energy (>10 keV) electron distribution, whistler mode waves can undergo power attenuation and/or growth, depending on resonance conditions, which are a function of wave frequency, wave normal angle and particle energy. In this work we present the wave power attenuation and growth analysis of whistler mode waves during the interaction with a high-energy electron distribution. We numerically modelled the high-energy electron distribution as an isotropic velocity distribution, as well as an anisotropic bi-Maxwellian distribution. Both cases were analyzed with and without temperature effects for the background magnetospheric plasma. Finally, we compare our results with the whistler mode energy distribution obtained by the EMFISIS instrument on board the Van Allen Probes spacecraft.
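A minimal version of the resonance condition mentioned above: for a field-aligned whistler in a cold, dense plasma, the first-order (nonrelativistic) cyclotron resonance velocity is v_res = (omega - Omega_e)/k, with k from the standard whistler dispersion relation. The plasma parameters below are illustrative magnetospheric values, not from this study, and the nonrelativistic formula is only indicative at the resulting speed of roughly 0.3c.

```python
import numpy as np

C = 2.998e8     # speed of light (m/s)
ME = 9.109e-31  # electron mass (kg)
QE = 1.602e-19  # elementary charge (C)

# Illustrative inner-magnetosphere values (assumed, not from this study).
omega_ce = 2.0 * np.pi * 9.0e3    # electron gyrofrequency, ~9 kHz
omega_pe = 5.0 * omega_ce         # plasma-to-gyro frequency ratio of 5
omega = 0.2 * omega_ce            # whistler at 0.2 f_ce

# Cold, dense-plasma whistler dispersion: (kc/w)^2 ~ wpe^2 / (w (wce - w)).
k = (omega / C) * np.sqrt(omega_pe**2 / (omega * (omega_ce - omega)))

# First-order cyclotron resonance (nonrelativistic, field-aligned):
v_res = (omega - omega_ce) / k    # negative: electrons counter-stream the wave
energy_kev = 0.5 * ME * v_res**2 / QE / 1e3
```

With these assumed parameters the resonant energy lands in the tens of keV, i.e. within the >10 keV population the abstract says drives attenuation and growth; oblique propagation and relativistic corrections shift this value.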
Ion-Acoustic Double-Layers in Plasmas with Nonthermal Electrons
NASA Astrophysics Data System (ADS)
Rios, L. A.; Galvão, R. M. O.
2014-12-01
A double layer (DL) consists of a positive/negative Debye sheath connecting two quasineutral regions of a plasma. These nonlinear structures can be found in a variety of plasmas, from discharge tubes to space plasmas. They have applications to plasma processing and space propulsion, and the concept is also important for areas such as applied geophysics. In the present work we investigate ion-acoustic double layers (IADLs); it is believed that these structures are responsible, for example, for the acceleration of auroral electrons. The plasma distributions near a DL are usually non-Maxwellian and can be modeled via a κ distribution function. In its reduced form, the standard κ distribution is equivalent to the distribution function obtained from the maximization of the Tsallis entropy, the q distribution. The parameters κ and q measure the deviation from the Maxwellian equilibrium ("nonthermality"), with κ = 1/(q-1); in the limit κ → ∞ (q → 1) the Maxwellian distribution is recovered. The existence of obliquely propagating IADLs in magnetized two-electron plasmas is investigated, with the hot electron population modeled via a κ distribution function [1]. Our analysis shows that only subsonic and rarefactive DLs exist for the entire range of parameters investigated. Small-amplitude DLs exist only for τ=Th/Tc greater than a critical value, which grows as κ decreases. We also observe that these structures exist only for large values of δ=Nh0/N0, but never for δ=1. In our model, which assumes a quasineutral condition, the Mach number M grows as θ decreases (θ is the angle between the directions of the external magnetic field and wave propagation). However, M as well as the DL amplitude are reduced as a consequence of nonthermality. The relation of the quasineutral condition and of the functional form of the distribution function to the nonexistence of IADLs has also been analyzed, and some interesting results have been obtained.
A more detailed discussion about this topic will be presented during the conference. References: [1] L. A. Rios and R. M. O. Galvão, Phys. Plasmas 20, 112301 (2013).
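The κ-q correspondence quoted above can be checked numerically. A minimal sketch (normalization is omitted, and the exponent convention varies across the literature; this uses the common (1 + v²/κθ²)^−(κ+1) form) showing that the κ distribution approaches a Maxwellian as κ → ∞:

```python
import numpy as np

def kappa_dist(v, theta, kappa):
    # Unnormalized 1D kappa distribution; kappa -> infinity recovers a Maxwellian
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

def maxwellian(v, theta):
    return np.exp(-v**2 / theta**2)

v = np.linspace(-5.0, 5.0, 201)
for kappa in (2.0, 10.0, 1000.0):
    diff = np.max(np.abs(kappa_dist(v, 1.0, kappa) - maxwellian(v, 1.0)))
    print(f"kappa={kappa:7.1f}  max deviation from Maxwellian = {diff:.5f}")

# q-parameter mapping: kappa = 1/(q - 1), so q = 1.5 corresponds to kappa = 2
q = 1.5
print("kappa for q=1.5:", 1.0 / (q - 1.0))
```

The deviation shrinks steadily with increasing κ, mirroring the κ → ∞ (q → 1) Maxwellian limit stated in the abstract.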
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
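The Monte Carlo step described here, sampling earthquake moments from a Pareto distribution consistent with the Gutenberg-Richter law to build synthetic catalogs, can be sketched as follows. The moment exponent, bounds, and catalog sizes below are illustrative assumptions, not values from the Groningen study:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_moments(n, beta=0.67, m_min=1e12, m_max=1e18):
    # Truncated Pareto sampling via the inverse CDF:
    # F(m) = (m_min^-beta - m^-beta) / (m_min^-beta - m_max^-beta)
    u = rng.random(n)
    a, b = m_min**-beta, m_max**-beta
    return (a - u * (a - b)) ** (-1.0 / beta)

# Monte Carlo: distribution of total (summed) seismic moment per synthetic catalog
totals = np.array([sample_moments(200).sum() for _ in range(1000)])
print("median total moment: %.3e N*m" % np.median(totals))
```

Repeating the summation over many synthetic catalogs yields the empirical distribution of total moment, from which confidence bounds can be read off.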
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
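The paper's completion algorithm is distributed and diffusion-based; as a centralized point of reference, low-rank matrix completion itself can be sketched in a few lines of alternating least squares. All names, sizes, and the 50% observation rate below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def complete(M, mask, rank=2, iters=50):
    # Alternating least squares: fit U V^T to M on observed entries only
    n, m = M.shape
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(iters):
        for i in range(n):            # solve each row of U over its observed columns
            idx = mask[i]
            if idx.any():
                U[i] = np.linalg.lstsq(V[idx], M[i, idx], rcond=None)[0]
        for j in range(m):            # then each row of V over its observed rows
            idx = mask[:, j]
            if idx.any():
                V[j] = np.linalg.lstsq(U[idx], M[idx, j], rcond=None)[0]
    return U @ V.T

# Ground-truth rank-2 matrix with roughly half of the entries observed
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(A.shape) < 0.5
err = np.abs(complete(A, mask) - A).mean()
print("mean reconstruction error:", err)
```

In the distributed setting of the paper, each node holds only part of the data and the factors are updated via diffusion; the least-squares structure per row is the same.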
Application of the mobility power flow approach to structural response from distributed loading
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
The problem of the vibration power flow through coupled substructures when one of the substructures is subjected to a distributed load is addressed. In all the work performed thus far, point force excitation was considered. However, in the case of the excitation of an aircraft fuselage, distributed loading on the whole surface of a panel can be as important as the excitation from directly applied forces at defined locations on the structures. Thus using a mobility power flow approach, expressions are developed for the transmission of vibrational power between two coupled plate substructures in an L configuration, with one of the surfaces of one of the plate substructures being subjected to a distributed load. The types of distributed loads that are considered are a force load with an arbitrary function in space and a distributed load similar to that from acoustic excitation.
NASA Astrophysics Data System (ADS)
Medvedev, Igor G.
2017-11-01
We study the tunnel current through a one-level redox molecule immersed into the electrolyte solution for the case when the coupling of the molecule to one of the working electrodes is strong while it is arbitrary to the other electrode. Using the Feynman-Vernon influence functional theory and the perturbation expansion of the effective action of the classical oscillator coupled both to the valence level of the redox molecule and to the thermal bath representing the classical fluctuations of the polarization of the solvent, we obtain, following the canonical way, the Langevin equation for the oscillator. It is found that for the aqueous electrolyte solution, the damping and the stochastic forces which arise due to the tunnel current are much smaller than those due to the thermal bath and therefore can be neglected. We estimate the higher-order corrections to the effective action and show that the Langevin dynamics takes place in this case for arbitrary parameters of the tunneling junction under the condition of the strong coupling of the redox molecule to one of the working electrodes. Then the steady-state coordinate distribution function of the oscillator resulting from the corresponding Fokker-Planck equation is the Boltzmann distribution function which is determined by the adiabatic free energy surface arising from the mean current-induced force. It enables us to obtain the expression for the tunnel current in the case when the coupling of the redox molecule to one of the working electrodes is strong while it is arbitrary to the other electrode.
Population patterns in World’s administrative units
Miramontes, Pedro; Cocho, Germinal
2017-01-01
Whereas there has been an extended discussion concerning city population distribution, little has been said about that of administrative divisions. In this work, we investigate the population distribution of second-level administrative units of 150 countries and territories and propose the discrete generalized beta distribution (DGBD) rank-size function to describe the data. After testing the balance between the goodness of fit and number of parameters of this function compared with a power law, which is the most common model for city population, the DGBD is a good statistical model for 96% of our datasets and preferred over a power law in almost every case. Moreover, the DGBD is preferred over a power law for fitting country population data, which can be seen as the zeroth-level administrative unit. We present a computational toy model to simulate the formation of administrative divisions in one dimension and give numerical evidence that the DGBD arises from a particular case of this model. This model, along with the fitting of the DGBD, proves adequate in reproducing and describing local unit evolution and its effect on the population distribution. PMID:28791153
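The DGBD rank-size function mentioned here has the form f(r) = A (N + 1 − r)^b / r^a, which is log-linear in its exponents and can therefore be fitted by ordinary least squares. A minimal sketch on synthetic data (the data values are invented for illustration):

```python
import numpy as np

def fit_dgbd(values):
    # Fit log f(r) = log A - a*log r + b*log(N + 1 - r) by least squares
    y = np.sort(np.asarray(values, dtype=float))[::-1]   # rank-ordered, largest first
    N = len(y)
    r = np.arange(1, N + 1)
    X = np.column_stack([np.ones(N), -np.log(r), np.log(N + 1 - r)])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    logA, a, b = coef
    return np.exp(logA), a, b

# Synthetic rank-size data generated from a DGBD with a = 1.0, b = 0.5
N = 100
r = np.arange(1, N + 1)
data = 1000.0 * r**-1.0 * (N + 1 - r)**0.5
A, a, b = fit_dgbd(data)
print(f"recovered a={a:.3f}, b={b:.3f}")
```

A power law is the special case b = 0, so a likelihood or goodness-of-fit comparison between the two reduces to asking whether the extra exponent is justified, which is the model-selection question the abstract addresses.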
Modeling electronic trap state distributions in nanocrystalline anatase
NASA Astrophysics Data System (ADS)
Le, Nam; Schweigert, Igor
The charge transport properties of nanocrystalline TiO2 films, and thus the catalytic performance of devices that incorporate them, are affected strongly by the spatial and energetic distribution of localized electronic trap states. Such traps may arise from a variety of defects: Ti interstitials, O vacancies, step edges at surfaces, and grain boundaries. We have developed a procedure for applying density functional theory (DFT) and density functional tight binding (DFTB) calculations to characterize distributions of localized states arising from multiple types of defects. We have applied the procedure to investigate how the morphologies of interfaces between pairs of attached anatase nanoparticles determine the energies of trap states therein. Our results complement recent experimental findings that subtle changes in the morphology of highly porous TiO2 aerogel networks can have a dramatic effect on catalytic performance, which was attributed to changes in the distribution of trap states. This work was supported by the U.S. Naval Research Laboratory via the National Research Council and by the Office of Naval Research through the U.S. Naval Research Laboratory.
A hybrid model of biased inductively coupled discharges
NASA Astrophysics Data System (ADS)
Wen, Deqi; Lieberman, Michael A.; Zhang, Quanzhi; Liu, Yongxin; Wang, Younian
2016-09-01
A hybrid model, i.e. a global model coupled bidirectionally with a parallel Monte-Carlo collision (MCC) sheath model, is developed to investigate an inductively coupled discharge with a bias source. To validate this model, both bulk plasma density and ion energy distribution functions (IEDFs) are compared with experimental measurements in an argon discharge, and good agreement is obtained. On this basis, the model is extended to weakly electronegative Ar/O2 plasma. The ion energy and angular distribution functions versus bias voltage amplitude are examined. The different ion species (Ar+, O2+, O+) show different behaviors because of their different masses. At low bias voltage, Ar+ has a single-peak energy distribution and O+ has a bimodal distribution. At high bias voltage, the energy peak separation of O+ is wider than that of Ar+. This work has been supported by the National Nature Science Foundation of China (Grant No. 11335004) and Specific project (Grant No 2011X02403-001) and partially supported by Department of Energy Office of Fusion Energy Science Contract DE-SC000193 and a gift from the Lam Research Corporation.
A grid spacing control technique for algebraic grid generation methods
NASA Technical Reports Server (NTRS)
Smith, R. E.; Kudlinski, R. A.; Everton, E. L.
1982-01-01
A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
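The control-function idea, a smooth monotone map from a uniform computational coordinate to the physical grid parameter, can be sketched with a cubic spline. The control points below are arbitrary illustrations (and the paper uses smoothed rather than interpolating splines), but they show how clustering near one boundary falls out of the mapping:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Control points: map uniform computational coordinate s in [0, 1]
# onto physical parameter x in [0, 1], clustering grid points near x = 0.
s_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x_ctrl = np.array([0.0, 0.05, 0.15, 0.45, 1.0])
control = CubicSpline(s_ctrl, x_ctrl)

s = np.linspace(0.0, 1.0, 21)   # uniform computational grid
x = control(s)                  # stretched physical grid
print("first spacings:", np.round(np.diff(x)[:3], 4))
print("last spacings: ", np.round(np.diff(x)[-3:], 4))
```

Evaluating the spline on the uniform grid yields small spacings where the control curve is flat and large spacings where it is steep, which is exactly the grid-spacing control the abstract describes.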
Comment on ``Nonlinear gyrokinetic theory with polarization drift'' [Phys. Plasmas 17, 082304 (2010)]
NASA Astrophysics Data System (ADS)
Leerink, S.; Parra, F. I.; Heikkinen, J. A.
2010-12-01
In this comment, we show that by using the discrete particle distribution function, the changes of the phase-space volume of gyrocenter coordinates due to the fluctuating E ×B velocity do not explicitly appear in the Poisson equation, and the result of Sosenko et al. [Phys. Scr. 64, 264 (2001)] is recovered. It is demonstrated that there is no contradiction between the work presented by Sosenko et al. and that presented by Wang et al. [Phys. Plasmas 17, 082304 (2010)].
Photoelectron spectra of the decomposition of ethylene on /110/ tungsten
NASA Technical Reports Server (NTRS)
Plummer, E. W.; Waclawski, B. J.; Vorburger, T. V.
1974-01-01
The experimental apparatus used in the investigation consisted of an ultrahigh-vacuum chamber, a triple-grid, a microwave-excited resonance lamp, and an electron energy analyzer. The chemical nature of the chemisorbed species was studied, taking into account the energy distribution of photoemitted electrons, work function determinations, and low-energy electron diffraction patterns.
Prototype-Distortion Category Learning: A Two-Phase Learning Process across a Distributed Network
ERIC Educational Resources Information Center
Little, Deborah M.; Thulborn, Keith R.
2006-01-01
This paper reviews a body of work conducted in our laboratory that applies functional magnetic resonance imaging (fMRI) to better understand the biological response and change that occurs during prototype-distortion learning. We review results from two experiments (Little, Klein, Shobat, McClure, & Thulborn, 2004; Little & Thulborn, 2005) that…
NASA Astrophysics Data System (ADS)
Raitses, Yevgeny; Donnelly, Vincent M.; Kaganovich, Igor D.; Godyak, Valery
2013-10-01
The application of the magnetic field in a low pressure plasma can cause a spatial separation of cold and hot electron groups. This so-called magnetic filter effect is not well understood and is the subject of our studies. In this work, we investigate electron energy distribution function in a DC-RF plasma discharge with crossed electric and magnetic field operating at sub-mtorr pressure range of xenon gas. Experimental studies showed that the increase of the magnetic field leads to a more uniform profile of the electron temperature across the magnetic field. This surprising result indicates the importance of anomalous electron transport that causes mixing of hot and cold electrons. High-speed imaging and probe measurements revealed a coherent structure rotating in E cross B direction with frequency of a few kHz. Similar to spoke oscillations reported for Hall thrusters, this rotating structure conducts the largest fraction of the cross-field current. This work was supported by DOE contract DE-AC02-09CH11466.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuan; Ning, Chuangang, E-mail: ningcg@tsinghua.edu.cn; Collaborative Innovation Center of Quantum Matter, Beijing
2015-10-14
Recently, the development of photoelectron velocity map imaging has made it much easier to obtain photoelectron angular distributions (PADs) experimentally. However, explanations of PADs are only qualitative in most cases, and very limited work has been reported on how to calculate the PAD of anions. In the present work, we report a method using density-functional-theory Kohn-Sham orbitals to calculate the photodetachment cross sections and the anisotropy parameter β. The spherical average over all random molecular orientations is calculated analytically. A program which can handle both Gaussian-type and Slater-type orbitals has been coded. Test calculations on Li⁻, C⁻, O⁻, F⁻, CH⁻, OH⁻, NH₂⁻, O₂⁻, and S₂⁻ show that our method is an efficient way to calculate the photodetachment cross section and anisotropy parameter β for anions, and is thus promising for large systems.
NASA Astrophysics Data System (ADS)
Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.
2018-06-01
The ability to predict the evolution of crystallographic texture during hot working of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter-selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches to the experimental α-phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.
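The automated parameter-selection idea, minimizing a scalar error metric between simulated and measured responses over the model parameters, can be sketched generically. Everything below (the saturating hardening law standing in for a VPSC run, the parameter names, the temperatures) is an illustrative assumption, not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

strain = np.linspace(0.0, 0.5, 50)

def flow_curve(params, T):
    # Toy stand-in for a polycrystal plasticity run: saturating hardening law
    s0, h = params
    return s0 * np.exp(-T / 1000.0) * (1.0 - np.exp(-h * strain))

temps = [950.0, 1000.0, 1050.0]
true_params = [200.0, 10.0]
data = {T: flow_curve(true_params, T) for T in temps}   # synthetic "measurements"

def misfit(params):
    # Flow-curve error metric: sum of squared residuals over all temperatures
    return sum(np.sum((flow_curve(params, T) - data[T]) ** 2) for T in temps)

res = minimize(misfit, x0=[150.0, 5.0], method="Nelder-Mead")
print("recovered parameters:", np.round(res.x, 2))
```

Swapping `misfit` for a texture-difference metric (e.g. between simulated and measured orientation distribution functions) gives the paper's second, texture-based scheme with no change to the optimization loop.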
Removing function model and experiments on ultrasonic polishing molding die
NASA Astrophysics Data System (ADS)
Huang, Qitai; Ni, Ying; Yu, Jingchi
2010-10-01
Low-temperature glass molding is expected to be the main method for volume production of high-precision small- and medium-diameter optical elements. Since the accuracy of the molding die determines the precision of the element, the development of high-precision molding dies is one of the most important parts of low-temperature glass molding technology. The molding die is manufactured from a hard, brittle metal alloy. With the high vibration frequency and concentrated energy distribution characteristic of ultrasonic vibration, abrasive particles impact the hard metal alloy surface at very high speed and remove material from the workpiece. Ultrasonic vibration thus makes controllable polishing of the hard metal alloy molding die possible and reduces its roughness and surface error. Differently from other ultrasonic fabrication methods, non-contact ultrasonic polishing is applied to the molding die; that is, the tool does not touch the workpiece during polishing. The abrasive particles vibrate around their equilibrium positions with high speed and frequency, driven by the ultrasonic vibration in the liquid medium, and impact the workpiece surface; the energy of the abrasive particles comes from the ultrasonic vibration rather than from direct hammer blows of the tool. A disc-shaped vibrator in simple harmonic vibration on an infinite plane surface is therefore taken as a model of the ultrasonic polishing working condition. According to Huygens' principle, the sound field distribution on a plane surface is analyzed and calculated, and the tool removal function is deduced from this distribution. A single-point ultrasonic polishing experiment is then performed to verify the validity of the theory.
Generalized Cross Entropy Method for estimating joint distribution from incomplete information
NASA Astrophysics Data System (ADS)
Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.
2016-07-01
Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology referred to as the "Generalized Cross Entropy Method" (GCEM) that is aimed at addressing the issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of the household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weight on the estimation of joint distribution is explored.
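The GCEM objective generalizes minimum cross-entropy estimation. Its classic special case, minimizing the KL divergence to a single reference joint subject to fixed marginals, is solved by iterative proportional fitting, sketched below (the reference and marginal values are toy inputs, not the Singapore household data):

```python
import numpy as np

def ipf(reference, row_marg, col_marg, iters=100):
    # Iterative proportional fitting: returns the minimum cross-entropy
    # joint distribution (relative to `reference`) matching both marginals.
    P = reference / reference.sum()
    for _ in range(iters):
        P *= (row_marg / P.sum(axis=1))[:, None]   # match row marginals
        P *= (col_marg / P.sum(axis=0))[None, :]   # match column marginals
    return P

ref = np.ones((3, 4)) / 12.0                 # uninformative reference joint
rows = np.array([0.2, 0.3, 0.5])             # e.g. household-size marginal
cols = np.array([0.1, 0.2, 0.3, 0.4])        # e.g. dwelling-type marginal
P = ipf(ref, rows, cols)
print(P.sum(axis=1), P.sum(axis=0))
```

GCEM replaces the single divergence with a weighted sum over several references, but the fixed-marginal constraint structure is the same.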
Measurements of charge distributions of the fragments in the low energy fission reaction
NASA Astrophysics Data System (ADS)
Wang, Taofeng; Han, Hongyin; Meng, Qinghua; Wang, Liming; Zhu, Liping; Xia, Haihong
2013-01-01
The charge distributions of fragments from the spontaneous fission of 252Cf have been measured using a detector setup consisting of a typical grid ionization chamber and a ΔE-E particle telescope, in which a thin grid ionization chamber served as the ΔE section and the E section was an Au-Si surface-barrier detector. The typical physical quantities of the fragments, such as mass number and kinetic energies, as well as the energy deposited in the gas ΔE detector and the E detector, were derived from the coincident measurement data. The charge distributions of the light fragments for fixed mass number A2* and total kinetic energy (TKE) were obtained by least-squares fits of the response functions of the ΔE detector with multi-Gaussian functions representing the different elements. The results of the charge distributions for some typical fragments are shown in this article, indicating that this detection setup has a charge-resolving capability of Z:ΔZ > 40:1. The experimental method developed in this work for determining the charge distributions of fragments is expected to be employed in the neutron-induced fissions of 232Th and 238U and other low-energy fission reactions.
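The least-squares decomposition of a detector response into a sum of Gaussians, one per element, can be sketched with scipy's `curve_fit`. The peak positions, widths, and noise level below are invented for illustration, not the 252Cf response data:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(x, *p):
    # Sum of Gaussians; p = [amp1, mu1, sig1, amp2, mu2, sig2, ...]
    y = np.zeros_like(x, dtype=float)
    for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return y

x = np.linspace(0.0, 100.0, 500)
# Synthetic detector response: two overlapping element peaks plus noise
rng = np.random.default_rng(2)
y = multi_gauss(x, 1.0, 40.0, 5.0, 0.6, 55.0, 5.0)
y += 0.01 * rng.standard_normal(x.size)

p0 = [1.0, 38.0, 4.0, 0.5, 57.0, 4.0]   # one (amp, mu, sigma) guess per element
popt, _ = curve_fit(multi_gauss, x, y, p0=p0)
print("fitted centroids:", np.round(popt[1::3], 1))
```

The fitted areas of the individual Gaussians then give the relative yields of the elements, i.e. the charge distribution at fixed mass and TKE.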
Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory
NASA Astrophysics Data System (ADS)
Taylor, Jamie M.
2016-09-01
This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.
NASA Astrophysics Data System (ADS)
Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.
2015-11-01
We describe the implementation, and application of a time-dependent, fully nonlinear multi-species Fokker-Planck-Landau collision operator based on the single-species work of Yoon and Chang [Phys. Plasmas 21, 032503 (2014)] in the full-function gyrokinetic particle-in-cell codes XGC1 [Ku et al., Nucl. Fusion 49, 115021 (2009)] and XGCa. XGC simulations include the pedestal and scrape-off layer, where significant deviations of the particle distribution function from a Maxwellian can occur. Thus, in order to describe collisional effects on neoclassical and turbulence physics accurately, the use of a non-linear collision operator is a necessity. Our collision operator is based on a finite volume method using the velocity-space distribution functions sampled from the marker particles. Since the same fine configuration space mesh is used for collisions and the Poisson solver, the workload due to collisions can be comparable to or larger than the workload due to particle motion. We demonstrate that computing time spent on collisions can be kept affordable by applying advanced parallelization strategies while conserving mass, momentum, and energy to reasonable accuracy. We also show results of production scale XGCa simulations in the H-mode pedestal and compare to conventional theory. Work supported by US DOE OFES and OASCR.
NASA Astrophysics Data System (ADS)
Capdeville, H.; Pédoussat, C.; Pitchford, L. C.
2002-02-01
The work presented in the article is a study of the heavy particle (ion and neutral) energy flux distributions to the cathode in conditions typical of discharges used for luminous signs for advertising ("neon" signs). The purpose of this work is to evaluate the effect of the gas mixture on the sputtering of the cathode. We have combined two models for this study: a hybrid model of the electrical properties of the cathode region of a glow discharge and a Monte Carlo simulation of the heavy particle trajectories. Using known sputtering yields for Ne, Ar, and Xe on iron cathodes, we estimate the sputtered atom flux for mixtures of Ar/Ne and Xe/Ne as a function of the percent neon in the mixture.
Soft materials design via self assembly of functionalized icosahedral particles
NASA Astrophysics Data System (ADS)
Muthukumar, Vidyalakshmi Chockalingam
In this work we simulate the self-assembly of icosahedral building blocks using a coarse-grained model of the icosahedral capsid of virus 1m1c. With significant advancements in site-directed functionalization of these macromolecules [1], we propose possible applications of such self-assembled materials for drug delivery. While there have been some reports on the organization of viral particles in solution through functionalization, exploiting this behavior to obtain well-ordered stoichiometric structures has not yet been explored. Our work agrees well with earlier simulation studies of icosahedral gold nanocrystals, giving chain-like patterns [5], and is also broadly in agreement with the wet-lab work of Finn, M. G. et al., who have shown small, predominantly chain-like aggregates with mannose-decorated Cowpea Mosaic Virus (CPMV) [22] and small two-dimensional aggregates with oligonucleotide functionalization on the CPMV capsid [1]. To quantify the results of our coarse-grained molecular dynamics simulations, I developed analysis routines in MATLAB, with which we found the most preferable nearest-neighbor distances (from radial distribution function (RDF) calculations) for different lengths of the functional groups and under different implicit-solvent conditions, and the most frequent coordination number for a virus particle (from histogram plots using the information from the RDF). Visual inspection suggests that our results most likely span the low-temperature limits explored in the works of Finn, M. G. et al., and show a good degree of agreement with the experimental results in [1] at an annealing temperature of 4°C. Our work also reveals the possibility of novel stoichiometric N-mer-type aggregates which could be synthesized using these capsids with appropriate functionalization and solvent conditions.
High-Reynolds Number Viscous Flow Simulations on Embedded-Boundary Cartesian Grids
2016-05-05
[Garbled excerpt of the report's Eq. (6), part of the Spalart-Allmaras model, with ν_t = ν̃ f_v1 and the usual definitions of f_w.] The wall function is coupled to the underlying Cartesian grid through its endpoints. This is illustrated schematically in Fig. 2. At the wall it is [...] by interpolation from the Cartesian grid. This eliminates the problem of u_τ → 0, since the method works in physical coordinates and not plus coordinates.
Nucleation and growth in one dimension. I. The generalized Kolmogorov-Johnson-Mehl-Avrami model
NASA Astrophysics Data System (ADS)
Jun, Suckjoon; Zhang, Haiyang; Bechhoefer, John
2005-01-01
Motivated by a recent application of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model to the study of DNA replication, we consider the one-dimensional (1D) version of this model. We generalize previous work to the case where the nucleation rate is an arbitrary function I(t) and obtain analytical results for the time-dependent distributions of various quantities (such as the island distribution). We also present improved computer simulation algorithms to study the 1D KJMA model. The analytical results and simulations are in excellent agreement.
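The generalized KJMA fraction for an arbitrary nucleation rate I(t) can be evaluated numerically. The sketch below uses a simplified 1D extended-volume form with constant growth speed v (an illustration, not the authors' full time-dependent distributions), and checks the quadrature against the constant-rate closed form:

```python
import numpy as np

def kjma_fraction(nucleation_rate, growth_speed, t, n_steps=2001):
    """Transformed fraction f(t) in a 1D KJMA-type model with an
    arbitrary nucleation rate I(s), via the extended-volume result
    f(t) = 1 - exp(-2 v ∫_0^t I(s) (t - s) ds)."""
    s = np.linspace(0.0, t, n_steps)
    y = nucleation_rate(s) * (t - s)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s))  # trapezoid rule
    return 1.0 - np.exp(-2.0 * growth_speed * integral)

# sanity check against the constant-rate closed form f = 1 - exp(-v I t^2)
I0, v, t = 0.3, 1.5, 2.0
f_num = kjma_fraction(lambda s: np.full_like(s, I0), v, t)
f_exact = 1.0 - np.exp(-v * I0 * t**2)
```

For constant I the inner integral is I t^2/2, recovering the familiar Avrami exponent of 2 in one dimension (nucleation plus bidirectional growth).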
Distributed Control with Collective Intelligence
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Wheeler, Kevin R.; Tumer, Kagan
1998-01-01
We consider systems of interacting reinforcement learning (RL) algorithms that do not work at cross purposes, in that their collective behavior maximizes a global utility function. We call such systems COllective INtelligences (COINs). We present the theory of designing COINs. Then we present experiments validating that theory in the context of two distributed control problems: we show that COINs perform near-optimally in a difficult variant of Arthur's bar problem [Arthur] (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance in the master-slave problem.
Magnetic Pumping as a Source of Particle Heating and Power-Law Distributions in the Solar Wind
Lichko, Emily Rose; Egedal, Jan; Daughton, William Scott; ...
2017-11-27
Based on the rate of expansion of the solar wind, the plasma should cool rapidly as a function of distance from the Sun. Observations show this is not the case. In this work, a magnetic pumping model is developed as a possible explanation for the heating and the generation of the power-law distribution functions observed in the solar wind plasma. Most previous studies in this area focus on the role that the dissipation of turbulent energy on microscopic kinetic scales plays in the overall heating of the plasma. With magnetic pumping, however, particles are energized by the largest-scale turbulent fluctuations, thus bypassing the energy cascade. In contrast to other models, we include the pressure anisotropy term, providing a channel for the large-scale fluctuations to heat the plasma directly. A complete set of coupled differential equations describing the evolution and energization of the distribution function is derived, as well as an approximate closed-form solution. Numerical simulations using the VPIC kinetic code are applied to verify the model's analytical predictions. The results of the model for a realistic solar wind scenario are computed, where thermal streaming of particles is important for generating a phase shift between the magnetic perturbations and the pressure anisotropy. In turn, averaged over a pump cycle, the phase shift permits mechanical work to be converted directly to heat in the plasma. The results of this scenario show that magnetic pumping may account for a significant portion of the solar wind energization.
Action-angle formulation of generalized, orbit-based, fast-ion diagnostic weight functions
NASA Astrophysics Data System (ADS)
Stagner, L.; Heidbrink, W. W.
2017-09-01
Due to the usually complicated and anisotropic nature of the fast-ion distribution function, diagnostic velocity-space weight functions, which indicate the sensitivity of a diagnostic to different fast-ion velocities, are used to facilitate the analysis of experimental data. Additionally, when velocity-space weight functions are discretized, a linear equation relating the fast-ion density and the expected diagnostic signal is formed. In a technique known as velocity-space tomography, many measurements can be combined to create an ill-conditioned system of linear equations that can be solved using various computational methods. However, when velocity-space weight functions (which by definition ignore spatial dependencies) are used, velocity-space tomography is restricted, both by the accuracy of its forward model and also by the availability of spatially overlapping diagnostic measurements. In this work, we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit-space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e., orbit tomography. In-depth derivations of orbit weight functions for the neutron, neutral particle analyzer, and fast-ion D-α diagnostics are also shown.
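The discretized weight-function relation mentioned above is a linear system W f ≈ s. A minimal illustration of solving such an ill-conditioned system (zeroth-order Tikhonov regularization on synthetic data; one of several possible methods, not the paper's specific solver):

```python
import numpy as np

def tikhonov_inversion(W, s, alpha=0.05):
    """Solve the ill-conditioned linear system W f ≈ s from
    discretized weight functions by minimizing
    ||W f - s||^2 + alpha^2 ||f||^2 (zeroth-order Tikhonov)."""
    m, n = W.shape
    A = np.vstack([W, alpha * np.eye(n)])      # stack the penalty rows
    b = np.concatenate([s, np.zeros(n)])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# synthetic check: recover a smooth "distribution" from noisy projections
rng = np.random.default_rng(0)
W = rng.random((40, 20))                        # toy weight matrix
f_true = np.exp(-0.5 * ((np.arange(20) - 10) / 4.0) ** 2)
s = W @ f_true + 0.01 * rng.standard_normal(40)
f_rec = tikhonov_inversion(W, s)
```

The regularization strength alpha trades fidelity to the measurements against smoothness/amplitude of the inferred distribution; in practice it is chosen by, e.g., the L-curve or cross-validation.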
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Mackenzie L.; Hickox, Ryan C.; Black, Christine S.
An important question in extragalactic astronomy concerns the distribution of black hole accretion rates of active galactic nuclei (AGNs). Based on observations at X-ray wavelengths, the observed Eddington ratio distribution appears as a power law, while optical studies have often yielded a lognormal distribution. There is increasing evidence that these observed discrepancies may be due to contamination by star formation and other selection effects. Using a sample of galaxies from the Sloan Digital Sky Survey Data Release 7, we test whether an intrinsic Eddington ratio distribution that takes the form of a Schechter function is consistent with previous work suggesting that young galaxies in optical surveys have an observed lognormal Eddington ratio distribution. We simulate the optical emission line properties of a population of galaxies and AGNs using a broad, instantaneous luminosity distribution described by a Schechter function near the Eddington limit. This simulated AGN population is then compared to observed galaxies via their positions on an emission line excitation diagram and their Eddington ratio distributions. We present an improved method for extracting the AGN distribution using BPT diagnostics that allows us to probe over one order of magnitude lower in Eddington ratio, counteracting the effects of dilution by star formation. We conclude that for optically selected AGNs in young galaxies, the intrinsic Eddington ratio distribution is consistent with a possibly universal, broad power law with an exponential cutoff, as this distribution is also observed in old, optically selected galaxies and in X-rays.
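A Schechter-function Eddington ratio distribution, as invoked above, is simply a power law with an exponential cutoff. A small sketch (parameter values are illustrative, not the paper's fits):

```python
import numpy as np

def schechter(lam, lam_star=1.0, alpha=0.6):
    """Unnormalized Schechter-type Eddington ratio distribution:
    a power law lam^(-alpha) with an exponential cutoff at lam_star
    (lam = L/L_Edd). alpha and lam_star here are illustrative."""
    x = np.asarray(lam, dtype=float) / lam_star
    return x**(-alpha) * np.exp(-x)

# far below the cutoff the function behaves as a pure power law,
# so the ratio of densities a decade apart is ~10^alpha
ratio = schechter(1e-3) / schechter(1e-2)
```

Below lam_star the exponential factor is essentially 1 and the distribution is scale-free; above lam_star (near the Eddington limit) the cutoff suppresses the density, which is the "power law with an exponential cutoff" shape the study concludes is consistent with the data.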
Forecasting the impact of transport improvements on commuting and residential choice
NASA Astrophysics Data System (ADS)
Elhorst, J. Paul; Oosterhaven, Jan
2006-03-01
This paper develops a probabilistic, competing-destinations, assignment model that predicts changes in the spatial pattern of the working population as a result of transport improvements. The choice of residence is explained by a new non-parametric model, which represents an alternative to the popular multinomial logit model. Travel times between zones are approximated by a normal distribution function with a different mean and variance for each pair of zones, whereas previous models only use average travel times. The model's forecast error for the spatial distribution of the Dutch working population is 7% when tested on 1998 base-year data. To incorporate endogenous changes in its causal variables, an almost ideal demand system is estimated to explain the choice of transport mode, and a new economic geography inter-industry model (RAEM) is estimated to explain the spatial distribution of employment. In the application, the model is used to forecast the impact of six mutually exclusive Dutch core-periphery railway proposals in the projection year 2020.
Impact of compressibility and a guide field on Fermi acceleration during magnetic island coalescence
NASA Astrophysics Data System (ADS)
Montag, Peter; Egedal, Jan; Lichko, Emily; Wetherton, Blake
2017-10-01
Previous work has shown that Fermi acceleration can be an effective heating mechanism during magnetic island coalescence, where electrons may undergo repeated reflections as the magnetic field lines contract. This energization has the potential to account for the power-law distributions of particle energy inferred from observations of solar flares. Here, we develop a generalized framework for the analysis of Fermi acceleration that can incorporate the effects of compressibility and non-uniformity along field lines, which have commonly been neglected in previous treatments of the problem. Applying this framework to the simplified case of the uniform flux tube allows us to find both the power-law scaling of the distribution function and the rate at which the power-law behavior develops. We find that a guide magnetic field of order unity effectively suppresses the development of power-law distributions. The work was supported by NASA Grant No. NNX14AC68G, NSF GEM Grant No. 1405166, NSF Award 1404166, and NASA Award NNX15AJ73G.
Application-oriented architecture for multimedia teleservices
NASA Astrophysics Data System (ADS)
Vanrijssen, Erwin; Widya, Ing; Michiels, Eddie
This paper looks into communications capabilities that are required by distributed multimedia applications to achieve relation preserving information exchange. These capabilities are derived by analyzing the notion of 'information exchange' and are embodied in communications functionalities. To emphasize the importance of the users' view, a top-down approach is applied. The revised Open Systems Interconnection (OSI) Application Layer Structure (OSI-ALS) is used to model the communications functionalities and to develop an architecture for composition of multimedia teleservices with these functionalities. This work may therefore be considered an exercise to evaluate the suitability of OSI-ALS for composition of multimedia teleservices.
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-04-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine the noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure.
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
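The inter-station correlation underlying all of the above is itself a simple operation. A minimal sketch (FFT-based cross-correlation of two synthetic noise records; an illustration of the observable, not the Salvus implementation):

```python
import numpy as np

def noise_correlation(u1, u2, max_lag):
    """Cross-correlation C(tau) = sum_t u1(t + tau) * u2(t), computed
    via FFT with zero padding. This is the basic observable of
    ambient-noise interferometry; under diffuse-wavefield assumptions
    it approximates the inter-station Green function."""
    nfft = 2 * len(u1)                     # zero-pad to avoid wrap-around
    spec = np.fft.rfft(u1, nfft) * np.conj(np.fft.rfft(u2, nfft))
    cc = np.fft.irfft(spec, nfft)
    # reorder circular output into lags -max_lag .. +max_lag
    cc = np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, cc
```

For a signal and its delayed copy, the correlation peaks at the (negative of the) delay, which is how inter-station travel times are read off noise correlations.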
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
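Forward propagation, the common direction described above, reduces to Monte Carlo sampling of the parameter subspace. A toy sketch (the model function below is a stand-in, not the actual oxidation-ditch model):

```python
import numpy as np

def forward_propagation(model, means, stds, n_samples=20000, seed=1):
    """Forward uncertainty propagation by Monte Carlo: sample the
    parameter subspace (independent Gaussians here) and push every
    sample through the model to obtain the output distribution."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(means, stds, size=(n_samples, len(means)))
    return np.array([model(th) for th in thetas])

# toy stand-in for a treatment-plant output: y = k1 * exp(-k2)
outputs = forward_propagation(lambda th: th[0] * np.exp(-th[1]),
                              means=[2.0, 0.5], stds=[0.1, 0.05])
```

Backward propagation would instead start from a required output distribution and ask which parameter subspace is consistent with it, which is why it informs experiment design and parameter-bound tightening.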
Refractive laser beam shaping by means of a functional differential equation based design approach.
Duerr, Fabian; Thienpont, Hugo
2014-04-07
Many laser applications require specific irradiance distributions to ensure optimal performance. Geometric optical design methods based on numerical calculation of two plano-aspheric lenses have been thoroughly studied in the past. In this work, we present an alternative new design approach based on functional differential equations that allows direct calculation of the rotational symmetric lens profiles described by two-point Taylor polynomials. The formalism is used to design a Gaussian to flat-top irradiance beam shaping system but also to generate a more complex dark-hollow Gaussian (donut-like) irradiance distribution with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of both calculated solutions and emphasize the potential of this design approach for refractive beam shaping applications.
Small-x Asymptotics of the Quark Helicity Distribution.
Kovchegov, Yuri V; Pitonyak, Daniel; Sievert, Matthew D
2017-02-03
We construct a numerical solution of the small-x evolution equations derived in our recent work [J. High Energy Phys. 01 (2016) 072] for the (anti)quark transverse momentum dependent helicity distributions (TMDs) and parton distribution functions (PDFs), as well as the g_1 structure function. We focus on the case of large N_c, where one finds a closed set of equations. Employing the extracted intercept, we are able to predict directly from theory the behavior of the quark helicity PDFs at small x, which should have important phenomenological consequences. We also give an estimate of how much of the proton's spin carried by the quarks may be at small x and what impact this has on the spin puzzle.
Collective intelligence for control of distributed dynamical systems
NASA Astrophysics Data System (ADS)
Wolpert, D. H.; Wheeler, K. R.; Tumer, K.
2000-03-01
We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, The American Economic Review, 84 (1994) 406; D. Challet and Y. C. Zhang, Physica A, 256 (1998) 514). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not "work at cross purposes", in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.
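The bar problem itself is easy to simulate. The sketch below uses a much-simplified global reinforcement signal rather than the paper's free-energy formalism, but it shows attendance self-organizing around capacity instead of frustrating the global goal:

```python
import numpy as np

def el_farol(n_agents=100, capacity=60, n_rounds=1000, lr=0.02, seed=0):
    """Toy El Farol / minority-game dynamics: each agent keeps a
    propensity to attend, and all propensities are nudged by a global
    reinforcement signal (down after overcrowded nights, up after
    pleasant ones). A deliberately crude learner, not the COIN theory."""
    rng = np.random.default_rng(seed)
    p = np.full(n_agents, 0.5)             # initial attendance propensities
    attendance = np.empty(n_rounds)
    for t in range(n_rounds):
        go = rng.random(n_agents) < p
        attendance[t] = go.sum()
        crowded = attendance[t] > capacity
        p = np.clip(p + (-lr if crowded else lr), 0.0, 1.0)
    return attendance

att = el_farol()
```

After a short transient, mean attendance hovers near the capacity of 60: the system finds the minority-game equilibrium even with this primitive update, though (unlike a COIN) nothing guarantees the collective dynamics optimizes a global utility.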
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampère equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrié, Michael; Shadwick, B. A.
2016-01-04
Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampère system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
A two-layered classifier based on the radial basis function for the screening of thalassaemia.
Masala, G L; Golosio, B; Cutzu, R; Pola, R
2013-11-01
The thalassaemias are blood disorders with hereditary transmission. Their distribution is global, with particular incidence in areas affected by malaria. Their diagnosis is mainly based on haematologic and genetic analyses. The aim of this study was to differentiate between persons with the thalassaemia trait and normal subjects by inspecting characteristics of haemochromocytometric data. The paper proposes an original method that is useful in screening for thalassaemia. A complete working system with a friendly graphical user interface is presented. A unique feature of the presented work is the adoption of a two-layered classification system based on radial basis functions, which improves the performance of the system. © 2013 Elsevier Ltd. All rights reserved.
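A two-layer radial-basis-function classifier of the kind described can be sketched compactly: a hidden layer of Gaussian basis functions followed by a linear output layer. The synthetic data and centre-selection rule below are illustrative, not the paper's screening system:

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian radial basis layer: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def train_rbf_classifier(X, y, gamma=0.5):
    """Two-layer RBF classifier sketch: Gaussian hidden units centred
    on a few training points, linear output weights fitted by least
    squares on +/-1 labels."""
    centers = np.vstack([X[:5], X[-5:]])   # crude centre choice for the sketch
    Phi = rbf_features(X, centers, gamma)
    w, *_ = np.linalg.lstsq(Phi, np.where(y > 0, 1.0, -1.0), rcond=None)
    return lambda Xnew: rbf_features(Xnew, centers, gamma) @ w > 0

# synthetic two-class data standing in for haemochromocytometric features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(4.0, 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
predict = train_rbf_classifier(X, y)
acc = (predict(X) == (y > 0)).mean()
```

Real systems typically pick the centres by clustering (e.g. k-means) and tune gamma and the number of units by cross-validation.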
Forte, R; Pesce, C; De Vito, G; Boreham, C A G
2017-01-01
To examine the relationship between regional and whole-body fat accumulation and core cognitive executive functions. Cross-sectional study. 78 healthy men and women aged between 65 and 75 years, recruited through a consumer database. DXA measured percentage total body fat, android and gynoid distribution, and the android/gynoid ratio; inhibition and working memory updating were assessed through the Random Number Generation test and cognitive flexibility by the Trail Making test. First-order partial correlations between regional body fat and cognitive executive function were computed, partialling out the effects of whole-body fat. Moderation analysis was performed to verify the effect of gender on the body fat-cognition relationship. Results showed a differentiated pattern of fat-cognition relationship depending on fat localization and type of cognitive function. Statistically significant relationships were observed between working memory updating and: android fat (r = -0.232; p = 0.042), gynoid fat (r = 0.333; p = 0.003) and android/gynoid ratio (r = -0.272; p = 0.017). Separating genders, the only significant relationship was observed in females, between working memory updating and gynoid fat (r = 0.280; p = 0.045). In spite of gender differences in both working memory updating and gynoid body fat levels, moderation analysis did not show an effect of gender on the relationship between gynoid fat and working memory updating. Results suggest a protective effect of gynoid body fat and a deleterious effect of android body fat. Although excessive body fat increases the risk of developing CVD, metabolic and cognitive problems, maintaining a certain proportion of gynoid fat may help prevent cognitive decline, particularly in older women.
Guidelines for optimal body composition maintenance for the elderly should not target indiscriminate weight loss, but weight maintenance through body fat/lean mass control based on non-pharmacological tools such as physical exercise, known to have protective effects against CVD risk factors and age-related cognitive deterioration.
Basic features of the pion valence-quark distribution function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Lei; Mezrag, Cédric; Moutarde, Hervé
2014-10-07
The impulse-approximation expression used hitherto to define the pion's valence-quark distribution function is flawed because it omits contributions from the gluons which bind quarks into the pion. A corrected leading-order expression produces the model-independent result that quarks dressed via the rainbow-ladder truncation, or any practical analogue, carry all the pion's light-front momentum at a characteristic hadronic scale. Corrections to the leading contribution may be divided into two classes, responsible for shifting dressed-quark momentum into glue and sea-quarks. Working with available empirical information, we use an algebraic model to express the principal impact of both classes of corrections. This enables a realistic comparison with experiment that allows us to highlight the basic features of the pion's measurable valence-quark distribution, q_π(x); namely, at a characteristic hadronic scale, q_π(x) ~ (1-x)^2 for x ≳ 0.85, and the valence quarks carry approximately two-thirds of the pion's light-front momentum.
A single molecule perspective on the functional diversity of in vitro evolved β-glucuronidase.
Liebherr, Raphaela B; Renner, Max; Gorris, Hans H
2014-04-23
The mechanisms that drive the evolution of new enzyme activity have been investigated by comparing the kinetics of wild-type and in vitro evolved β-glucuronidase (GUS) at the single molecule level. Several hundred single GUS molecules were separated in large arrays of 62,500 ultrasmall reaction chambers etched into the surface of a fused silica slide to observe their individual substrate turnover rates in parallel by fluorescence microscopy. Individual GUS molecules feature long-lived but divergent activity states, and their mean activity is consistent with classic Michaelis-Menten kinetics. The large number of single molecule substrate turnover rates is representative of the activity distribution within an entire enzyme population. Partially evolved GUS displays a much broader activity distribution among individual enzyme molecules than wild-type GUS. The broader activity distribution indicates a functional division of work between individual molecules in a population of partially evolved enzymes that, as so-called generalists, are characterized by their promiscuous activity with many different substrates.
NASA Astrophysics Data System (ADS)
Benzon, K. B.; Sheena, Mary Y.; Panicker, C. Yohannan; Armaković, Stevan; Armaković, Sanja J.; Pradhan, Kiran; Nanda, Ashis Kumar; Van Alsenoy, C.
2017-02-01
In this work we have investigated in detail the spectroscopic and reactive properties of a newly synthesized imidazole derivative, 1-hydroxy-2-(4-hydroxyphenyl)-4,5-dimethyl-imidazole 3-oxide (HHPDI). FT-IR and NMR spectra were measured and compared with theoretical data provided by calculations of the potential energy distribution and chemical shifts, respectively. Insight into the global reactivity properties has been obtained by analysis of frontier molecular orbitals, while local reactivity properties have been investigated by analysis of charge distribution, ionization energies and Fukui functions. NBO analysis was also employed to understand the stability of the molecule, while the hyperpolarizability has been calculated in order to assess the nonlinear optical properties of the title molecule. Sensitivity towards autoxidation and hydrolysis mechanisms has been investigated by calculations of bond dissociation energies and radial distribution functions, respectively. A molecular docking study was also performed in order to determine the pharmaceutical potential of the investigated molecule.
Distributed Operations Planning
NASA Technical Reports Server (NTRS)
Fox, Jason; Norris, Jeffrey; Powell, Mark; Rabe, Kenneth; Shams, Khawaja
2007-01-01
Maestro software provides a secure and distributed mission planning system for long-term missions in general, and the Mars Exploration Rover Mission (MER) specifically. Maestro, the successor to the Science Activity Planner, has a heavy emphasis on portability and distributed operations, and requires no data replication or expensive hardware, instead relying on a set of services functioning on JPL institutional servers. Maestro works on most current computers with network connections, including laptops. When browsing downlink data from a spacecraft, Maestro functions much like a Web browser. After authenticating the user, it connects to a database server to query an index of data products. It then contacts a Web server to download and display the actual data products. The software also includes collaboration support based upon a highly reliable messaging system. Modifications made to targets in one instance are quickly and securely transmitted to other instances of Maestro. The back end that has been developed for Maestro could benefit many future missions by reducing the cost of centralized operations system architecture.
Zhao, Lei; Liao, Xiu-jun; Yang, Guan-gen; Mao, Wei-ming; Zhang, Xiu-feng; Deng, Qun; Wu, Wen-jing
2014-10-01
To explore the distribution characteristics of basic syndromes and their related factors in patients with chronic functional constipation (CFC), the complete data of 538 patients with CFC were collected and an initial database was established with Epidata 3.0. TCM syndrome typing was performed, and the distribution characteristics of basic syndromes were analyzed using SPSS 17.0. Univariate and multivariate logistic regression analyses were performed with SPSS 17.0 to determine factors related to the basic syndromes, such as age, profession, sleep quality, depression, mental stress, interpersonal relations, work fatigue, stimulating beverages, exercise, and Western-medicine type of constipation. The TCM syndrome frequencies of CFC patients, sequenced from high to low, were qi deficiency syndrome (380 cases, 70.6%), qi stagnation syndrome (337 cases, 62.6%), blood deficiency syndrome (234 cases, 43.5%), yin deficiency syndrome (220 cases, 40.9%), yang deficiency syndrome (197 cases, 36.6%), and others (58 cases, 10.8%). Most patients presented with complex syndromes, the most common being qi deficiency complicated by qi stagnation (275 cases, 51.1%) and qi deficiency complicated by blood deficiency (222 cases, 41.3%). Aging, work fatigue, and exercise were the main factors related to qi deficiency syndrome (P < 0.01, P < 0.05). Poor emotional state (depression and anxiety tendencies), mental stress, interpersonal relations, and defecation-barrier constipation were the main factors related to qi stagnation syndrome (P < 0.01). Sleep quality and poor emotional state (depression and anxiety tendencies) were the main factors related to blood deficiency syndrome (P < 0.01, P < 0.05). Stimulating beverages were the main factor related to yin deficiency syndrome (P < 0.05). Mental work and slow-transit constipation were the main factors related to yang deficiency syndrome (P < 0.01, P < 0.05).
CFC typically presents with complex syndromes, the most common being qi deficiency complicated by qi stagnation and qi deficiency complicated by blood deficiency. Related factors such as age, profession, sleep quality, poor emotional state (depression and anxiety tendencies), mental stress, interpersonal relations, work fatigue, stimulating beverages, exercise, and Western-medicine type of constipation were associated with the distribution of CFC syndromes.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. Problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in the "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions . 1) We show that this copula can be then used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. 
In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
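To make the copula idea concrete, here is a minimal sketch of fitting a Gaussian copula to the lag-1 dependence of a model-error series. The synthetic AR(1) errors, the rank-based marginals and all parameter values are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "past" model errors: an autocorrelated residual series
n = 500
e = np.empty(n)
e[0] = 0.0
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.5)

def normal_scores(x):
    """Map data to standard-normal scores via rank-based pseudo-observations."""
    ranks = stats.rankdata(x)
    u = ranks / (len(x) + 1.0)          # keeps u strictly inside (0, 1)
    return stats.norm.ppf(u)

# Fit the lag-1 dependence of the Gaussian copula from consecutive error pairs
z = normal_scores(e)
rho = np.corrcoef(z[:-1], z[1:])[0, 1]

def copula_log_density(z1, z2, rho):
    """Log density of a bivariate Gaussian copula at normal scores (z1, z2)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    joint = stats.multivariate_normal(mean=[0, 0], cov=cov).logpdf([z1, z2])
    return joint - stats.norm.logpdf(z1) - stats.norm.logpdf(z2)

print(f"estimated lag-1 copula correlation: {rho:.2f}")
```

In a full inference, pairwise (or higher-dimensional) copula densities over the residual series would play the role of the likelihood function in Bayes' theorem.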
Enhanced electron emission from coated metal targets: Effect of surface thickness on performance
NASA Astrophysics Data System (ADS)
Madas, Saibabu; Mishra, S. K.; Upadhyay Kahaly, Mousumi
2018-03-01
In this work, we establish an analytical formalism to address temperature-dependent electron emission from a metallic target with a thin coating operating at a finite temperature. Taking into account a three-dimensional parabolic energy dispersion for the target (base) material and a suitable thickness-dependent energy dispersion for the coating layer, Fermi-Dirac statistics of the electron energy distribution, and Fowler's mechanism of electron emission, we discuss the dependence of the emission flux on physical properties such as the Fermi level, work function, thickness of the coating material, and operating temperature. Our systematic estimation of how the coating thickness affects the emission current demonstrates superior emission characteristics for a thin coating layer at high temperature (above 1000 K), whereas in the low-temperature regime a better response is expected from a thicker coating layer. This underlying behavior appears to be essentially identical for all configurations in which the work function of the coating layer is lower than that of the bulk target. The analysis and predictions could be useful in designing new coated materials with suitable thickness for applications in thin film devices and field emitters.
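The paper's thickness-dependent formalism is not reproduced here, but the basic temperature and work-function dependence of the emission flux can be sketched with the classical Richardson-Dushman law as a simplified stand-in; the work-function values below are hypothetical.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def richardson_current(T, phi, A=1.2017e6):
    """Thermionic emission current density (A/m^2) from the classic
    Richardson-Dushman law, J = A T^2 exp(-phi / kT), used here as a
    simplified stand-in for the paper's Fowler-type emission model."""
    return A * T**2 * np.exp(-phi / (KB * T))

# Illustrative work functions (hypothetical values, eV)
phi_bulk, phi_coating = 4.5, 2.1

for T in (800.0, 1200.0, 1600.0):
    j_bulk = richardson_current(T, phi_bulk)
    j_coat = richardson_current(T, phi_coating)
    print(f"T = {T:6.0f} K  enhancement from low-phi coating: {j_coat / j_bulk:.3g}")
```

The exponential sensitivity to phi/kT is why a low-work-function coating dominates the emission behavior at elevated temperature.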
Dynamic reconfiguration of frontal brain networks during executive cognition in humans
Braun, Urs; Schäfer, Axel; Walter, Henrik; Erk, Susanne; Romanczuk-Seiferth, Nina; Haddad, Leila; Schweiger, Janina I.; Grimm, Oliver; Heinz, Andreas; Tost, Heike; Meyer-Lindenberg, Andreas; Bassett, Danielle S.
2015-01-01
The brain is an inherently dynamic system, and executive cognition requires dynamically reconfiguring, highly evolving networks of brain regions that interact in complex and transient communication patterns. However, a precise characterization of these reconfiguration processes during cognitive function in humans remains elusive. Here, we use a series of techniques developed in the field of “dynamic network neuroscience” to investigate the dynamics of functional brain networks in 344 healthy subjects during a working-memory challenge (the “n-back” task). In contrast to a control condition, in which dynamic changes in cortical networks were spread evenly across systems, the effortful working-memory condition was characterized by a reconfiguration of frontoparietal and frontotemporal networks. This reconfiguration, which characterizes “network flexibility,” employs transient and heterogeneous connectivity between frontal systems, which we refer to as “integration.” Frontal integration predicted neuropsychological measures requiring working memory and executive cognition, suggesting that dynamic network reconfiguration between frontal systems supports those functions. Our results characterize dynamic reconfiguration of large-scale distributed neural circuits during executive cognition in humans and have implications for understanding impaired cognitive function in disorders affecting connectivity, such as schizophrenia or dementia. PMID:26324898
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-11-01
Self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. The method is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long-time-scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations also makes SGLD an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble-average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force, so that it approximately samples the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
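A schematic of the guiding-force idea, reduced to one dimension: the guiding force is a local time average of the momentum, added to an ordinary Langevin step, which amplifies slow motions and speeds up barrier crossing. The parameter names and values and the double-well potential are illustrative assumptions, not the published SGLDfp equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic 1-D self-guided Langevin dynamics on a double-well potential
# U(x) = (x^2 - 1)^2; all parameter values here are illustrative.
def sgld_step(x, p, p_avg, dt=0.01, gamma=1.0, lam=0.3, tloc=0.5, kT=1.0):
    force = -4.0 * x * (x**2 - 1.0)
    p_avg += (dt / tloc) * (p - p_avg)            # local momentum average
    noise = rng.normal(scale=np.sqrt(2.0 * gamma * kT * dt))
    p += dt * (force - gamma * p + lam * gamma * p_avg) + noise
    x += dt * p
    return x, p, p_avg

x, p, p_avg = -1.0, 0.0, 0.0                      # start in the left well
crossings, prev_side = 0, -1
for _ in range(200_000):
    x, p, p_avg = sgld_step(x, p, p_avg)
    side = 1 if x > 0 else -1
    if side != prev_side:
        crossings += 1
        prev_side = side
print("barrier crossings in the run:", crossings)
```

Setting lam=0 recovers an ordinary Langevin integrator, which makes the acceleration from the guiding term easy to compare directly.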
Delgado-Baquerizo, Manuel; Fry, Ellen L; Eldridge, David J; de Vries, Franciska T; Manning, Peter; Hamonts, Kelly; Kattge, Jens; Boenisch, Gerhard; Singh, Brajesh K; Bardgett, Richard D
2018-04-19
We lack strong empirical evidence for links between plant attributes (plant community attributes and functional traits) and the distribution of soil microbial communities at large spatial scales. Using datasets from two contrasting regions and ecosystem types in Australia and England, we report that aboveground plant community attributes, such as diversity (species richness) and cover, and functional traits can predict a unique portion of the variation in the diversity (number of phylotypes) and community composition of soil bacteria and fungi that cannot be explained by soil abiotic properties and climate. We further identify the relative importance and evaluate the potential direct and indirect effects of climate, soil properties and plant attributes in regulating the diversity and community composition of soil microbial communities. Finally, we deliver a list of examples of common taxa from Australia and England that are strongly related to specific plant traits, such as specific leaf area index, leaf nitrogen and nitrogen fixation. Together, our work provides new evidence that plant attributes, especially plant functional traits, can predict the distribution of soil microbial communities at the regional scale and across two hemispheres. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
ERIC Educational Resources Information Center
Bastiaansen, Marcel C. M.; Oostenveld, Robert; Jensen, Ole; Hagoort, Peter
2008-01-01
An influential hypothesis regarding the neural basis of the mental lexicon is that semantic representations are neurally implemented as distributed networks carrying sensory, motor and/or more abstract functional information. This work investigates whether the semantic properties of words partly determine the topography of such networks. Subjects…
Measurement and modeling of diameter distributions of particulate matter in terrestrial solutions
NASA Astrophysics Data System (ADS)
Levia, Delphis F.; Michalzik, Beate; Bischoff, Sebastian; NäThe, Kerstin; Legates, David R.; Gruselle, Marie-Cecile; Richter, Susanne
2013-04-01
Particulate matter (PM) plays an important role in the biogeosciences, affecting biosphere-atmosphere interactions and ecosystem health. This is the first known study to quantify and model PM diameter distributions of bulk precipitation, throughfall, stemflow, and organic layer (Oa) solution. Solutions were collected from a European beech (Fagus sylvatica L.) forest during leafed and leafless periods. Following scanning electron microscopy and image analysis, PM distributions were quantified and then modeled with the Box-Cox transformation. Based on an analysis of 43,278 individual particulates, the median PM diameter of all solutions was around 3.0 µm. All PM diameter frequency distributions were significantly right-skewed. Optimal power transformations of the PM diameter distributions were between -1.00 and -1.56. The utility of this model is that large samples with a similar probability density function can be generated for similar forests. Further work on the shape and chemical composition of particulates is warranted.
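As a sketch of the modeling step, the following applies a maximum-likelihood Box-Cox transformation to a synthetic right-skewed diameter sample. The lognormal population is an illustrative assumption, so the fitted power differs from the -1.00 to -1.56 range reported for the field data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic right-skewed particle diameters (µm); lognormal is an
# illustrative choice, not the measured field distribution.
diam = rng.lognormal(mean=np.log(3.0), sigma=0.6, size=4300)

# Maximum-likelihood Box-Cox power transformation (lmbda fitted by scipy)
transformed, lam = stats.boxcox(diam)

print(f"median diameter: {np.median(diam):.2f} um")
print(f"optimal Box-Cox power: {lam:.2f}")
print(f"skewness before/after: {stats.skew(diam):.2f} / {stats.skew(transformed):.2f}")
```

The fitted power brings the skewness close to zero, which is the sense in which a single transformed model can stand in for many similarly distributed samples.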
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
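The monkey model is easy to reproduce numerically. This sketch draws letter probabilities as spacings of a random division of the unit interval, enumerates all words up to a length cutoff, and fits the rank-probability slope away from the tails; the alphabet size, space probability and length cutoff are illustrative choices, and the slope for a small alphabet need not be close to -1.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Letter probabilities as spacings of a random division of the unit interval
n_letters = 8
cuts = np.sort(rng.uniform(size=n_letters - 1))
p_letters = np.diff(np.concatenate([[0.0], cuts, [1.0]]))

p_space = 0.2                       # probability of hitting the space bar
p_letters = (1.0 - p_space) * p_letters

# Enumerate all words up to a length cutoff and their exact probabilities
probs = []
for length in range(1, 6):
    for word in product(range(n_letters), repeat=length):
        probs.append(p_space * np.prod([p_letters[i] for i in word]))
probs = np.sort(np.array(probs))[::-1]

# Fit the rank-probability power law in the middle of the distribution
ranks = np.arange(1, len(probs) + 1)
sl = slice(len(probs) // 100, len(probs) // 2)
slope = np.polyfit(np.log(ranks[sl]), np.log(probs[sl]), 1)[0]
print(f"fitted Zipf-like exponent: {slope:.2f}")
```

Increasing the alphabet size (and the length cutoff) moves the fitted exponent toward the universal value of -1 described above.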
Bartés-Serrallonga, M; Adan, A; Solé-Casals, J; Caldú, X; Falcón, C; Pérez-Pàmies, M; Bargalló, N; Serra-Grabulosa, J M
2014-04-01
One of the most widely used paradigms in the study of attention is the Continuous Performance Test (CPT). The identical pairs version (CPT-IP) has been widely used to evaluate attention deficits in developmental, neurological and psychiatric disorders. However, the specific locations and relative distribution of brain activation in networks identified with functional imaging vary significantly with differences in task design. Our aim was to design a task to evaluate sustained attention using functional magnetic resonance imaging (fMRI), and thus to provide data for research concerned with the role of these functions. Forty right-handed, healthy students (50% women; age range: 18-25 years) were recruited. A CPT-IP implemented as a block design was used to assess sustained attention during the fMRI session. The behavioural results from the CPT-IP task showed good performance in all subjects, with more than 80% hits. The fMRI results showed that the CPT-IP task activates a network of frontal, parietal and occipital areas related to executive and attentional functions. In relation to the use of the CPT in the study of attention and working memory, this task provides normative data in healthy adults and could be useful for evaluating disorders with attentional and working memory deficits.
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.
2016-05-01
Background: Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r process to fuel cycle optimization for nuclear energy. The need for a predictive theory applicable where no data are available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. Purpose: In this work, we calculate the pre-neutron emission charge and mass distributions of the fission fragments formed in the neutron-induced fission of 239Pu using a microscopic method based on nuclear density functional theory (DFT). Methods: Our theoretical framework is the nuclear energy density functional (EDF) method, where large-amplitude collective motion is treated adiabatically by using the time-dependent generator coordinate method (TDGCM) under the Gaussian overlap approximation (GOA). In practice, the TDGCM is implemented in two steps. First, a series of constrained EDF calculations map the configuration and potential-energy landscape of the fissioning system for a small set of collective variables (in this work, the axial quadrupole and octupole moments of the nucleus). Then, nuclear dynamics is modeled by propagating a collective wave packet on the potential-energy surface. Fission fragment distributions are extracted from the flux of the collective wave packet through the scission line. Results: We find that the main characteristics of the fission charge and mass distributions can be well reproduced by existing energy functionals even in two-dimensional collective spaces. Theory and experiment agree typically within two mass units for the position of the asymmetric peak. As expected, calculations are sensitive to the structure of the initial state and the prescription for the collective inertia. We emphasize that results are also sensitive to the continuity of the collective landscape near scission. 
Conclusions: Our analysis confirms that the adiabatic approximation provides an effective scheme to compute fission fragment yields. It also suggests that, at least in the framework of nuclear DFT, three-dimensional collective spaces may be a prerequisite to reach 10% accuracy in predicting pre-neutron emission fission fragment yields.
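The final step above, extracting yields from the flux of a collective wave packet through the scission line, can be illustrated in a drastically simplified one-dimensional setting: free split-step propagation of a Gaussian packet, with the accumulated probability flux recorded at a fixed "scission" coordinate. The grid, flat potential and packet parameters are illustrative stand-ins, not a TDGCM calculation on a computed potential-energy surface.

```python
import numpy as np

# Grid and initial collective wave packet
n, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
hbar, mass = 1.0, 1.0

k0 = 2.0                                   # mean momentum toward "scission"
psi = np.exp(-((x + 30) ** 2) / 20.0) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

V = np.zeros_like(x)                       # flat landscape for this sketch
dt = 0.05
i_sciss = np.argmin(np.abs(x - 30.0))      # index of the "scission line"

flux_acc = 0.0
for _ in range(800):
    # Strang split-step Fourier propagation: half V, full T, half V
    psi *= np.exp(-0.5j * V * dt / hbar)
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / mass) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt / hbar)
    # probability flux j = (hbar/m) Im(psi* dpsi/dx) at the scission point
    dpsi = (psi[i_sciss + 1] - psi[i_sciss - 1]) / (2 * dx)
    flux_acc += (hbar / mass) * np.imag(np.conj(psi[i_sciss]) * dpsi) * dt

print(f"probability passed through scission: {flux_acc:.3f}")
```

In an actual TDGCM+GOA calculation the same flux bookkeeping is done along a scission line in a multidimensional collective space, binned by fragment mass and charge.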
Quantum-Classical Correspondence Principle for Work Distributions
NASA Astrophysics Data System (ADS)
Jarzynski, Christopher; Quan, H. T.; Rahav, Saar
2015-07-01
For closed quantum systems driven away from equilibrium, work is often defined in terms of projective measurements of initial and final energies. This definition leads to statistical distributions of work that satisfy nonequilibrium work and fluctuation relations. While this two-point measurement definition of quantum work can be justified heuristically by appeal to the first law of thermodynamics, its relationship to the classical definition of work has not been carefully examined. In this paper, we employ semiclassical methods, combined with numerical simulations of a driven quartic oscillator, to study the correspondence between classical and quantal definitions of work in systems with 1 degree of freedom. We find that a semiclassical work distribution, built from classical trajectories that connect the initial and final energies, provides an excellent approximation to the quantum work distribution when the trajectories are assigned suitable phases and are allowed to interfere. Neglecting the interferences between trajectories reduces the distribution to that of the corresponding classical process. Hence, in the semiclassical limit, the quantum work distribution converges to the classical distribution, decorated by a quantum interference pattern. We also derive the form of the quantum work distribution at the boundary between classically allowed and forbidden regions, where this distribution tunnels into the forbidden region. Our results clarify how the correspondence principle applies in the context of quantum and classical work distributions and contribute to the understanding of work and nonequilibrium work relations in the quantum regime.
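The two-point measurement definition can be made concrete in a few lines: for a sudden quench of a small Hamiltonian, the work distribution is built from thermal initial populations and squared overlaps of initial and final eigenstates. The random 4-level system and the inverse temperature are illustrative; as a consistency check, this distribution satisfies the Jarzynski equality exactly.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-point measurement work distribution for a sudden quench H_i -> H_f
def work_distribution(Hi, Hf, beta=1.0):
    Ei, Vi = np.linalg.eigh(Hi)
    Ef, Vf = np.linalg.eigh(Hf)
    p_init = np.exp(-beta * Ei)
    p_init /= p_init.sum()                    # thermal initial occupations
    P = np.abs(Vf.conj().T @ Vi) ** 2         # |<m_f|n_i>|^2 transition probs
    W = Ef[:, None] - Ei[None, :]             # W_mn = E_m^f - E_n^i
    prob = P * p_init[None, :]
    return W.ravel(), prob.ravel()

A = rng.normal(size=(4, 4)); Hi = (A + A.T) / 2
B = rng.normal(size=(4, 4)); Hf = (B + B.T) / 2

beta = 1.0
W, p = work_distribution(Hi, Hf, beta)

# Jarzynski equality: <exp(-beta W)> = Z_f / Z_i
lhs = np.sum(p * np.exp(-beta * W))
Zi = np.exp(-beta * np.linalg.eigvalsh(Hi)).sum()
Zf = np.exp(-beta * np.linalg.eigvalsh(Hf)).sum()
print(f"<exp(-beta W)> = {lhs:.6f},  Z_f/Z_i = {Zf / Zi:.6f}")
```

The semiclassical analysis in the paper asks how this discrete, interference-decorated distribution approaches its classical counterpart; this sketch only reproduces the quantum definition itself.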
Diffusion theory of decision making in continuous report.
Smith, Philip L
2016-07-01
I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
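A minimal simulation of the model: Euler-Maruyama steps of a 2-D diffusion with vector drift, terminated at the bounding circle, yielding a report (the exit angle) and a decision time (the exit time). The stimulus direction, drift magnitude and criterion below are illustrative parameter choices, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

# 2-D diffusion on a disk: drift direction = stimulus identity,
# drift magnitude = memory quality, disk radius = decision criterion.
def trial(theta_stim=0.5, drift_mag=1.5, sigma=1.0, a=1.0, dt=1e-3):
    mu = drift_mag * np.array([np.cos(theta_stim), np.sin(theta_stim)])
    z = np.zeros(2)
    t = 0.0
    while np.hypot(*z) < a:                        # run until disk exit
        z += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
        t += dt
    return np.arctan2(z[1], z[0]), t               # report, decision time

reports, times = zip(*(trial() for _ in range(400)))
reports = np.array(reports)
err = np.angle(np.exp(1j * (reports - 0.5)))       # circular report error
print(f"mean decision time: {np.mean(times):.2f}")
print(f"circular mean error: {err.mean():+.2f} rad")
```

Simulated exit angles cluster around the stimulus direction with a von-Mises-like spread, which is the quantity the analytical Girsanov/Bessel-process treatment characterizes exactly.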
Petit, Pascal; Maître, Anne; Persoons, Renaud; Bicout, Dominique J
2017-04-15
The health risk assessment associated with polycyclic aromatic hydrocarbon (PAH) mixtures faces three main issues: the lack of knowledge regarding occupational exposure mixtures, accurate chemical characterization, and the estimation of cancer risks. Our aims were to describe the industries in which PAH exposures are encountered and to construct working context-exposure function matrices, enabling estimation of both the expected PAH exposure level and the chemical characteristic profile of workers based on their occupational sector and activity. Overall, 1729 PAH samplings from the Exporisq-HAP database (E-HAP) were used. An approach was developed to (i) organize E-HAP in terms of the most detailed unit of description of a job and (ii) structure and subdivide the organized E-HAP into groups of detailed industry units, with each group described by the distribution of concentrations of gaseous and particulate PAHs, resulting in working context-exposure function matrices. PAH exposures were described on two scales: phase (total particulate and gaseous PAH concentration distributions) and congener (concentration distributions of 16 congener PAHs). Nine industrial sectors were organized, according to short-term, mid-term and long-term exposure durations, into 5, 36 and 47 detailed industry units, which were structured, respectively, into 2, 4 and 7 groups for the phase scale and 2, 3 and 6 groups for the congener scale, each corresponding to a distinct distribution of concentrations of several PAHs. On the congener scale, in groups that used products derived from coal, the correlations between the PAHs were strong; in groups that used products derived from petroleum, the PAHs in the mixtures were poorly correlated with each other.
The current findings provide insights into both the PAH emissions generated by various industrial processes and their associated occupational exposures and may be further used to develop risk assessment analyses of cancers associated with PAH mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
Study of Cs/NF3 adsorption on GaN (0 0 1) surface
NASA Astrophysics Data System (ADS)
Diao, Yu; Liu, Lei; Xia, Sihao; Kong, Yike
2017-03-01
To investigate the optoelectronic properties of Cs/NF3 adsorption on the GaN (0 0 1) photocathode surface, adsorption models of Cs-only, Cs/O and Cs/NF3 adsorption on the clean GaN surface were established. Atomic structures, work function, adsorption energy, Mulliken charge distribution, density of states and optical properties of all these adsorption systems were calculated using first principles. Compared with Cs/O co-adsorption, Cs/NF3 co-adsorption shows better stability and a larger reduction in work function, which is more beneficial for photoemission efficiency. Besides, the surface band structure of the Cs/NF3 co-adsorption system exhibits metallic properties, implying good conductivity. Meanwhile, near the valence band maximum of the Cs/NF3 co-adsorption system, more acceptor levels emerge to form a p-type emission surface, which is conducive to the escape of photoelectrons. In addition, the imaginary part of the dielectric function curve and the absorption curve of the Cs/NF3 co-adsorption system both move towards the lower energy side. This work can guide the optimization of the activation process of NEA GaN photocathodes.
Ultrasound beam transmission using a discretely orthogonal Gaussian aperture basis
NASA Astrophysics Data System (ADS)
Roberts, R. A.
2018-04-01
Work is reported on development of a computational model for ultrasound beam transmission at an arbitrary geometry transmission interface for generally anisotropic materials. The work addresses problems encountered when the fundamental assumptions of ray theory do not hold, thereby introducing errors into ray-theory-based transmission models. Specifically, problems occur when the asymptotic integral analysis underlying ray theory encounters multiple stationary phase points in close proximity, due to focusing caused by concavity on either the entry surface or a material slowness surface. The approach presented here projects integrands over both the transducer aperture and the entry surface beam footprint onto a Gaussian-derived basis set, thereby distributing the integral over a summation of second-order phase integrals which are amenable to single stationary phase point analysis. Significantly, convergence is assured provided a sufficiently fine distribution of basis functions is used.
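The projection step can be illustrated with a least-squares expansion of an idealized piston aperture on a set of shifted Gaussians; the basis width and spacing below are illustrative, whereas in the actual model they would be tied to the asymptotic phase analysis.

```python
import numpy as np

# Least-squares projection of an aperture function onto shifted Gaussian
# basis functions: the discretization idea behind distributing a
# transmission integral over well-behaved second-order phase integrals.
x = np.linspace(-2.0, 2.0, 801)
aperture = np.where(np.abs(x) <= 1.0, 1.0, 0.0)     # idealized piston aperture

centers = np.linspace(-1.2, 1.2, 25)
width = 0.12                                        # basis width (illustrative)
G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

coef, *_ = np.linalg.lstsq(G, aperture, rcond=None)
recon = G @ coef

rms = np.sqrt(np.mean((recon - aperture) ** 2))
print(f"RMS reconstruction error: {rms:.3f}")
```

Each Gaussian term then contributes a second-order phase integral with at most one stationary point, which is what restores the validity of the single-stationary-phase analysis.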
Numerical calculation of ion runaway distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Embréus, O.; Stahl, A.; Hirvijoki, E.
2015-05-15
Ions accelerated by electric fields (so-called runaway ions) in plasmas may explain observations in solar flares and fusion experiments; however, limitations of previous analytic work have prevented definite conclusions. In this work, we describe a numerical solver of the 2D non-relativistic linearized Fokker-Planck equation for ions. It solves the initial value problem in velocity space with a spectral-Eulerian discretization scheme, allowing arbitrary plasma composition and time-varying electric fields and background plasma parameters. The numerical ion distribution function is then used to consider the conditions for runaway ion acceleration in solar flares and tokamak plasmas. Typical time scales and electric fields required for ion acceleration are determined for various plasma compositions, ion species, and temperatures, and the potential for excitation of toroidal Alfvén eigenmodes during tokamak disruptions is considered.
NASA Astrophysics Data System (ADS)
Langley, Robin S.
2018-03-01
This work is concerned with the statistical properties of the frequency response function of the energy of a random system. Earlier studies have considered the statistical distribution of the function at a single frequency, or alternatively the statistics of a band-average of the function. In contrast the present analysis considers the statistical fluctuations over a frequency band, and results are obtained for the mean rate at which the function crosses a specified level (or equivalently, the average number of times the level is crossed within the band). Results are also obtained for the probability of crossing a specified level at least once, the mean rate of occurrence of peaks, and the mean trough-to-peak height. The analysis is based on the assumption that the natural frequencies and mode shapes of the system have statistical properties that are governed by the Gaussian Orthogonal Ensemble (GOE), and the validity of this assumption is demonstrated by comparison with numerical simulations for a random plate. The work has application to the assessment of the performance of dynamic systems that are sensitive to random imperfections.
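A Monte Carlo sketch of the setting: natural frequencies drawn from a GOE matrix, Gaussian mode-shape amplitudes, and an energy-like frequency response whose level upcrossings are counted across the band. The frequency mapping, loss factor and crossing level are illustrative assumptions, not the paper's analytical results.

```python
import numpy as np

rng = np.random.default_rng(6)

def goe_frequencies(n=60):
    """Natural frequencies from a GOE random matrix, mapped onto an
    arbitrary positive frequency band (mapping is illustrative)."""
    A = rng.normal(size=(n, n))
    H = (A + A.T) / np.sqrt(2 * n)
    return 50.0 + 20.0 * np.linalg.eigvalsh(H)

wn = goe_frequencies()
phi = rng.normal(size=wn.size)        # mode-shape amplitudes at the drive point
eta = 0.01                            # loss factor

w = np.linspace(45.0, 55.0, 4000)
# energy-like FRF: sum of squared modal receptances
E = np.zeros_like(w)
for wk, pk in zip(wn, phi):
    E += pk**2 / ((wk**2 - w**2) ** 2 + (eta * wk**2) ** 2)

level = np.median(E) * 3.0
up = np.sum((E[:-1] < level) & (E[1:] >= level))   # upcrossings of the level
print("upcrossings of the level in the band:", up)
```

Repeating this over an ensemble of GOE draws gives the Monte Carlo crossing rates against which the analytical mean-rate predictions can be checked.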
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
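The two decision functions compared above can be sketched directly: a discrete bimodal prior, a Gaussian likelihood, and responses generated either by sampling the posterior or by taking its maximum. The prior, noise level and stimulus values are illustrative assumptions, not the experimental distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discrete bimodal prior over possible numerical responses (illustrative)
values = np.arange(1, 21)
prior = np.ones_like(values, dtype=float)
prior[[4, 14]] = 10.0                 # extra mass at the values 5 and 15
prior /= prior.sum()

def posterior(obs, noise_sd=2.0):
    """Posterior over discrete values given a noisy continuous observation."""
    like = np.exp(-0.5 * ((values - obs) / noise_sd) ** 2)
    post = prior * like
    return post / post.sum()

def respond_sample(post):             # decision rule 1: sample the posterior
    return rng.choice(values, p=post)

def respond_map(post):                # decision rule 2: posterior maximum
    return values[np.argmax(post)]

true = 15
obs = true + rng.normal(scale=2.0, size=500)
sampled = np.array([respond_sample(posterior(o)) for o in obs])
mapped = np.array([respond_map(posterior(o)) for o in obs])
print(f"mean abs error, sampling: {np.abs(sampled - true).mean():.2f}")
print(f"mean abs error, MAP:      {np.abs(mapped - true).mean():.2f}")
```

Participants' response variability in the experiments fell between these two extremes, consistent with a decision function intermediate between pure sampling and pure maximization.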
Differential Higgs production at N3LO beyond threshold
NASA Astrophysics Data System (ADS)
Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea
2018-01-01
We present several key steps towards the computation of differential Higgs boson cross sections at N3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections, which allows us to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion, and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions. We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a significantly higher number of terms in the expansion to be adequately described. We find that to improve state-of-the-art predictions for the rapidity distribution beyond NNLO, even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N3LO corrections with LHAPDF parton distribution functions, and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
Differential Higgs production at N3LO beyond threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea
We present several key steps towards the computation of differential Higgs boson cross sections at N3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections, which allows us to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion, and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions. We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a significantly higher number of terms in the expansion to be adequately described. We find that to improve state-of-the-art predictions for the rapidity distribution beyond NNLO, even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N3LO corrections with LHAPDF parton distribution functions, and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.
Differential Higgs production at N3LO beyond threshold
Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea
2018-01-29
We present several key steps towards the computation of differential Higgs boson cross sections at N3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections, which allows us to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion, and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions. We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a significantly higher number of terms in the expansion to be adequately described. We find that to improve state-of-the-art predictions for the rapidity distribution beyond NNLO, even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N3LO corrections with LHAPDF parton distribution functions, and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.
Towards full waveform ambient noise inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas
2018-01-01
In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure are quantified using Hessian-vector products.
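For a homogeneous acoustic medium, the operator-based forward problem above reduces to summing Green-function products weighted by the noise source power. The sketch below is a minimal frequency-domain illustration of that forward model, not the paper's implementation: the 3D free-space Green function, station coordinates, and source power values are all hypothetical. It verifies the conjugate symmetry C21(ω) = C12*(ω) that holds for any source distribution.

```python
import numpy as np

def greens(omega, r, c=1.0):
    # Free-space acoustic Green function for a 3D homogeneous medium
    # (a hypothetical stand-in for the paper's general wave operator).
    return np.exp(-1j * omega * r / c) / (4.0 * np.pi * r)

def correlation(omega, x1, x2, sources, psd):
    # Forward-model the interstation correlation for an arbitrary noise
    # source distribution: C_12 = sum_s G(x1, s) * S(s) * conj(G(x2, s)).
    c12 = 0.0 + 0.0j
    for s, S in zip(sources, psd):
        r1 = np.linalg.norm(x1 - s)
        r2 = np.linalg.norm(x2 - s)
        c12 += greens(omega, r1) * S * np.conj(greens(omega, r2))
    return c12

rng = np.random.default_rng(1)
x1 = np.array([0.0, 0.0, 0.0])
x2 = np.array([10.0, 0.0, 0.0])
sources = rng.uniform(-50.0, 50.0, size=(200, 3))  # heterogeneous source positions
psd = rng.uniform(0.5, 1.5, size=200)              # per-source power
c12 = correlation(5.0, x1, x2, sources, psd)
c21 = correlation(5.0, x2, x1, sources, psd)
```

Swapping the stations conjugates the correlation, one of the self-consistency properties that make correlation functions usable as observables without Green function retrieval.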
NASA Astrophysics Data System (ADS)
Baasch, B.; Müller, H.; von Dobeneck, T.
2018-07-01
In this work, we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine-learning techniques. Non-negative matrix factorization is used to determine grain-size end-members from sediment surface samples. Four end-members were found, which represent the variety of sediments in the study area well. A radial basis function network modified for prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back-transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation, revealing high prediction accuracy with coefficients of determination (R²) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine-learning techniques is a powerful tool to estimate the grain-size distribution of marine sediments.
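The end-member step of this workflow can be sketched with a plain multiplicative-update NMF. This is a generic stand-in, not the authors' code: the grain-size "samples" below are synthetic two-component mixtures, and the radial basis function network and spatial constraint are not reproduced.

```python
import numpy as np

def nmf_endmembers(X, k, n_iter=2000, seed=0):
    # Factorize a nonnegative data matrix X (samples x grain-size bins) as
    # X ~ A @ E, where the rows of E are k end-member distributions and A
    # holds their abundances. Plain multiplicative-update NMF.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A = rng.random((n, k)) + 0.1
    E = rng.random((k, m)) + 0.1
    eps = 1e-12
    for _ in range(n_iter):
        E *= (A.T @ X) / (A.T @ A @ E + eps)
        A *= (X @ E.T) / (A @ E @ E.T + eps)
    # Normalize end-members to unit sum so the abundances are compositional.
    s = E.sum(axis=1, keepdims=True)
    E /= s
    A *= s.T
    return A, E

# Synthetic demo: three samples mixed from two known "end-members".
bins = np.linspace(0.0, 1.0, 50)
e1 = np.exp(-((bins - 0.3) ** 2) / 0.005); e1 /= e1.sum()
e2 = np.exp(-((bins - 0.7) ** 2) / 0.005); e2 /= e2.sum()
true_A = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
X = true_A @ np.vstack([e1, e2])
A, E = nmf_endmembers(X, k=2)
```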
NASA Astrophysics Data System (ADS)
Baasch, B.; Müller, H.; von Dobeneck, T.
2018-04-01
In this work we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine learning techniques. Nonnegative matrix factorisation is used to determine grain-size end-members from sediment surface samples. Four end-members were found, which represent the variety of sediments in the study area well. A radial-basis function network modified for prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back-transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation, revealing high prediction accuracy with coefficients of determination (R²) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine learning techniques is a powerful tool to estimate the grain-size distribution of marine sediments.
Focusing on Attention: The Effects of Working Memory Capacity and Load on Selective Attention
Ahmed, Lubna; de Fockert, Jan W.
2012-01-01
Background Working memory (WM) is imperative for effective selective attention. Distractibility is greater under conditions of high (vs. low) concurrent working memory load (WML), and in individuals with low (vs. high) working memory capacity (WMC). In the current experiments, we recorded the flanker task performance of individuals with high and low WMC during low and high WML, to investigate the combined effect of WML and WMC on selective attention. Methodology/Principal Findings In Experiment 1, distractibility from a distractor at a fixed distance from the target was greater when either WML was high or WMC was low, but surprisingly smaller when both WML was high and WMC low. Thus we observed an inverted-U relationship between reductions in WM resources and distractibility. In Experiment 2, we mapped the distribution of spatial attention as a function of WMC and WML, by recording distractibility across several target-to-distractor distances. The pattern of distractor effects across the target-to-distractor distances demonstrated that the distribution of the attentional window becomes dispersed as WM resources are limited. The attentional window was more spread out under high compared to low WML, and for low compared to high WMC individuals, and even more so when the two factors co-occurred (i.e., under high WML in low WMC individuals). The inverted-U pattern of distractibility effects in Experiment 1, replicated in Experiment 2, can thus be explained by differences in the spread of the attentional window as a function of WM resource availability. Conclusions/Significance The current findings show that limitations in WM resources, due to either WML or individual differences in WMC, affect the spatial distribution of attention. The difference in attentional constraining between high and low WMC individuals demonstrated in the current experiments helps characterise the nature of previously established associations between WMC and controlled attention. PMID:22952636
Resilience-based optimal design of water distribution network
NASA Astrophysics Data System (ADS)
Suribabu, C. R.
2017-11-01
Optimal design of a water distribution network is generally aimed at minimizing the capital cost of the investments on tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may yield an economical network configuration, but not necessarily a promising one from the resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures to address the ability of the network to withstand failure scenarios. To improve the resiliency of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the secondary objective. In the present work, these two objectives are combined into a single objective, and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two of the existing resilience indices and power efficiency are considered for optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resiliency indices.
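The normalization of objectives with distinct metrics can be sketched as a min-max scaling before the weighted combination. The bounds and weight below are hypothetical placeholders, not the paper's values.

```python
def combined_objective(cost, resilience, cost_bounds, res_bounds, w=0.5):
    # Scalarize two objectives with distinct units by min-max normalization.
    # Cost is minimized and resilience maximized; both are mapped to [0, 1]
    # so that smaller combined values are better. The bounds are the assumed
    # attainable extremes of each metric (illustrative, not from the paper).
    c_lo, c_hi = cost_bounds
    r_lo, r_hi = res_bounds
    c_norm = (cost - c_lo) / (c_hi - c_lo)        # 0 = cheapest design
    r_norm = (r_hi - resilience) / (r_hi - r_lo)  # 0 = most resilient design
    return w * c_norm + (1.0 - w) * r_norm

# A cheap but resilient design should score better (lower) than an
# expensive, fragile one under the same bounds.
good = combined_objective(100.0, 0.9, (100.0, 200.0), (0.5, 1.0))
bad = combined_objective(200.0, 0.5, (100.0, 200.0), (0.5, 1.0))
```

Varying `w` between 0 and 1 traces the usual trade-off of the weighted method between cost-dominated and resilience-dominated designs.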
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faltenbacher, A.; Finoguenov, A.; Drory, N.
2010-03-20
The baryon content of high-density regions in the universe is relevant to two critical unanswered questions: the workings of nurture effects on galaxies and the whereabouts of the missing baryons. In this paper, we analyze the distribution of dark matter and semianalytical galaxies in the Millennium Simulation to investigate these problems. Applying the same density field reconstruction schemes as used for the overall matter distribution to the matter locked in halos, we study the mass contribution of halos to the total mass budget at various background field densities, i.e., the conditional halo mass function. In this context, we present a simple fitting formula for the cumulative mass function accurate to ≲5% for halo masses between 10^10 and 10^15 h^-1 M_sun. We find that in dense environments the halo mass function becomes top-heavy and present corresponding fitting formulae for different redshifts. We demonstrate that the major fraction of matter in high-density fields is associated with galaxy groups. Since current X-ray surveys are able to nearly recover the universal baryon fraction within groups, our results indicate that the major part of the so-far undetected warm-hot intergalactic medium resides in low-density regions. Similarly, we show that the differences in galaxy mass functions with environment seen in observed and simulated data stem predominantly from differences in the mass distribution of halos. In particular, the hump in the galaxy mass function is associated with the central group galaxies, and the bimodality observed in the galaxy mass function is therefore interpreted as that of central galaxies versus satellites.
Nanopore Kinetic Proofreading of DNA Sequences
NASA Astrophysics Data System (ADS)
Ling, Xinsheng Sean
The concept of DNA sequencing using the time dependence of the nanopore ionic current was proposed in 1996 by Kasianowicz, Brandin, Branton, and Deamer (KBBD). The KBBD concept has generated a tremendous amount of interest in the past decade. In this talk, I will review the current understanding of DNA ``translocation'' dynamics and how it can be described by the first-passage-time distribution function derived in Schrödinger's 1915 paper. Schrödinger's distribution function can be used to give a rigorous criterion for achieving nanopore DNA sequencing, which turns out to be identical to that of the gel electrophoresis used by Sanger in the first-generation sequencing method. A nanopore DNA sequencing technology also requires discrimination of bases with high accuracy. I will describe a solid-state nanopore sandwich structure that can function as a proofreading device capable of discriminating between correct and incorrect hybridization probes with an accuracy rivaling that of high-fidelity DNA polymerases. The latest results from Nanjing will be presented. This work is supported by the China 1000-Talent Program at Southeast University, Nanjing, China.
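Schrödinger's 1915 first-passage-time density for drift-diffusion to an absorbing boundary at distance L has the closed form f(t) = L (4πDt³)^(-1/2) exp(−(L − vt)² / (4Dt)). Below is a minimal numerical check, with illustrative (not experimental) parameters, that the density normalizes to one when the drift v carries the molecule through the pore.

```python
import math

def first_passage_pdf(t, L, v, D):
    # Schrödinger's first-passage-time density for 1D drift-diffusion:
    # a particle with drift velocity v and diffusion constant D reaches an
    # absorbing boundary at distance L for the first time at time t.
    if t <= 0.0:
        return 0.0
    return (L / math.sqrt(4.0 * math.pi * D * t ** 3)
            * math.exp(-(L - v * t) ** 2 / (4.0 * D * t)))

# Sanity check with arbitrary units (L = 1, v = 1, D = 0.1): with drift
# toward the boundary, the capture probability integrates to ~1.
dt = 0.001
total = sum(first_passage_pdf(i * dt, L=1.0, v=1.0, D=0.1) * dt
            for i in range(1, 20000))
```

The same density (the inverse Gaussian distribution) underlies the translocation-time histograms measured in nanopore experiments.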
NASA Astrophysics Data System (ADS)
Zhou, T.; Chen, A.; Zhou, Y.
2005-08-01
By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
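As one concrete member of the generalized skew-symmetric family, the skew-normal density 2φ(t)Φ(αt)/scale can be used to sketch the idea (the paper's family is broader; the parameters here are arbitrary): the interfacial profile is the cumulative integral of the skewed density, then discretized into thin slices of constant density.

```python
import numpy as np
from math import erf, sqrt, pi

def skew_normal_pdf(x, loc=0.0, scale=1.0, alpha=0.0):
    # Skew-normal density 2*phi(t)*Phi(alpha*t)/scale. The shape parameter
    # alpha skews the interfacial width asymmetrically into one layer,
    # modelling selective penetration of density into an adjacent layer.
    t = (x - loc) / scale
    phi = np.exp(-0.5 * t ** 2) / sqrt(2.0 * pi)
    Phi = 0.5 * (1.0 + np.array([erf(alpha * ti / sqrt(2.0)) for ti in t]))
    return 2.0 * phi * Phi / scale

def interface_profile(z, rho_lo, rho_hi, loc, scale, alpha):
    # Density profile across one interface: the cumulative integral of the
    # skew-normal weight maps rho_lo below the interface to rho_hi above it.
    pdf = skew_normal_pdf(z, loc, scale, alpha)
    cdf = np.cumsum(pdf) * (z[1] - z[0])
    cdf /= cdf[-1]                      # normalize the discrete integral
    return rho_lo + (rho_hi - rho_lo) * cdf

# Discretize into thin slices of constant density with sharp boundaries,
# as required for a Parratt-style reflectivity calculation.
z = np.linspace(-10.0, 10.0, 2001)
profile = interface_profile(z, rho_lo=0.0, rho_hi=1.0,
                            loc=0.0, scale=1.0, alpha=3.0)
```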
Contact-metal dependent current injection in pentacene thin-film transistors
NASA Astrophysics Data System (ADS)
Wang, S. D.; Minari, T.; Miyadera, T.; Tsukagoshi, K.; Aoyagi, Y.
2007-11-01
Contact-metal dependent current injection in top-contact pentacene thin-film transistors is analyzed, and the local mobility in the contact region is found to follow the Meyer-Neldel rule. An exponential trap distribution, rather than the metal/organic hole injection barrier, is proposed to be the dominant factor in the contact resistance of pentacene thin-film transistors. Variable-temperature measurements reveal a much narrower trap distribution at the copper contact than at the corresponding gold contact, and this is the origin of the smaller contact resistance for copper despite its lower work function.
Supporting scalability and flexibility in a distributed management platform
NASA Astrophysics Data System (ADS)
Jardin, P.
1996-06-01
The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. The management of these networks imposes a number of fairly stringent requirements, including partitioning of the network, division of work based on skills and target system types, and the ability to adjust functions to specific operational requirements. This requires the ability to cluster managed resources into domains that are defined entirely at runtime based on operator policies. This paper discusses some of the issues involved in adding such a dynamic dimension to a management solution.
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
Multi-Channel Distributed Coordinated Function over Single Radio in Wireless Sensor Networks
Campbell, Carlene E.-A.; Loo, Kok-Keong (Jonathan); Gemikonakli, Orhan; Khan, Shafiullah; Singh, Dhananjay
2011-01-01
Multi-channel assignments are becoming the solution of choice to improve performance in single radio for wireless networks. Multi-channel allows wireless networks to assign different channels to different nodes in real-time transmission. In this paper, we propose a new approach, Multi-channel Distributed Coordinated Function (MC-DCF) which takes advantage of multi-channel assignment. The backoff algorithm of the IEEE 802.11 distributed coordination function (DCF) was modified to invoke channel switching, based on threshold criteria in order to improve the overall throughput for wireless sensor networks (WSNs) over 802.11 networks. We presented simulation experiments in order to investigate the characteristics of multi-channel communication in wireless sensor networks using an NS2 platform. Nodes only use a single radio and perform channel switching only after specified threshold is reached. Single radio can only work on one channel at any given time. All nodes initiate constant bit rate streams towards the receiving nodes. In this work, we studied the impact of non-overlapping channels in the 2.4 frequency band on: constant bit rate (CBR) streams, node density, source nodes sending data directly to sink and signal strength by varying distances between the sensor nodes and operating frequencies of the radios with different data rates. We showed that multi-channel enhancement using our proposed algorithm provides significant improvement in terms of throughput, packet delivery ratio and delay. This technique can be considered for WSNs future use in 802.11 networks especially when the IEEE 802.11n becomes popular thereby may prevent the 802.15.4 network from operating effectively in the 2.4 GHz frequency band. PMID:22346614
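The threshold-triggered channel-switching idea can be sketched as follows. This is a deliberately simplified stand-in, not the MC-DCF algorithm itself: consecutive failed transmissions stand in for the modified DCF backoff criterion, and the threshold value is hypothetical.

```python
# The three non-overlapping channels of the 2.4 GHz band used in the study.
CHANNELS = [1, 6, 11]

class SingleRadioNode:
    # A single-radio node can occupy only one channel at a time. It counts
    # consecutive failed transmissions and hops to the next non-overlapping
    # channel once the count crosses a threshold (simplified MC-DCF logic).
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.channel_idx = 0
        self.failures = 0
        self.switches = 0

    @property
    def channel(self):
        return CHANNELS[self.channel_idx]

    def transmit(self, success):
        if success:
            self.failures = 0          # contention relieved, stay put
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.channel_idx = (self.channel_idx + 1) % len(CHANNELS)
            self.failures = 0
            self.switches += 1

node = SingleRadioNode(threshold=3)
for ok in [False, False, False, True, False, False, False]:
    node.transmit(ok)
```

Three consecutive failures trigger a hop from channel 1 to 6; a success resets the counter; three more failures trigger a second hop to channel 11.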
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.C. Lyons, S.C. Jardin, and J.J. Ramos
2012-06-28
A new code, the Neoclassical Ion-Electron Solver (NIES), has been written to solve for stationary, axisymmetric distribution functions (f) in the conventional banana regime for both ions and electrons, using a set of drift-kinetic equations (DKEs) with linearized Fokker-Planck-Landau collision operators. Solvability conditions on the DKEs determine the relevant non-adiabatic pieces of f (called h). We work in a 4D phase space in which Ψ defines a flux surface, θ is the poloidal angle, v is the total velocity referenced to the mean flow velocity, and λ is the dimensionless magnetic moment parameter. We expand h in finite elements in both v and λ. The Rosenbluth potentials, φ and ψ, which define the integral part of the collision operator, are expanded in Legendre series in cos χ, where χ is the pitch angle, Fourier series in cos θ, and finite elements in v. At each Ψ, we solve a block tridiagonal system for h_i (independent of f_e), then solve another block tridiagonal system for h_e (dependent on f_i). We demonstrate that such a formulation can be accurately and efficiently solved. NIES is coupled to the MHD equilibrium code JSOLVER [J. DeLucia et al., J. Comput. Phys. 37, pp. 183-204 (1980)], allowing us to work with realistic magnetic geometries. The bootstrap current is calculated as a simple moment of the distribution function. Results are benchmarked against the Sauter analytic formulas and can be used as a kinetic closure for an MHD code (e.g., M3D-C1 [S.C. Jardin et al., Computational Science & Discovery, 4 (2012)]).
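The per-flux-surface solves have the structure of a block tridiagonal system. The scalar Thomas algorithm below shows the same forward-elimination/back-substitution pattern (in the block version the scalar divisions become small dense solves); the test matrix is an arbitrary diffusion-like example, not a NIES system.

```python
import numpy as np

def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system: a is the sub-diagonal
    # (a[0] unused), b the diagonal, c the super-diagonal (c[-1] unused),
    # d the right-hand side. O(n) forward elimination + back-substitution.
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Verify against a dense solve on a small diffusion-like system.
n = 6
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 2.5)
c = np.full(n, -1.0); c[-1] = 0.0
d = np.arange(1.0, n + 1.0)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
```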
Gluonic transversity from lattice QCD
NASA Astrophysics Data System (ADS)
Detmold, W.; Shanahan, P. E.
2016-07-01
We present an exploratory study of the gluonic structure of the ϕ meson using lattice QCD (LQCD). This includes the first investigation of gluonic transversity via the leading moment of the twist-2 double-helicity-flip gluonic structure function Δ(x, Q²). This structure function only exists for targets of spin J ≥ 1 and does not mix with quark distributions at leading twist, thereby providing a particularly clean probe of gluonic degrees of freedom. We also explore the gluonic analogue of the Soffer bound which relates the helicity flip and nonflip gluonic distributions, finding it to be saturated at the level of 80%. This work sets the stage for more complex LQCD studies of gluonic structure in the nucleon and in light nuclei where Δ(x, Q²) is an "exotic glue" observable probing gluons in a nucleus not associated with individual nucleons.
Local structure studies of materials using pair distribution function analysis
NASA Astrophysics Data System (ADS)
Peterson, Joseph W.
A collection of pair distribution function studies on various materials is presented in this dissertation. In each case, local structure information of interest pushes the current limits of what these studies can accomplish. The goal is to provide insight into the individual material behaviors as well as to investigate ways to expand the current limits of PDF analysis. Where possible, I provide a framework for how PDF analysis might be applied to a wider set of material phenomena. Throughout the dissertation, I discuss i) the capabilities of the PDF method to provide information pertaining to a material's structure and properties, ii) current limitations in the conventional approach to PDF analysis, iii) possible solutions to overcome certain limitations in PDF analysis, and iv) suggestions for future work to expand and improve the capabilities of PDF analysis.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density of having nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
Using field-particle correlations to study auroral electron acceleration in the LAPD
NASA Astrophysics Data System (ADS)
Schroeder, J. W. R.; Howes, G. G.; Skiff, F.; Kletzing, C. A.; Carter, T. A.; Vincena, S.; Dorfman, S.
2017-10-01
Resonant nonlinear Alfvén wave-particle interactions are believed to contribute to the acceleration of auroral electrons. Experiments in the Large Plasma Device (LAPD) at UCLA have been performed with the goal of providing the first direct measurement of this nonlinear process. Recent progress includes a measurement of linear fluctuations of the electron distribution function associated with the production of inertial Alfvén waves in the LAPD. These linear measurements have been analyzed using the field-particle correlation technique to study the nonlinear transfer of energy between the Alfvén wave electric fields and the electron distribution function. Results of this analysis indicate that collisions alter the resonant signature of the field-particle correlation, and implications for resonant Alfvénic electron acceleration in the LAPD are considered. This work was supported by NSF, DOE, and NASA.
Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms
NASA Astrophysics Data System (ADS)
Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio
The purpose of this study is to develop a system that supports teachers in designing lesson plans, in particular lesson plans for the new subject "Information Study". We developed a system that generates teaching and learning activity sequences by interlinking lesson activities corresponding to various conditions according to the user's input. Because the user's input consists of multiple pieces of information, contradictions can arise that the system must resolve. This multiobjective optimization problem is solved by Distributed Genetic Algorithms, in which fitness functions are defined with reference models of the lesson, thinking style, and teaching style. Results of various experiments verified the effectiveness and validity of the proposed methods and reference models; on the other hand, some future work on the reference models and evaluation functions was also identified.
Stability of uncertain systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Blankenship, G. L.
1971-01-01
The asymptotic properties of feedback systems containing uncertain parameters and subjected to stochastic perturbations are discussed. The approach is functional-analytic in flavor and thereby avoids the Markov techniques and auxiliary Lyapunov functionals characteristic of existing work in this area. The results are given for the probability distributions of the accessible signals in the system and are proved using the Prohorov theory of convergence of measures. For general nonlinear systems, a result similar to the small-loop-gain theorem of deterministic stability theory is given. Boundedness here is a property of the induced distributions of the signals, not the usual notion of boundedness in norm. For the special class of feedback systems formed by the cascade of a white noise, a sector nonlinearity, and a convolution operator, conditions are given to ensure the total boundedness of the overall feedback system.
Effects of the infectious period distribution on predicted transitions in childhood disease dynamics
Krylova, Olga; Earn, David J. D.
2013-01-01
The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced ‘susceptible–exposed–infectious–removed’ (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible–infectious–removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions. PMID:23676892
Krylova, Olga; Earn, David J D
2013-07-06
The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced 'susceptible-exposed-infectious-removed' (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible-infectious-removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions.
Du, Tingsong; Hu, Yang; Ke, Xianting
2015-01-01
An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA, based on quantum computing (which offers exponential acceleration for heuristic algorithms), uses quantum bits to encode artificial fish, and updates the fish through a quantum revolving gate, preying and following behaviors, and variation of the quantum artificial fish in the search for the optimal value. We then apply the proposed algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global-edition artificial fish swarm algorithm (GAFSA) to simulation experiments on several typical test functions. The simulation results demonstrate that the proposed algorithm escapes from local extrema effectively and has higher convergence speed and better accuracy. Finally, applying IQAFSA to distributed network problems, simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA. PMID:26447713
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ Z_PZ_T ⩽ 1640 are tabulated. The results show that the ECC model, together with the empirical formulas for the parameters of the barrier distribution, works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
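As a rough illustration of the barrier-distribution idea (not the paper's parametrisation or fitted values), an asymmetric Gaussian weight function over barrier heights can be constructed and normalised numerically:

```python
import numpy as np

def asym_gaussian_barrier(B, Bm=200.0, w1=4.0, w2=8.0):
    """Sketch of an asymmetric-Gaussian barrier distribution D(B):
    a Gaussian with width w1 on the low-energy side of the centroid
    Bm and width w2 on the high side, normalised so that D integrates
    to 1 on the grid. All parameter values here are illustrative."""
    B = np.asarray(B, dtype=float)
    w = np.where(B < Bm, w1, w2)
    f = np.exp(-((B - Bm) / w) ** 2)
    dB = B[1] - B[0]            # assumes a uniform grid
    return f / (f.sum() * dB)
```

In an ECC-style calculation the capture cross section is then obtained by folding a single-barrier cross section with D(B); with w2 > w1 the mean barrier sits above the centroid, reflecting the asymmetry.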
Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions
Li, Haoran; Xiong, Li; Jiang, Xiaoqian
2014-01-01
Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
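The core building block described above — connecting one-dimensional marginals through a copula to obtain a joint distribution — can be sketched as follows. This is only the copula-sampling idea, not the DPCopula algorithm itself, which additionally perturbs the estimated parameters to satisfy differential privacy; the function names are our own.

```python
import math
import numpy as np

def gaussian_copula_sample(n, rho, ppf_x, ppf_y, seed=0):
    """Draw n pairs whose dependence is a Gaussian copula with
    parameter rho and whose marginals are set by the caller-supplied
    inverse CDFs (percent-point functions) ppf_x and ppf_y."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, rho], [rho, 1.0]], size=n)
    # Map each normal coordinate to Uniform(0,1) via the normal CDF,
    # then through the target marginal's inverse CDF.
    phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
    u = phi(z)
    return ppf_x(u[:, 0]), ppf_y(u[:, 1])
```

For example, exponential marginals are obtained with `ppf = lambda u: -np.log1p(-u)`; the sampled pairs then have exponential margins but correlation inherited from rho.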
System of HPC content archiving
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Ivashchenko, A.
2017-12-01
This work aims to develop a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques and approaches. The main challenge defined at the problem formulation stage — storing a large number of text documents — has to be addressed with functionality such as full-text search and clustering of documents by their contents. The main system features can be described in terms of a distributed multilevel architecture with flexible, interchangeable components, achieved by encapsulating standard functionality in independent executable modules.
Correlation in photon pairs generated using four-wave mixing in a cold atomic ensemble
NASA Astrophysics Data System (ADS)
Ferdinand, Andrew Richard; Manjavacas, Alejandro; Becerra, Francisco Elohim
2017-04-01
Spontaneous four-wave mixing (FWM) in atomic ensembles can be used to generate narrowband entangled photon pairs at or near atomic resonances. While extensive research has been done to investigate the quantum correlations in the time and polarization of such photon pairs, the study and control of high dimensional quantum correlations contained in their spatial degrees of freedom has not been fully explored. In our work, we experimentally investigate the generation of correlated light from FWM in a cold ensemble of cesium atoms as a function of the frequencies of the pump fields in the FWM process. In addition, we theoretically study the spatial correlations of the photon pairs generated in the FWM process, specifically the joint distribution of their orbital angular momentum (OAM). We investigate the width of the distribution of the OAM modes, known as the spiral bandwidth, and the purity of OAM correlations as a function of the properties of the pump fields, collected photons, and the atomic ensemble. These studies will guide experiments involving high dimensional entanglement of photons generated from this FWM process and OAM-based quantum communication with atomic ensembles. This work is supported by AFOSR Grant FA9550-14-1-0300.
Modeling and statistical analysis of non-Gaussian random fields with heavy-tailed distributions.
Nezhadhaghighi, Mohsen Ghasemi; Nakhlband, Abbas
2017-04-01
In this paper, we investigate and develop an alternative approach to the numerical analysis and characterization of random fluctuations with the heavy-tailed probability distribution function (PDF), such as turbulent heat flow and solar flare fluctuations. We identify the heavy-tailed random fluctuations based on the scaling properties of the tail exponent of the PDF, power-law growth of qth order correlation function, and the self-similar properties of the contour lines in two-dimensional random fields. Moreover, this work leads to a substitution for the fractional Edwards-Wilkinson (EW) equation that works in the presence of μ-stable Lévy noise. Our proposed model explains the configuration dynamics of the systems with heavy-tailed correlated random fluctuations. We also present an alternative solution to the fractional EW equation in the presence of μ-stable Lévy noise in the steady state, which is implemented numerically, using the μ-stable fractional Lévy motion. Based on the analysis of the self-similar properties of contour loops, we numerically show that the scaling properties of contour loop ensembles can qualitatively and quantitatively distinguish non-Gaussian random fields from Gaussian random fluctuations.
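The "tail exponent of the PDF" used above to identify heavy-tailed fluctuations can be illustrated with the standard Hill estimator. The abstract does not specify this estimator; it is our choice of diagnostic, applied here to a synthetic Pareto sample where the true exponent is known.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail exponent from the k largest order
    statistics: alpha_hat = k / sum(log(x_(i) / x_(k+1))). For
    Pareto(alpha) data it converges to alpha as the sample grows."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    return k / np.sum(np.log(x[:k] / x[k]))

# Pareto(alpha) sample via inverse transform: x = u**(-1/alpha).
rng = np.random.default_rng(1)
alpha = 1.5
sample = rng.random(200_000) ** (-1.0 / alpha)
```

The estimator's standard error scales as alpha/sqrt(k), so with k = 5000 the recovered exponent should sit close to the true value of 1.5.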
Multicomponent plasma expansion into vacuum with non-Maxwellian electrons
NASA Astrophysics Data System (ADS)
Elkamash, Ibrahem; Kourakis, Ioannis
2016-10-01
The expansion of a collisionless plasma into vacuum has been widely studied since the early works of Gurevich et al and Allen and coworkers. It has gained momentum in recent years, in particular in the context of ultraintense laser pulse interaction with a solid target, in an effort to elucidate the generation of high energy ion beams. In most present day experiments, laser produced plasmas contain several ion species, due to increasingly complicated composite targets. Anderson et al have studied the isothermal expansion of a two-ion-species plasma. As in most earlier works, the electrons were assumed to be isothermal throughout the expansion. However, in more realistic situations, the evolution of laser produced plasmas into vacuum is mainly governed by nonthermal electrons. These electrons are characterized by particle distribution functions with high energy tails, which may significantly deviate from the Maxwellian distribution. In this paper, we present a theoretical model for the expansion of a two-component plasma with nonthermal electrons, modelled by a kappa-type distribution. The superthermal effect on the ion density, velocity and the electric field is investigated. It is shown that energetic electrons have a significant effect on the expansion dynamics of the plasma. This work was supported by CPP/QUB funding. One of us (I.S. Elkamash) acknowledges financial support by an Egyptian Government fellowship.
Modeling knee joint endoprosthesis mode of deformation
NASA Astrophysics Data System (ADS)
Skeeba, V. Yu; Ivancivsky, V. V.
2018-03-01
The purpose of this work was to determine an efficient design for an endoprosthesis working in a multiple-cycle loading environment. Methodology and methods: triangulated surfaces of the base contact surfaces of the endoprosthesis butt elements were created in the PowerShape and SolidWorks software environments, and assemblies of the possible combinations of the knee joint prosthetic designs were prepared. The mode of deformation was modeled in the multipurpose ANSYS program suite. Results and discussion: as a result of the numerical modeling, the following data were obtained for each of the developed knee joint versions: the distribution fields of absolute (total) and relative deformations; equivalent stress distribution fields; fatigue strength coefficient distribution fields. In the course of the studies, the following efficient design assembly was established: 1) Ti-Al-V alloy composite femoral component with polymer inserts; 2) ceramic liners of the compound separator; 3) a Ti-Al-V alloy composite tibial component. The fatigue strength coefficient is 4.2 for the femoral component, 1.2 for the femoral component polymer inserts, 3.1 for the ceramic liners of the compound separator, and 2.7 for the tibial component. This promising endoprosthesis structure is recommended for further design and technological development.
Grassmann phase space theory and the Jaynes-Cummings model
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Garraway, B. M.; Jeffers, J.; Barnett, S. M.
2013-07-01
The Jaynes-Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes-Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker-Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages. 
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker-Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions (which are also equivalent to the canonical Grassmann distribution function) to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum-atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes, especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators: c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes-Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum-atom optics.
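The collapses and revivals discussed above follow directly from the well-known coupled-amplitude solution: for a resonant atom initially excited and the cavity in a coherent state, the population inversion is W(t) = sum_n p_n cos(2 g sqrt(n+1) t) with Poisson weights p_n. A minimal numerical check (our illustrative parameters, not tied to any particular experiment):

```python
import numpy as np

def jc_inversion(t, nbar=25.0, g=1.0, nmax=200):
    """Population inversion W(t) for the resonant Jaynes-Cummings
    model, atom initially excited, cavity in a coherent state with
    mean photon number nbar:
        W(t) = sum_n p_n cos(2 g sqrt(n+1) t),  p_n ~ Poisson(nbar).
    Summing over Fock states shows the Rabi collapse and revival."""
    n = np.arange(nmax)
    # log n! built stably: [log 0!, log 1!, ..., log (nmax-1)!]
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, nmax)))))
    p = np.exp(n * np.log(nbar) - nbar - log_fact)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return (p * np.cos(2.0 * g * np.sqrt(n + 1.0) * t[:, None])).sum(axis=1)
```

W(0) = 1, the oscillations collapse on a timescale of order 1/g, and the first revival appears near g t = 2*pi*sqrt(nbar), i.e. around g t ≈ 31 for nbar = 25.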
Distribution of shape elongations of main belt asteroids derived from Pan-STARRS1 photometry
NASA Astrophysics Data System (ADS)
Cibulková, H.; Nortunen, H.; Ďurech, J.; Kaasalainen, M.; Vereš, P.; Jedicke, R.; Wainscoat, R. J.; Mommert, M.; Trilling, D. E.; Schunová-Lilly, E.; Magnier, E. A.; Waters, C.; Flewelling, H.
2018-04-01
Context. A considerable amount of photometric data is produced by surveys such as Pan-STARRS, LONEOS, WISE, or Catalina. These data are a rich source of information about the physical properties of asteroids. There are several possible approaches for using these data. Light curve inversion is a typical method that works with individual asteroids. Our approach in focusing on large groups of asteroids, such as dynamical families and taxonomic classes, is statistical; the data are not sufficient for individual models. Aims. Our aim is to study the distributions of shape elongation b/a and the spin axis latitude β for various subpopulations of asteroids and to compare our results, based on the Pan-STARRS1 survey, with statistics previously carried out using various photometric databases, such as Lowell and WISE. Methods. We used the LEADER algorithm to compare the b/a and β distributions for various subpopulations of asteroids. The algorithm creates a cumulative distribution function (CDF) of observed brightness variations, and computes the b/a and β distributions with analytical basis functions that yield the observed CDF. A variant of LEADER is used to solve the joint distributions for synthetic populations to test the validity of the method. Results. When comparing distributions of shape elongation for groups of asteroids with different diameters D, we found that there are no differences for D < 25 km. We also constructed distributions for asteroids with different rotation periods and revealed that the fastest rotators with P = 0-4 h are more spheroidal than the population with P = 4-8 h.
NASA Astrophysics Data System (ADS)
Li, Li; Chakrabarty, Souvik; Jiang, Jing; Zhang, Ben; Ober, Christopher; Giannelis, Emmanuel P.
2016-01-01
The solubility behavior of Hf and Zr based hybrid nanoparticles with different surface ligands in different concentrations of photoacid generator as potential EUV photoresists was investigated in detail. The nanoparticles, regardless of core or ligand chemistry, have a hydrodynamic diameter of 2-3 nm and a very narrow size distribution in organic solvents. The Hansen solubility parameters for nanoparticles functionalized with IBA and 2MBA show a larger contribution from the dispersion interaction than those with tDMA and MAA, which show more polar character. The nanoparticles functionalized with unsaturated surface ligands showed more apparent solubility changes after exposure to DUV than those with saturated ones. The solubility differences after exposure are more pronounced for films containing a higher amount of photoacid generator. The work reported here provides material selection criteria and processing strategies for the design of high performance EUV photoresists.
Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07334k
Litzelman, Kristin; Kent, Erin E; Rowland, Julia H
2016-01-15
Social and family factors can influence the health outcomes and quality of life of informal caregivers. Little is known about the distribution and correlates of such factors for caregivers of cancer patients. This study sought to fill this gap with data from the Cancer Care Outcomes Research and Surveillance consortium. Lung and colorectal cancer patients nominated an informal caregiver to participate in a caregiving survey. Caregivers reported their sociodemographic and caregiving characteristics, social stress, relationship quality with the patient, and family functioning. Descriptive statistics and Pearson correlations were used to assess the distribution of caregivers' social factors. Multivariable linear regressions assessed the independent correlates of each social factor. Most caregivers reported low to moderate levels of social stress and good relationship quality and family functioning. In multivariable analyses, older age was associated with less social stress and better family functioning but worse relationship quality, with effect sizes (Cohen's d) up to 0.40 (P < .05). Caring for a female patient was associated with less social stress and better relationship quality but worse family functioning (effect sizes ≤ 0.16, P < .05). Few caregiving characteristics were associated with social stress, whereas several were significant independent correlates of relationship quality. Finally, social factors were important independent correlates of one another. The results indicate the importance of personal and caregiving-related characteristics and the broader family context to social factors. Future work is needed to better understand these pathways and assess whether interventions targeting social factors can improve health or quality-of-life outcomes for informal cancer caregivers. Cancer 2016;122:278-286. © 2015 American Cancer Society. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
NASA Astrophysics Data System (ADS)
Takeuchi, Tsutomu T.
2010-08-01
We provide an analytic method to construct a bivariate distribution function (DF) with given marginal distributions and correlation coefficient. We introduce a convenient mathematical tool, called a copula, to connect two DFs with any prescribed dependence structure. If the correlation of two variables is weak (Pearson's correlation coefficient |ρ| < 1/3), the Farlie-Gumbel-Morgenstern (FGM) copula provides an intuitive and natural way to construct such a bivariate DF. When the linear correlation is stronger, the FGM copula no longer works. In this case, we propose using a Gaussian copula, which connects two given marginals and is directly related to the linear correlation coefficient between two variables. Using the copulas, we construct the bivariate luminosity function (BLF) and discuss its statistical properties. We focus especially on the far-ultraviolet-far-infrared (FUV-FIR) BLF, since these two wavelength regions are related to star-formation (SF) activity. Though both the FUV and FIR are related to SF activity, the univariate LFs have a very different functional form: the former is well described by the Schechter function whilst the latter has a much more extended power-law-like luminous end. We construct the FUV-FIR BLFs using the FGM and Gaussian copulas with different strengths of correlation, and examine their statistical properties. We then discuss some further possible applications of the BLF: the problem of a multiband flux-limited sample selection, the construction of the star-formation rate (SFR) function, and the construction of the stellar mass of galaxies (M*)-specific SFR (SFR/M*) relation. The copulas turn out to be a very useful tool to investigate all these issues, especially for including complicated selection effects.
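For illustration, the FGM copula C(u,v) = uv[1 + κ(1-u)(1-v)] with |κ| ≤ 1 can be sampled exactly by inverting its conditional CDF; with uniform marginals its Pearson correlation is κ/3, which is the origin of the |ρ| < 1/3 regime mentioned above. A minimal sketch (our own routine, not the paper's code):

```python
import numpy as np

def fgm_sample(n, kappa, seed=0):
    """Sample (u, v) from the Farlie-Gumbel-Morgenstern copula
    C(u,v) = uv[1 + kappa(1-u)(1-v)] by inverting the conditional
    CDF F(v|u) = v[1 + kappa(1-2u)(1-v)], which is quadratic in v."""
    rng = np.random.default_rng(seed)
    u, w = rng.random(n), rng.random(n)
    a = 1.0 + kappa * (1.0 - 2.0 * u)
    b = np.sqrt(a * a - 4.0 * (a - 1.0) * w)
    v = 2.0 * w / (a + b)       # root of the quadratic lying in [0,1]
    return u, v
```

Feeding u and v through the inverse CDFs of two luminosity functions would then produce a bivariate sample with those marginals and FGM dependence.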
The Work Disability Functional Assessment Battery (WD-FAB): Feasibility and Psychometric Properties
Meterko, Mark; Marfeo, Elizabeth E.; McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Rasch, Elizabeth K; Brandt, Diane E.; Chan, Leighton
2015-01-01
Objectives To assess the feasibility and psychometric properties of eight scales covering two domains of the newly developed Work Disability Functional Assessment Battery (WD-FAB): physical function (PF) and behavioral health (BH) function. Design Cross-sectional. Setting Community. Participants Adults unable to work due to a physical (n=497) or mental (n=476) disability. Interventions None. Main Outcome Measures Each disability group responded to a survey consisting of the relevant WD-FAB scales and existing measures of established validity. The WD-FAB scales were evaluated with regard to data quality (score distribution; percent “I don’t know” responses), efficiency of administration (number of items required to achieve reliability criterion; time required to complete the scale) by computerized adaptive testing (CAT), and measurement accuracy as tested by person fit. Construct validity was assessed by examining both convergent and discriminant correlations between the WD-FAB scales and scores on same-domain and cross-domain established measures. Results Data quality was good and CAT efficiency was high across both WD-FAB domains. Measurement accuracy was very good for the PF scales; BH scales demonstrated more variability. Construct validity correlations, both convergent and divergent, between all WD-FAB scales and established measures were in the expected direction and range of magnitude. Conclusions The data quality, CAT efficiency, person fit and construct validity of the WD-FAB scales were well supported and suggest that the WD-FAB could be used to assess physical and behavioral health function related to work disability. Variation in scale performance suggests the need for future work on item replenishment and refinement, particularly regarding the Self-Efficacy scale. PMID:25528263
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
A physically-based Mie–Grüneisen equation of state to determine hot spot temperature distributions
Kittell, David Erik; Yarrington, Cole Davis
2016-07-14
Here, a physically-based form of the Mie–Grüneisen equation of state (EOS) is derived for calculating 1d planar shock temperatures, as well as hot spot temperature distributions from heterogeneous impact simulations. This form utilises a multi-term Einstein oscillator model for specific heat, and is completely algebraic in terms of temperature, volume, an integrating factor, and the cold curve energy. Moreover, any empirical relation for the reference pressure and energy may be substituted into the equations via the use of a generalised reference function. The complete EOS is then applied to calculations of the Hugoniot temperature and simulation of hydrodynamic pore collapse using data for the secondary explosive, hexanitrostilbene (HNS). From these results, it is shown that the choice of EOS is even more significant for determining hot spot temperature distributions than planar shock states. The complete EOS is also compared to an alternative derivation assuming that specific heat is a function of temperature alone, i.e. cv(T). Temperature discrepancies on the order of 100–600 K were observed corresponding to the shock pressures required to initiate HNS (near 10 GPa). Overall, the results of this work will improve confidence in temperature predictions. By adopting this EOS, future work may be able to assign physical meaning to other thermally sensitive constitutive model parameters necessary to predict the shock initiation and detonation of heterogeneous explosives.
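The multi-term Einstein oscillator specific heat that this EOS relies on has the closed form c_v(T) = R Σ_i w_i x_i² e^{x_i}/(e^{x_i} - 1)² with x_i = θ_i/T, where θ_i are Einstein temperatures and w_i oscillator weights. A sketch with illustrative parameters (not the HNS fit):

```python
import numpy as np

def cv_einstein(T, thetas, weights, R=8.314):
    """Multi-term Einstein-oscillator specific heat (per mole):
    each term contributes w * x^2 * e^x / (e^x - 1)^2, x = theta/T.
    In the high-T limit every oscillator saturates at w*R
    (Dulong-Petit-like behaviour). Parameter values are illustrative."""
    T = np.asarray(T, dtype=float)
    cv = np.zeros_like(T)
    for th, w in zip(thetas, weights):
        x = th / T
        cv += w * x**2 * np.exp(x) / np.expm1(x) ** 2
    return R * cv
```

Because the expression is algebraic in T, it slots into the EOS without numerical integration, which is part of the appeal of the Einstein form over a full phonon spectrum.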
NASA Astrophysics Data System (ADS)
Misra, Shikha; Upadhyay Kahaly, M.; Mishra, S. K.
2017-02-01
A formalism describing the thermionic emission from a single-layer graphene sheet operating at a finite temperature, and the consequent formation of the thermionic sheath in its proximity, has been established. The formulation takes account of the two-dimensional density-of-states configuration, Fermi-Dirac (f-d) statistics of the electron energy distribution, Fowler's treatment of electron emission, and Poisson's equation. The thermionic current estimates based on the present analysis are found to be in reasonably good agreement with experimental observations (Zhu et al., Nano Res. 07, 1 (2014)). The analysis has further been simplified for the case where the f-d statistics of the electron energy distribution converge to a Maxwellian distribution. By using this formulation, the steady-state sheath features, viz., the spatial dependence of the surface potential and the electron density structure in the thermionic sheath, are derived and illustrated graphically for graphene parameters; the electron density in the sheath is seen to diminish within tens of Debye lengths. By utilizing a graphene-based cathode in configuring a thermionic converter (TC), an appropriate operating regime for achieving efficient energy conversion has been identified. A TC configured with the graphene-based cathode (operating at ~1200 K / work function 4.74 V) along with a metallic anode (operating at ~400 K / work function 2.0 V) is predicted to convert ~56% of the input thermal flux into electrical energy, corresponding to approximately 84% of the Carnot efficiency.
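For orientation, the familiar three-dimensional Maxwellian limit of thermionic emission is the Richardson-Dushman law J = A T² exp(-W/k_B T). The paper's graphene result differs because of the two-dimensional density of states, so the sketch below is only a classical point of comparison, not the authors' formula:

```python
import numpy as np

def richardson_current(T, W_eV, A=1.20173e6):
    """Classical 3D Richardson-Dushman thermionic current density
    J = A T^2 exp(-W / (kB T)) in A/m^2, with T in kelvin, the work
    function W in eV, and A the Richardson constant (~1.2e6 A m^-2 K^-2)."""
    kB_eV = 8.617333e-5          # Boltzmann constant in eV/K
    return A * T**2 * np.exp(-W_eV / (kB_eV * T))
```

Plugging in the abstract's operating points shows the hot high-work-function cathode still out-emits the cool low-work-function anode, the asymmetry a thermionic converter exploits.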
Towards an orientation-distribution-based multi-scale approach for remodelling biological tissues.
Menzel, A; Harrysson, M; Ristinmaa, M
2008-10-01
The mechanical behaviour of soft biological tissues is governed by phenomena occurring on different scales of observation. From the computational modelling point of view, a vital aspect consists of the appropriate incorporation of micromechanical effects into macroscopic constitutive equations. In this work, particular emphasis is placed on the simulation of soft fibrous tissues with the orientation of the underlying fibres being determined by distribution functions. A straightforward but convenient Taylor-type homogenisation approach links the micro- or rather meso-level of fibres to the overall macro-level and makes it possible to reflect macroscopically orthotropic response. As a key aspect of this work, evolution equations for the fibre orientations are accounted for so that physiological effects like turnover or rather remodelling are captured. Concerning numerical applications, the derived set of equations can be embedded into a nonlinear finite element context so that first elementary simulations are finally addressed.
Desama, C
1979-01-01
A study of the active population in Verviers during the first half of the 19th century shows that the distribution of immigrants into workers and nonworkers, and among the different lines of activity, was governed more by demographic factors than by the real needs of the economy. For instance, the changes in the demographic structure of the working population (younger people and larger numbers of women) removed any rigidity from the employment market. Each element of the production apparatus, including the service industries, was able to count on the human resources necessary for optimum functioning. The available surplus manpower, resulting from immigration, thus made it possible to reach the most profitable production level at the lowest salary costs under the work and technology conditions imposed by the company head. (author's)
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution that RBMs originate from. However, none of the existing schemes has considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
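The role of temperature as a sharpness parameter of the logistic function can be sketched directly; this is an illustrative toy (the weights and visible vector below are invented, and the exact TRBM parametrization in the paper may differ):

```python
import numpy as np

def hidden_activation(v, W, b, T=1.0):
    """Hidden-unit firing probabilities of an RBM-like model with a
    temperature T that scales the sharpness of the logistic function:
    sigma(x/T). Low T -> sharp, selective units; high T -> near-uniform 0.5."""
    pre = v @ W + b
    return 1.0 / (1.0 + np.exp(-pre / T))

rng = np.random.default_rng(0)
v = np.array([1.0, 0.0, 1.0])          # a toy binary visible vector
W = rng.normal(size=(3, 4))            # toy visible-to-hidden weights
b = np.zeros(4)

p_cold = hidden_activation(v, W, b, T=0.1)   # sharp: probabilities pushed to 0 or 1
p_hot = hidden_activation(v, W, b, T=10.0)   # flat: probabilities hover near 0.5
```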
Order statistics applied to the most massive and most distant galaxy clusters
NASA Astrophysics Data System (ADS)
Waizmann, J.-C.; Ettori, S.; Bartelmann, M.
2013-06-01
In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) are sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift are particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that, for the combination of the largest and the second largest observation, it is most likely to find them realized with similar values in a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the Meta-Catalogue of X-ray detected Clusters of galaxies (MCXC) and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function.
For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
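The steepening of the cumulative distributions with increasing order follows from elementary order statistics: the nth largest of N independent draws is below x exactly when at most n-1 draws exceed x. A generic sketch (not the paper's cluster mass function; any parent CDF value F can be plugged in):

```python
import numpy as np
from math import comb

def cdf_nth_largest(F, n, N):
    """CDF of the nth largest of N iid draws, given the parent CDF value F:
    P = sum_{k=0}^{n-1} C(N, k) * (1 - F)**k * F**(N - k)."""
    F = np.asarray(F, dtype=float)
    return sum(comb(N, k) * (1 - F) ** k * F ** (N - k) for k in range(n))

F = np.linspace(0.0, 1.0, 501)
cdf_max = cdf_nth_largest(F, 1, 100)     # distribution of the single largest draw
cdf_10th = cdf_nth_largest(F, 10, 100)   # 10th largest: a steeper curve
```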
Jacobson, Bailey; Grant, James W A; Peres-Neto, Pedro R
2015-07-01
How individuals within a population distribute themselves across resource patches of varying quality has been an important focus of ecological theory. The ideal free distribution predicts equal fitness amongst individuals in a 1:1 ratio with resources, whereas resource defence theory predicts different degrees of monopolization (fitness variance) as a function of temporal and spatial resource clumping and population density. One overlooked landscape characteristic is the spatial distribution of resource patches, which alters the equitability of resource accessibility and thereby the effective number of competitors. While much work has investigated the influence of morphology on competitive ability for different resource types, less is known regarding the phenotypic characteristics conferring relative ability for a single resource type, particularly when exploitative competition predominates. Here we used young-of-the-year rainbow trout (Oncorhynchus mykiss) to test whether and how the spatial distribution of resource patches and population density interact to influence the level and variance of individual growth, and whether functional morphology relates to competitive ability. Feeding trials were conducted within stream channels under three spatial distributions of nine resource patches (distributed, semi-clumped and clumped) at two density levels (9 and 27 individuals). Average trial growth was greater in high-density treatments, with no effect of resource distribution. Within-trial growth variance showed opposite patterns across resource distributions: variance decreased at low population densities but increased at high population densities as patches became increasingly clumped, as a result of changes in the levels of interference vs. exploitative competition.
Within-trial growth was related to both pre- and post-trial morphology where competitive individuals were those with traits associated with swimming capacity and efficiency: larger heads/bodies/caudal fins and less angled pectoral fins. The different degrees of within-population growth variance at the same density level found here, as a function of spatial resource distribution, provide an explanation for the inconsistencies in within-site growth variance and population regulation often noted with regard to density dependence in natural landscapes. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
Lv, Zong-xia; Huang, Dong-Hong; Ye, Wei; Chen, Zi-rong; Huang, Wen-li; Zheng, Jin-ou
2014-06-01
This study aimed to investigate the resting-state brain network related to visuospatial working memory (VSWM) in patients with right temporal lobe epilepsy (rTLE). The functional mechanism underlying the cognitive impairment in VSWM was also determined. Fifteen patients with rTLE and 16 healthy controls matched for age, gender, and handedness underwent a 6-min resting-state functional MRI session and a neuropsychological test using VSWM_Nback. The VSWM-related brain network at rest was extracted using multiple independent component analysis; the spatial distribution and the functional connectivity (FC) parameters of the cerebral network were compared between groups. Behavioral data were subsequently correlated with the mean Z-value in voxels showing significant FC difference during intergroup comparison. The distribution of the VSWM-related resting-state network (RSN) in the group with rTLE was virtually consistent with that in the healthy controls. The distribution involved the dorsolateral prefrontal lobe and parietal lobe in the right hemisphere and the partial inferior parietal lobe and posterior lobe of the cerebellum in the left hemisphere (p<0.05, AlphaSim corrected). Between-group differences suggest that the group with rTLE had a decreased FC within the right superior frontal lobe (BA8), right middle frontal lobe, and right ventromedial prefrontal lobe compared with the controls (p<0.05, AlphaSim corrected). The regions of increased FC in rTLE were localized within the right superior frontal lobe (BA11), right superior parietal lobe, and left posterior lobe of the cerebellum (p<0.05, AlphaSim corrected). Moreover, patients with rTLE performed worse than controls in the VSWM_Nback test, and there were negative correlations between ACCmeanRT (2-back) and the mean Z-value in the voxels showing decreased or increased FC in rTLE (p<0.05). 
The results suggest that the alteration of the VSWM-related RSN might underpin the VSWM impairment in patients with rTLE and possibly implies a functional compensation by enlarging the FC within the ipsilateral cerebral network. Copyright © 2014 Elsevier Inc. All rights reserved.
Linear dispersion properties of ring velocity distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandas, Marek, E-mail: marek.vandas@asu.cas.cz; Hellinger, Petr; Institute of Atmospheric Physics, AS CR, Bocni II/1401, CZ-14100 Prague
2015-06-15
Linear properties of ring velocity distribution functions are investigated. The dispersion tensor in a form similar to the case of a Maxwellian distribution function, but for a general distribution function separable in velocities, is presented. Analytical forms of the dispersion tensor are derived for two cases of ring velocity distribution functions: one obtained from physical arguments and one for the usual, ad hoc ring distribution. The analytical expressions involve generalized hypergeometric, Kampé de Fériet functions of two arguments. For a set of plasma parameters, the two ring distribution functions are compared. At parallel propagation with respect to the ambient magnetic field, the two ring distributions give the same results, identical to the corresponding bi-Maxwellian distribution. At oblique propagation, the two ring distributions give similar results only for strong instabilities, whereas for weak growth rates their predictions are significantly different; the two ring distributions have different marginal stability conditions.
Shahar, Nitzan; Meiran, Nachshon
2015-01-01
Few studies have addressed action control training. In the current study, participants were trained over 19 days in an adaptive training task that demanded constant switching, maintenance and updating of novel action rules. Participants completed an executive functions battery before and after training that estimated processing speed, working memory updating, set-shifting, response inhibition and fluid intelligence. Participants in the training group showed greater improvement than a no-contact control group in processing speed, indicated by reduced reaction times in speeded classification tasks. No other systematic group differences were found across the different pre-post measurements. Ex-Gaussian fitting of the reaction-time distribution revealed that the reaction time reduction observed among trained participants was restricted to the right tail of the distribution, previously shown to be related to working memory. Furthermore, training effects were only found in classification tasks that required participants to maintain novel stimulus-response rules in mind, supporting the notion that the training improved working memory abilities. Training benefits were maintained in a 10-month follow-up, indicating relatively long-lasting effects. The authors conclude that training improved action-related working memory abilities. PMID:25799443
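The ex-Gaussian decomposition used above separates a Gaussian component from the exponential right tail linked to working memory. A sketch with simulated reaction times (SciPy's `exponnorm` is the ex-Gaussian, parametrized by K = tau/sigma; the numbers below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated reaction times (s): Gaussian component plus exponential right tail.
mu_true, sigma_true, tau_true = 0.45, 0.05, 0.15
rt = rng.normal(mu_true, sigma_true, 2000) + rng.exponential(tau_true, 2000)

# scipy.stats.exponnorm(K, loc=mu, scale=sigma) is the ex-Gaussian with tau = K * sigma.
K, mu, sigma = stats.exponnorm.fit(rt)
tau = K * sigma   # a training effect confined to the tail would show up in tau
```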
Current distribution in conducting nanowire networks
NASA Astrophysics Data System (ADS)
Kumar, Ankush; Vidhyadhiraja, N. S.; Kulkarni, Giridhar U.
2017-07-01
Conducting nanowire networks find diverse applications in solar cells, touch-screens, transparent heaters, sensors, and various related transparent conducting electrode (TCE) devices. The performance of these devices depends on the effective resistance, transmittance, and local current distribution in the networks. Although there have been rigorous studies addressing resistance and transmittance in TCEs, much less attention has been paid to the distribution of current. The present work addresses this compelling issue of understanding current distribution in TCE networks using analytical as well as Monte Carlo approaches. We quantified the current-carrying backbone region against isolated and dangling regions as a function of wire density (ranging from the percolation threshold to many multiples of the threshold) and compared the wired connectivity with that obtained from template-based methods. Further, the current distribution in the obtained backbone is studied using Kirchhoff's laws, which reveals that a significant fraction of the backbone (which is believed to be an active current component) may not be active for end-to-end current transport due to the formation of intervening circular loops. The study shows that conducting-wire-based networks possess hot spots (extremely high current carrying regions) which can be potential sources of failure. The fraction of these hot spots is found to decrease with increasing wire density, while they are completely absent in template-based networks. Thus, the present work discusses unexplored issues related to current distribution in conducting networks, which are necessary to choose the optimum network for the best TCE applications.
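The Kirchhoff analysis of a wire network reduces to solving a conductance Laplacian for the node potentials, from which every branch current follows. A toy sketch on a five-edge network (not the paper's Monte Carlo geometry; edges and conductances are made up):

```python
import numpy as np

# Current distribution via Kirchhoff's laws: build the conductance Laplacian G
# and solve G @ V = I for the node potentials V.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 2, 1.0), (1, 3, 1.0)]
n_nodes = 4
G = np.zeros((n_nodes, n_nodes))
for i, j, g in edges:
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

I = np.zeros(n_nodes)
I[0], I[3] = 1.0, -1.0                     # inject 1 A at node 0, extract at node 3
V = np.linalg.lstsq(G, I, rcond=None)[0]   # Laplacian is singular; lstsq fixes the gauge

# Branch currents reveal which edges carry the load ("hot spots").
branch = {(i, j): g * (V[i] - V[j]) for i, j, g in edges}
```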
Model-Free Feature Screening for Ultrahigh Dimensional Discriminant Analysis
Cui, Hengjian; Li, Runze
2014-01-01
This work is concerned with marginal sure independence feature screening for ultrahigh dimensional discriminant analysis. The response variable is categorical in discriminant analysis. This enables us to use the conditional distribution function to construct a new index for feature screening. In this paper, we propose a marginal feature screening procedure based on the empirical conditional distribution function. We establish the sure screening and ranking consistency properties for the proposed procedure without assuming any moment condition on the predictors. The proposed procedure enjoys several appealing merits. First, it is model-free in that its implementation does not require specification of a regression model. Second, it is robust to heavy-tailed distributions of predictors and the presence of potential outliers. Third, it allows the categorical response to have a diverging number of classes of order O(n^κ) for some κ ≥ 0. We assess the finite sample performance of the proposed procedure by Monte Carlo simulation studies and numerical comparison. We further illustrate the proposed methodology by empirical analyses of two real-life data sets. PMID:26392643
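A toy version of the idea, one feature at a time: compare each class-conditional empirical CDF with the pooled one and screen on the largest gap. This is an illustrative Kolmogorov-type stand-in; the paper's actual index is constructed differently from the conditional distribution function:

```python
import numpy as np

def screen_statistic(x, y):
    """Largest gap between any class-conditional empirical CDF and the pooled
    empirical CDF for one feature (illustrative screening index, not the
    paper's exact construction). Larger values suggest a more informative feature."""
    grid = np.sort(x)
    F_pool = np.arange(1, len(x) + 1) / len(x)          # pooled ECDF on the sorted grid
    stat = 0.0
    for c in np.unique(y):
        xc = np.sort(x[y == c])
        F_c = np.searchsorted(xc, grid, side="right") / len(xc)
        stat = max(stat, float(np.max(np.abs(F_c - F_pool))))
    return stat

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                 # categorical response, 2 classes
informative = rng.normal(loc=y.astype(float))   # distribution shifts with the class
noise = rng.normal(size=500)                    # independent of the class
```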
Yi, Qu; Zhan-ming, Li; Er-chao, Li
2012-11-01
A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented by introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analyses are performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
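The RBF approximation of an output PDF can be sketched as a normalized weighted sum of Gaussian basis functions, with the dynamic weights being the quantities an observer would track. The centers, widths, and weights below are made up for illustration:

```python
import numpy as np

def rbf_pdf(x, centers, widths, weights):
    """Output PDF approximated as a weighted sum of Gaussian radial basis
    functions. Weights are normalized so the approximated PDF integrates to one;
    in an FDD scheme these weights would evolve dynamically."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    basis = np.exp(-0.5 * ((x[:, None] - centers) / widths) ** 2)
    basis = basis / (widths * np.sqrt(2.0 * np.pi))   # each basis integrates to one
    return basis @ w

x = np.linspace(-5.0, 5.0, 2001)
pdf = rbf_pdf(x,
              centers=np.array([-1.0, 0.0, 2.0]),
              widths=np.array([0.5, 1.0, 0.7]),
              weights=np.array([0.2, 0.5, 0.3]))
```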
Investigation of runaway electron dissipation in DIII-D using a gamma ray imager
NASA Astrophysics Data System (ADS)
Lvovskiy, A.; Paz-Soldan, C.; Eidietis, N.; Pace, D.; Taussig, D.
2017-10-01
We report the findings of a novel gamma ray imager (GRI) used to study runaway electron (RE) dissipation in the quiescent regime on the DIII-D tokamak. The GRI measures the bremsstrahlung emission from REs, providing information on the RE energy spectrum and the RE distribution across a poloidal cross-section. It consists of a lead pinhole camera illuminating a matrix of BGO detectors placed in the DIII-D mid-plane. The number of detectors was recently doubled to provide better spatial resolution, and additional detector shielding was implemented to reduce un-collimated gamma flux and increase the signal-to-noise ratio. Under varying loop voltage, toroidal magnetic field, and plasma density, a non-monotonic RE distribution function has been revealed as a result of the interplay between the electric field, synchrotron radiation, and collisional damping. A fraction of the high-energy RE population grows, forming a bump in the RE distribution function, while synchrotron radiation decreases. A possible destabilizing effect of the Parail-Pogutse instability on the RE population will also be discussed. Work supported by the US DOE under DE-FC02-04ER54698.
On the Tracy-Widom β Distribution for β=6
NASA Astrophysics Data System (ADS)
Grava, Tamara; Its, Alexander; Kapaev, Andrei; Mezzadri, Francesco
2016-11-01
We study the Tracy-Widom distribution function for Dyson's β-ensemble with β = 6. The starting point of our analysis is the recent work of I. Rumanov, where he produces a Lax-pair representation for the Bloemendal-Virág equation. The latter is a linear PDE which describes the Tracy-Widom functions corresponding to general values of β. Using his Lax pair, Rumanov derives an explicit formula for the Tracy-Widom β=6 function in terms of the second Painlevé transcendent and the solution of an auxiliary ODE. Rumanov also shows that this formula allows him to derive formally the asymptotic expansion of the Tracy-Widom function. Our goal is to make Rumanov's approach, and hence the asymptotic analysis it provides, rigorous. In this paper, the first in a series, we show that Rumanov's Lax pair can be interpreted as a certain gauge transformation of the standard Lax pair for the second Painlevé equation. This gauge transformation, however, contains functional parameters which are defined via an auxiliary nonlinear ODE equivalent to the auxiliary ODE of Rumanov's formula. The gauge interpretation of Rumanov's Lax pair allows us to highlight the steps of Rumanov's original method which need rigorous justification in order to make the method complete. We provide a rigorous justification of one of these steps. Namely, we prove that the Painlevé function involved in Rumanov's formula is indeed, as suggested by Rumanov, the Hastings-McLeod solution of the second Painlevé equation. The key issue, which we also discuss and which is still open, is the question of integrability of the auxiliary ODE in Rumanov's formula. We note that this question is crucial for the rigorous asymptotic analysis of the Tracy-Widom function. We also note that our work is a partial answer to one of the problems related to β-ensembles formulated by Percy Deift during the June 2015 Montreal Conference on integrable systems.
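For context, the second Painlevé transcendent and the Hastings-McLeod solution referred to above are standard objects (these definitions are textbook material, not taken from this abstract):

```latex
% Painlevé II and its Hastings-McLeod solution:
q''(x) = 2\,q(x)^{3} + x\,q(x),
\qquad q(x) \sim \operatorname{Ai}(x) \quad (x \to +\infty).
% For comparison, the classical beta = 2 Tracy-Widom distribution is
% expressed through this same solution by
F_{2}(s) = \exp\!\Big( -\int_{s}^{\infty} (x-s)\, q(x)^{2}\, \mathrm{d}x \Big).
```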
Interaction between high harmonic fast waves and fast ions in NSTX/NSTX-U plasmas
NASA Astrophysics Data System (ADS)
Bertelli, N.; Valeo, E. J.; Gorelenkova, M.; Green, D. L.; RF SciDAC Team
2016-10-01
Fast wave (FW) heating in the ion cyclotron range of frequencies (ICRF) has been successfully used to sustain and control fusion plasma performance, and it will likely play an important role in the ITER experiment. As demonstrated in the NSTX and DIII-D experiments, the interactions between fast waves and fast ions can be strong enough to significantly modify the fast ion population from neutral beam injection. In fact, it has recently been found in NSTX that FWs can modify and, under certain conditions, even suppress energetic-particle-driven instabilities, such as toroidal Alfvén eigenmodes, global Alfvén eigenmodes, and fishbones. This paper examines such interactions in NSTX/NSTX-U plasmas by using the recent extension of the RF full-wave code TORIC to include non-Maxwellian ion distribution functions. Particular attention is given to the evolution of the fast ion distribution function with and without RF. Tests of the RF kick operator implemented in the Monte Carlo particle code NUBEAM are also discussed, in order to move towards a self-consistent evaluation of the RF wave field and the ion distribution functions in the TRANSP code. Work supported by US DOE Contract DE-AC02-09CH11466.
Wave processes in dusty plasma near the Moon’s surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozova, T. I.; Kopnin, S. I.; Popel, S. I., E-mail: popel@iki.rssi.ru
2015-10-15
A plasma-dust system in the near-surface layer on the illuminated side of the Moon is described. The system involves photoelectrons, solar-wind electrons and ions, neutrals, and charged dust grains. Linear and nonlinear waves in the plasma near the Moon's surface are discussed. It is noted that the velocity distribution of photoelectrons can be represented as a superposition of two distribution functions characterized by different electron temperatures: lower energy electrons are knocked out of lunar regolith by photons with energies close to the work function of regolith, whereas higher energy electrons are knocked out by photons corresponding to the peak at 10.2 eV in the solar radiation spectrum. The anisotropy of the electron velocity distribution function is distorted due to the solar wind motion with respect to photoelectrons and dust grains, which leads to the development of instability and excitation of high-frequency oscillations with frequencies in the range of Langmuir and electromagnetic waves. In addition, dust acoustic waves can be excited, e.g., near the lunar terminator. Solutions in the form of dust acoustic solitons corresponding to the parameters of the dust-plasma system in the near-surface layer of the illuminated Moon's surface are found. Ranges of possible Mach numbers and soliton amplitudes are determined.
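The two-temperature photoelectron population described above is, schematically, a superposition of two Maxwellians. A 1-D sketch (the densities and temperatures below are placeholders for illustration, not values from the paper):

```python
import numpy as np

KB = 1.380649e-23    # Boltzmann constant (J/K)
ME = 9.1093837e-31   # electron mass (kg)

def maxwellian_1d(v, n, T):
    """1-D Maxwellian velocity distribution with density n (m^-3) and temperature T (K)."""
    a = ME / (2.0 * KB * T)
    return n * np.sqrt(a / np.pi) * np.exp(-a * v**2)

def photoelectron_1d(v, n1, T1, n2, T2):
    """Superposition of a cold and a hot photoelectron population."""
    return maxwellian_1d(v, n1, T1) + maxwellian_1d(v, n2, T2)

v = np.linspace(-4e6, 4e6, 20001)   # velocity grid (m/s)
f = photoelectron_1d(v, n1=1.0e8, T1=1.0e4, n2=1.0e7, T2=5.0e4)
```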
Advancing mangrove macroecology
Rivera-Monroy, Victor H.; Osland, Michael J.; Day, John W.; Ray, Santanu; Rovai, Andre S.; Day, Richard H.; Mukherjee, Joyita; Rivera-Monroy, Victor H.; Lee, Shing Yip; Kristensen, Erik; Twilley, Robert R.
2017-01-01
Mangrove forests provide a wide range of ecosystem services to society, yet they are among the most anthropogenically impacted coastal ecosystems in the world. In this chapter, we discuss and provide examples of how macroecology can advance our understanding of mangrove ecosystems. Macroecology is broadly defined as a discipline that uses statistical analyses to investigate large-scale, universal patterns in the distribution, abundance, diversity, and organization of species and ecosystems, including the scaling of ecological processes and structural and functional relationships. Macroecological methods can be used to advance our understanding of how non-linear responses in natural systems can be triggered by human impacts at local, regional, and global scales. Although macroecology has the potential to yield knowledge of universal patterns and processes that govern mangrove ecosystems, the application of macroecological methods to mangroves has historically been limited by constraints in data quality and availability. Here we provide examples that include evaluations of the variation in mangrove forest ecosystem structure and function in relation to macroclimatic drivers (e.g., temperature and rainfall regimes) and climate change. Additional examples include work focused on the continental distribution of aboveground net primary productivity and carbon storage, which are rapidly advancing research areas. These examples demonstrate the value of a macroecological perspective for understanding global- and regional-scale effects of both changing environmental conditions and management actions on ecosystem structure, function, and the supply of goods and services. We also present current trends in mangrove modeling approaches and their potential utility for testing hypotheses about mangrove structural and functional properties.
Given the gap in relevant experimental work at the regional scale, we also discuss the potential use of mangrove restoration and rehabilitation projects as macroecological studies that advance the critical selection and conservation of ecosystem services when managing mangrove resources. Future work to further incorporate macroecology into mangrove research will require a concerted effort by research groups and institutions to launch research initiatives and synthesize data collected across broad biogeographic regions.
Vertical distribution of ozone: a new method of determination using satellite measurements.
Aruga, T; Igarashi, T
1976-01-01
A new method to determine the vertical distribution of atmospheric ozone over a wide altitude range from spectral measurements of backscattered solar UV radiation is proposed. Equations for diffuse reflection in an inhomogeneous atmosphere are introduced, and some theoretical approximations are discussed. An inversion equation is formulated in such a way that the change of radiance at each wavelength, caused by a minute relative increment of ozone density at each altitude, is obtained exactly. The equation is solved by an iterative procedure using the weighting function obtained in this work. The results of computer simulation indicate that the ozone distribution from the mesopause to the tropopause can be determined, and that although the complicated fine structure of the profile cannot be reproduced exactly, the smoothed ozone distribution and the total content can be determined with almost the same accuracy as the accuracies of the measurement and the theoretical calculation of the spectral intensity.
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
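The data-flow-graph idea, processing functions wired together as interchangeable objects, can be illustrated in a few lines of plain Python. This is a conceptual sketch only, not Pyre's actual API; all names below are invented:

```python
class Node:
    """A processing step whose inputs are the outputs of upstream nodes."""
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

def run_graph(nodes):
    """Execute nodes in order, feeding each node the outputs it names.
    Assumes the node list is already topologically sorted."""
    results = {}
    for node in nodes:
        args = [results[i] for i in node.inputs]
        results[node.name] = node.func(*args)
    return results

# Toy radar-like pipeline: generate -> filter -> detect peak.
pipeline = [
    Node("raw", lambda: [1.0, 5.0, 2.0, 7.0]),
    Node("filtered", lambda x: [v for v in x if v > 1.5], inputs=["raw"]),
    Node("peak", max, inputs=["filtered"]),
]
out = run_graph(pipeline)
```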
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
NASA Astrophysics Data System (ADS)
Izacard, Olivier
2016-08-01
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. 
As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.
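The Kappa distribution mentioned above has a simple closed form. A 1-D sketch with one common normalization (conventions for the exponent and normalization vary across the literature, so this is not necessarily the exact form used in the paper):

```python
import numpy as np
from scipy.special import gamma

def kappa_1d(v, theta, kappa):
    """1-D kappa velocity distribution, normalized to unity:
    f(v) = N * (1 + v^2 / (kappa * theta^2))^(-kappa),
    N = Gamma(kappa) / (Gamma(kappa - 1/2) * sqrt(pi * kappa) * theta).
    Reduces to the Maxwellian exp(-v^2/theta^2)/(sqrt(pi)*theta) as kappa -> inf."""
    norm = gamma(kappa) / (gamma(kappa - 0.5) * np.sqrt(np.pi * kappa) * theta)
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-kappa)

v = np.linspace(-30.0, 30.0, 60001)          # velocity in units of theta
f_kappa = kappa_1d(v, theta=1.0, kappa=3.0)  # heavy super-thermal tails
f_maxw = np.exp(-v**2) / np.sqrt(np.pi)      # Maxwellian with theta = 1
```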
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izacard, Olivier, E-mail: izacard@llnl.gov
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basismore » sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. 
As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.
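The Kappa distribution used above as one of the three analytic representations can be written in closed form; the following is a minimal one-dimensional sketch (not the paper's code; the normalization follows one common convention with exponent -kappa, and `theta` denotes the thermal speed) showing its enhanced super-thermal tail relative to a Maxwellian:

```python
import numpy as np
from math import lgamma, pi, sqrt, exp

def kappa_1d(v, theta, kappa):
    """1-D Kappa distribution, exponent -kappa convention (requires kappa > 1/2).

    Uses lgamma to avoid overflow of Gamma(kappa) at large kappa.
    """
    norm = exp(lgamma(kappa) - lgamma(kappa - 0.5)) / (sqrt(pi * kappa) * theta)
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-kappa)

def maxwellian_1d(v, theta):
    """Unit-density 1-D Maxwellian with thermal speed theta."""
    return np.exp(-(v / theta) ** 2) / (sqrt(pi) * theta)
```

As kappa grows the Kappa distribution approaches the Maxwellian, while at small kappa the power-law tail carries the super-thermal particles responsible for the kinetic corrections discussed in the abstract.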
NASA Astrophysics Data System (ADS)
Murray, Eamonn; Fahy, Stephen
2014-03-01
Using first principles electronic structure methods, we calculate the induced force on the Eg (zone centre transverse optical) phonon mode in bismuth immediately after absorption of polarized light. When radiation with polarization perpendicular to the c-axis is absorbed in bismuth, the distribution of excited electrons and holes breaks the three-fold rotational symmetry and leads to a net force on the atoms in the direction perpendicular to the axis. We calculate the initial excited electronic distribution as a function of photon energy and polarization and find the resulting transverse and longitudinal forces experienced by the atoms. Using the measured, temperature-dependent rate of decay of the transverse force [2], we predict the approximate amplitude of induced atomic motion in the Eg mode as a function of temperature and optical fluence. This work is supported by Science Foundation Ireland and a Marie Curie International Incoming Fellowship.
Rafikova, Elvira R; Melikov, Kamran; Chernomordik, Leonid V
2010-01-01
Endoplasmic reticulum and nuclear envelope rearrangements after mitosis are often studied in the reconstitution system based on Xenopus egg extract. In our recent work we partially replaced the membrane vesicles in the reconstitution mix with protein-free liposomes to explore the relative contributions of cytosolic and transmembrane proteins. Here we discuss our finding that cytosolic proteins mediate fusion between membranes lacking functional transmembrane proteins and the role of membrane fusion in endoplasmic reticulum and nuclear envelope reorganization. Cytosol-dependent liposome fusion has allowed us to restore, without adding transmembrane nucleoporins, functionality of nuclear pores, their spatial distribution and chromatin decondensation in nuclei formed at insufficient amounts of membrane material and characterized by only partial decondensation of chromatin and lack of nuclear transport. Both the mechanisms and the biological implications of the discovered coupling between spatial distribution of nuclear pores, chromatin decondensation and nuclear transport are discussed.
Fox, Sharon E.; Wagner, Jennifer B.; Shrock, Christine L.; Tager-Flusberg, Helen; Nelson, Charles A.
2013-01-01
Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7-month-old infants at high-risk for developing autism and typically developing controls at low-risk, using a face perception task designed to differentiate between the effects of face identity and facial emotions on neural response using functional Near-Infrared Spectroscopy. In addition, we employed independent component analysis, as well as a novel method of condition-related component selection and classification to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms, but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. PMID:23576966
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Yuanyuan; Liu, Hui; Hu, Peng
The effect of the radial position of gas holes in the distributor on the performance of a cylindrical Hall thruster was investigated. A series of gas distributors with different radial hole positions (R_g) were designed in the experiment. The results show that a larger R_g leads to higher ion current and electron current, while the beam angle in the plume is narrowed. Moreover, the peak energy in the ion energy distribution function increases, together with a narrowing of the ion energy distribution function. As a result, the overall performance is enhanced. It is suggested that increasing R_g could move the main ionization region towards the anode, which could increase the ion velocity and produce a clearer separation of the acceleration region from the ionization region. This work can provide some optimal design ideas to improve the performance of the thruster.
NASA Astrophysics Data System (ADS)
McCarren, Dustin; Vandervort, Robert; Soderholm, Mark; Carr, Jerry, Jr.; Galante, Matthew; Magee, Richard; Scime, Earl
2013-10-01
Cavity ring-down spectroscopy (CRDS) is a proven, ultra-sensitive, cavity-enhanced absorption spectroscopy technique. When combined with a continuous-wave (CW) diode laser that has a sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the ion velocity distribution function (IVDF), can be measured. Measurements of IVDFs can be made using established techniques, such as laser-induced fluorescence (LIF). However, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density. This usually limits LIF to ions and atoms with large metastable state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower density populations of ion and atom states. In this work we present ongoing measurements with the CW-CRDS diagnostic and discuss the technical challenges of using CW-CRDS to make measurements in a helicon plasma.
Simulation of an expanding plasma using the Boris algorithm
NASA Astrophysics Data System (ADS)
Neal, Luke; Aguirre, Evan; Steinberger, Thomas; Good, Timothy; Scime, Earl
2017-10-01
We present a Boris algorithm simulation, in a cylindrical geometry, of charged particle motion in a helicon plasma confined by a diverging magnetic field. Laboratory measurements of ion velocity distribution functions (IVDFs) provide evidence for acceleration of ions into the divergent field region in the center of the discharge. The increase in ion velocity is inconsistent with expectations for simple magnetic moment conservation given the magnetic field mirror ratio and is therefore attributed in the literature to the presence of a double layer. Using measured electric fields and IVDFs (at different radial locations across the entire plasma column) upstream and downstream of the divergent magnetic field region, we compare predictions for the downstream IVDFs to measurements. We also present predictions for the evolution of the electron velocity distribution function downstream of the divergent magnetic field. This work was supported by U.S. National Science Foundation Grant No. PHY-1360278.
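The single-particle Boris step underlying such a simulation (half electric kick, magnetic rotation, half kick) can be sketched in a few lines. This is the generic textbook form, not the authors' cylindrical-geometry code, and the field values in the usage below are placeholders:

```python
import numpy as np

def boris_step(x, v, E, B, q_over_m, dt):
    """Advance one particle by dt: half electric kick, magnetic rotation, half kick."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt                  # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)      # exact-norm rotation about B
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new
```

With E = 0 the magnetic half of the step is a pure rotation, so the particle speed (and hence kinetic energy) is conserved to round-off over arbitrarily many gyro-orbits, which is the main reason the Boris scheme is preferred for long particle-pushing runs.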
Water bag modeling of a multispecies plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, P.; Gravier, E.; Besse, N.
2011-03-15
We report in the present paper a new modeling method to study multiple-species dynamics in magnetized plasmas. The method is based on gyrowater bag modeling, which consists of using a multistep-like distribution function along the velocity direction parallel to the magnetic field. The choice of a water bag representation allows an elegant link between kinetic and fluid descriptions of a plasma. The gyrowater bag model has recently been adapted to the context of strongly magnetized plasmas. We present its extension to the case of multi-ion-species magnetized plasmas, each ion species being modeled via a multiwater bag distribution function. The water bag modeling is discussed in detail, under the simplification of a cylindrical geometry that is convenient for linear plasma devices. As an illustration, results obtained in the linear framework for ion temperature gradient instabilities are presented, which are shown to agree qualitatively with previous works.
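The multistep (water bag) idea can be illustrated by approximating a 1-D Maxwellian with equally spaced contour levels. This is a hedged toy construction, not the gyrowater bag model itself; the bag heights and edges below are chosen only so that the layer sums reproduce the density:

```python
import numpy as np

def water_bags(n_bags=20):
    """Water-bag (multistep) approximation of the unit 1-D Maxwellian exp(-v^2)/sqrt(pi).

    Contour heights F_j are equally spaced between 0 and the peak value f0;
    each bag edge a_j is the speed at which the Maxwellian crosses F_j, and
    every bag contributes the incremental height f0/n_bags on |v| < a_j.
    """
    f0 = 1.0 / np.sqrt(np.pi)
    heights = f0 * (np.arange(n_bags, 0, -1) - 0.5) / n_bags   # descending levels
    edges = np.sqrt(-np.log(heights * np.sqrt(np.pi)))          # f(a_j) = F_j
    weights = np.full(n_bags, f0 / n_bags)
    return edges, weights

def density(edges, weights):
    """Zeroth moment of the step distribution: sum over bags of 2*a_j*delta_F_j."""
    return float(np.sum(2.0 * edges * weights))
```

The density is a layer-cake (midpoint) sum that converges to the Maxwellian moment as the number of bags grows, which is the sense in which the water bag representation bridges kinetic and fluid descriptions.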
Relative performance of selected detectors
NASA Astrophysics Data System (ADS)
Ranney, Kenneth I.; Khatri, Hiralal; Nguyen, Lam H.; Sichina, Jeffrey
2000-08-01
The quadratic polynomial detector (QPD) and the radial basis function (RBF) family of detectors, including the Bayesian neural network (BNN), might well be considered workhorses within the field of automatic target detection (ATD). The QPD works reasonably well when the data are unimodal, and it achieves the best possible performance if the underlying data follow a Gaussian distribution. The BNN, on the other hand, has been applied successfully in cases where the underlying data are assumed to follow a multimodal distribution. We compare the performance of a BNN detector and a QPD for various scenarios synthesized from a set of Gaussian probability density functions (pdfs). This data synthesis allows us to control parameters such as modality and correlation, which, in turn, enables us to create data sets that can probe the weaknesses of the detectors. We present results for different data scenarios and different detector architectures.
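Since the QPD is equivalent to the Gaussian likelihood-ratio test, a minimal version can be fit directly from class samples. This is a generic quadratic discriminant sketch, not the authors' ATD implementation; class labels, priors, and thresholds are illustrative:

```python
import numpy as np

def quadratic_detector(X0, X1):
    """Fit a Gaussian likelihood-ratio (quadratic polynomial) detector from samples."""
    def fit(X):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        return mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
    mu0, P0, ld0 = fit(X0)
    mu1, P1, ld1 = fit(X1)
    def score(x):
        d0, d1 = x - mu0, x - mu1
        # log-likelihood ratio up to class priors; positive favors class 1
        return 0.5 * (d0 @ P0 @ d0 - d1 @ P1 @ d1) + 0.5 * (ld0 - ld1)
    return score
```

On unimodal Gaussian data this detector is optimal, which is exactly the regime the abstract credits to the QPD; on multimodal data a single pair of Gaussians misfits the pdf, motivating the BNN comparison.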
Model to Test Electric Field Comparisons in a Composite Fairing Cavity
NASA Technical Reports Server (NTRS)
Trout, Dawn; Burford, Janessa
2012-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
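A CDF comparison of modeled against measured field data reduces to comparing two empirical CDFs. The following is a generic sketch (not the study's actual procedure) using the Kolmogorov-Smirnov-style maximum vertical gap as a single correlation figure of merit:

```python
import numpy as np

def max_cdf_gap(a, b):
    """Largest vertical gap between the empirical CDFs of samples a and b."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.union1d(a, b)
    fa = np.searchsorted(a, grid, side="right") / a.size
    fb = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(fa - fb)))
```

A gap near 0 indicates the modeled and measured field-magnitude distributions agree across all levels; a gap near 1 indicates essentially disjoint distributions.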
Model to Test Electric Field Comparisons in a Composite Fairing Cavity
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Burford, Janessa
2013-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
Software Cost Measuring and Reporting. One of the Software Acquisition Engineering Guidebook Series.
1979-01-02
through the peripherals. However, his interaction is usually minimal since, by definition, the automatic test ... (and performance criteria) ... Since TS ... performs its intended functions properly. Software estimating is still heavily dependent on experienced judgement. However, quantitative methods ... apply to systems of totally different content. The Quantitative guideline may ... can be distributed to specialists who are most familiar with the work. One
Mechanisms of Bacterial Spore Germination and Its Heterogeneity
2015-01-10
mathematical model describing spore germination has been developed; 9) much of the work above has been extended to Clostridium spores; and 10) ~90...germination. C) Faeder lab, with Li and Setlow labs. We have developed a mathematical model of bacterial spore germination that accounts for...heterogeneity in both Tlag and commitment times. The model is built from three main mathematical components: a receptor distribution function
Data Association Algorithms for Tracking Satellites
2013-03-27
validation of the new tools. The description provided here includes the mathematical background and description of the models implemented, as well as a...simulation development. This work includes the addition of higher-fidelity models in CU-TurboProp and validation of the new tools. The description...ode45(), used in Ananke, and (3) provide the necessary inputs to the bidirectional reflectance distribution function (BRDF) model provided by Pacific
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Program summary: Program title: LambertW. Catalogue identifier: AENC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 1335. No. of bytes in distributed program, including test data, etc.: 25 283. Distribution format: tar.gz. Programming language: C++ (with suitable wrappers it can be called from C, Fortran, etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl, etc. Computer: All systems with a C++ compiler. Operating system: All Unix flavors, Windows. It might work with others. RAM: Small memory footprint, less than 1 MB. Classification: 1.1, 4.7, 11.3, 11.9. Nature of problem: Find a fast and accurate numerical implementation for the Lambert W function. Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Additional comments: Distribution file contains the command-line utility lambert-w; Doxygen comments included in the source files; Makefile. Running time: The tests provided take only a few seconds to run.
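Halley's iteration for W, the core of the library's solution method, can be sketched in a few lines of Python. This is a simplified illustration of the principal branch only, with deliberately cruder initial guesses than the published C++ branch-point, asymptotic-series, and rational-fit approximations:

```python
import math

def lambert_w(x, tol=1e-12, max_iter=50):
    """Principal branch W0(x): solve w*exp(w) = x for x >= -1/e by Halley iteration."""
    if x < -1.0 / math.e:
        raise ValueError("W0(x) is real only for x >= -1/e")
    if x == -1.0 / math.e:
        return -1.0
    # Crude starting points: branch-point expansion near -1/e, log(1+x) elsewhere.
    if x <= -0.25:
        w = -1.0 + math.sqrt(2.0 * (1.0 + math.e * x))
    else:
        w = math.log(1.0 + x)
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x
        # Halley step: w <- w - f / (f' - f*f''/(2*f')), with
        # f' = ew*(1+w) and f'' = ew*(2+w)
        denom = ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0)
        w_new = w - f / denom
        if abs(w_new - w) <= tol * (1.0 + abs(w_new)):
            return w_new
        w = w_new
    return w
```

Halley's method converges cubically, so even these rough starting values typically reach machine precision in a handful of iterations; the published code achieves the same with fewer iterations via sharper initial approximations.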
NASA Astrophysics Data System (ADS)
Yao, Shuo; Marsch, Eckart; Tu, Chuan-Yi; Schwenn, Rainer
2010-05-01
This work presents in situ solar wind observations of three magnetic clouds (MCs) that contain cold high-density material when Helios 2 was located at 0.3 AU on 9 May 1979, 0.5 AU on 30 March 1976, and 0.7 AU on 24 December 1978. In the cold high-density regions embedded in the interplanetary coronal mass ejections we find (1) that the number density of protons is higher than in other regions inside the magnetic cloud, (2) the possible existence of He+, (3) that the thermal velocity distribution functions are more isotropic and appear to be colder than in the other regions of the MC, and the proton temperature is lower than that of the ambient plasma, and (4) that the associated magnetic field configuration can for all three MC events be identified as a flux rope. This cold high-density region is located at the polarity inversion line in the center of the bipolar structure of the MC magnetic field (consistent with previous solar observation work that found that a prominence lies over the neutral line of the related bipolar solar magnetic field). Specifically, for the first magnetic cloud event on 8 May 1979, a coronal mass ejection (CME) was related to an eruptive prominence previously reported as a result of the observation of Solwind (P78-1). Therefore, we identify the cold and dense region in the MC as the prominence material. It is the first time that prominence ejecta were identified by both the plasma and magnetic field features inside 1 AU, and it is also the first time that the thermal ion velocity distribution functions were used to investigate the microstate of the prominence material. Moreover, from our three cases, we also found that this material tended to fall behind the magnetic cloud and become smaller as it propagated farther away from the Sun, which confirms speculations in previous work. Overall, our in situ observations are consistent with three-part CME models.
Significance tests for functional data with complex dependence structure.
Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J
2015-01-01
We propose an L2-norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.
High work-function hole transport layers by self-assembly using a fluorinated additive
Mauger, Scott A.; Li, Jun; Özmen, Özge Tüzün; ...
2013-10-30
The hole transport polymer poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) (PEDOT:PSS) derives many of its favorable properties from a PSS-rich interfacial layer that forms spontaneously during coating. Since PEDOT:PSS is only usable as a blend, it is not possible to study PEDOT:PSS without this interfacial layer. Through the use of the self-doped polymer sulfonated poly(thiophene-3-[2-(2-methoxyethoxy) ethoxy]-2,5-diyl) (S-P3MEET) and a polyfluorinated ionomer (PFI), it is possible to compare transparent conducting organic films with and without interfacial layers and to understand their function. Using neutron reflectometry, we show that PFI preferentially segregates at the top surface of the film during coating and forms a thermally stable surface layer. Because of this distribution, we find that even small amounts of PFI increase the electron work function of the hole transport layer (HTL). We also find that annealing at 150°C and above reduces the work function compared to samples heated at lower temperatures. Using near-edge x-ray absorption fine structure spectroscopy and gas chromatography, we show that this reduction in work function is due to S-P3MEET being doped by PFI. Organic photovoltaic devices with S-P3MEET/PFI hole transport layers yield higher power conversion efficiency than devices with pure S-P3MEET or PEDOT:PSS hole transport layers. Additionally, devices with a doped interface layer of S-P3MEET/PFI show superior performance to those with un-doped S-P3MEET.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Jeff; Rylander, Matthew; Boemer, Jens
Under the fourth solicitation of the California Solar Initiative (CSI) Research, Development, Demonstration and Deployment (RD&D) Program established by the California Public Utilities Commission (CPUC), the Electric Power Research Institute (EPRI), National Renewable Energy Laboratory (NREL), and Sandia National Laboratories (SNL), with data provided by Pacific Gas and Electric (PG&E), Southern California Edison (SCE), and San Diego Gas and Electric (SDG&E), conducted research to determine optimal default settings for distributed energy resource advanced inverter controls. The inverter functions studied are aligned with those developed by the California Smart Inverter Working Group (SIWG) and those being considered by the IEEE 1547 Working Group. The advanced inverter controls examined to improve the distribution system response included power factor, volt-var, and volt-watt. The advanced inverter controls examined to improve the transmission system response included frequency and voltage ride-through as well as dynamic voltage support. This CSI RD&D project accomplished the task of developing methods to derive distribution-focused advanced inverter control settings, selecting a diverse set of feeders to evaluate the methods through detailed analysis, and evaluating the effectiveness of each method developed. Inverter settings focused on transmission system performance were also evaluated and verified. Based on the findings of this work, the suggested advanced inverter settings and methods to determine settings can be used to improve the accommodation of distributed energy resources (PV specifically). The voltage impact from PV can be mitigated using power factor, volt-var, or volt-watt control, while the bulk system impact can be improved with frequency/voltage ride-through.
Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.
2000-01-01
Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. 
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction of the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
Searches for New Physics Using High Mass Dimuons at the CDF II Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagoz Unel, Muge
2004-12-01
This work describes the measurement of the inclusive jet cross section in the D0 experiment. This cross section is computed as a function of jet transverse momentum, in several rapidity intervals. This quantity is sensitive to the proton structure and is crucial for the determination of parton distribution functions (PDFs), essentially for the gluon at high proton momentum fraction. The measurement presented here gives the first values obtained in Tevatron Run II for the cross section in several rapidity intervals, for an integrated luminosity of 143 pb⁻¹. The results are in agreement, within the uncertainties, with theoretical Standard Model predictions, showing no evidence for new physics. This work points out the aspects of the detector which need better understanding to reach Run I precision and to constrain the PDFs.
4-Mercaptophenylboronic acid: conformation, FT-IR, Raman, OH stretching and theoretical studies.
Parlak, Cemal; Ramasami, Ponnadurai; Tursun, Mahir; Rhyman, Lydia; Kaya, Mehmet Fatih; Atar, Necip; Alver, Özgür; Şenyel, Mustafa
2015-06-05
4-Mercaptophenylboronic acid (4-mpba, C6H7BO2S) was investigated experimentally by vibrational spectroscopy. The molecular structure and spectroscopic parameters were studied by computational methods. The molecular dimer was investigated for intermolecular hydrogen bonding. Potential energy distribution analysis of normal modes was performed to identify characteristic frequencies. The present work provides a simple physical picture of the OH stretch vibrational spectra of 4-mpba and analogues of the compound studied. When the different computational methods are compared, there is strong evidence of the better performance of the BLYP functional than the popular B3LYP functional in describing hydrogen bonding in the dimer. The findings of this research work should be useful to experimentalists in their quests for functionalised 4-mpba derivatives.
Bio-inspired sensing and control for disturbance rejection and stabilization
NASA Astrophysics Data System (ADS)
Gremillion, Gregory; Humbert, James S.
2015-05-01
The successful operation of small unmanned aircraft systems (sUAS) in dynamic environments demands robust stability in the presence of exogenous disturbances. Flying insects are sensor-rich platforms, with highly redundant arrays of sensors distributed across the insect body that are integrated to extract rich information with diminished noise. This work presents a novel sensing framework in which measurements from an array of accelerometers distributed across a simulated flight vehicle are linearly combined to directly estimate the applied forces and torques with improvements in signal-to-noise ratio (SNR). In simulation, the estimation performance is quantified as a function of sensor noise level, position estimate error, and sensor quantity.
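Linearly combining a distributed accelerometer array into body-level estimates can be illustrated with a rigid-body least-squares fit. This sketch is generic, not the authors' framework: centripetal terms are neglected, the sensor positions are hypothetical, and it recovers linear and angular accelerations (proportional to force and torque for a known inertia):

```python
import numpy as np

def skew(r):
    """Cross-product matrix: skew(r) @ a == np.cross(r, a)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def estimate_rigid_motion(positions, accels):
    """Least-squares fit of body acceleration a_cm and angular acceleration alpha
    from accelerometer readings a_i = a_cm + alpha x r_i (centripetal terms neglected)."""
    A = np.vstack([np.hstack([np.eye(3), -skew(np.asarray(r, dtype=float))])
                   for r in positions])
    b = np.concatenate([np.asarray(a, dtype=float) for a in accels])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```

Because the fit is linear, redundant sensors average down independent noise, which is the SNR benefit the abstract attributes to the insect-inspired distributed array.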
Distributions and motions of nearby stars defined by objective prism surveys and Hipparcos data
NASA Technical Reports Server (NTRS)
Hemenway, P. D.; Lee, J. T.; Upgren, A. R.
1997-01-01
Material and objective prism spectral classification work is used to determine the space density distribution of nearby common stars to the limits of objective prism spectral surveys. The aim is to extend the knowledge of the local densities of specific spectral types from a radius of 25 pc from the sun, as limited in the Gliese catalog of nearby stars, to 50 pc or more. Future plans for the application of these results to studies of the kinematic and dynamical properties of stars in the solar neighborhood as a function of their physical properties and ages are described.
Solar-terrestrial data access distribution and archiving
NASA Technical Reports Server (NTRS)
1984-01-01
It is recommended that a central data catalog and data access network (CDC/DAN) for solar-terrestrial research be established, initially as a NASA pilot program. The system is envisioned to be flexible and to evolve as funds permit, starting from a catalog to an access network for high-resolution data. The report describes the various functional requirements for the CDC/DAN, but does not specify the hardware and software architectures as these are constantly evolving. The importance of a steering committee, working with the CDC/DAN organization, to provide scientific guidelines for the data catalog and for data storage, access, and distribution is also stressed.
Analysis of shifts in the spatial distribution of vegetation due to climate change
NASA Astrophysics Data System (ADS)
del Jesus, Manuel; Díez-Sierra, Javier; Rinaldo, Andrea; Rodríguez-Iturbe, Ignacio
2017-04-01
Climate change will modify the statistical regime of most climatological variables, inducing changes in average values and in the natural variability of environmental variables. These environmental variables may be used to explain the spatial distribution of functional types of vegetation in arid and semiarid watersheds through the use of plant optimization theories. Therefore, plant optimization theories may be used to approximate the response of the spatial distribution of vegetation to a changing climate. Predicting changes in these spatial distributions is important to understand how climate change may affect vegetated ecosystems, but it is also important for hydrological engineering applications where climate change effects on water availability are assessed. In this work, Maximum Entropy Production (MEP) is used as the plant optimization theory that describes the spatial distribution of functional types of vegetation. Current climatological conditions are obtained from direct observations from meteorological stations. Climate change effects are evaluated for different temporal horizons and different climate change scenarios using numerical model outputs from the CMIP5. Rainfall estimates are downscaled by means of a stochastic point process used to model rainfall. The study is carried out for the Rio Salado watershed, located within the Sevilleta LTER site, in New Mexico (USA). Results show the expected changes in the spatial distribution of vegetation and allow us to evaluate the expected variability of the changes. The updated spatial distributions allow us to evaluate the vegetated ecosystem health and its updated resilience. These results can then be used to inform the hydrological modeling part of climate change assessments analyzing water availability in arid and semiarid watersheds.
Morini, F; Knippenberg, S; Deleuze, M S; Hajgató, B
2010-04-01
The main purpose of the present work is to simulate from many-body quantum mechanical calculations the results of experimental studies of the valence electronic structure of n-hexane employing photoelectron spectroscopy (PES) and electron momentum spectroscopy (EMS). This study is based on calculations of the valence ionization spectra and spherically averaged (e, 2e) electron momentum distributions for each known conformer by means of one-particle Green's function [1p-GF] theory along with the third-order algebraic diagrammatic construction [ADC(3)] scheme and using Kohn-Sham orbitals derived from DFT calculations employing the Becke three-parameter Lee-Yang-Parr (B3LYP) functional as approximations to Dyson orbitals. A first thermostatistical analysis of these spectra and momentum distributions employs recent estimations at the W1h level of conformational energy differences, by Gruzman et al. [J. Phys. Chem. A 2009, 113, 11974], and of correspondingly obtained conformer weights using MP2 geometrical, vibrational, and rotational data in thermostatistical calculations of partition functions beyond the level of the rigid rotor-harmonic oscillator approximation. Comparison is made with the results of a focal point analysis of these energy differences using this time B3LYP geometries and the corresponding vibrational and rotational partition functions in the thermostatistical analysis. Large differences are observed between these two thermochemical models, especially because of strong variations in the contributions of hindered rotations to relative entropies. In contrast, the individual ionization spectra or momentum profiles are almost insensitive to the employed geometry. This study confirms the great sensitivity of valence ionization bands and (e, 2e) momentum distributions to the molecular conformation and sheds further light on spectral fingerprints of through-space methylenic hyperconjugation, in both PES and EMS experiments.
Calculation of momentum distribution function of a non-thermal fermionic dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Anirban; Gupta, Aritra, E-mail: anirbanbiswas@hri.res.in, E-mail: aritra@hri.res.in
The most widely studied scenario in dark matter phenomenology is the thermal WIMP scenario. In spite of numerous efforts to detect WIMPs, we still have no direct evidence for them. A possible explanation for this non-observation is that dark matter interacts very feebly and therefore fails to thermalise with the rest of the cosmic soup. In other words, the dark matter might be of non-thermal origin, with the relic density obtained through the so-called freeze-in mechanism. Furthermore, if this non-thermal dark matter is itself produced substantially from the decay of another non-thermal mother particle, then their distribution functions may differ in both size and shape from the usual equilibrium distribution function. In this work, we have studied such a non-thermal (fermionic) dark matter scenario in the light of a new type of U(1)B−L model. The U(1)B−L model is interesting since, besides being anomaly free, it can give rise to neutrino mass by the Type II see-saw mechanism. Moreover, as we will show, it can accommodate a non-thermal fermionic dark matter as well. Starting from the collision terms, we have calculated the momentum distribution function for the dark matter by solving a coupled system of Boltzmann equations. We then used it to calculate the final relic abundance, as well as other relevant physical quantities. We have also compared our result with that obtained from solving the usual Boltzmann (or rate) equations directly in terms of comoving number density, Y. Our findings suggest that the latter approximation is valid only in cases where the system under study is close to equilibrium, and hence should be used with caution.
Calculation of momentum distribution function of a non-thermal fermionic dark matter
NASA Astrophysics Data System (ADS)
Biswas, Anirban; Gupta, Aritra
2017-03-01
The most widely studied scenario in dark matter phenomenology is the thermal WIMP scenario. In spite of numerous efforts to detect WIMPs, we still have no direct evidence for them. A possible explanation for this non-observation is that dark matter interacts very feebly and therefore fails to thermalise with the rest of the cosmic soup. In other words, the dark matter might be of non-thermal origin, with the relic density obtained through the so-called freeze-in mechanism. Furthermore, if this non-thermal dark matter is itself produced substantially from the decay of another non-thermal mother particle, then their distribution functions may differ in both size and shape from the usual equilibrium distribution function. In this work, we have studied such a non-thermal (fermionic) dark matter scenario in the light of a new type of U(1)B-L model. The U(1)B-L model is interesting since, besides being anomaly free, it can give rise to neutrino mass by the Type II see-saw mechanism. Moreover, as we will show, it can accommodate a non-thermal fermionic dark matter as well. Starting from the collision terms, we have calculated the momentum distribution function for the dark matter by solving a coupled system of Boltzmann equations. We then used it to calculate the final relic abundance, as well as other relevant physical quantities. We have also compared our result with that obtained from solving the usual Boltzmann (or rate) equations directly in terms of comoving number density, Y. Our findings suggest that the latter approximation is valid only in cases where the system under study is close to equilibrium, and hence should be used with caution.
Milacic, Snezana; Simic, Jadranko
2009-05-01
This study investigated health risks in workers residing and working in terrain contaminated by low doses of ionizing radiation originating from ammunition containing depleted uranium (DU). The studied population comprised two test groups (T-I, T-II) occasionally exposed to DU, and two referent groups (R-I, R-II) never exposed to DU. All subjects were evaluated with a complete clinical examination and blood count, screening for immature forms and blasts, leukocyte alkaline phosphatase activity, and cytogenetic tests. The probability of onset of the characteristic complete biomarkers (chromosomal aberrations) was analyzed using a logarithmic link function in Poisson regression. The estimated probability density of the Poisson distribution of chromosomal aberrations in test group T-II differed drastically from the corresponding distribution of referent group R-I and, to a somewhat lesser extent, from group R-II; the exact Wilcoxon test confirmed a significant difference between referent group R-II and test group T-II, p < 0.05. Damage to chromosomes and cells was highest in test group T-II, workers additionally occupationally exposed to DU. The workers of group T-I, who had been exposed to DU while working on contaminated terrain, carried a certain risk of cell and chromosome damage, but that risk was not greater than the risk to referent group R-II, workers occupationally exposed to ionizing radiation.
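The Poisson-rate comparison described above can be sketched in a few lines. The aberration counts below are purely hypothetical stand-ins for the study's per-worker data; the maximum-likelihood Poisson rate is simply the group's sample mean.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k aberrations when the expected count is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical aberration counts per worker (illustrative only, not the study's data)
test_group_T2 = [0, 1, 2, 1, 3, 0, 2, 4, 1, 2]
referent_R1 = [0, 0, 1, 0, 1, 0, 0, 2, 0, 1]

# The maximum-likelihood estimate of a Poisson rate is the sample mean
lam_T2 = sum(test_group_T2) / len(test_group_T2)   # 1.6
lam_R1 = sum(referent_R1) / len(referent_R1)       # 0.5

# Probability that a worker shows at least one aberration under each fitted model
p_any_T2 = 1.0 - poisson_pmf(0, lam_T2)
p_any_R1 = 1.0 - poisson_pmf(0, lam_R1)
```

With these made-up counts the fitted rate (and hence the chance of any aberration) is markedly higher in the exposed group, mirroring the qualitative finding reported for T-II.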
Characterizing resonant component in speech: A different view of tracking fundamental frequency
NASA Astrophysics Data System (ADS)
Dong, Bin
2017-05-01
Inspired by the nonlinearity, nonstationarity, and modulations in speech, Hilbert-Huang Transform and cyclostationarity analysis are employed in sequence to investigate speech resonance in vowels. Cyclostationarity analysis is not applied directly to the target vowel, but to its intrinsic mode functions one by one. Thanks to the equivalence between the fundamental frequency in speech and the cyclic frequency in cyclostationarity analysis, the modulation intensity distributions of the intrinsic mode functions provide rich information for estimating the fundamental frequency. To highlight the relationship between frequency and time, the pseudo-Hilbert spectrum is proposed here to replace the Hilbert spectrum. Contrasting the pseudo-Hilbert spectra with the modulation intensity distributions of the intrinsic mode functions shows that there is usually one intrinsic mode function which acts as the fundamental component of the vowel. Furthermore, the fundamental frequency of the vowel can be determined by tracing the pseudo-Hilbert spectrum of its fundamental component along the time axis. The latter method is more robust for estimating the fundamental frequency in the presence of nonlinear components. Two vowels, [a] and [i], taken from the speech database FAU Aibo Emotion Corpus, are used to validate the above findings.
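As a baseline for the fundamental-frequency tracking discussed above, a minimal autocorrelation pitch estimator can be sketched on a synthetic vowel-like signal (this is a generic f0 method, not the paper's pseudo-Hilbert approach; the 120 Hz tone and the 60-400 Hz search band are assumptions):

```python
import numpy as np

fs = 8000                       # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)   # one 100 ms frame
f0 = 120.0                      # true fundamental of the toy "vowel"

# Toy vowel: fundamental plus one harmonic (stand-in for a real recording)
x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)

# The autocorrelation of a voiced frame peaks at the pitch period
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
lo, hi = int(fs / 400), int(fs / 60)   # search a typical 60-400 Hz speech range
period = lo + np.argmax(ac[lo:hi])     # lag (in samples) of the strongest peak
f0_est = fs / period
```

The estimate lands within a sample-quantisation error of the true 120 Hz; intrinsic-mode-function decomposition, as in the paper, aims to make such tracking robust when nonlinear components distort the autocorrelation peak.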
New optical probes for the continuous monitoring of renal function
NASA Astrophysics Data System (ADS)
Dorshow, Richard B.; Asmelash, Bethel; Chinen, Lori K.; Debreczeny, Martin P.; Fitch, Richard M.; Freskos, John N.; Galen, Karen P.; Gaston, Kimberly R.; Marzan, Timothy A.; Poreddy, Amruta R.; Rajagopalan, Raghavan; Shieh, Jeng-Jong; Neumann, William L.
2008-02-01
The ability to continuously monitor renal function via the glomerular filtration rate (GFR) in the clinic is currently an unmet medical need. To address this need we have developed a new series of hydrophilic fluorescent probes designed to clear via glomerular filtration for use as real-time optical monitoring agents at the bedside. The ideal molecule should be freely filtered by the glomerular filtration barrier and be neither reabsorbed nor secreted by the renal tubule. In addition, we have hypothesized that a low volume of distribution into the interstitial space could also be advantageous. Our primary molecular design strategy employs a very small pyrazine-based fluorophore as the core unit. Modular chemistry for functionalizing these systems for optimal pharmacokinetic (PK) and photophysical properties has been developed. Structure-activity relationship (SAR) and PK studies involving hydrophilic pyrazine analogues incorporating polyethylene glycol (PEG), carbohydrate, amino acid and peptide functionality have been a focus of this work. Secondary design strategies for minimizing distribution into the interstitium while maintaining glomerular filtration include enhancing molecular volume through PEG substitution. In vivo optical monitoring experiments with advanced candidates have been correlated with plasma PK for measurement of clearance and hence GFR.
On residual stresses and homeostasis: an elastic theory of functional adaptation in living matter.
Ciarletta, P; Destrade, M; Gower, A L
2016-04-26
Living matter can functionally adapt to external physical factors by developing internal tensions, easily revealed by cutting experiments. Nonetheless, residual stresses intrinsically have a complex spatial distribution, and destructive techniques cannot be used to identify a natural stress-free configuration. This work proposes a novel elastic theory of pre-stressed materials. Imposing physical compatibility and symmetry arguments, we define a new class of free energies explicitly depending on the internal stresses. This theory is finally applied to the study of arterial remodelling, proving its potential for the non-destructive determination of the residual tensions within biological materials.
Characterization of microscopic deformation through two-point spatial correlation functions
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Wu, Bin; Wang, Yangyang; Chen, Wei-Ren
2018-01-01
The molecular rearrangements of most fluids under flow and deformation do not directly follow the macroscopic strain field. In this work, we describe a phenomenological method for characterizing such nonaffine deformation via the anisotropic pair distribution function (PDF). We demonstrate how the microscopic strain can be calculated in both simple shear and uniaxial extension, by perturbation expansion of anisotropic PDF in terms of real spherical harmonics. Our results, given in the real as well as the reciprocal space, can be applied in spectrum analysis of small-angle scattering experiments and nonequilibrium molecular dynamics simulations of soft matter under flow.
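The perturbation expansion described above projects the anisotropic pair distribution onto an orthogonal angular basis. A one-dimensional sketch of that projection, using the Legendre polynomial P2 in place of the full real spherical harmonics (the anisotropy amplitude eps is a made-up value):

```python
import numpy as np

eps = 0.1                                    # assumed anisotropy amplitude
theta = np.linspace(0.0, np.pi, 2001)
P2 = 0.5 * (3.0 * np.cos(theta) ** 2 - 1.0)  # Legendre polynomial P2(cos theta)

# Toy anisotropic pair distribution: isotropic part plus a small P2 deformation
g = 1.0 + eps * P2

# Project back onto P2 with the orthogonality weight sin(theta):
# c2 = (2*2 + 1)/2 * integral of g(theta) P2(cos theta) sin(theta) dtheta
y = g * P2 * np.sin(theta)
c2 = 2.5 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta))   # trapezoid rule
```

The projection recovers the imposed coefficient (c2 is approximately eps), because the isotropic part integrates to zero against P2; the paper's expansion applies the same orthogonality in full 3D for shear and extensional strain fields.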
Effective Use of SMSS: A Simple Strategy and Sample Implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensinger, David
1998-09-30
The purpose of this document is to present a strategy for effectively using SMSS (Scaleable Mass Storage System) and to distribute a simple implementation of this strategy. This work was done as a stopgap measure to allow an analyst to use the storage power of SMSS in the absence of a more user-friendly interface. The features and functionality discussed in this document represent a minimum set of capabilities for a useful archiving interface. The implementation presented is the most basic possible and would benefit significantly from an organized support and documentation effort.
Calibration of a universal indicated turbulence system
NASA Technical Reports Server (NTRS)
Chapin, W. G.
1977-01-01
Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.
NASA Astrophysics Data System (ADS)
Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel
2015-03-01
The fast heart rate (~7 Hz) of the mouse makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease, and these cannot be done in true real-time 3D using established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution in the heart was subsequently imaged. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from background and to assess cardiac function. The 3D nature of the experimental data allowed determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which had not previously been achieved with any other cardiac imaging modality. Furthermore, ICG distribution dynamics allowed determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential of optoacoustic cardiac imaging and is expected to make a major contribution to future preclinical studies of animal models of cardiovascular health and disease.
NASA Astrophysics Data System (ADS)
Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang
2017-05-01
Distributed generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. On the other hand, due to the randomness of wind and solar irradiation, the output of DG is also random, which leads to uncertainty in the harmonics generated by the DG. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in a distribution network after integration of a distributed photovoltaic (DPV) system under different weather conditions: sunny, cloudy, rainy, and snowy days. The probabilistic distribution function of the DPV output power in each typical weather condition was obtained via maximum likelihood parameter estimation. The Monte-Carlo simulation method was adopted to calculate the probabilistic distribution of harmonic voltage content at different frequency orders, as well as the total harmonic distortion (THD), in the typical weather conditions. The case study was based on the IEEE 33-bus system, and the resulting probabilistic distributions of harmonic voltage content and THD in the typical weather conditions were compared.
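The Monte-Carlo step described above can be sketched as follows. All numbers here are assumptions for illustration: a Beta-shaped output distribution standing in for the fitted sunny-day PV model, and fixed 5th/7th-harmonic injection ratios at full output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Beta-distributed PV output fraction for a sunny day
# (shape parameters are illustrative, not fitted values from the paper)
p = rng.beta(5, 2, size=10_000)

# Assume harmonic injections scale with output: 2% (5th) and 1.5% (7th) at full power
v5 = 0.020 * p
v7 = 0.015 * p

# Total harmonic distortion relative to a unit fundamental voltage
thd = np.sqrt(v5 ** 2 + v7 ** 2)

# Monte-Carlo summary of the THD distribution for this weather condition
thd_95 = np.percentile(thd, 95)
```

Repeating the sampling with a different fitted output distribution per weather type yields the per-condition THD distributions that the paper compares; a real study would inject the sampled currents into a harmonic power-flow model of the IEEE 33-bus system rather than scale fixed ratios.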
Interevent time distributions of human multi-level activity in a virtual world
NASA Astrophysics Data System (ADS)
Mryglod, O.; Fuchs, B.; Szell, M.; Holovatch, Yu.; Thurner, S.
2015-02-01
Studying human behavior in virtual environments provides extraordinary opportunities for a quantitative analysis of social phenomena with levels of accuracy that approach those of the natural sciences. In this paper we use records of player activities in the massive multiplayer online game Pardus over 1238 consecutive days, and analyze dynamical features of sequences of actions of players. We build on previous work where temporal structures of human actions of the same type were quantified, and provide an empirical understanding of human actions of different types. This study of multi-level human activity can be seen as a dynamic counterpart of static multiplex network analysis. We show that the interevent time distributions of actions in the Pardus universe follow highly non-trivial distribution functions, from which we extract action-type specific characteristic 'decay constants'. We discuss characteristic features of interevent time distributions, including periodic patterns on different time scales, bursty dynamics, and various functional forms on different time scales. We comment on gender differences of players in emotional actions, and find that while males and females act similarly when performing some positive actions, females are slightly faster for negative actions. We also observe effects of player age: more experienced players are generally faster in making decisions about engaging in and terminating enmity and friendship, respectively.
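Fitting a heavy-tailed interevent time distribution of the kind described above is commonly done with the Hill (maximum-likelihood) estimator of the tail index. A minimal sketch on synthetic Pareto-distributed waiting times (the exponent 2.5 and cutoff are assumed, not values from the Pardus data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic interevent times with a power-law tail:
# pdf ~ tau^-(a+1) above tau_min, with tail index a = 1.5 (pdf exponent 2.5)
a_true = 1.5
tau_min = 1.0
tau = tau_min * rng.random(20_000) ** (-1.0 / a_true)   # inverse-transform sampling

# Hill (maximum-likelihood) estimate of the tail index above tau_min
a_hat = len(tau) / np.log(tau / tau_min).sum()
alpha_hat = a_hat + 1.0    # estimated pdf exponent
```

On real activity logs one would first form tau as differences of consecutive event timestamps per action type, and choose tau_min by inspecting the empirical distribution, since short times are typically dominated by bursty, non-power-law behavior.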
Network topology and resilience analysis of South Korean power grid
NASA Astrophysics Data System (ADS)
Kim, Dong Hwan; Eisenberg, Daniel A.; Chun, Yeong Han; Park, Jeryang
2017-01-01
In this work, we present topological and resilience analyses of the South Korean power grid (KPG) across a broad range of voltage levels. While topological analysis of the KPG restricted to high-voltage infrastructure shows an exponential degree distribution, providing further empirical evidence on power grid topology, the inclusion of low-voltage components generates a distribution with a larger variance and a smaller average degree. This result suggests that the topology of a power grid may converge to a highly skewed degree distribution as more low-voltage data are considered. Moreover, when compared to ER random and BA scale-free networks, the KPG has a lower efficiency and a higher clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Error and attack tolerance analysis, evaluated in terms of efficiency, indicates that the KPG is more vulnerable to random or degree-based attacks than to betweenness-based intentional attack. Cascading failure analysis with a recovery mechanism demonstrates that the resilience of the network depends on both tolerance capacity and recovery initiation time. Also, when the two factors are fixed, the KPG is the most vulnerable of the three networks. Based on our analysis, we propose that the topology of power grids should be designed so that loads are homogeneously distributed, or so that functional hubs and their neighbors have high tolerance capacity, to enhance resilience.
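The two topological measures central to the comparison above, the degree distribution and the clustering coefficient, can be computed directly from an edge list. The toy graph below is hypothetical, not the KPG data:

```python
from collections import defaultdict

# A tiny illustrative grid topology (edges are made up, not the KPG)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (2, 4)]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def clustering(n):
    """Fraction of node n's neighbour pairs that are themselves connected."""
    nbrs = adj[n]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2 * links / (k * (k - 1))

degrees = {n: len(adj[n]) for n in adj}                        # degree distribution
avg_clustering = sum(clustering(n) for n in adj) / len(adj)    # network average
```

Comparing the empirical degree histogram against exponential and power-law fits is what distinguishes the high-voltage-only KPG (exponential) from the broader multi-voltage network (more skewed) in the paper.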
YORP torque as the function of shape harmonics
NASA Astrophysics Data System (ADS)
Breiter, Sławomir; Michalska, Hanna
2008-08-01
The second-order analytical approximation of the mean Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) torque components is given as an explicit function of the shape spherical harmonics coefficients for a sufficiently regular minor body. The results are based upon a new expression for the insolation function, significantly simpler than in previous works. Linearized plane-parallel model of the temperature distribution derived from the insolation function allows us to take into account a non-zero conductivity. Final expressions for the three average components of the YORP torque related with rotation period, obliquity and precession are given in a form of the Legendre series of the cosine of obliquity. The series have good numerical properties and can be easily truncated according to the degree of the Legendre polynomials or associated functions, with first two terms playing the principal role.
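Evaluating a truncated Legendre series in the cosine of obliquity, as in the final expressions above, is a one-liner with NumPy's Legendre module. The coefficients here are invented placeholders, not values from the paper:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical truncated Legendre series for a mean YORP torque component:
# T(eps) = sum_n c_n P_n(cos eps); the coefficients below are made up.
coeffs = [0.0, 0.4, -0.1]

obliquity = np.deg2rad(30.0)
torque = legendre.legval(np.cos(obliquity), coeffs)
```

Truncating the series is simply a matter of shortening the coefficient list, which mirrors the paper's remark that the expansion can be cut off by degree while the first two terms carry the principal contribution.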
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning
McGregor, Heather R.; Mohatarem, Ayman
2017-01-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634
A Device for Long-Term Perfusion, Imaging, and Electrical Interfacing of Brain Tissue In vitro
Killian, Nathaniel J.; Vernekar, Varadraj N.; Potter, Steve M.; Vukasinovic, Jelena
2016-01-01
Distributed microelectrode array (MEA) recordings from consistent, viable, ≥500 μm thick tissue preparations over time periods from days to weeks may aid in studying a wide range of problems in neurobiology that require in vivo-like organotypic morphology. Existing tools for electrically interfacing with organotypic slices do not address necrosis that inevitably occurs within thick slices with limited diffusion of nutrients and gas, and limited removal of waste. We developed an integrated device that enables long-term maintenance of thick, functionally active, brain tissue models using interstitial perfusion and distributed recordings from thick sections of explanted tissue on a perforated multi-electrode array. This novel device allows for automated culturing, in situ imaging, and extracellular multi-electrode interfacing with brain slices, 3-D cell cultures, and potentially other tissue culture models. The device is economical, easy to assemble, and integrable with standard electrophysiology tools. We found that convective perfusion through the culture thickness provided a functional benefit to the preparations as firing rates were generally higher in perfused cultures compared to their respective unperfused controls. This work is a step toward the development of integrated tools for days-long experiments with more consistent, healthier, thicker, and functionally more active tissue cultures with built-in distributed electrophysiological recording and stimulation functionality. The results may be useful for the study of normal processes, pathological conditions, and drug screening strategies currently hindered by the limitations of acute (a few hours long) brain slice preparations. PMID:27065793
Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping
2013-01-01
Because classical music has greatly affected our life and culture throughout its long history, it has attracted extensive attention from researchers seeking to understand the laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions by five representative classical composers spanning 164 years, from Bach, to Mozart, to Beethoven, to Mendelssohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach's time to that of Mendelssohn and Chopin. In particular, for the compositions of a composer, the positive and negative tails of a CDF of pitch fluctuations are distributed not only as power laws (with the scale-free property), but also symmetrically (namely, the probability of a treble following a bass and that of a bass following a treble are basically the same for each composer). The power-law exponent decreases as time elapses. Further, we also calculate the autocorrelation function of the pitch fluctuations. The autocorrelation function shows a power-law distribution for each composer. Notably, the power-law exponents vary with the composers, indicating their different levels of long-range correlation of notes. This work not only suggests a way to understand and develop music from the viewpoint of statistical physics, but also enriches the realm of traditional statistical physics by analyzing music.
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
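The dissociation above rests on a skewed distribution whose mean and mode differ: squared-error (error-based) loss is minimised by aiming at the mean, while hit/miss (reinforcement) reward is maximised by aiming at the mode. A numerical sketch, using a lognormal as an assumed stand-in for the paper's lateral-shift distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Skewed stand-in for the lateral-shift distribution (lognormal, assumed)
shifts = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)

# Error-based loss (squared error) is minimised by aiming at the mean ...
aim_error_based = shifts.mean()

# ... while hit/miss reinforcement is maximised by aiming at the mode,
# here estimated as the centre of the fullest histogram bin
hist, edges = np.histogram(shifts, bins=200)
i = np.argmax(hist)
aim_reinforcement = 0.5 * (edges[i] + edges[i + 1])
```

For this lognormal the mean (about exp(0.18)) sits well above the mode (about exp(-0.36)), so the two feedback types predict clearly different aim points, which is exactly the separation the reaching task exploits.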
Hierarchical resilience with lightweight threads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Kyle Bruce
2011-10-01
This paper proposes a methodology for providing robustness and resilience in a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One trend in high-performance computing codes is toward self-contained functions that mimic functional programming. Software designers are moving toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g., the PGI OpenMP extensions).
Flood impacts on a water distribution network
NASA Astrophysics Data System (ADS)
Arrighi, Chiara; Tarani, Fabio; Vicario, Enrico; Castelli, Fabio
2017-12-01
Floods cause damage to people, buildings and infrastructures. Water distribution systems are particularly exposed, since water treatment plants are often located next to the rivers. Failure of the system leads to both direct losses, for instance damage to equipment and pipework contamination, and indirect impact, since it may lead to service disruption and thus affect populations far from the event through the functional dependencies of the network. In this work, we present an analysis of direct and indirect damages on a drinking water supply system, considering the hazard of riverine flooding as well as the exposure and vulnerability of active system components. The method is based on interweaving, through a semi-automated GIS procedure, a flood model and an EPANET-based pipe network model with a pressure-driven demand approach, which is needed when modelling water distribution networks in highly off-design conditions. Impact measures are defined and estimated so as to quantify service outage and potential pipe contamination. The method is applied to the water supply system of the city of Florence, Italy, serving approximately 380 000 inhabitants. The evaluation of flood impact on the water distribution network is carried out for different events with assigned recurrence intervals. Vulnerable elements exposed to the flood are identified and analysed in order to estimate their residual functionality and to simulate failure scenarios. Results show that in the worst failure scenario (no residual functionality of the lifting station and a 500-year flood), 420 km of pipework would require disinfection with an estimated cost of EUR 21 million, which is about 0.5 % of the direct flood losses evaluated for buildings and contents. Moreover, if flood impacts on the water distribution network are considered, the population affected by the flood is up to 3 times the population directly flooded.
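The pressure-driven demand approach mentioned above is commonly implemented with a Wagner-style pressure-demand relation: delivered flow is zero below a minimum pressure, equals the base demand above the required pressure, and follows a square-root law in between. A minimal sketch (the default pressures are illustrative, not the Florence model's parameters):

```python
def pressure_driven_demand(p, p_min=0.0, p_req=20.0, base=1.0):
    """Wagner-style pressure-demand relation (pressures in metres of head).
    No delivery below p_min, full base demand above p_req; the default
    values here are illustrative assumptions."""
    if p <= p_min:
        return 0.0
    if p >= p_req:
        return base
    return base * ((p - p_min) / (p_req - p_min)) ** 0.5

# Under severe flood-induced head loss, delivered flow degrades smoothly
delivered = [pressure_driven_demand(p) for p in (0.0, 5.0, 20.0, 35.0)]
```

This is what makes the network model meaningful in highly off-design conditions: a demand-driven solver would force full withdrawals even at nodes with almost no pressure, producing unphysical negative pressures in failure scenarios.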
Visualizing Distributions from Multi-Return Lidar Data to Understand Forest Structure
NASA Technical Reports Server (NTRS)
Kao, David L.; Kramer, Marc; Luo, Alison; Dungan, Jennifer; Pang, Alex
2004-01-01
Spatially distributed probability density functions (pdfs) are becoming relevant to Earth scientists and ecologists because of stochastic models and new sensors that provide numerous realizations or data points per unit area. One source of these data is multi-return airborne lidar, a type of laser that records multiple returns for each pulse of light sent towards the ground. Data from multi-return lidar are a vital tool in helping us understand the structure of forest canopies over large extents. This paper presents several new visualization tools that allow scientists to rapidly explore, interpret and discover characteristic distributions within the entire spatial field. The major contribution of this work is a paradigm shift which allows ecologists to think of and analyze their data in terms of the distribution. This provides a way to reveal information on the modality and shape of the distribution previously not possible. The tools allow the scientists to depart from traditional parametric statistical analyses and to associate multimodal distribution characteristics with forest structures. Examples are given using data from High Island, southeast Alaska.
Raney Distributions and Random Matrix Theory
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permits a simple parameterized form for the density. We extend this result to the Raney distribution, which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices and standard Gaussian matrices is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. We show that, on its support, it too permits a simple functional form upon the introduction of an appropriate parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
NASA Astrophysics Data System (ADS)
Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-01
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Analytical workflow profiling gene expression in murine macrophages
Nixon, Scott E.; González-Peña, Dianelys; Lawson, Marcus A.; McCusker, Robert H.; Hernandez, Alvaro G.; O’Connor, Jason C.; Dantzer, Robert; Kelley, Keith W.
2015-01-01
Comprehensive and simultaneous analysis of all genes in a biological sample is a capability of RNA-Seq technology. Analysis of the entire transcriptome benefits from summarization of genes at the functional level. As a cellular response of interest not previously explored with RNA-Seq, peritoneal macrophages from mice under two conditions (control and immunologically challenged) were analyzed for gene expression differences. Quantification of individual transcripts modeled RNA-Seq read distribution and uncertainty (using a Beta Negative Binomial distribution), then tested for differential transcript expression (False Discovery Rate-adjusted p-value < 0.05). Enrichment of functional categories utilized the list of differentially expressed genes. A total of 2079 differentially expressed transcripts representing 1884 genes were detected. Enrichment of 92 categories from Gene Ontology Biological Processes and Molecular Functions, and KEGG pathways were grouped into 6 clusters. Clusters included defense and inflammatory response (Enrichment Score = 11.24) and ribosomal activity (Enrichment Score = 17.89). Our work provides a context to the fine detail of individual gene expression differences in murine peritoneal macrophages during immunological challenge with high throughput RNA-Seq. PMID:25708305
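The differential-expression threshold above is a False Discovery Rate-adjusted p-value. The paper's pipeline models reads with a Beta Negative Binomial distribution; the FDR adjustment itself is commonly the Benjamini-Hochberg procedure, sketched here as a generic illustration rather than the authors' exact implementation:

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg (FDR) adjusted p-values, in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, taking the running minimum
    # of p * m / rank so the adjusted values stay monotone.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

if __name__ == "__main__":
    pv = [0.001, 0.01, 0.03, 0.04, 0.2]  # illustrative raw p-values
    print([round(q, 4) for q in benjamini_hochberg(pv)])
```

Transcripts whose adjusted value falls below 0.05 would be called differentially expressed, as in the abstract.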
Characterizing the Lyα forest flux probability distribution function using Legendre polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cieplak, Agnieszka M.; Slosar, Anze
The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
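The key observation, that the n-th Legendre coefficient of the PDF is a linear combination of the first n moments, can be illustrated with a small estimator. This is a generic sketch, not the authors' pipeline; it assumes the flux variable has already been mapped onto [-1, 1].

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs_from_samples(samples, nmax):
    """Estimate c_n in p(x) ~ sum_n c_n P_n(x) for data on [-1, 1].

    By orthogonality, c_n = (2n + 1)/2 * E[P_n(x)], and since P_n is a
    degree-n polynomial, E[P_n(x)] is a linear combination of the first
    n moments of the data, which is the property exploited in the text.
    """
    coeffs = []
    for n in range(nmax + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0  # select the single basis polynomial P_n
        mean_pn = legendre.legval(samples, basis).mean()
        coeffs.append((2 * n + 1) / 2 * mean_pn)
    return np.array(coeffs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200_000)  # uniform density, p(x) = 1/2
    print(np.round(legendre_coeffs_from_samples(x, 3), 2))
```

For the uniform test density only c_0 = 1/2 survives; all higher coefficients vanish up to sampling noise.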
Sekulić, Vladislav; Skinner, Frances K
2017-01-01
Although biophysical details of inhibitory neurons are becoming known, it is challenging to map these details onto function. Oriens-lacunosum/moleculare (O-LM) cells are inhibitory cells in the hippocampus that gate information flow, firing while phase-locked to theta rhythms. We build on our existing computational model database of O-LM cells to link model with function. We place our models in high-conductance states and modulate inhibitory inputs at a wide range of frequencies. We find preferred spiking recruitment of models at high (4–9 Hz) or low (2–5 Hz) theta depending on, respectively, the presence or absence of h-channels on their dendrites. This also depends on slow delayed-rectifier potassium channels, and preferred theta ranges shift when h-channels are potentiated by cyclic AMP. Our results suggest that O-LM cells can be differentially recruited by frequency-modulated inputs depending on specific channel types and distributions. This work exposes a strategy for understanding how biophysical characteristics contribute to function. DOI: http://dx.doi.org/10.7554/eLife.22962.001 PMID:28318488
Characterizations of particle size distribution of the droplets exhaled by sneeze
Han, Z. Y.; Weng, W. G.; Huang, Q. Y.
2013-01-01
This work focuses on the size distribution of sneeze droplets exhaled immediately at mouth. Twenty healthy subjects participated in the experiment and 44 sneezes were measured by using a laser particle size analyser. Two types of distributions are observed: unimodal and bimodal. For each sneeze, the droplets exhaled at different time in the sneeze duration have the same distribution characteristics with good time stability. The volume-based size distributions of sneeze droplets can be represented by a lognormal distribution function, and the relationship between the distribution parameters and the physiological characteristics of the subjects are studied by using linear regression analysis. The geometric mean of the droplet size of all the subjects is 360.1 µm for unimodal distribution and 74.4 µm for bimodal distribution with geometric standard deviations of 1.5 and 1.7, respectively. For the two peaks of the bimodal distribution, the geometric mean (the geometric standard deviation) is 386.2 µm (1.8) for peak 1 and 72.0 µm (1.5) for peak 2. The influences of the measurement method, the limitations of the instrument, the evaporation effects of the droplets, the differences of biological dynamic mechanism and characteristics between sneeze and other respiratory activities are also discussed. PMID:24026469
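The geometric mean and geometric standard deviation reported above are the natural parameters of a lognormal fit: the exponentials of the mean and standard deviation of the log diameters. A minimal sketch (the sample values are invented for illustration, not measured droplet data):

```python
import math

def geometric_stats(diameters):
    """Geometric mean and geometric standard deviation of droplet sizes.

    For a lognormal sample, exp(mean(ln d)) and exp(std(ln d)) are the
    parameters reported for the size distributions in the text.
    """
    logs = [math.log(d) for d in diameters]
    mu = sum(logs) / len(logs)
    sigma = (sum((v - mu) ** 2 for v in logs) / len(logs)) ** 0.5
    return math.exp(mu), math.exp(sigma)

if __name__ == "__main__":
    sample = [50.0, 70.0, 74.0, 80.0, 110.0]  # illustrative sizes in micrometres
    gm, gsd = geometric_stats(sample)
    print(round(gm, 1), round(gsd, 2))
```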
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox, while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single-point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
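Evaluating such an empirical model, with each coefficient stored as a piecewise-linear function of heliocentric distance, might look like the following sketch. The knot positions and coefficient values are invented placeholders, not fitted ROSINA/DSMC results.

```python
import numpy as np

# Illustrative sketch of evaluating an empirical coma model whose
# coefficients are piecewise-linear in heliocentric distance r_h (AU).
# Knot positions and values below are placeholders, not fitted data.

RH_NODES = np.array([1.24, 2.0, 3.0, 3.5])  # heliocentric distance knots (AU)
COEF_NODES = {
    "log_production": np.array([28.0, 27.2, 26.5, 26.0]),
    "asymmetry": np.array([0.8, 0.6, 0.45, 0.4]),
}

def coma_coefficients(rh):
    """Piecewise-linear interpolation of each model coefficient at rh."""
    return {name: float(np.interp(rh, RH_NODES, vals))
            for name, vals in COEF_NODES.items()}

if __name__ == "__main__":
    print(coma_coefficients(2.5))
```

The full model would combine such coefficients with the angular (sun-fixed longitude/latitude) and radial dependence described in the abstract.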
Generation of a Combined Dataset of Simulated Radar and Electro-Optical Imagery
2005-10-05
bidirectional reflectance distribution function (BRDF) predictions and the geometry of a line scanner. Using programs such as MODTRAN and FASCODE, images can be... DIRSIG tries to accurately model scenes through various approaches that model real-world occurrences. MODTRAN is an atmospheric radiative transfer code used to predict path transmissions and radiances within the atmosphere (DIRSIG Manual, 2004). FASCODE is similar to MODTRAN; however, it works as a
Basic Studies of Distributed Discharge Limiters
2014-02-10
Sputtered Lanthanum Hexaboride Film Thickness on Field Emission from Metallic Knife Edge Cathodes," M.P. Kirley, B. Novakovic, N. Sule, M. J. Weber, I... IEEE ICOPS, San Diego, CA (2009). 99. Nishant Sule, Matt Kirley, Bozidar Novakovic, John Scharer, Irena Knezevic and John H. Booske... M. Kirley, B. Novakovic, J. Scharer, I. Knezevic, and J.H. Booske, "Field emission from low work function cathode coatings," Intl. Conf. Plasma
Altered Gastrointestinal Function in the Neuroligin-3 Mouse Model of Autism
2013-10-01
GABA neurotransmission in the brain. This work aims to examine the spatiotemporal distribution patterns of NL3 and related proteins and mRNA in gut ... implicated in ASD are upregulated during gut development ... presynaptic localization of the neuroligin-3 protein ... related proteins and mRNA in gut tissue from these mice. This project aims to determine biological mechanisms contributing to gastrointestinal dysfunction
GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows
NASA Technical Reports Server (NTRS)
2003-01-01
With the renewed interest in Cartesian gridding methodologies for the ease and speed of gridding complex geometries in addition to the simplicity of the control volumes used in the computations, it has become important to investigate ways of extending the existing Cartesian grid solver functionalities. This includes developing methods of modeling the viscous effects in order to utilize Cartesian grids solvers for accurate drag predictions and addressing the issues related to the distributed memory parallelization of Cartesian solvers. This research presents advances in two areas of interest in Cartesian grid solvers, viscous effects modeling and MPI parallelization. The development of viscous effects modeling using solely Cartesian grids has been hampered by the widely varying control volume sizes associated with the mesh refinement and the cut cells associated with the solid surface. This problem is being addressed by using physically based modeling techniques to update the state vectors of the cut cells and removing them from the finite volume integration scheme. This work is performed on a new Cartesian grid solver, NASCART-GT, with modifications to its cut cell functionality. The development of MPI parallelization addresses issues associated with utilizing Cartesian solvers on distributed memory parallel environments. This work is performed on an existing Cartesian grid solver, CART3D, with modifications to its parallelization methodology.
On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2013-04-01
The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power-law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent-time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. keywords: hypothesis testing, modified Weibull, hazard rate, finite size References [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] Eliazar, I., Klafter, J., 2006. Growth-collapse and decay-surge evolutions, and geometric Langevin equations, Physica A, 367, 106-128.
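The Kolmogorov-Smirnov comparison against a candidate Weibull law, as proposed above, can be sketched without any statistics library. The shape and scale values below are illustrative, not fitted to any seismic catalog.

```python
import math
import random

def weibull_cdf(t, k, lam):
    """CDF of the Weibull distribution with shape k and scale lam."""
    return 1.0 - math.exp(-((t / lam) ** k))

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between a sample and a reference CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Compare against the empirical CDF just before and after the jump.
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

if __name__ == "__main__":
    rng = random.Random(0)
    # Illustrative "interevent times" drawn from a Weibull law (k = 0.8)
    # by inverse-transform sampling.
    sample = [1.5 * (-math.log(1 - rng.random())) ** (1 / 0.8) for _ in range(2000)]
    d_good = ks_statistic(sample, lambda t: weibull_cdf(t, 0.8, 1.5))
    d_bad = ks_statistic(sample, lambda t: weibull_cdf(t, 2.5, 1.5))
    print(round(d_good, 3), round(d_bad, 3))
```

In a hypothesis-testing setting the statistic would be compared against the KS critical value for the sample size; here the correctly specified model gives a much smaller distance than the misspecified one.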
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
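The core conservation idea, interpolating the integrated (cumulative) data rather than the data itself, can be sketched compactly. The paper uses a parametrized Hermitian curve for this interpolation step; the sketch below substitutes a plain linear interpolant for brevity, which already guarantees exact integral conservation, though without the tunable overshoot control described above.

```python
import numpy as np

def conservative_rebin(edges_in, values_in, edges_out):
    """Re-bin histogrammed densities so the total integral is conserved.

    Builds the cumulative integral at the input bin edges, interpolates
    it onto the output edges, and differences it back to bin densities.
    A Hermite interpolant (as in the paper) would replace np.interp here.
    """
    widths = np.diff(edges_in)
    cumulative = np.concatenate([[0.0], np.cumsum(values_in * widths)])
    cum_out = np.interp(edges_out, edges_in, cumulative)
    return np.diff(cum_out) / np.diff(edges_out)

if __name__ == "__main__":
    e_in = np.array([0.0, 1.0, 2.0, 3.0])
    v_in = np.array([2.0, 4.0, 1.0])     # bin densities
    e_out = np.linspace(0.0, 3.0, 7)     # finer output grid
    v_out = conservative_rebin(e_in, v_in, e_out)
    print(v_out, float(np.sum(v_out * np.diff(e_out))))
```

Because the interpolant passes through the cumulative values at the original edges, the integral over any union of original bins is reproduced exactly, which is the "conservative" property the algorithm is built around.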
Efficient monitoring of CRAB jobs at CMS
NASA Astrophysics Data System (ADS)
Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. It also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
Optimal indolence: a normative microscopic approach to work and leisure
Niyogi, Ritwik K.; Breton, Yannick-Andre; Solomon, Rebecca B.; Conover, Kent; Shizgal, Peter; Dayan, Peter
2014-01-01
Dividing limited time between work and leisure when both have their attractions is a common everyday decision. We provide a normative control-theoretic treatment of this decision that bridges economic and psychological accounts. We show how our framework applies to free-operant behavioural experiments in which subjects are required to work (depressing a lever) for sufficient total time (called the price) to receive a reward. When the microscopic benefit-of-leisure increases nonlinearly with duration, the model generates behaviour that qualitatively matches various microfeatures of subjects’ choices, including the distribution of leisure bout durations as a function of the pay-off. We relate our model to traditional accounts by deriving macroscopic, molar, quantities from microscopic choices. PMID:24284898
Space Flight Cable Model Development
NASA Technical Reports Server (NTRS)
Spak, Kaitlin
2013-01-01
This work continues the modeling efforts presented in last year's VSGC conference paper, "Model Development for Cable-Harnessed Beams." The focus is narrowed to modeling of space-flight cables only, as a reliable damped cable model is not yet readily available and is necessary to continue modeling cable-harnessed space structures. New experimental data are presented, eliminating the low-frequency noise that plagued the first year's efforts. The distributed transfer function method is applied to a single section of space flight cable for Euler-Bernoulli and shear beams. The work presented here will be developed into a damped cable model that can be incorporated into an interconnected beam-cable system. The overall goal of this work is to accurately predict natural frequencies and modal damping ratios for cabled space structures.
Modelling neural correlates of working memory: A coordinate-based meta-analysis
Rottschy, C.; Langner, R.; Dogan, I.; Reetz, K.; Laird, A.R.; Schulz, J.B.; Fox, P.T.; Eickhoff, S.B.
2011-01-01
Working memory subsumes the capability to memorize, retrieve and utilize information for a limited period of time which is essential to many human behaviours. Moreover, impairments of working memory functions may be found in nearly all neurological and psychiatric diseases. To examine what brain regions are commonly and differently active during various working memory tasks, we performed a coordinate-based meta-analysis over 189 fMRI experiments on healthy subjects. The main effect yielded a widespread bilateral fronto-parietal network. Further meta-analyses revealed that several regions were sensitive to specific task components, e.g. Broca’s region was selectively active during verbal tasks or ventral and dorsal premotor cortex were preferentially involved in memory for object identity and location, respectively. Moreover, the lateral prefrontal cortex showed a division in a rostral and a caudal part based on differential involvement in task-set and load effects. Nevertheless, a consistent but more restricted “core” network emerged from conjunctions across analyses of specific task designs and contrasts. This “core” network appears to comprise the quintessence of regions, which are necessary during working memory tasks. It may be argued that the core regions form a distributed executive network with potentially generalized functions for focusing on competing representations in the brain. The present study demonstrates that meta-analyses are a powerful tool to integrate the data of functional imaging studies on a (broader) psychological construct, probing the consistency across various paradigms as well as the differential effects of different experimental implementations. PMID:22178808
Grassmann phase space methods for fermions. II. Field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalton, B.J., E-mail: bdalton@swin.edu.au; Jeffers, J.; Barnett, S.M.
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions which allows it to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT, especially for standard LDA/GGA ground-state and response-function calculations, several strategies have been followed. A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory; it increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased: the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. In parallel, a substantial effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code's scalability will be described. They are based on an exploration of new diagonalization algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Optimal Information Processing in Biochemical Networks
NASA Astrophysics Data System (ADS)
Wiggins, Chris
2012-02-01
A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrating cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
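The kind of calculation described above can be sketched numerically: given a joint copy-number distribution for two interacting species, the mutual information follows directly from the joint table. This is a minimal illustration, not the talk's code; the function name and toy distributions are assumptions.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (bits) of a 2-D joint probability table.

    joint[i, j] is the probability of observing copy numbers (i, j)
    of two interacting species; marginals are obtained by summing."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()                  # normalize to a distribution
    px = joint.sum(axis=1, keepdims=True)        # marginal of species X
    py = joint.sum(axis=0, keepdims=True)        # marginal of species Y
    nz = joint > 0                               # avoid log(0) terms
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# independent species share no information; a perfectly correlated
# pair of binary species shares exactly 1 bit
indep = np.outer([0.5, 0.5], [0.25, 0.75])
corr = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(indep), mutual_information(corr))  # prints 0.0 1.0
```

Optimizing information flow then amounts to maximizing this quantity over the model parameters that shape the joint distribution.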
Neutron-skin effect in direct-photon and charged-hadron production in Pb+Pb collisions at the LHC
NASA Astrophysics Data System (ADS)
Helenius, Ilkka; Paukkunen, Hannu; Eskola, Kari J.
2017-03-01
A well-established observation in nuclear physics is that in neutron-rich spherical nuclei the distribution of neutrons extends farther than the distribution of protons. In this work, we scrutinize the influence of this so-called neutron-skin effect on the centrality dependence of high-p_T direct-photon and charged-hadron production. We find that due to the estimated spatial dependence of the nuclear parton distribution functions, it will be demanding to unambiguously expose the neutron-skin effect with direct photons. However, when taking a ratio between the cross sections for negatively and positively charged high-p_T hadrons, even centrality-dependent nuclear-PDF effects cancel, making this observable a better handle on the neutron skin. Effects of up to 10% can be expected for the most peripheral collisions in the measurable region.
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.; Appelbe, William F.
1988-01-01
Clouds is an operating system in a novel class of distributed operating systems providing the integration, reliability, and structure that make a distributed system usable. Clouds is designed to run on a set of general-purpose computers that are connected via a medium- to high-speed local area network. The system structuring paradigm chosen for the Clouds operating system, after substantial research, is an object/thread model. All instances of services, programs and data in Clouds are encapsulated in objects. The concept of persistent objects does away with the need for file systems, replacing them with a more powerful concept, namely the object system. The facilities in Clouds include integration of resources through location transparency; support for various types of atomic operations, including conventional transactions; advanced support for achieving fault tolerance; and provisions for dynamic reconfiguration.
Virtual Solar Observatory Distributed Query Construction
NASA Technical Reports Server (NTRS)
Gurman, J. B.; Dimitoglou, G.; Bogart, R.; Davey, A.; Hill, F.; Martens, P.
2003-01-01
Through a prototype implementation (Tian et al., this meeting) the VSO has already demonstrated the capability of unifying geographically distributed data sources following the Web Services paradigm and utilizing mechanisms such as the Simple Object Access Protocol (SOAP). So far, four participating sites (Stanford, Montana State University, National Solar Observatory and the Solar Data Analysis Center) permit Web-accessible, time-based searches that allow browse access to a number of diverse data sets. Our latest work includes the extension of the simple, time-based queries to include numerous other searchable observation parameters. For VSO users, this extended functionality enables more refined searches. For the VSO, it is a proof of concept that more complex, distributed queries can be effectively constructed and that results from heterogeneous, remote sources can be synthesized and presented to users as a single, virtual data product.
Grain coarsening in two-dimensional phase-field models with an orientation field
NASA Astrophysics Data System (ADS)
Korbuly, Bálint; Pusztai, Tamás; Henry, Hervé; Plapp, Mathis; Apel, Markus; Gránásy, László
2017-05-01
In the literature, contradictory results have been published regarding the form of the limiting (long-time) grain size distribution (LGSD) that characterizes the late-stage grain coarsening in two-dimensional and quasi-two-dimensional polycrystalline systems. While experiments and the phase-field crystal (PFC) model (a simple dynamical density functional theory) indicate a log-normal distribution, other works, including theoretical studies based on conventional phase-field simulations that rely on coarse-grained fields, like the multi-phase-field (MPF) and orientation-field (OF) models, yield significantly different distributions. In a recent work, we have shown that the coarse-grained phase-field models (whether MPF or OF) yield very similar limiting size distributions that seem to differ from the theoretical predictions. Herein, we revisit this problem and demonstrate in the case of OF models [R. Kobayashi, J. A. Warren, and W. C. Carter, Physica D 140, 141 (2000), 10.1016/S0167-2789(00)00023-3; H. Henry, J. Mellenthin, and M. Plapp, Phys. Rev. B 86, 054117 (2012), 10.1103/PhysRevB.86.054117] that an insufficient resolution of the small-angle grain boundaries leads to a log-normal distribution close to those seen in the experiments and the molecular-scale PFC simulations. Our paper indicates, furthermore, that the LGSD is critically sensitive to the details of the evaluation process, and raises the possibility that the differences among the LGSD results from different sources may originate from differences in the detection of small-angle grain boundaries.
Improvements to the MST Thomson Scattering Diagnostic
NASA Astrophysics Data System (ADS)
Adams, D. T.; Borchardt, M. T.; den Hartog, D. J.; Holly, D. J.; Kile, T.; Kubala, S. Z.; Jacobson, C. M.; Thomas, M. A.; Wallace, J. P.; Young, W. C.; MST Thomson Scattering Team
2017-10-01
Multiple upgrades to the MST Thomson Scattering diagnostic have been implemented to expand the capabilities of the system. In the past, stray laser light prevented electron density measurements everywhere and temperature measurements for -z/a > 0.75. To mitigate stray light, a new laser beamline is being commissioned that includes a longer entrance flight tube, close-fitting apertures, and baffles. A polarizer has been added to the collection optics to further reduce stray light. An absolute density calibration using Rayleigh scattering in argon will be performed. An insertable integrating sphere will provide a full-system spectral calibration as well as map optical fibers to machine coordinates. Reduced transmission of the collection optics due to coatings from plasma-surface interactions is regularly monitored to inform timely replacements of the first lens. Long-wavelength filters have been installed to better characterize non-Maxwellian electron distribution features. Previous work has identified residual photons not described by a Maxwellian distribution during m = 0 magnetic bursts. Further effort to characterize the distribution function will be described. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences program under Award No. DE-FC02-05ER54814.
Spatial distribution of citizen science casuistic observations for different taxonomic groups.
Tiago, Patrícia; Ceia-Hasse, Ana; Marques, Tiago A; Capinha, César; Pereira, Henrique M
2017-10-16
Opportunistic citizen science databases are becoming an important way of gathering information on species distributions. These data are temporally and spatially dispersed and could have limitations regarding biases in the distribution of the observations in space and/or time. In this work, we test the influence of landscape variables on the distribution of citizen science observations for eight taxonomic groups. We use data collected through a Portuguese citizen science database (biodiversity4all.org). We use a zero-inflated negative binomial regression to model the distribution of observations as a function of a set of variables representing the landscape features plausibly influencing the spatial distribution of the records. Results suggest that the density of paths is the most important variable, having a statistically significant positive relationship with the number of observations for seven of the eight taxa considered. Wetland coverage was also identified as having a significant positive relationship for birds, amphibians and reptiles, and mammals. Our results highlight that the distribution of species observations in citizen science projects is spatially biased. Higher frequency of observations is driven largely by accessibility and by the presence of water bodies. We conclude that efforts are required to increase the spatial evenness of sampling effort from volunteers.
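As a rough sketch of the model class used here (not the authors' fitting code; parameter names are illustrative), the zero-inflated negative binomial mixes a point mass at zero with a standard negative binomial count distribution:

```python
from math import exp, lgamma, log

def zinb_pmf(k, mu, r, pi):
    """Zero-inflated negative binomial pmf.

    mu: NB mean, r: NB size (inverse dispersion),
    pi: probability of a structural (excess) zero."""
    # negative-binomial component, parameterized by mean mu and size r
    log_nb = (lgamma(k + r) - lgamma(r) - lgamma(k + 1)
              + r * log(r / (r + mu)) + k * log(mu / (r + mu)))
    nb = exp(log_nb)
    return pi + (1.0 - pi) * nb if k == 0 else (1.0 - pi) * nb

# the pmf sums to 1 over the support (truncated here at a large count)
total = sum(zinb_pmf(k, mu=3.0, r=1.5, pi=0.3) for k in range(200))
print(round(total, 6))  # prints 1.0
```

In a regression setting, mu (and possibly pi) would be linked to the landscape covariates, e.g. log(mu) as a linear function of path density and wetland coverage.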
Renormalizability of quasiparton distribution functions
Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...
2017-11-21
Quasi-parton distribution functions have received much attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.
NASA Astrophysics Data System (ADS)
Malinina, A. A.; Malinin, A. N.
2013-12-01
Results are presented from studies of the optical characteristics and plasma parameters of a dielectric barrier discharge in a mixture of mercury dibromide vapor with neon, the working medium of a non-coaxial exciplex gas-discharge emitter. The electron energy distribution function, the transport characteristics, the specific power losses for electron processes, the electron density and temperature, and the rate constants for the processes of elastic and inelastic electron scattering by the working mixture components are determined as functions of the reduced electric field. The rate constant of the process leading to the formation of exciplex mercury monobromide molecules is found to be 1.6 × 10^-14 m^3/s for a reduced electric field of E/N = 15 Td, at which the maximum emission intensity in the blue-green spectral region (λmax = 502 nm) was observed in this experiment.
Nonequilibrium thermodynamics of restricted Boltzmann machines.
Salazar, Domingos S P
2017-08-01
In this work, we analyze the nonequilibrium thermodynamics of a class of neural networks known as restricted Boltzmann machines (RBMs) in the context of unsupervised learning. We show how the network is described as a discrete Markov process and how the detailed balance condition and the Maxwell-Boltzmann equilibrium distribution are sufficient conditions for a complete thermodynamic description, including nonequilibrium fluctuation theorems. Numerical simulations in a fully trained RBM are performed and the heat exchange fluctuation theorem is verified in excellent agreement with the theory. We observe how the contrastive divergence functional, mostly used in unsupervised learning of RBMs, is closely related to nonequilibrium thermodynamic quantities. We also use the framework to interpret the estimation of the partition function of RBMs with the annealed importance sampling method from a thermodynamic standpoint. Finally, we argue that unsupervised learning of RBMs is equivalent to a work protocol in a system driven by the laws of thermodynamics in the absence of labeled data.
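The setting can be sketched as follows (a toy reconstruction, not the paper's code; the weights and sizes below are assumptions): a binary RBM has energy E(v, h) = -v·W·h - b·v - c·h, and alternating Gibbs sampling of h given v and v given h has the Boltzmann distribution as its stationary law, which is what makes the detailed-balance analysis possible.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(v, h, W, b, c):
    """Energy of a binary RBM configuration: E = -v.W.h - b.v - c.h."""
    return -(v @ W @ h + b @ v + c @ h)

def gibbs_step(v, W, b, c):
    """One alternating Gibbs sweep; its stationary distribution is the
    Boltzmann form p(v, h) ~ exp(-E(v, h))."""
    p_h = 1.0 / (1.0 + np.exp(-(v @ W + c)))      # p(h_j = 1 | v)
    h = (rng.random(c.size) < p_h).astype(float)
    p_v = 1.0 / (1.0 + np.exp(-(W @ h + b)))      # p(v_i = 1 | h)
    v = (rng.random(b.size) < p_v).astype(float)
    return v, h

# exact partition function of a tiny (2-visible, 2-hidden) RBM by enumeration
W = np.array([[0.5, -0.3], [0.2, 0.4]])
b = np.array([0.1, -0.2])
c = np.array([0.0, 0.3])
states = [np.array(s, dtype=float) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
Z = sum(np.exp(-energy(v, h, W, b, c)) for v in states for h in states)
```

For a model this small, Z can be enumerated exactly, which is what makes numerical checks of the fluctuation theorems feasible; for realistic RBMs one falls back on estimators such as annealed importance sampling.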
Video Analysis in Multi-Intelligence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Key, Everett Kiusan; Van Buren, Kendra Lu; Warren, Will
This project was performed by a recent high school graduate at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.
Billard, L; Dayananda, P W A
2014-03-01
Stochastic population processes have received a lot of attention over the years. One approach focuses on compartmental modeling. Billard and Dayananda (2012) developed one such multi-stage model for epidemic processes in which the possibility that individuals can die at any stage from non-disease-related causes was also included. This extra feature is of particular interest to the insurance and health-care industries, among others, especially when the epidemic is HIV/AIDS. Rather than working with numbers of individuals in each stage, they obtained distributional results dealing with the waiting time any one individual spent in each stage given the initial stage. In this work, the impact of the HIV/AIDS epidemic on several functions relevant to these industries (such as adjustments to premiums) is investigated. Theoretical results are derived, followed by a numerical study. Copyright © 2014 Elsevier Inc. All rights reserved.
The changing age distribution in Indonesia and some consequences.
Nam, C B; Dasvarma, G L; Rahardjo, S P
1991-08-01
"Beginning with a discussion of the sources and quality of Indonesian age data by sex, this paper examines the changes in the functional age groups of the population of Indonesia from 1971 to the year 2005, and the implications of these changes for education, labour force participation, dependency ratios and fertility. Data for the period 1971 to 1985 are based on actual enumerations, while those for the period 1990 to 2005 are based on projections. Although the provisional totals of the 1990 Census had been released before the publication of this paper, their breakdown by age was still not available. The functional age categories discussed in the paper include the pre-school years, the primary and intermediate school ages, the teenage years, the reproductive ages of women, the principal working ages and the post-work years. It concludes with a discussion of various policy and planning implications of these changes." excerpt
Optimizing the Distribution of Leg Muscles for Vertical Jumping
Wong, Jeremy D.; Bobbert, Maarten F.; van Soest, Arthur J.; Gribble, Paul L.; Kistemaker, Dinant A.
2016-01-01
A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles that well approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas, which determine the maximum force deliverable by the muscles, constitutes the majority of the improvement in jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology.
Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal segments. PMID:26919645
Neurodevelopment and executive function in autism.
O'Hearn, Kirsten; Asato, Miya; Ordaz, Sarah; Luna, Beatriz
2008-01-01
Autism is a neurodevelopmental disorder characterized by social and communication deficits, and repetitive behavior. Studies investigating the integrity of brain systems in autism suggest a wide range of gray and white matter abnormalities that are present early in life and change with development. These abnormalities predominantly affect association areas and undermine functional integration. Executive function, which has a protracted development into adolescence and reflects the integration of complex, widely distributed brain function, is also affected in autism. Evidence from studies probing response inhibition and working memory indicates impairments in these core components of executive function, as well as compensatory mechanisms that permit normative function in autism. Studies also demonstrate age-related improvements in executive function from childhood to adolescence in autism, indicating the presence of plasticity and suggesting a prolonged window for effective treatment. Despite developmental gains, mature executive functioning is limited in autism, reflecting abnormalities in widespread brain networks that may lead to impaired processing of complex information across all domains.
The interval testing procedure: A general framework for inference in functional data analysis.
Pini, Alessia; Vantini, Simone
2016-09-01
We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. The ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data, and show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamic features involved in cerebral aneurysm pathology. The ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.
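Steps (ii)-(iii) can be sketched numerically for the two-sample case, assuming a permutation test on the summed componentwise statistic per interval (a simplified stand-in for the actual ITP machinery; all names and the statistic are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def itp_adjusted_pvalues(A, B, n_perm=200):
    """Interval-wise adjustment sketch for two samples of basis coefficients
    (rows = curves, columns = basis components). For every interval of
    consecutive components, a permutation test is run on the summed
    statistic; the adjusted p-value of component k is the maximum interval
    p-value over all intervals containing k."""
    n, p = A.shape
    pooled = np.vstack([A, B])
    obs = np.abs(A.mean(axis=0) - B.mean(axis=0))
    # permutation distribution of the componentwise statistics
    perm_stats = np.empty((n_perm, p))
    for t in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm_stats[t] = np.abs(pooled[idx[:n]].mean(axis=0)
                               - pooled[idx[n:]].mean(axis=0))
    adj = np.zeros(p)
    for i in range(p):
        for j in range(i, p):
            stat = obs[i:j + 1].sum()
            pv = (np.sum(perm_stats[:, i:j + 1].sum(axis=1) >= stat) + 1) \
                / (n_perm + 1)
            adj[i:j + 1] = np.maximum(adj[i:j + 1], pv)
    return adj

# toy example: component 0 carries a real difference, the rest are null
A = rng.normal(size=(30, 4)); A[:, 0] += 3.0
B = rng.normal(size=(30, 4))
adj = itp_adjusted_pvalues(A, B)
```

Taking the maximum over all intervals containing a component is what yields the interval-wise control: a null component embedded only in null intervals keeps a large adjusted p-value.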
NASA Astrophysics Data System (ADS)
Hanisch, R.
1999-12-01
Despite the tremendous advances in electronic publications and the increasing rapidity with which papers are now moving from acceptance into ``print,'' preprints continue to be an important mode of communication within the astronomy community. The Los Alamos e-preprint service, astro-ph, provides for rapid and cost-free (to authors and readers) dissemination of manuscripts. As the use of astro-ph has increased, the number of paper preprints in circulation to libraries has decreased, and institutional preprint series appear to be waning. It is unfortunate, however, that astro-ph does not function in collaboration with the refereed publications. For example, there is no systematic tracking of manuscripts from preprint to final, published form, and as a centralized archive it is difficult to distribute the tracking and maintenance functions. It retains documents that have been superseded or have become obsolete. We are currently developing a distributed preprint and document management system which can support distributed collections of preprints (e.g., traditional institutional preprint series), link to the LANL collections, index other documents in the ``grey'' literature (observatory reports, telescope and instrument users' manuals, calls for proposals, etc.), and function as a manuscript submission tool for the refereed journals. This system is being developed to work cooperatively with the refereed literature so that, for example, links to preprints are updated to links to the final published papers.
NASA Astrophysics Data System (ADS)
Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming
2016-12-01
Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of the FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on the FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).
NASA Astrophysics Data System (ADS)
Haakonsen, Christian Bernt; Hutchinson, Ian H.
2013-10-01
Mach probes can be used to measure transverse flow in magnetized plasmas, but what they actually measure in strongly non-uniform plasmas has not been definitively established. A fluid treatment in previous work has suggested that the diamagnetic drifts associated with background density and temperature gradients affect transverse flow measurements, but detailed computational study is required to validate and elaborate on those results; it is really a kinetic problem, since the probe deforms the ion and electron distribution functions and introduces voids in them. A new code, the Plasma-Object Simulator with Iterated Trajectories (POSIT), has been developed to self-consistently compute the steady-state six-dimensional ion and electron distribution functions in the perturbed plasma. Particle trajectories are integrated backwards in time to the domain boundary, where arbitrary background distribution functions can be specified. This allows POSIT to compute the ion and electron density at each node of its unstructured mesh, update the potential based on those densities, and then iterate until convergence. POSIT is used to study the impact of a background density gradient on transverse Mach probe measurements, and the results are compared to the previous fluid theory. C.B. Haakonsen was supported in part by NSF/DOE Grant No. DE-FG02-06ER54512, and in part by an SCGF award administered by ORISE under DOE Contract No. DE-AC05-06OR23100.
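The backward-trajectory idea can be illustrated with a toy example (not the POSIT algorithm itself; the uniform-field configuration and all names are assumptions): by Liouville's theorem, f is constant along collisionless orbits, so the distribution at a phase-space point equals the background distribution evaluated where the backward orbit reaches the boundary.

```python
import numpy as np

def backward_orbit(x, v, B, dt, n_steps):
    """Trace a 2-D charged-particle orbit backwards in time in a uniform
    magnetic field B = B z_hat with E = 0 (q/m = 1), using the Boris
    rotation, which conserves |v| exactly."""
    t = -0.5 * B * dt                 # reversed-time half-angle parameter
    s = 2.0 * t / (1.0 + t * t)
    for _ in range(n_steps):
        vxp = v[0] + v[1] * t         # v' = v + v x t
        vyp = v[1] - v[0] * t
        v = np.array([v[0] + vyp * s, v[1] - vxp * s])
        x = x - v * dt                # step the position backwards
    return x, v

# By Liouville's theorem, f at the starting phase-space point equals the
# background distribution evaluated at the backward orbit's end point.
x0, v0 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
xb, vb = backward_orbit(x0, v0, B=1.0, dt=1e-3, n_steps=5000)
f = np.exp(-0.5 * vb @ vb) / (2.0 * np.pi)   # unit-temperature Maxwellian
```

In a self-consistent code the fields would of course be non-uniform and updated iteratively from the densities, but the Liouville mapping from boundary distribution to interior point is the same.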
Unifying distribution functions: some lesser known distributions.
Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P
2008-08-01
We show that there is a way to unify the distribution functions that describe simultaneously a classical signal in space and (spatial) frequency, and position and momentum for a quantum system. Probably the best known of these is the Wigner distribution function. We show how to unify the functions of the Cohen class, Rihaczek's complex energy function, and the Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.
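For orientation, the ordering construction mentioned above is the standard Cahill-Glauber one (a reminder of known material, not a result specific to this paper): each distribution arises as the Fourier transform of an s-ordered characteristic function,

```latex
% s-ordered characteristic function of a state \rho
\chi(\lambda, s) = \operatorname{Tr}\!\left[\rho\, e^{\lambda \hat{a}^{\dagger} - \lambda^{*} \hat{a}}\right] e^{s|\lambda|^{2}/2},
\qquad
W(\alpha, s) = \frac{1}{\pi^{2}} \int \chi(\lambda, s)\, e^{\alpha \lambda^{*} - \alpha^{*} \lambda}\, \mathrm{d}^{2}\lambda ,
```

with s = +1, 0, -1 recovering the Glauber-Sudarshan P, Wigner, and Husimi Q functions, respectively.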
Spatio-temporal analysis of aftershock sequences in terms of Non Extensive Statistical Physics.
NASA Astrophysics Data System (ADS)
Chochlaki, Kalliopi; Vallianatos, Filippos
2017-04-01
Earth's seismicity is considered an extremely complicated process in which long-range interactions and fracturing exist (Vallianatos et al., 2016). For this reason, we analyze it using an innovative methodological approach introduced by Tsallis (Tsallis, 1988; 2009), named Non-Extensive Statistical Physics. This approach generalizes Boltzmann-Gibbs statistical mechanics and is based on the definition of the Tsallis entropy Sq, whose maximization leads to the so-called q-exponential function, which expresses the probability distribution function that maximizes Sq. In the present work, we utilize the concepts of Non-Extensive Statistical Physics to analyze the spatiotemporal properties of several aftershock series. Marekova (Marekova, 2014) suggested that the probability densities of the inter-event distances between successive aftershocks follow a beta distribution. Using the same data set, we analyze the inter-event distance distributions of several aftershock sequences in different geographic regions by calculating the non-extensive parameters that determine the behavior of the system and by fitting the q-exponential function, which expresses the degree of non-extensivity of the investigated system. Furthermore, the inter-event time distribution of the aftershocks as well as the frequency-magnitude distribution has been analyzed. The results support the applicability of Non-Extensive Statistical Physics to aftershock sequences, where strong correlations exist along with memory effects. References: C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479-487. doi:10.1007/BF01016429. C. Tsallis, Introduction to nonextensive statistical mechanics: Approaching a complex world, 2009. doi:10.1007/978-0-387-85359-8. E. Marekova, Analysis of the spatial distribution between successive earthquakes in aftershocks series, Annals of Geophysics, 57, 5, 2014. doi:10.4401/ag-6556. F. Vallianatos, G. Papadakis, G. Michas, Generalized statistical mechanics approaches to earthquakes and tectonics, Proc. R. Soc. A, 472, 20160497, 2016.
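As a compact illustration of the q-exponential central to this analysis, here is a minimal numerical sketch; the function names and the survival-function usage are ours, for illustration only, and the parameter values are not from the paper:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)}.

    The cut-off [.]_+ sets the result to 0 where 1 + (1-q)x <= 0;
    as q -> 1 the ordinary exponential is recovered."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    expnt = 1.0 / (1.0 - q)
    safe = np.where(base > 0.0, base, 1.0)   # placeholder where cut off
    return np.where(base > 0.0, safe ** expnt, 0.0)

def survival(r, q, r0):
    """Cumulative inter-event distance distribution of q-exponential form,
    P(>r) = exp_q(-r/r0); q > 1 gives a power-law tail."""
    return q_exponential(-np.asarray(r, dtype=float) / r0, q)
```

For q > 1 the survival function decays as a power law at large r, which is the signature of long-range correlations this formalism is designed to capture.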
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability-density-function-based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm, and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original and the general distribution functions when only the variation of the length of the rotational semi-axis is considered.
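The Rosin-Rammler distribution named above has a simple closed form; a minimal sketch, with names and parameter values that are illustrative rather than taken from the paper:

```python
import numpy as np

def rosin_rammler_cdf(d, d_bar, k):
    """R-R cumulative undersize fraction F(D) = 1 - exp(-(D/D_bar)^k);
    d_bar is the characteristic diameter (63.2% undersize), k the spread."""
    d = np.asarray(d, dtype=float)
    return 1.0 - np.exp(-(d / d_bar) ** k)

def rosin_rammler_pdf(d, d_bar, k):
    """Frequency distribution f(D) = dF/dD."""
    d = np.asarray(d, dtype=float)
    return (k / d_bar) * (d / d_bar) ** (k - 1) * np.exp(-(d / d_bar) ** k)
```

An inversion scheme such as PDF-ACO would search over (d_bar, k) so that the forward-modeled extinction spectra match the measurements.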
NASA Astrophysics Data System (ADS)
Ramirez-Lopez, L.; van Wesemael, B.; Stevens, A.; Doetterl, S.; Van Oost, K.; Behrens, T.; Schmidt, K.
2012-04-01
Soil Organic Carbon (SOC) represents a key component in the global C cycle and has an important influence on the global CO2 fluxes between the terrestrial biosphere and the atmosphere. In the context of agricultural landscapes, SOC inventories are important since soil management practices have a strong influence on CO2 fluxes and SOC stocks. However, there is a lack of accurate and cost-effective methods for producing high-spatial-resolution SOC information. In this respect, our work is focused on the development of a three-dimensional modeling approach for SOC monitoring in agricultural fields. The study area comprises ~420 km2 and includes 4 of the 5 agro-geological regions of the Grand-Duchy of Luxembourg. The soil dataset consists of 172 profiles (1033 samples) which were not sampled specifically for this study. This dataset is a combination of profile samples collected in previous soil surveys and soil profiles sampled for other research purposes. The proposed strategy comprises two main steps. In the first step, the SOC distribution within each profile (vertical distribution) is modeled. Depth functions are fitted in order to summarize the information content in the profile. By using these functions, the SOC can be interpolated at any depth within the profiles. The second step involves the use of contextual terrain (ConMap) features (Behrens et al., 2010). These features are based on the differences in elevation between a given point location in the landscape and its circular neighbourhoods at a given set of different radii. One of the main advantages of this approach is that it allows the integration of several spatial scales (e.g. local and regional) for soil spatial analysis. In this work the ConMap features are derived from a digital elevation model of the area and are used as predictors for spatial modeling of the parameters of the depth functions fitted in the previous step. In this poster we present preliminary results in which we analyze: i. the use of different depth functions; ii. the use of different machine learning approaches for modeling the parameters of the fitted depth functions using the ConMap features; and iii. the influence of different spatial scales on the SOC profile distribution variability. Keywords: 3D modeling, Digital soil mapping, Depth functions, Terrain analysis. Reference: Behrens, T., Schmidt, K., Zhu, A.X., Scholten, T. 2010. The ConMap approach for terrain-based digital soil mapping. European Journal of Soil Science, v. 61, p. 133-143.
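A hedged sketch of the first step, fitting a depth function to one SOC profile. The exponential form, data values, and names here are illustrative assumptions, since the poster compares several candidate depth functions:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_depth(z, c0, k):
    """Exponential depth function: SOC content decays from c0 at the surface."""
    return c0 * np.exp(-k * z)

# Synthetic profile (depth in cm, SOC in g/kg) standing in for one soil profile.
z_obs = np.array([5.0, 15.0, 30.0, 50.0, 80.0])
soc_obs = np.array([28.0, 21.0, 13.5, 7.2, 3.1])

params, _ = curve_fit(exp_depth, z_obs, soc_obs, p0=(30.0, 0.02))
c0_hat, k_hat = params

# The fitted pair (c0_hat, k_hat) summarizes the profile: SOC can now be
# interpolated at any depth, and the parameters themselves become the targets
# for spatial modeling from terrain (ConMap) features.
soc_at_40cm = exp_depth(40.0, c0_hat, k_hat)
```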
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
[Changes in the work capacity of the operators of command-measuring systems during daily duty].
Novikov, V S; Lustin, S I; Blaginin, A A; Kozlov, V P
1997-06-01
After 12 hours of work, the operators of command-measuring complexes showed initial signs of exhaustion, manifested by a decrease in self-rated health state, activity, and mood, and an increase in the latent period of a simple sensorimotor reaction. These changes in functional state had no effect on the quality of performance of the target tasks. By the end of the daily duty, exhaustion had developed, characterized by deterioration of health state, increased operator anxiety, rapid pulse, reduced breath-holding time, increased instability of sensorimotor reactions, a greater number of erroneous actions, and reduced speed of mental processes and distribution of attention.
NASA Technical Reports Server (NTRS)
Bryson, Arthur Earl, Jr
1952-01-01
Report presents the results of interferometer measurements of the flow field near two-dimensional wedge and circular-arc sections at zero angle of attack at high-subsonic and low-supersonic velocities. Both subsonic flow with a local supersonic zone and supersonic flow with a detached shock wave have been investigated. Pressure distributions and drag coefficients as a function of Mach number have been obtained. The wedge data are compared with the theoretical work on flow past wedge sections of Guderley and Yoshihara, Vincenti and Wagner, and Cole. Pressure distributions and drag coefficients for the wedge and circular-arc sections are presented throughout the entire transonic range of velocities.
Abbott, Lauren J; Stevens, Mark J
2015-12-28
A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
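One common way to turn a fitted probability distribution into a bonded CG potential is Boltzmann inversion, U = -kT ln P. The sketch below assumes a sum-of-Gaussians fit to an angle distribution and is illustrative only, not the SDK/PNIPAM parameterization itself:

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def multi_gaussian(theta, params):
    """Multi-centered Gaussian fit to a bonded probability distribution:
    P(theta) = sum_i A_i exp(-(theta - m_i)^2 / (2 s_i^2))."""
    p = np.zeros_like(np.asarray(theta, dtype=float))
    for amp, mean, sigma in params:
        p = p + amp * np.exp(-(theta - mean) ** 2 / (2.0 * sigma ** 2))
    return p

def boltzmann_inversion(theta, params, temperature=300.0):
    """Effective CG potential U = -kT ln P, defined up to an additive
    constant; here shifted so that the minimum is zero."""
    p = multi_gaussian(theta, params)
    u = -KB * temperature * np.log(p)
    return u - u.min()
```

A temperature-dependent potential, as used for the coil-globule transition here, would repeat this fit at several temperatures and interpolate the resulting parameters.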
Fine Structure of Dark Energy and New Physics
Jejjala, Vishnu; Kavic, Michael; Minic, Djordje
2007-01-01
Following our recent work on the cosmological constant problem, in this letter we make a specific proposal regarding the fine structure (i.e., the spectrum) of dark energy. The proposal is motivated by a deep analogy between the blackbody radiation problem, which led to the development of quantum theory, and the cosmological constant problem, which, we have recently argued, calls for a conceptual extension of quantum theory. We argue that the fine structure of dark energy is governed by a Wien distribution, indicating its dual quantum and classical nature. We discuss observational consequences of such a picture of dark energy and constrain the distribution function.
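For reference, the Wien spectral form invoked here can be sketched numerically. This is an illustration of the distribution's shape only; the normalization and variable names are ours, not the letter's:

```python
import numpy as np

def wien(nu, a, b):
    """Wien spectral distribution rho(nu) = a nu^3 exp(-b nu), the
    high-frequency limit of the Planck law (b plays the role of h/kT)."""
    nu = np.asarray(nu, dtype=float)
    return a * nu ** 3 * np.exp(-b * nu)

# Setting d(rho)/d(nu) = 0 gives the peak at nu = 3/b, the Wien-like
# displacement relation for this spectral form.
```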
NASA Astrophysics Data System (ADS)
Ushenko, Yu. O.; Telenga, O. Y.
2011-09-01
Presented in this work are the results of an investigation aimed at analyzing the coordinate distributions of the azimuths and ellipticity of polarization (polarization maps) in laser images of blood plasma layers from three groups of patients: healthy (group 1), with dysplasia (group 2), and with cancer of the cervix uteri (group 3). To characterize the polarization maps for all groups of samples, the authors propose three groups of parameters: statistical moments of the first to fourth orders, autocorrelation functions, and logarithmic dependences of the power spectra of the distributions of the azimuths and ellipticity of polarization in the blood plasma laser images. Criteria for the diagnostics and differentiation of pathological changes of the cervix uteri are ascertained.
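The first group of parameters, statistical moments of the first to fourth orders, can be sketched as follows; this is a generic illustration, and the paper's exact normalization conventions may differ:

```python
import numpy as np

def first_four_moments(x):
    """Mean, variance, skewness, and kurtosis of a sampled distribution,
    as used to characterize azimuth/ellipticity polarization maps."""
    x = np.asarray(x, dtype=float)
    m1 = x.mean()
    m2 = x.var()
    sd = np.sqrt(m2)
    m3 = np.mean((x - m1) ** 3) / sd ** 3   # skewness (0 if symmetric)
    m4 = np.mean((x - m1) ** 4) / sd ** 4   # kurtosis (3 for a Gaussian)
    return m1, m2, m3, m4
```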
Atomistic simulations of TeO₂-based glasses: interatomic potentials and molecular dynamics.
Gulenko, Anastasia; Masson, Olivier; Berghout, Abid; Hamani, David; Thomas, Philippe
2014-07-21
In this work we present for the first time empirical interatomic potentials that are able to reproduce TeO2-based systems. Using these potentials in classical molecular dynamics simulations, we obtained first results for the pure TeO2 glass structure model. The calculated pair distribution function is in good agreement with the experimental one, which indicates a realistic glass structure model. We investigated the short- and medium-range TeO2 glass structures. The local environment of the Te atom strongly varies, so that the glass structure model has a broad Q polyhedral distribution. The glass network is described as weakly connected with a large number of terminal oxygen atoms.
Grassmann phase space theory and the Jaynes–Cummings model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalton, B.J., E-mail: bdalton@swin.edu.au; Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Melbourne, Victoria 3122; Garraway, B.M.
2013-07-15
The Jaynes–Cummings model of a two-level atom (TLA) in a single-mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems: one fermionic (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions, a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes–Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics, however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker–Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables, from which experimental quantities are obtained as stochastic averages.
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker–Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes–Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker–Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker–Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions (which are also equivalent to the canonical Grassmann distribution function) to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum–atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes, especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators: c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two-mode problem represented by the Jaynes–Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum–atom optics. Highlights: • Novel phase space theory of the Jaynes–Cummings model using Grassmann variables. • Fokker–Planck equations solved analytically. • Results agree with the standard quantum optics treatment. • Grassmann phase space theory applicable to fermion many-body problems.
Liu, Yi; Consta, Styliani; Shi, Yujun; Lipson, R H; Goddard, William A
2009-06-25
The size distributions and geometries of vapor clusters equilibrated with methanol-ethanol (Me-Et) liquid mixtures were recently studied by vacuum ultraviolet (VUV) laser time-of-flight (TOF) mass spectrometry and density functional theory (DFT) calculations (Liu, Y.; Consta, S.; Ogeer, F.; Shi, Y. J.; Lipson, R. H. Can. J. Chem. 2007, 85, 843-852). On the basis of the mass spectra recorded, it was concluded that the formation of neutral tetramers is particularly prominent. Here we develop grand canonical Monte Carlo (GCMC) and molecular dynamics (MD) frameworks to compute cluster size distributions in vapor mixtures that allow a direct comparison with experimental mass spectra. Using the all-atom optimized potential for liquid simulations (OPLS-AA) force field, we systematically examined the neutral cluster size distributions as functions of pressure and temperature. These neutral cluster distributions were then used to derive ionized cluster distributions to compare directly with the experiments. The simulations suggest that supersaturation at 12 to 16 times the equilibrium vapor pressure at 298 K, or supercooling at temperatures of 240 to 260 K at the equilibrium vapor pressure, can lead to the relatively abundant tetramer population observed in the experiments. Our simulations capture the most distinct features observed in the experimental TOF mass spectra: Et(3)H(+) at m/z = 139 in the vapor corresponding to the 10:90% Me-Et liquid mixture and Me(3)H(+) at m/z = 97 in the vapors corresponding to the 50:50% and 90:10% Me-Et liquid mixtures. The hybrid GCMC scheme developed in this work extends the capability of studying the size distributions of neat clusters to mixed species and provides a useful tool for studying environmentally important systems such as atmospheric aerosols.
Evaluation of probabilistic forecasts with the scoringRules package
NASA Astrophysics Data System (ADS)
Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian
2017-04-01
Over the last decades, probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way, in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F, y) that evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In coherence with decision-theoretical principles, they allow the comparison of alternative models, a crucial ability given the variety of theories, data sources, and statistical specifications available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score (CRPS) for a variety of distributions F that come up in applied work. For univariate variables, the two main classes are parametric distributions like the normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; ensemble weather forecasts, for example, take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
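The two cases described above, a parametric forecast and a sample-based (ensemble) forecast, can be sketched with the standard CRPS formulas. This is a Python illustration of the well-known closed form for a Gaussian forecast, not the package itself, which is written in R:

```python
import numpy as np
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS for a Gaussian forecast N(mu, sigma^2) and outcome y:
    sigma * [z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)], z = (y - mu)/sigma.
    Lower scores mean better forecasts."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def crps_sample(sample, y):
    """Sample-based CRPS estimate for ensemble forecasts:
    E|X - y| - 0.5 * E|X - X'| over the empirical distribution."""
    s = np.asarray(sample, dtype=float)
    term1 = np.mean(np.abs(s - y))
    term2 = np.mean(np.abs(s[:, None] - s[None, :])) / 2.0
    return term1 - term2
```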
NASA Astrophysics Data System (ADS)
Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft
2018-01-01
We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and the notion that consciousness is related to symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight into basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region which is activated during memory retrieval. This allows a qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be better suited to model the complex networks which constitute brain and mental structure.
Acauan, Luiz; Dias, Anna C; Pereira, Marcelo B; Horowitz, Flavio; Bergmann, Carlos P
2016-06-29
The chemical inertness of carbon nanotubes (CNT) requires some degree of "defect engineering" for controlled deposition of metal oxides through atomic layer deposition (ALD). The type, quantity, and distribution of such defects rule the deposition rate and define the growth behavior. In this work, we employed ALD to grow titanium oxide (TiO2) on vertically aligned carbon nanotubes (VACNT). The effects of nitrogen doping and oxygen plasma pretreatment of the CNT on the morphology and total amount of TiO2 were systematically studied using transmission electron microscopy, Raman spectroscopy, and thermogravimetric analysis. The chemical changes induced by each functionalization route were identified by X-ray photoelectron and Raman spectroscopies. The TiO2 mass fractions deposited with the same number of cycles on the pristine CNT, nitrogen-doped CNT, and plasma-treated CNT were 8, 47, and 80%, respectively. We demonstrate that TiO2 nucleation depends mainly on the surface incorporation of heteroatoms and their distribution, rather than on the structural defects that govern the growth behavior. Therefore, selecting the best way to functionalize CNT will allow us to tailor the TiO2 distribution and hence fabricate complex heterostructures.
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and it is not understood why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that, because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T^2 for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
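The proposed mechanism, a Gaussian conditional PDF whose variance is exponentially distributed, can be checked with a short simulation. The sketch below uses illustrative parameters; the resulting marginal is the two-sided exponential (Laplace) distribution, whose excess kurtosis is 3 rather than 0 for a pure Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Variance of the conditional Gaussian G(dT|eps) follows an exponential law.
n = 200_000
var = rng.exponential(scale=1.0, size=n)   # sigma_T^2 ~ Exp(1)
dT = rng.normal(0.0, np.sqrt(var))         # dT | var ~ Gaussian(0, var)

# The marginal P(dT) is then a Laplace distribution with exponential tails;
# its excess kurtosis (3 for Laplace, 0 for Gaussian) reveals the heavy tails.
kurtosis = np.mean(dT ** 4) / np.mean(dT ** 2) ** 2 - 3.0
```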
NASA Astrophysics Data System (ADS)
Lorito, S.; Romano, F.; Piatanesi, A.
2007-12-01
The aim of this work is to infer the slip distribution and mean rupture velocity along the rupture zone of the 12 September 2007 Southern Sumatra, Indonesia, earthquake from available tide-gauge records of the tsunami. We select waveforms from 12 stations distributed along the west coast of Sumatra and across the whole Indian Ocean (11 GLOSS stations and 1 DART buoy). We assume the fault plane and the slip direction to be consistent with both the geometry of the subducting plate and the early focal mechanism solutions. We then subdivide the fault plane into several subfaults (both along strike and down dip) and compute the corresponding Green's functions by numerical solution of the shallow water equations through a finite difference method. The slip distribution and rupture velocity are determined simultaneously by means of a simulated annealing technique. We compare the recorded and synthetic waveforms in the time domain, using a cost function that is a trade-off between the L1 and L2 norms. Preliminary synthetic checkerboard tests, using the station coverage and the sampling interval of the available data, indicate that the main features of the rupture process may be robustly inverted.
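The trade-off cost function between the L1 and L2 norms can be sketched as a weighted sum of the two residual norms. The weighting scheme and names below are our assumption for illustration; the abstract does not specify the exact combination used:

```python
import numpy as np

def misfit(obs, syn, alpha=0.5):
    """Trade-off cost between the L1 and L2 norms of the waveform residual.
    alpha = 1 is pure L1 (robust to outliers), alpha = 0 is pure L2; the
    annealing search would minimize this over subfault slips and rupture
    velocity."""
    r = np.asarray(obs, dtype=float) - np.asarray(syn, dtype=float)
    return alpha * np.sum(np.abs(r)) + (1.0 - alpha) * np.sum(r ** 2)
```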
Collective alignment of nanorods in thin Newtonian films
NASA Astrophysics Data System (ADS)
Gu, Yu; Burtovyy, Ruslan; Townsend, James; Owens, Jeffery; Luzinov, Igor; Kornev, Konstantin
2013-11-01
We provide a complete analytical description of the alignment kinetics of magnetic nanorods in a magnetic field. Nickel nanorods were formed by template electrochemical deposition in alumina membranes from a dispersion in a water-glycerol mixture. To ensure uniformity of the dispersion, the surface of the nickel nanorods was covered with polyvinylpyrrolidone (PVP). A 40-70 nm coating prevented aggregation of the nanorods. These modifications allowed us to control alignment of the nanorods in a magnetic field and test the proposed theory. An orientational distribution function of the nanorods was introduced. We demonstrated that a 0.04% volume fraction of nanorods in the glycerol-water mixture behaves as a system of non-interacting particles. However, the kinetics of alignment of a nanorod assembly does not follow the predictions of the single-nanorod theory. The distribution function theory explains the kinetics of alignment of a nanorod assembly and shows the significance of the initial distribution of nanorods in the film. It can be used to develop an experimental protocol for controlled ordering of magnetic nanorods in thin films. This work was supported by the Air Force Office of Scientific Research, Grant numbers FA9550-12-1-0459 and FA8650-09-D-507 5900.
Bare Proton Contribution to the d / u Ratio in the Proton Sea
NASA Astrophysics Data System (ADS)
Fish, Aaron
2017-09-01
From perturbative processes, such as gluon splitting, we expect symmetric distributions of d̄ and ū antiquarks in the proton sea. However, experiment has shown an excess of d̄ over ū. This has been qualitatively explained by the Meson Cloud Model (MCM), in which the non-perturbative processes of proton fluctuations into meson-baryon pairs, allowed by the Heisenberg uncertainty principle, create the flavor asymmetry. The x dependence of d̄ and ū in the nucleon sea is determined from a convolution of meson-baryon splitting functions and the parton distribution functions (pdfs) of the mesons and baryons in the cloud, as well as a contribution from the leading term in the MCM, the ``bare proton.'' We use a statistical model to calculate pdfs for the hadrons in the cloud, but modify the model for the bare proton in order to avoid double counting. We evolved our distributions in Q2 for comparison to experimental data from the Fermilab E866/NuSea experiment. We present predictions for the d̄ / ū ratio that is currently being examined by Fermilab's SeaQuest experiment, E906. This work is supported in part by the National Science Foundation under Grant No. 1516105.
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics, and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves a manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data.
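The conventional DMN log-likelihood can be written entirely in log-gamma functions. This is a sketch of the standard formulation (in the concentration parameterization α), not the authors' new stabilized method, and is itself numerically delicate as the concentrations grow large:

```python
import numpy as np
from scipy.special import gammaln

def dmn_loglik(x, alpha):
    """Dirichlet-multinomial log-likelihood via log-gamma functions.

    x: nonnegative integer counts per category; alpha: positive
    concentration parameters (overdispersion grows as sum(alpha) shrinks)."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = x.sum(), alpha.sum()
    coef = gammaln(n + 1.0) - gammaln(x + 1.0).sum()   # multinomial coefficient
    return (coef + gammaln(a0) - gammaln(n + a0)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())
```

For example, with alpha = (1, 1) the DMN is uniform over the three count vectors of total 2, so each has probability 1/3.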
Multi-scale clustering of functional data with application to hydraulic gradients in wetlands
Greenwood, Mark C.; Sojda, Richard S.; Sharp, Julia L.; Peck, Rory G.; Rosenberry, Donald O.
2011-01-01
A new set of methods is developed to perform cluster analysis of functions, motivated by a data set consisting of hydraulic gradients at several locations distributed across a wetland complex. The methods build on previous work on the clustering of functions, such as Tarpey and Kinateder (2003) and Hitchcock et al. (2007), but explore functions generated from an additive model decomposition (Wood, 2006) of the original time series. Our decomposition targets two aspects of the series, using an adaptive smoother for the trend and a circular spline for the diurnal variation in the series. Different measures for comparing locations are discussed, including a method for efficiently clustering time series of different lengths using a functional data approach. The complicated nature of these wetlands is highlighted by the shifting group memberships depending on which scale of variation and year of the study are considered.
Meng, Qiang; Weng, Jinxian
2013-01-01
Taking into account the uncertainty caused by exogenous factors, the accident notification time (ANT) and emergency medical service (EMS) response time were modeled as two random variables following the lognormal distribution. Their mean values and standard deviations were respectively formulated as functions of environmental variables including crash time, road type, weekend, holiday, light condition, weather, and work zone type. Work zone traffic accident data from the Fatality Analysis Reporting System between 2002 and 2009 were utilized to determine the distributions of the ANT and the EMS arrival time in the United States. A mixed logistic regression model, taking into account the uncertainty associated with the ANT and the EMS response time, was developed to estimate the risk of death. The results showed that the uncertainty of the ANT was primarily influenced by crash time and road type, whereas the uncertainty of the EMS response time is greatly affected by road type, weather, and light conditions. In addition, work zone accidents occurring during a holiday and in poor light conditions were found to be statistically associated with a longer mean ANT and a longer EMS response time. The results also show that shortening the ANT was a more effective approach to reducing the risk of death than shortening the EMS response time in work zones. To shorten the ANT and the EMS response time, work zone activities are suggested to be undertaken during non-holidays, during the daytime, and in good weather and light conditions.
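Because the paper parameterizes the lognormal by its arithmetic mean and standard deviation, a conversion to the log-scale parameters (mu, sigma) is needed before sampling. A minimal sketch with illustrative values; in the paper the mean and sd are themselves functions of the covariates listed above:

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_time(mean, sd, size, rng=rng):
    """Draw times whose arithmetic mean and standard deviation are given,
    by converting to the lognormal's log-scale parameters:
    sigma^2 = ln(1 + (sd/mean)^2), mu = ln(mean) - sigma^2 / 2."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# e.g. accident notification times with a 10-minute mean and 6-minute sd
ant = lognormal_time(10.0, 6.0, 100_000)
```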
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
Izacard, Olivier
2016-08-02
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are evaluated numerically by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms, removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET.
As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
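To illustrate the first of the three representations, the sketch below evaluates a generic one-dimensional Kappa-type shape and checks numerically that it approaches a Maxwellian as κ grows large. The functional form is a common textbook variant, not necessarily the exact parameterization used in this work, and the normalization is done numerically.

```python
import math

def normalized_on_grid(shape, vmax=40.0, n=80001):
    """Normalize a velocity-space shape function on [-vmax, vmax] (trapezoid rule)."""
    dv = 2.0 * vmax / (n - 1)
    grid = [-vmax + i * dv for i in range(n)]
    vals = [shape(v) for v in grid]
    area = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * dv
    return [x / area for x in vals]

theta, kappa = 1.0, 200.0
# Kappa-type shape (illustrative variant) and the Maxwellian shape
f_kappa = normalized_on_grid(
    lambda v: (1.0 + v * v / (kappa * theta ** 2)) ** (-(kappa + 1.0)))
f_maxw = normalized_on_grid(lambda v: math.exp(-(v / theta) ** 2))

# for large kappa the two normalized curves nearly coincide
max_diff = max(abs(a - b) for a, b in zip(f_kappa, f_maxw))
```

For moderate κ the power-law tails carry the super-thermal particles discussed above; the Maxwellian limit is recovered because (1 + x/κ)^(-(κ+1)) → e^(-x) as κ → ∞.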
Prime Contract Awards by State, Fiscal Year 1988
1988-01-01
information on Department of Defense procurement actions. Data are broken down into three categories: 1) Total U.S. Actions, 2) Actions Not Distributed...and civil functions (i.e., U.S. Army Corps of Engineers--Civil Works) procurements are represented. The data are presented by state and include the...also provide, for comparison purposes, data for the fourth quarters of FY 1988 and FY 1987. The total net value of military procurement actions for
2017-12-01
inefficiencies of a more complex system. Additional time may also be due to the longer distances traveled. The fulfillment time for a requisition to...advanced manufacturing methods with additive manufacturing. This work decomposes the additive manufacturing processes into 11 primary functions. The time
2013-04-30
Comparison of BRDF-Predicted and Observed Light Curves of GEO Satellites
2015-10-18
to validate the BRDF models. 7. ACKNOWLEDGEMENTS This work was partially funded by a Phase II SBIR (FA9453-14-C-029) from the AFRL Space...Bidirectional Reflectance Distribution Function (BRDF) models. These BRDF models have generally come from researchers in computer graphics and machine...characterization, there is a lack of research on the validation of BRDFs with regards to real data. In this paper, we compared telescope data provided by the
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Johannes; Merle, Alexander; Totzauer, Maximilian
We investigate the early Universe production of sterile neutrino Dark Matter by the decays of singlet scalars. All previous studies applied simplifying assumptions and/or studied the process only on the level of number densities, which makes it impossible to give statements about cosmic structure formation. We overcome these issues by dropping all simplifying assumptions (except for one we showed earlier to work perfectly) and by computing the full course of Dark Matter production on the level of non-thermal momentum distribution functions. We are thus in the position to study a broad range of aspects of the resulting settings and apply a broad set of bounds in a reliable manner. We have a particular focus on how to incorporate bounds from structure formation on the level of the linear power spectrum, since the simplistic estimate using the free-streaming horizon clearly fails for highly non-thermal distributions. Our work comprises the most detailed and comprehensive study of sterile neutrino Dark Matter production by scalar decays presented so far.
Extension of the ratio method to low energy
Colomer, Frederic; Capel, Pierre; Nunes, F. M.; ...
2016-05-25
The ratio method has been proposed as a means to remove the reaction model dependence in the study of halo nuclei. Originally, it was developed for higher energies but given the potential interest in applying the method at lower energy, in this work we explore its validity at 20 MeV/nucleon. The ratio method takes the ratio of the breakup angular distribution and the summed angular distribution (which includes elastic, inelastic and breakup) and uses this observable to constrain the features of the original halo wave function. In this work we use the Continuum Discretized Coupled Channel method and the Coulomb-corrected Dynamical Eikonal Approximation for the study. We study the reactions of 11Be on 12C, 40Ca and 208Pb at 20 MeV/nucleon. We compare the various theoretical descriptions and explore the dependence of our result on the core-target interaction. Lastly, our study demonstrates that the ratio method is valid at these lower beam energies.
NASA Astrophysics Data System (ADS)
Jin, Daeseong; Kim, Hackjin
2018-03-01
We have investigated the agglomeration of magnetite nanoparticles in aqueous solution under a magnetic field by measuring the temporal change of the magnetic weight. The magnetic weight corresponds to the force due to the magnetization of magnetic materials. Superparamagnetic magnetite nanoparticles were synthesized and used in this work. When the aqueous solution of magnetite nanoparticles is placed under a magnetic field, the magnetic weight of the sample jumps instantaneously via the Néel and Brown mechanisms and thereafter increases steadily, following a stretched exponential function, as the nanoparticles agglomerate; this behavior results from the distribution of energy barriers involved in the dynamics. Thermal motions of nanoparticles in the agglomerate perturb its ordered structure and reduce the magnetic weight. Fluctuation of the structural order of the agglomerate with temperature change is much faster than the formation of the agglomerate and is explained well by the Boltzmann distribution, which suggests that the magnetic weight of the agglomerate works as a magnetic thermometer.
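The stretched-exponential growth described above can be sketched as follows; the jump amplitude, time constant, and stretching exponent are illustrative choices, not values fitted to the measurements.

```python
import math

def magnetic_weight(t, m_jump=2.0, dm=1.0, tau=120.0, beta=0.5):
    """Instantaneous jump plus stretched-exponential agglomeration growth.

    m_jump : immediate Neel/Brown contribution (arbitrary units, illustrative)
    dm     : slow agglomeration contribution
    tau    : characteristic time; beta: stretching exponent (0 < beta < 1)
    """
    return m_jump + dm * (1.0 - math.exp(-((t / tau) ** beta)))

times = list(range(0, 2001, 10))
weights = [magnetic_weight(t) for t in times]
```

A stretching exponent below one is the usual signature of a broad distribution of energy barriers: the curve can be read as a superposition of simple exponentials with a spread of relaxation times.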
Work statistics of charged noninteracting fermions in slowly changing magnetic fields.
Yi, Juyeon; Talkner, Peter
2011-04-01
We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β². At low temperatures the pdf becomes strongly peaked at the center with a variance that still increases linearly with N but decreases exponentially with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality, such as the low-probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes. ©2011 American Physical Society
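The high-temperature picture can be illustrated by sampling: a single-particle work drawn from an asymmetric Laplace distribution (constructed here as a difference of two exponentials with unequal, purely illustrative rates) sums over N particles toward a Gaussian with variance proportional to N.

```python
import random

rng = random.Random(7)

def single_particle_work(beta=0.2):
    """Asymmetric-Laplace-distributed work value, built as a difference of two
    exponentials with unequal rates (the rates are illustrative stand-ins)."""
    return rng.expovariate(beta) - rng.expovariate(2.0 * beta)

n_particles, n_samples = 50, 20000
samples = [sum(single_particle_work() for _ in range(n_particles))
           for _ in range(n_samples)]
mean = sum(samples) / n_samples
var = sum((w - mean) ** 2 for w in samples) / n_samples
skew = sum((w - mean) ** 3 for w in samples) / n_samples / var ** 1.5
# the variance grows linearly with N while the skewness of the sum shrinks,
# consistent with the approach to a Gaussian for many particles
```

Each exponential has variance 1/rate², so the per-particle variance is 1/0.2² + 1/0.4² = 31.25, and the N-particle variance is 50 times that; the residual skewness of order N^(-1/2) is what distinguishes the finite-N pdf from its Gaussian limit.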
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
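A minimal simulation can clarify the Brown-Proschan repair mechanism referenced above: each failure is repaired perfectly with probability p (the age resets to zero) and minimally otherwise (the unit keeps its age). The Weibull parameters and repair probability below are illustrative, not values from the diesel-generator or pump data.

```python
import math
import random

rng = random.Random(1)

def next_failure_age(age, shape=2.0, scale=100.0):
    """Next Weibull failure age conditioned on survival to `age`
    (inverse-transform sampling; shape > 1 gives an increasing failure rate)."""
    u = 1.0 - rng.random()                      # u in (0, 1]
    return scale * ((age / scale) ** shape - math.log(u)) ** (1.0 / shape)

def brown_proschan(p_perfect=0.3, horizon=1000.0):
    """Failure epochs: perfect repair with prob. p_perfect, else minimal repair."""
    t, age, failures = 0.0, 0.0, []
    while True:
        new_age = next_failure_age(age)
        t += new_age - age                      # time elapsed since last failure
        if t > horizon:
            return failures
        failures.append(t)
        age = 0.0 if rng.random() < p_perfect else new_age

events = brown_proschan()
```

With p = 1 this collapses to an ordinary renewal process, and with p = 0 to pure minimal repair (a nonhomogeneous Poisson process), the two limits the imperfect repair model interpolates between.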
The electronic and optical properties of Cs adsorbed GaAs nanowires via first-principles study
NASA Astrophysics Data System (ADS)
Diao, Yu; Liu, Lei; Xia, Sihao; Feng, Shu; Lu, Feifei
2018-07-01
In this study, we investigate the Cs adsorption mechanism on the (110) surface of zinc-blende GaAs nanowires. The adsorption energy, work function, dipole moment, geometric structure, Mulliken charge distribution, charge transfer index, band structure, density of states and optical properties of the Cs adsorption structures are calculated using a first-principles method based on density functional theory. Total-energy calculations show that all the adsorption energies are negative, indicating that the Cs adsorption process is exothermic and that Cs-covered GaAs nanowires are stable. The work function of the nanowire surface decreases markedly after Cs adsorption, and the ionization of the nanowire surface is enhanced as well. More importantly, Cs adsorption shifts the bands near the Fermi level to lower energies, and the corresponding band gap disappears. Additionally, the absorption peak and energy loss function after Cs adsorption are far higher than those before adsorption, implying a better light absorption characteristic of the nanowire surface after Cs adsorption. These theoretical calculations can directly guide the Cs activation experiment for negative-electron-affinity GaAs nanowires, and also lay a foundation for the further study of Cs/O co-adsorption on the nanowire surface.
Vaidya, Manushka V; Collins, Christopher M; Sodickson, Daniel K; Brown, Ryan; Wiggins, Graham C; Lattanzi, Riccardo
2016-02-01
In high field MRI, the spatial distribution of the radiofrequency magnetic (B1) field is usually affected by the presence of the sample. For hardware design, and to aid interpretation of experimental results, it is important both to anticipate and to accurately simulate the behavior of these fields. Fields generated by a radiofrequency surface coil were simulated using dyadic Green's functions, or experimentally measured over a range of frequencies inside an object whose electrical properties were varied, to illustrate a variety of transmit [Formula: see text] and receive [Formula: see text] field patterns. In this work, we examine how changes in polarization of the field and interference of propagating waves in an object can affect the B1 spatial distribution. Results are explained conceptually using Maxwell's equations and intuitive illustrations. We demonstrate that the electrical conductivity alters the spatial distribution of distinct polarized components of the field, causing "twisted" transmit and receive field patterns, and asymmetries between [Formula: see text] and [Formula: see text]. Additionally, interference patterns due to wavelength effects are observed at high field in samples with high relative permittivity and near-zero conductivity, but are not present in lossy samples due to the attenuation of propagating EM fields. This work provides a conceptual framework for understanding B1 spatial distributions for surface coils and can provide guidance for RF engineers.
NASA Astrophysics Data System (ADS)
Hong, D. H.; Park, J. K.
2018-04-01
The purpose of the present work was to verify the grain size distribution (GSD) method, which was recently proposed by one of the present authors as a method for evaluating the fraction of dynamic recrystallisation (DRX) in a microalloyed medium carbon steel. To verify the GSD method, we have selected a 304 stainless steel as a model system and have measured the evolution of the overall grain size distribution (including both the recrystallised and unrecrystallised grains) during hot compression at 1,000 °C in a Gleeble machine; the DRX fraction estimated using the GSD method is compared with the experimentally measured value via EBSD. The results show that the previous GSD method tends to overestimate the DRX fraction due to the utilisation of a plain lognormal distribution function (LDF). To overcome this shortcoming, we propose a modified GSD method wherein an area-weighted LDF, in place of a plain LDF, is employed to model the evolution of GSD during hot deformation. Direct measurement of the DRX fraction using EBSD confirms that the modified GSD method provides a reliable method for evaluating the DRX fraction from the experimentally measured GSDs. Reasonable agreement between the DRX fraction and softening fraction suggests that the Kocks-Mecking method utilising the Voce equation can be satisfactorily used to model the work hardening and dynamic recovery behaviour of steels during hot deformation.
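The area-weighted lognormal has a convenient closed form worth noting: weighting an LN(μ, σ²) density by d² yields another lognormal, LN(μ + 2σ², σ²), with the same σ. A numerical check of this identity (with illustrative parameters, not the paper's fitted GSDs):

```python
import math

def lognormal_pdf(d, mu, sigma):
    return (math.exp(-((math.log(d) - mu) ** 2) / (2.0 * sigma ** 2))
            / (d * sigma * math.sqrt(2.0 * math.pi)))

mu, sigma = 2.0, 0.4                             # illustrative log-scale parameters
dd = 0.01
grid = [dd * i for i in range(1, 100001)]        # grain sizes 0.01 .. 1000
plain = [lognormal_pdf(d, mu, sigma) for d in grid]

# area-weight by d**2 and renormalize numerically
weighted = [d * d * p for d, p in zip(grid, plain)]
norm = sum(weighted) * dd                        # Riemann estimate of E[D**2]
area_weighted = [w / norm for w in weighted]

# closed form: LN(mu + 2*sigma**2, sigma) -- same sigma, shifted mu
shifted = [lognormal_pdf(d, mu + 2.0 * sigma ** 2, sigma) for d in grid]
max_diff = max(abs(a - b) for a, b in zip(area_weighted, shifted))
```

The shift μ → μ + 2σ² explains why an area-weighted LDF emphasizes the large grains that dominate EBSD maps, which is the behavior the modified GSD method exploits.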
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Unlike existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. To obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double Mach reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
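For context, the sketch below shows how a density distribution function recovers the macroscopic moments that the continuity and momentum equations evolve. It uses the standard incompressible D2Q9 BGK equilibrium as a stand-in, not the paper's multispeed compressible lattice or its coupled energy distribution.

```python
# D2Q9 lattice weights and discrete velocities (standard, not the paper's lattice)
W = [4/9] + [1/9] * 4 + [1/36] * 4
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium on D2Q9 (lattice sound speed cs^2 = 1/3)."""
    usq = ux * ux + uy * uy
    feq = []
    for w, (cx, cy) in zip(W, C):
        cu = cx * ux + cy * uy
        feq.append(w * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

rho, ux, uy = 1.2, 0.05, -0.02
feq = equilibrium(rho, ux, uy)
density = sum(feq)                               # zeroth moment -> rho
mom_x = sum(f * c[0] for f, c in zip(feq, C))    # first moment -> rho * ux
mom_y = sum(f * c[1] for f, c in zip(feq, C))
```

In the double-distribution-function approach, a second population evolved the same way carries the energy moment, with the coupling to the density population entering through the equation of state as described above.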
NASA Technical Reports Server (NTRS)
Zhao, W.; Newman, J. C., Jr.; Sutton, M. A.; Shivakumar, K. N.; Wu, X. R.
1995-01-01
In parallel with the work in Part 1, stress intensity factors for semi-elliptical surface cracks emanating from a circular hole are determined. The 3-D weight function method, with the 3-D finite element solutions for the uncracked stress distribution as in Part 1, is used for the analysis. Two different loading conditions, i.e., remote tension and wedge loading, are considered for a wide range of geometrical parameters. Both single and double surface cracks are studied and compared with other solutions available in the literature. Typical crack opening displacements are also provided.
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
Stochastic description of geometric phase for polarized waves in random media
NASA Astrophysics Data System (ADS)
Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent
2013-01-01
We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.
Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures
NASA Technical Reports Server (NTRS)
James, Benjamin Wylie
1935-01-01
This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to accurately analyze complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted, so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju
2015-11-15
We have developed a self-consistent 1d3v (one dimension in space, three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed in order to calculate along them the ion distribution function, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used the model to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous works from other models, and hence we expect our 1d3v KTS model to provide a basis for the study of all types of magnetized plasmas, yielding more accurate results.
Wein, Lawrence M; Craft, David L
2005-01-01
To aid in understanding how best to respond to a bioterror anthrax attack, we analyze a system of differential equations that includes a disease progression model, a set of spatially distributed queues for distributing antibiotics, and vaccination (pre-event and/or post-event). We derive approximate expressions for the number of casualties as a function of key parameters and management levers, including the time at which the attack is detected, the number of days to distribute antibiotics, the adherence to prophylactic antibiotics, and the fraction of the population that is preimmunized. We compare a variety of public health intervention policies in the event of a hypothetical anthrax attack in a large metropolitan area. Modeling assumptions were decided by the Anthrax Modeling Working Group of the Secretary's Council on Public Health Preparedness. Our results highlight the primary importance of rapid antibiotic distribution and lead us to argue for ensuring post-attack surge capacity to rapidly produce enough anthrax vaccine for an additional 100 million people.
MRI contrast agent concentration and tumor interstitial fluid pressure.
Liu, L J; Schlesinger, M
2016-10-07
The present work describes the relationship between tumor interstitial fluid pressure (TIFP) and the concentration of contrast agent for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We predict the spatial distribution of TIFP based on that of the contrast agent concentration. We also discuss the cases for estimating the tumor interstitial volume fraction (the void fraction or porosity of the porous medium), v_e, and the contrast volume transfer constant, K^trans, by measuring the ratio of the contrast agent concentration in tissue to that in plasma. A linear fluid velocity distribution may reflect a quadratic TIFP distribution and lead to a practical method for TIFP estimation. To calculate TIFP, the parameters or variables should preferably be measured along the direction of the linear fluid velocity (the same direction as the gray-value distribution of the image, which is also linear). This method may simplify the calculation for estimating TIFP. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
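The link between a linear velocity distribution and a quadratic TIFP profile follows directly from a Darcy-type relation, u = -K dp/dr; a dimensionless numerical check with illustrative constants (not values from the paper):

```python
# Darcy-type law: u = -K * dp/dr.  A linear velocity profile u(r) = a*r
# therefore implies a quadratic pressure profile p(r) = p0 - a*r**2 / (2*K).
K, a, p0 = 1.0, 2.0, 10.0      # dimensionless illustrative constants

def velocity(r):
    return a * r

def tifp_analytic(r):
    return p0 - a * r * r / (2.0 * K)

# reconstruct p(r) by integrating dp/dr = -u/K with the midpoint rule
n, r_max = 10000, 1.0
dr = r_max / n
p_num = p0
for i in range(n):
    mid = (i + 0.5) * dr
    p_num -= velocity(mid) / K * dr

error = abs(p_num - tifp_analytic(r_max))
```

Because the integrand is linear, the midpoint reconstruction matches the quadratic profile to machine precision; in practice the quadratic form is what lets a measured linear velocity (or gray-value) gradient be converted into a TIFP estimate.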
Military Health Service System Ambulatory Work Unit (AWU).
1988-04-01
Fragment of the report's exhibit list (pages E-40 to E-42): Ambulatory Work Unit distribution screen passes and failures, with descriptive statistics, for the Neurosurgery (BBC) and Ophthalmology (BBD) clinics.
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gherghel-Lascu, A.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2015-05-01
The KASCADE-Grande large-area (128 m²) Muon Tracking Detector has been built with the aim of identifying muons (Eμ^thr = 800 MeV) in Extensive Air Showers by track measurements under 18 r.l. shielding. This detector provides high-accuracy angular information (approx. 0.3°) for muons up to 700 m distance from the shower core. In this work we present the lateral density distributions of muons in EAS measured with the Muon Tracking Detector of the KASCADE-Grande experiment. The density is calculated by counting muon tracks in a muon-to-shower-axis distance range from 100 m to 610 m, for showers with reconstructed energy of 10^16-10^17 eV and zenith angle θ < 18°. In the distance range covered by the experiment, these distributions are well described by functions determined phenomenologically by Greisen already in the 1950s. They are also compared with the distributions obtained with the KASCADE scintillator array (Eμ^thr = 230 MeV) and with distributions obtained using simulated showers.
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
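The fitting-as-thermodynamics picture can be sketched on a toy problem: treat the chi-square misfit as an energy and let a partition function over a parameter grid convert misfit into a normalized probability. All numbers below are illustrative and not drawn from this paper.

```python
import math

# Toy data on a known line y = 2x; the chi-square misfit plays the role of an
# energy, and the partition function Z normalizes Boltzmann weights over a
# one-dimensional parameter grid.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.0, 4.0, 6.0]

def chi_square(slope):
    return sum((y - slope * x) ** 2 for x, y in zip(xs, ys))

slopes = [1.0 + 0.01 * i for i in range(201)]      # grid from 1.0 to 3.0
beta = 5.0                                          # inverse "temperature"
weights = [math.exp(-beta * chi_square(s)) for s in slopes]
Z = sum(weights)                                    # partition function
probs = [w / Z for w in weights]
best = slopes[probs.index(max(probs))]              # maximum-probability slope
```

Lowering the "temperature" (raising beta) sharpens the distribution around the minimum-misfit parameter, which is the free-energy-minimization reading of least-squares fitting described above.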
The effects of working memory training on functional brain network efficiency.
Langer, Nicolas; von Bastian, Claudia C; Wirz, Helen; Oberauer, Klaus; Jäncke, Lutz
2013-10-01
The human brain is a highly interconnected network. Recent studies have shown that the functional and anatomical features of this network are organized in an efficient small-world manner that confers high efficiency of information processing at relatively low connection cost. However, it has been unclear how the architecture of functional brain networks is related to performance in working memory (WM) tasks and if these networks can be modified by WM training. Therefore, we conducted a double-blind training study enrolling 66 young adults. Half of the subjects practiced three WM tasks and were compared to an active control group practicing three tasks with low WM demand. High-density resting-state electroencephalography (EEG) was recorded before and after training to analyze graph-theoretical functional network characteristics at an intracortical level. WM performance was uniquely correlated with power in the theta frequency, and theta power was increased by WM training. Moreover, the better a person's WM performance, the more their network exhibited small-world topology. WM training shifted network characteristics in the direction of high performers, showing increased small-worldness within a distributed fronto-parietal network. Taken together, this is the first longitudinal study that provides evidence for the plasticity of the functional brain network underlying WM. Copyright © 2013 Elsevier Ltd. All rights reserved.
Abebe, Workineh; Collar, Concha; Ronda, Felicidad
2015-01-22
Tef grain is becoming very attractive in Western countries since it is a gluten-free grain with appreciated nutritional advantages. However, there is little information on its functional properties and starch digestibility, or on how they are affected by variety and particle size distribution. This work evaluates the effect of the grain variety and the mill used on the physico-chemical and functional properties of tef flour, mainly those derived from starch behavior. In vitro starch digestibility of the flours was assessed by the Englyst method. Two types of mills were used to obtain whole flours of different granulation. Rice and wheat flours were analyzed as references. Protein molecular weight distribution and flour structure (by SEM) were also analyzed to justify some of the differences found among the cereals studied. Tef cultivar and mill type had an important effect on granulation, bulking density and starch damage, affecting the processing performance of the flours and determining their hydration and pasting properties. The colour was darker, although one of the white varieties had a lightness near that of the reference flours. Different granulations of tef flour induced different in vitro starch digestibility: the disc attrition mill led to a higher starch digestibility rate index and more rapidly available glucose, probably as a consequence of a higher damaged-starch content. The results confirm the adequacy of tef flour as an ingredient in the formulation of new cereal-based foods and the importance of the variety and the mill for its functional properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
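The classical parametric approach that the abstract uses as its baseline can be sketched in a few lines: fit a GEV distribution to a sample of maxima by maximum likelihood, then read the T-year return level off the fitted quantile function. A minimal illustration with `scipy` on synthetic maxima (standing in for the AURN station data, which is not reproduced here); note that SciPy's shape parameter `c` is the negative of the usual GEV shape ξ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic annual maxima standing in for station-wise ozone maxima
maxima = stats.genextreme.rvs(c=-0.1, loc=80.0, scale=10.0,
                              size=200, random_state=rng)

# Fit the GEV by maximum likelihood
c, loc, scale = stats.genextreme.fit(maxima)

# T-year return level: the (1 - 1/T) quantile of the fitted GEV
T = 10
return_level = stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(return_level)
```

The nonparametric alternatives proposed in the paper replace this fitted quantile with kernel-type estimators of the distribution and quantile functions, avoiding the GEV shape assumption.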
Nucleon localization and fragment formation in nuclear fission
Zhang, C. L.; Schuetrumpf, B.; Nazarewicz, W.
2016-12-27
An electron localization measure was originally introduced to characterize chemical bond structures in molecules. Recently, a nucleon localization based on Hartree-Fock densities has been introduced to investigate α-cluster structures in light nuclei. Compared to the local nucleonic densities, the nucleon localization function has been shown to be an excellent indicator of shell effects and cluster correlations. In this work, using the spatial nucleon localization measure, we investigated the emergence of fragments in fissioning heavy nuclei using the self-consistent energy density functional method with a quantified energy density functional optimized for fission studies. We studied the particle densities and spatial nucleon localization distributions along the fission pathways of 264Fm, 232Th, and 240Pu. We demonstrated that the fission fragments were formed fairly early in the evolution, well before scission. To illustrate the usefulness of the localization measure, we showed how the hyperdeformed state of 232Th could be understood in terms of a quasimolecular state made of 132Sn and 100Zr fragments. Compared to nucleonic distributions, the nucleon localization function more effectively quantifies nucleonic clustering: its characteristic oscillating pattern, traced back to shell effects, is a clear fingerprint of cluster/fragment configurations. This is of particular interest for studies of fragment formation and fragment identification in fissioning nuclei.
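The abstract does not spell out the localization measure; the form commonly used in the nucleon-localization literature, adapted from the Becke-Edgecombe electron localization function, is (with q labeling protons/neutrons and σ the spin)

```latex
\mathcal{C}_{q\sigma}(\mathbf{r})
  = \left[\, 1 + \left(
      \frac{\tau_{q\sigma}\rho_{q\sigma}
            - \tfrac{1}{4}\left|\nabla\rho_{q\sigma}\right|^{2}
            - \mathbf{j}_{q\sigma}^{2}}
           {\rho_{q\sigma}\,\tau^{\mathrm{TF}}_{q\sigma}}
    \right)^{2} \right]^{-1},
\qquad
\tau^{\mathrm{TF}}_{q\sigma}
  = \tfrac{3}{5}\,(6\pi^{2})^{2/3}\,\rho_{q\sigma}^{5/3},
```

where ρ, τ and **j** are the local particle, kinetic and current densities, and τ^TF is the Thomas-Fermi kinetic density used as the homogeneous-matter reference. Values of 𝒞 near 1 signal strong localization (a single nucleon of the given type dominating locally), which is what makes the oscillating pattern mentioned above a fingerprint of cluster and fragment configurations.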
DOE Office of Scientific and Technical Information (OSTI.GOV)
Self-consistent Langmuir waves in resonantly driven thermal plasmas
NASA Astrophysics Data System (ADS)
Lindberg, R. R.; Charman, A. E.; Wurtele, J. S.
2007-12-01
The longitudinal dynamics of a resonantly driven Langmuir wave are analyzed in the limit that the growth of the electrostatic wave is slow compared to the bounce frequency. Using simple physical arguments, the nonlinear distribution function is shown to be nearly invariant in the canonical particle action, provided both a spatially uniform term and higher-order spatial harmonics are included along with the fundamental in the longitudinal electric field. Requirements of self-consistency with the electrostatic potential yield the basic properties of the nonlinear distribution function, including a frequency shift that agrees closely with driven, electrostatic particle simulations over a range of temperatures. This extends earlier work on nonlinear Langmuir waves by Morales and O'Neil [G. J. Morales and T. M. O'Neil, Phys. Rev. Lett. 28, 417 (1972)] and Dewar [R. L. Dewar, Phys. Fluids 15, 712 (1972)], and could form the basis of a reduced kinetic treatment of plasma dynamics for accelerator applications or Raman backscatter.
One-loop gravitational wave spectrum in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Fröb, Markus B.; Roura, Albert; Verdaguer, Enric
2012-08-01
The two-point function for tensor metric perturbations around de Sitter spacetime including one-loop corrections from massless conformally coupled scalar fields is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this invariance. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is strictly speaking ill-defined when loop corrections are included.
Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes
NASA Astrophysics Data System (ADS)
González Arenas, Zochil; Barci, Daniel G.
2012-12-01
Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.
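The prescription dependence the abstract refers to can be made concrete with a toy SDE. For dx = (μ − x)dt + σx dW, the Itô and Stratonovich interpretations differ by the spurious drift (1/2)g g′ = σ²x/2, so the stationary means differ: E[x] = μ under Itô versus μ/(1 − σ²/2) under Stratonovich. A minimal numerical illustration (this example is my own, not from the paper, which works in a functional Grassmann formalism rather than by simulation):

```python
import numpy as np

# Toy multiplicative-noise SDE: dx = (mu - x) dt + sigma * x dW.
# Stratonovich is simulated as Ito plus the spurious drift sigma^2 x / 2.
mu, sigma, dt, n_steps, n_traj = 1.0, 0.5, 0.01, 2000, 4000
rng = np.random.default_rng(42)

def stationary_mean(stratonovich):
    """Euler-Maruyama ensemble; returns the mean of the final-time slice."""
    x = np.full(n_traj, mu)
    for _ in range(n_steps):
        drift = mu - x
        if stratonovich:
            drift = drift + 0.5 * sigma**2 * x  # prescription-dependent term
        x = x + drift * dt + sigma * x * np.sqrt(dt) * rng.standard_normal(n_traj)
    return x.mean()

ito_mean = stationary_mean(False)      # should be close to mu = 1.0
strat_mean = stationary_mean(True)     # should be close to mu/(1 - sigma^2/2)
print(ito_mean, strat_mean)
```

The point of the paper is precisely that, once the time-reversal transformation and equilibrium distribution are defined consistently with the chosen prescription, both cases satisfy the usual equilibrium properties despite these different stationary states.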
Effect of Ponderomotive Terms on Heat Flux in Laser-Produced Plasmas
NASA Astrophysics Data System (ADS)
Li, G.
2005-10-01
A laser electromagnetic field introduces ponderomotive terms [V. N. Goncharov and G. Li, Phys. Plasmas 11, 5680 (2004)] in the heat flux in a plasma. To account for the nonlocal effects in the ponderomotive terms, first, the kinetic equation coupled with the Maxwell equations is numerically solved for the isotropic part of the electron distribution function. Such an equation includes self-consistent electromagnetic fields and laser absorption through the inverse bremsstrahlung. Then, the anisotropic part is found by solving a simplified Fokker--Planck equation. Using the distribution function, the electric current and heat flux are obtained and substituted into the hydrocode LILAC to simulate ICF implosions. The simulation results are compared against the existing nonlocal electron conduction models [G. P. Schurtz, P. D. Nicolaï, and M. Busquet, Phys. Plasmas 9, 4238 (2000)] and Fokker--Planck simulations. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-92SF19460.
RASMUSSEN, ANDREW; ANNAN, JEANNIE
2013-01-01
The research on the determinants of mental health among refugees has been largely limited to traumatic events, but recent work has indicated that the daily hassles of living in refugee camps also play a large role. Using hierarchical linear modelling to account for refugees nested within camp blocks, this exploratory study attempted to model stress surrounding safety and acquiring basic needs and functional impairment among refugees from Darfur living in Chad, using individual-level demographics (e.g., gender, age, presence of a debilitating injury), structural factors (e.g., distance from block to distribution centre), and social ecological variables (e.g., percentage of single women within a block). We found that stress about safety, daily hassles, and functional impairment were associated with several individual-level demographic factors (e.g., gender), but also with interactions between block-level and individual-level factors (e.g., injury and distance to distribution centre). Findings are discussed in terms of monitoring and evaluation of refugee services. PMID:23626407
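The modelling strategy described above, a hierarchical (mixed-effects) model with individuals nested in camp blocks and a cross-level interaction, can be sketched with `statsmodels`. The data below are synthetic stand-ins (all variable names and effect sizes are invented for illustration; the actual Darfur data are not public here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_blocks, per_block = 30, 20

# Block-level predictor: distance from block to distribution centre (km)
distance = np.repeat(rng.uniform(0.1, 2.0, n_blocks), per_block)
# Individual-level predictor: presence of a debilitating injury (0/1)
injury = rng.integers(0, 2, n_blocks * per_block)
block = np.repeat(np.arange(n_blocks), per_block)

# Generate stress with a block random intercept and a cross-level
# interaction (injury x distance), mimicking the reported finding
block_effect = np.repeat(rng.normal(0, 0.5, n_blocks), per_block)
stress = (1.0 + 0.8 * distance + 0.5 * injury + 1.0 * injury * distance
          + block_effect + rng.normal(0, 1.0, n_blocks * per_block))

df = pd.DataFrame({"stress": stress, "distance": distance,
                   "injury": injury, "block": block})

# Random intercept for camp block; fixed effects with the interaction
model = smf.mixedlm("stress ~ injury * distance", df, groups=df["block"])
result = model.fit()
print(result.params)
```

The key quantity is the `injury:distance` coefficient, which plays the role of the block-by-individual interaction reported in the abstract.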