Sample records for distribution function defined

  1. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1, ..., k-1} and hence consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M{w : X(w) < x}. Utilizing the fixed-point (denotational) semantics, extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much simpler method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
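
    A toy illustration, not from the paper: a two-state probabilistic automaton over {0, 1} with invented transition probabilities, whose emitted digits are read as a radix-2 expansion so that the distribution function F(x) = Prob_M{w : X(w) < x} can be estimated empirically.

```python
# Hedged sketch: the automaton, its transition probabilities, and the
# sample sizes are all illustrative, not taken from the paper.
import bisect
import random

def sample_word(n_digits=30):
    """Emit digits from a toy 2-state probabilistic automaton."""
    state, digits = 0, []
    for _ in range(n_digits):
        if state == 0:
            d = 0 if random.random() < 0.7 else 1  # state 0 favors digit 0
        else:
            d = 0 if random.random() < 0.3 else 1  # state 1 favors digit 1
        digits.append(d)
        state = d                                  # next state follows the digit
    return digits

def x_of_w(digits):
    """Read the word as a radix-2 expansion of a number in [0, 1]."""
    return sum(d * 2.0 ** -(i + 1) for i, d in enumerate(digits))

samples = sorted(x_of_w(sample_word()) for _ in range(100_000))

def F(x):
    """Empirical distribution function F(x) = Prob{X(w) < x}."""
    return bisect.bisect_left(samples, x) / len(samples)

print(F(0.25), F(0.5), F(0.75))
```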

  2. Renormalizability of quasiparton distribution functions

    DOE PAGES

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...

    2017-11-21

    Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  3. Renormalizability of quasiparton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei

    Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  4. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, such as thermomechanical loads, material properties, and failure theories, as well as geometric variables such as the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
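
    A minimal Monte Carlo sketch of the idea described above, not the SDO code itself: primitive variables drawn from normal distributions with given means and standard deviations, and a failure-mode response expressed as a reliability (all values illustrative).

```python
# Illustrative values only: load, strength, and thickness are treated as
# random primitive variables; the response is a failure-mode probability.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
load      = rng.normal(100.0, 15.0, n)   # thermomechanical load (arbitrary units)
strength  = rng.normal(120.0, 20.0, n)   # material strength
thickness = rng.normal(2.0, 0.1, n)      # membrane thickness

stress = load / thickness                # toy response function
p_fail = np.mean(stress > strength)      # probability of the failure mode
print(f"reliability = {1.0 - p_fail:.4f}")
```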

  5. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background: Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results: To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions: PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538
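
    The sketch below illustrates the dataflow idea described in this abstract using only the Python standard library; it is not PaPy's actual API (the components parse and transform and the helper batched are invented stand-ins).

```python
# Batched, pooled evaluation of data-coupled components as nested map
# functions -- the dataflow pattern the abstract describes, not PaPy itself.
from itertools import islice
from multiprocessing import Pool

def parse(record):
    """Stand-in component: normalize one input record."""
    return record.strip().upper()

def transform(seq):
    """Stand-in component: reverse the sequence."""
    return seq[::-1]

def batched(items, size):
    """Yield fixed-size batches; batch size tunes parallelism vs. memory."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

if __name__ == "__main__":
    records = (f"seq{i}\n" for i in range(10))
    with Pool(4) as pool:
        for batch in batched(records, 4):
            # two data-coupled stages evaluated as nested map functions
            print(pool.map(transform, pool.map(parse, batch)))
```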

  6. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  7. System approach to distributed sensor management

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid

    2010-04-01

    Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework demonstrating application-layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a system with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensor or process) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations, specifically designed to accommodate the need for standard representations of common functions while supporting the need for feature-based functions that are typically vendor-specific. The dynamic qualities of the protocol give a user GUI application the flexibility to map widget-level controls to each device based on reported capabilities in real time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network are described in this paper.
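
    A hypothetical sketch of the multi-tiered message structure described above; all class and field names are invented for illustration, not taken from the actual SMS protocol definition.

```python
# Invented field names illustrating a standard/extended/payload message tiering.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class StandardHeader:          # common functions every device understands
    device_id: str
    msg_type: str              # e.g. "JOIN", "LEAVE", "DESCRIBE", "PUBLISH"
    timestamp: float

@dataclass
class ExtendedHeader:          # feature-based, typically vendor-specific
    vendor: str = ""
    capabilities: dict = field(default_factory=dict)

@dataclass
class SensorMessage:
    standard: StandardHeader
    extended: ExtendedHeader
    payload: Any = None        # sensor data or control parameters

msg = SensorMessage(StandardHeader("eo-cam-07", "DESCRIBE", 1.0e9),
                    ExtendedHeader("acme", {"zoom": [1, 30]}))
print(msg.standard.msg_type, msg.extended.capabilities)
```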

  8. A development framework for distributed artificial intelligence

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1989-01-01

    The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides the implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.

  9. Functional Bregman Divergence and Bayesian Estimation of Distributions (Preprint)

    DTIC Science & Technology

    2008-01-01

    shows that if the set of possible minimizers A includes E_{P_F}[F], then g* = E_{P_F}[F] minimizes the expectation of any Bregman divergence. Note the theorem... probability distribution P_F defined over the set M. Let A be a set of functions that includes E_{P_F}[F] if it exists. Suppose the function g* minimizes... the expected Bregman divergence between the random function F and any function g ∈ A, such that g* = arg inf_{g ∈ A} E_{P_F}[d_φ(F, g)]. Then, if g* exists
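
    For readability, the quantities in this fragment can be written out as follows. This is a hedged reconstruction using the standard functional Bregman divergence; δφ(g) denotes the Fréchet derivative of φ at g, and the notation may differ from the preprint.

```latex
% Hedged reconstruction of the fragment's quantities, not a quote of the preprint.
\[
  d_{\phi}(f,g) \;=\; \phi(f) - \phi(g) - \delta\phi(g)\,(f-g),
\]
\[
  g^{*} \;=\; \arg\inf_{g \in A} \,\mathbb{E}_{P_F}\!\bigl[\, d_{\phi}(F,g) \,\bigr]
  \;=\; \mathbb{E}_{P_F}[F]
  \qquad \text{whenever } \mathbb{E}_{P_F}[F] \in A .
\]
```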

  10. Detectability of auditory signals presented without defined observation intervals

    NASA Technical Reports Server (NTRS)

    Watson, C. S.; Nichols, T. L.

    1976-01-01

    Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.

  11. Transfer function concept for ultrasonic characterization of material microstructures

    NASA Technical Reports Server (NTRS)

    Vary, A.; Kautz, H. E.

    1986-01-01

    The approach given depends on treating material microstructures as elastomechanical filters that have analytically definable transfer functions. These transfer functions can be defined in terms of the frequency dependence of the ultrasonic attenuation coefficient. The transfer function concept provides a basis for synthesizing expressions that characterize polycrystalline materials relative to microstructural factors such as mean grain size, grain-size distribution functions, and grain boundary energy transmission. Although the approach is nonrigorous, it leads to a rational basis for combining the previously mentioned diverse and fragmented equations for ultrasonic attenuation coefficients.

  12. New Approaches to Robust Confidence Intervals for Location: A Simulation Study.

    DTIC Science & Technology

    1984-06-01

    obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel's redescending influence function or v... defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point... proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined

  13. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
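
    As a brief illustration of fitting one of the named functions: scipy's exponnorm is the exponentially modified Gaussian (ex-Gaussian). The reaction times below are synthetic, not the paper's data.

```python
# Synthetic RTs only; exponnorm has shape K = tau / sigma, loc = mu, scale = sigma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt = rng.normal(400.0, 50.0, 500) + rng.exponential(150.0, 500)  # RTs in ms

K, loc, scale = stats.exponnorm.fit(rt)
mu, sigma, tau = loc, scale, K * scale     # recover ex-Gaussian parameters
print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")
```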

  14. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  15. Determination of Distance Distribution Functions by Singlet-Singlet Energy Transfer

    PubMed Central

    Cantor, Charles R.; Pechukas, Philip

    1971-01-01

    The efficiency of energy transfer between two chromophores can be used to define an apparent donor-acceptor distance, which in flexible systems will depend on the R0 of the chromophores. If efficiency is measured as a function of R0, it will be possible to determine the actual distribution function of donor-acceptor distances. Numerical procedures are described for extracting this information from experimental data. They should be most useful for distribution functions with mean values from 20-30 Å (2-3 nm). This technique should provide considerably more detailed information on end-to-end distributions of oligomers than has hitherto been available. It should also be useful for describing, in detail, conformational flexibility in other large molecules. PMID:16591942
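
    The underlying relations are standard Förster theory (a hedged sketch; the paper's numerical inversion, which recovers the full distance distribution P(r) from efficiencies measured at several R0, goes beyond this):

```latex
% Standard Forster relations assumed here; the paper's procedure inverts the
% second relation for P(r) using measurements at several values of R_0.
\[
  E(r) \;=\; \frac{R_0^{6}}{R_0^{6} + r^{6}},
  \qquad
  \langle E \rangle \;=\; \int_{0}^{\infty} P(r)\,
  \frac{R_0^{6}}{R_0^{6} + r^{6}} \, dr .
\]
```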

  16. Charon Message-Passing Toolkit for Scientific Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Yan, Jerry (Technical Monitor)

    2000-01-01

    Charon is a library, callable from C and Fortran, that aids the conversion of structured-grid legacy codes (such as those used in the numerical computation of fluid flows) into parallel, high-performance codes. Key are functions that define distributed arrays, that map between distributed and non-distributed arrays, and that allow easy specification of common communications on structured grids. The library is based on the widely accepted MPI message passing standard. We present an overview of the functionality of Charon, and some representative results.
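
    Charon itself is callable from C and Fortran; the following Python/mpi4py fragment is only a loose analogue of the core idea, a structured-grid array distributed across ranks with a ghost-cell exchange between neighbors (assumes mpi4py is installed and the script runs under mpirun).

```python
# Loose analogue of a distributed structured-grid array, not Charon's API.
# Run with e.g.: mpirun -n 4 python ghost_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                   # interior points owned by this rank
u = np.full(n_local + 2, float(rank))         # +2 ghost cells at the ends
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# swap boundary values with neighbors -- the mapping a library like Charon automates
comm.Sendrecv(sendbuf=u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[:1],  source=left)
print(rank, u[0], u[-1])
```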

  17. A design for an intelligent monitor and controller for space station electrical power using parallel distributed problem solving

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1990-01-01

    The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanently manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in design specifications are given.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.

    Light-Front Quantization – Dirac’s “Front Form” – provides a physical, frame-independent formalism for hadron dynamics and structure. Observables such as structure functions, transverse momentum distributions, and distribution amplitudes are defined from the hadronic LFWFs. One obtains new insights into the hadronic mass scale, the hadronic spectrum, and the functional form of the QCD running coupling in the nonperturbative domain using light-front holography. In addition, superconformal algebra leads to remarkable supersymmetric relations between mesons and baryons. I also discuss evidence that the antishadowing of nuclear structure functions is nonuniversal, i.e., flavor dependent, and why shadowing and antishadowing phenomena may be incompatible with the momentum and other sum rules for the nuclear parton distribution functions.

  19. Teaching Uncertainties

    ERIC Educational Resources Information Center

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  20. Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.

    DTIC Science & Technology

    1992-10-30

    Wigner-Ville distribution (WVD), the Choi-Williams distribution, and the cone kernel distribution were compared with the spectrograms. Results were... ambiguity function. Figures A-18(c) and (d) are the Wigner-Ville distribution (WVD) and CK-TFR Doppler maps. In this noiseless case all three exhibit... kernel is the basis for the well-known Wigner-Ville distribution. In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [2] is described

  1. Melatonin membrane receptors in peripheral tissues: Distribution and functions

    PubMed Central

    Slominski, Radomir M.; Reiter, Russel J.; Schlabritz-Loutsevitch, Natalia; Ostrom, Rennolds S.; Slominski, Andrzej T.

    2012-01-01

    Many of melatonin’s actions are mediated through interaction with the G-protein coupled membrane-bound melatonin receptors type 1 and type 2 (MT1 and MT2, respectively) or, indirectly, with nuclear orphan receptors from the RORα/RZR family. Melatonin also binds to the quinone reductase II enzyme, previously defined as the MT3 receptor. Melatonin receptors are widely distributed in the body; herein we summarize their expression and actions in non-neural tissues. Several controversies still exist regarding, for example, whether melatonin binds the RORα/RZR family. Studies of the peripheral distribution of melatonin receptors are important since these receptors are attractive targets for immunomodulation, regulation of endocrine, reproductive and cardiovascular functions, modulation of skin pigmentation, hair growth, carcinogenesis, and aging. Melatonin receptor agonists and antagonists have an exciting future since they could define multiple mechanisms by which melatonin modulates the complexity of such a wide variety of physiological and pathological processes. PMID:22245784

  2. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented using specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
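
    A hedged sketch of the propagation-of-distributions step: correlated Gaussian draws for two fixed-point resistances pushed through a placeholder temperature conversion. All numerical values are illustrative, and the linear conversion is not the actual ITS-90 reference function.

```python
# Illustrative numbers only; the conversion below is NOT the ITS-90 function.
import numpy as np

mean = np.array([25.555, 28.565])             # ohm: e.g. water triple point, Ga point
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])                 # assumed correlation between resistances
sigma = np.array([2.0e-4, 2.0e-4])            # ohm: standard uncertainties
cov = corr * np.outer(sigma, sigma)

rng = np.random.default_rng(1)
R = rng.multivariate_normal(mean, cov, size=200_000)

W = R[:, 1] / R[:, 0]                         # resistance ratio W = R(T)/R(273.16 K)
T = (W - 1.0) / 3.92e-3                       # crude linear placeholder, deg C
print(f"T = {T.mean():.4f} C, u(T) = {T.std():.4f} C")
```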

  3. Evidence of three-body correlation functions in Rb+ and Sr2+ acetonitrile solutions

    NASA Astrophysics Data System (ADS)

    D'Angelo, P.; Pavel, N. V.

    1999-09-01

    The local structure of Sr2+ and Rb+ ions in acetonitrile has been investigated by x-ray absorption spectroscopy (XAS) and molecular dynamics simulations. The extended x-ray absorption fine structure above the Sr and Rb K edges has been interpreted in the framework of the multiple scattering (MS) formalism and, for the first time, clear evidence of MS contributions has been found in noncomplexing ion solutions. Molecular dynamics has been used to generate the partial pair and triangular distribution functions from which model χ(k) signals have been constructed. The Sr2+ and Rb+ acetonitrile pair distribution functions show very sharp and well-defined first peaks, indicating the presence of a well-organized first solvation shell. Most of the linear acetonitrile molecules have been found to be distributed like hedgehog spines around the Sr2+ and Rb+ ions. The presence of three-body correlations has been singled out by the existence of well-defined peaks in the triangular configurations. Excellent agreement has been found between the theoretical and experimental data, reinforcing the reliability of the interatomic potentials used in the simulations. These results demonstrate the ability of the XAS technique to probe higher-order correlation functions in solution.

  4. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  5. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
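
    A compact sketch of the technique under stated assumptions: a cubic-spline control function (here interpolating invented control points, rather than smoothing measured ones) maps a uniform computational coordinate onto a clustered physical coordinate.

```python
# Invented control points; CubicSpline interpolates rather than smooths.
import numpy as np
from scipy.interpolate import CubicSpline

xi_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # uniform computational inputs
s_ctrl  = np.array([0.0, 0.05, 0.15, 0.45, 1.0])  # clusters grid points near s = 0
control = CubicSpline(xi_ctrl, s_ctrl)            # the control function

xi = np.linspace(0.0, 1.0, 21)                    # uniform computational grid
s = control(xi)                                   # physical grid, clustered near 0
print(np.round(s, 3))
```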

  6. The concept of temperature in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2017-12-01

    Independently of the initial distribution function, once the system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium, namely, their distribution is stabilized into a function that is typically described by kappa distributions. The temperature is well-defined for systems at thermal equilibrium or stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature, that is (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds either for Maxwellians or kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations revealed several types of such local relations among plasma thermal parameters.
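
    One common convention for the isotropic kappa distribution referred to above (normalizations vary across the literature) is:

```latex
% One common convention; n is density, theta a thermal speed parameter.
\[
  f_{\kappa}(v) \;=\;
  \frac{n}{\left(\pi \kappa \theta^{2}\right)^{3/2}}
  \,\frac{\Gamma(\kappa + 1)}{\Gamma\!\left(\kappa - \tfrac{1}{2}\right)}
  \left( 1 + \frac{v^{2}}{\kappa\,\theta^{2}} \right)^{-(\kappa + 1)}
  \;\xrightarrow[\;\kappa \to \infty\;]{}\;
  \frac{n}{\left(\pi \theta^{2}\right)^{3/2}}\, e^{-v^{2}/\theta^{2}} ,
\]
% recovering the Maxwell-Boltzmann distribution in the thermal-equilibrium limit.
```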

  7. The Radial Distribution Function (RDF) of Amorphous Selenium Obtained through the Vacuum Evaporator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guda, Bardhyl; Dede, Marie

    2010-01-21

    After amorphous selenium is obtained through the vacuum evaporator, the relevant diffraction intensity is measured and processed. The interference function is then calculated and the radial density function is defined. Two methods are used to determine these functions; they are compared with each other, and finally results are obtained for the RDF of amorphous selenium.

  8. One-loop gravitational wave spectrum in de Sitter spacetime

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Roura, Albert; Verdaguer, Enric

    2012-08-01

    The two-point function for tensor metric perturbations around de Sitter spacetime, including one-loop corrections from massless conformally coupled scalar fields, is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies, we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Having selected the right initial state for the interacting theory via an appropriate iε prescription is crucial for that. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is strictly speaking ill-defined when loop corrections are included.

  9. Generalized spherical and simplicial coordinates

    NASA Astrophysics Data System (ADS)

    Richter, Wolf-Dieter

    2007-12-01

    Elementary trigonometric quantities are defined in l_{2,p} analogously to those in l_{2,2}: the sine and cosine functions are generalized for each p > 0 as functions sin_p and cos_p such that they satisfy the basic equation cos_p(φ)^p + sin_p(φ)^p = 1. The p-generalized radius coordinate of a point ξ ∈ R^n is defined for each p > 0 as r_p(ξ) = (|ξ_1|^p + ... + |ξ_n|^p)^{1/p}. On combining these quantities, l_{n,p}-spherical coordinates are defined. It is shown that these coordinates are closely related to l_{n,p}-simplicial coordinates. The Jacobians of these generalized coordinate transformations are derived. Applications and interpretations from analysis deal especially with the definition of a generalized surface content on l_{n,p}-spheres, which is closely related to a modified co-area formula and an extension of Cavalieri's and Torricelli's method of indivisibles, and with differential equations. Applications from probability theory deal especially with a geometric interpretation of the uniform probability distribution on the l_{n,p}-sphere and with the derivation of certain generalized statistical distributions.
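
    A hedged numerical check of the generalized trigonometric identity, using the common construction cos_p = cos/N_p and sin_p = sin/N_p with N_p(φ) = (|cos φ|^p + |sin φ|^p)^{1/p}; absolute values are used so the check holds outside the first quadrant.

```python
# Assumed construction of sin_p / cos_p; verifies |cos_p|^p + |sin_p|^p = 1.
import numpy as np

def n_p(phi, p):
    return (np.abs(np.cos(phi))**p + np.abs(np.sin(phi))**p) ** (1.0 / p)

def cos_p(phi, p): return np.cos(phi) / n_p(phi, p)
def sin_p(phi, p): return np.sin(phi) / n_p(phi, p)

def radius_p(xi, p):
    """p-generalized radius of a point xi in R^n."""
    return (np.abs(xi)**p).sum() ** (1.0 / p)

phi = np.linspace(0.0, 2.0 * np.pi, 9)
for p in (0.5, 1.0, 3.0):
    identity = np.abs(cos_p(phi, p))**p + np.abs(sin_p(phi, p))**p
    print(p, np.allclose(identity, 1.0), radius_p(np.array([3.0, 4.0]), p))
```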

  10. Parameters of Higher Education Quality Assessment System at Universities

    ERIC Educational Resources Information Center

    Savickiene, Izabela

    2005-01-01

    The article analyses the system of institutional quality assessment at universities and lays the foundation for its functional, morphological and processual parameters. It also presents the concept of the system, discusses the distribution of systems into groups, and defines the information, accountability, improvement and benchmarking functions of higher…

  11. Naima: a Python package for inference of particle distribution properties from nonthermal spectra

    NASA Astrophysics Data System (ADS)

    Zabalza, V.

    2015-07-01

    The ultimate goal of observing nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, yet few tools are publicly available to infer the particle distribution properties from the observed photon spectra from X-ray to VHE gamma rays. Here I present naima, an open source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, naima provides a set of functions that allow these models to be used to fit observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. Here I present the models and methods available in naima and an example of their application to the understanding of a galactic nonthermal source. naima's documentation, including how to install the package, is available at http://naima.readthedocs.org.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Skokov, Vladimir

    The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/α_s. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g^2 tr A^+(q) A^+(-q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution, up to several times its average, and even to negative values.

  13. Evidence for criticality in financial data

    NASA Astrophysics Data System (ADS)

    Ruiz, G.; de Marcos, A. F.

    2018-01-01

    We provide evidence that the cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization uncover a critical behavior at different time scales Δt. Such cumulative distributions, in accordance with a variety of complex systems (including financial ones), can be modeled by the cumulative distribution functions of q-Gaussians, the distribution function that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely (q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.
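
    The q-Gaussian density referred to above has the standard form from nonextensive statistical mechanics (C_q is the q-dependent normalization constant):

```latex
% Standard q-Gaussian form; [x]_+ = max(x, 0).
\[
  f_{q,\beta}(x) \;=\; \frac{\sqrt{\beta}}{C_q}
  \bigl[\, 1 - (1-q)\,\beta x^{2} \,\bigr]_{+}^{\frac{1}{1-q}}
  \;\equiv\; \frac{\sqrt{\beta}}{C_q}\, e_q\!\left(-\beta x^{2}\right),
\]
% which recovers the ordinary Gaussian in the limit q -> 1.
```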

  14. Spectral and geometrical variation of the bidirectional reflectance distribution function of diffuse reflectance standards.

    PubMed

    Ferrero, Alejandro; Rabal, Ana María; Campos, Joaquín; Pons, Alicia; Hernanz, María Luisa

    2012-12-20

    A study of the variation of the spectral bidirectional reflectance distribution function (BRDF) of four diffuse reflectance standards (matte ceramic, BaSO4, Spectralon, and white Russian opal glass) is accomplished in this work. Spectral BRDF measurements were carried out and, using principal components analysis, the spectral and geometrical variation with respect to a reference geometry was assessed from the experimental data. Several descriptors were defined in order to compare the spectral BRDF variation of the four materials.

  15. Empirical study on human acupuncture point network

    NASA Astrophysics Data System (ADS)

    Li, Jian; Shen, Dan; Chang, Hui; He, Da-Ren

    2007-03-01

    Chinese medical theory is ancient and profound, but it remains confined to a qualitative and imprecise understanding. The effect of Chinese acupuncture in clinical practice is unique and effective, and the human acupuncture points play a mysterious and special role; however, there is no modern scientific understanding of human acupuncture points to date. For this reason, we set out to use complex network theory, one of the frontiers of statistical physics, to describe the human acupuncture points and their connections. In the network, nodes are defined as the acupuncture points, and two nodes are connected by an edge when they are used in a medical treatment of a common disease. A disease is defined as an act. Some statistical properties have been obtained. The results certify that the degree distribution, act-degree distribution, and the dependence of the clustering coefficient on both of them obey an SPL distribution function, which interpolates between a power law and an exponential decay. The results may be helpful for understanding Chinese medical theory.
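
    A small, hypothetical illustration of the construction described above: acupuncture points are nodes, diseases are acts, and two points are linked when they are used to treat a common disease (the treatment data below are invented).

```python
# Invented treatment data illustrating the node/edge/act construction.
import itertools
import networkx as nx

treatments = {                       # disease (act) -> acupuncture points used
    "headache":  ["LI4", "GB20", "TaiYang"],
    "insomnia":  ["HT7", "SP6", "GB20"],
    "back_pain": ["BL23", "GB20", "LI4"],
}

G = nx.Graph()
for points in treatments.values():
    # points co-occurring in one treatment are pairwise connected
    G.add_edges_from(itertools.combinations(points, 2))

degrees = sorted(dict(G.degree()).values(), reverse=True)
act_degrees = {p: sum(p in pts for pts in treatments.values()) for p in G}
print(degrees)                       # data for the degree distribution
print(act_degrees)                   # act degree: diseases treated per point
```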

  16. The Form, and Some Robustness Properties of Integrated Distance Estimators for Linear Models, Applied to Some Published Data Sets.

    DTIC Science & Technology

    1982-06-01

    observation in our framework is the pair (y, x) with x considered given. The influence function for σ² at the Gaussian distribution with mean xβ and variance... This influence function is bounded in the residual y − xβ, and redescends to an asymptote greater than... version of the influence function for β at the Gaussian distribution, given the x_j and x, is defined as the normalized difference (see Barnett and

  17. Overview of Marketing and Distribution. The Wisconsin Guide to Local Curriculum Improvement in Industrial Education, K-12.

    ERIC Educational Resources Information Center

    Ritz, John M.

    The intent of this field-tested instructional package is to familiarize the student with the marketing and distribution element of industry and its function in the production of goods and services. Defining behavioral objectives, the course description offers a media guide, suggested classroom activities, and sample student evaluation forms as…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandes, P. A.; Lynch, K. A.

    Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of an eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermal ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about the measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.

  19. An extension of the Laplace transform to Schwartz distributions

    NASA Technical Reports Server (NTRS)

    Price, D. R.

    1974-01-01

    A characterization of the Laplace transform is developed which extends the transform to the Schwartz distributions. The class of distributions includes the impulse functions and other singular functions which occur as solutions to ordinary and partial differential equations. The standard theorems on analyticity, uniqueness, and invertibility of the transform are proved by using the characterization as the definition of the Laplace transform. The definition uses sequences of linear transformations on the space of distributions which extends the Laplace transform to another class of generalized functions, the Mikusinski operators. It is shown that the sequential definition of the transform is equivalent to Schwartz' extension of the ordinary Laplace transform to distributions but, in contrast to Schwartz' definition, does not use the distributional Fourier transform. Several theorems concerning the particular linear transformations used to define the Laplace transforms are proved. All the results proved in one dimension are extended to the n-dimensional case, but proofs are presented only for those situations that require methods different from their one-dimensional analogs.

  20. Modelling altered revenue function based on varying power consumption distribution and electricity tariff charge using data analytics framework

    NASA Astrophysics Data System (ADS)

    Zainudin, W. N. R. A.; Ramli, N. A.

    2017-09-01

    In 2010, the Energy Commission (EC) introduced Incentive Based Regulation (IBR) to ensure a sustainable Malaysian Electricity Supply Industry (MESI), promote transparent and fair returns, encourage maximum efficiency, and maintain a policy-driven end-user tariff. To cater for such a revolutionary transformation, a sophisticated system to generate a policy-driven electricity tariff structure is greatly needed. Hence, this study presents a data analytics framework that generates an altered revenue function based on a varying power consumption distribution and tariff charge function. For the purpose of this study, the power consumption distribution is proxied by the proportion of household consumption and electricity consumed in kWh, and the tariff charge function is proxied by a three-tiered increasing block tariff (IBT). The altered revenue function is useful to indicate whether changes in the power consumption distribution and tariff charges will have a positive or negative impact on the economy. The methodology for this framework begins by defining revenue to be a function of the power consumption distribution and the tariff charge function. Then, the proportion of household consumption and the tariff charge function are derived within certain intervals of electricity power. Any changes in those proportions are conjectured to contribute towards changes in the revenue function. Thus, these changes can potentially indicate whether changes in the power consumption distribution and tariff charge function have a positive or negative impact on TNB revenue. Based on the findings of this study, major changes in the tariff charge function seem to affect the altered revenue function more than the power consumption distribution does. However, the paper concludes that both the power consumption distribution and the tariff charge function can influence TNB revenue to a great extent.
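
    A minimal sketch of the proxies described above, assuming invented tier boundaries, rates, and a lognormal consumption distribution (not actual TNB tariff data): a three-tiered increasing block tariff (IBT) charge function and the revenue it implies.

```python
# Invented tiers, rates, and consumption distribution -- illustration only.
import numpy as np

tiers = [(200.0, 0.22), (100.0, 0.33), (np.inf, 0.52)]  # (kWh in block, price/kWh)

def bill(kwh):
    """Three-tiered increasing block tariff: successive blocks cost more."""
    total, remaining = 0.0, kwh
    for block, rate in tiers:
        used = min(remaining, block)
        total += used * rate
        remaining -= used
        if remaining <= 0.0:
            break
    return total

rng = np.random.default_rng(7)
consumption = rng.lognormal(mean=5.5, sigma=0.5, size=100_000)  # kWh per household
revenue = sum(bill(k) for k in consumption)                     # altered-revenue proxy
print(f"monthly revenue proxy: {revenue:,.0f}")
```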

  1. Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.

    PubMed

    Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin

    2007-07-01

    Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been used successfully in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principal approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local-maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose a supplementary approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the area-specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.

  2. Probabilistic structural analysis of a truss typical for space station

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.

    1990-01-01

    A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.

  3. Robust Multiple Linear Regression.

    DTIC Science & Technology

    1982-12-01

    difficulty, but it might have more solutions corresponding to local minima. Influence Function of M-Estimates: The influence function describes the effect... distribution function. In the case of M-estimates the influence function was found to be proportional to... where the inverse of any distribution function F is defined in the usual way as F^{-1}(s) = inf{x : F(x) > s}, 0 < s < 1. Influence Function of L-Estimates: In a

  4. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, based on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval, and retrieval can be performed effectively through cooperative processing among multiple domains. Communication language and protocols are also defined in the system; these are used in every action for communications in the system. A language interpreter in each machine translates the communication language into an internal language used in that machine. Using the language interpreter, such internal modules as the DBMS and user interface modules can be selected freely. A concept of a 'content-set' is also introduced. A content-set is defined as a package of contents that are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents, referring to data indicating the relation of the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control of a number of distributed domains. The results also indicate that the system can work effectively even if the system becomes large.

  5. Children's Media Comprehension: The Relationship between Media Platform, Executive Functioning Abilities, and Age

    ERIC Educational Resources Information Center

    Menkes, Susan M.

    2012-01-01

    Children's media comprehension was compared for material presented on television, computer, or touchscreen tablet. One hundred and thirty-two children were equally distributed across 12 groups defined by age (4- or 6-year-olds), gender, and the three media platforms. Executive functioning as measured by attentional control, cognitive…

  6. Quantifiable Assessment of SWNT Dispersion in Polymer Composites

    NASA Technical Reports Server (NTRS)

    Park, Cheol; Kim, Jae-Woo; Wise, Kristopher E.; Working, Dennis; Siochi, Mia; Harrison, Joycelyn; Gibbons, Luke; Siochi, Emilie J.; Lillehei, Peter T.; Cantrell, Sean

    2007-01-01

    NASA LaRC has established a new protocol for visualizing nanomaterials in structural polymer matrix resins. Using this new technique and reconstructing the 3D distribution of the nanomaterials allows us to compare this distribution against a theoretically perfect distribution. Additional tertiary structural information can now be obtained and quantified with the electron tomography studies. These tools will be necessary to establish the structure-function relationships between the nanoscale and the bulk. This will also help define the critical length scales needed for functional properties. Field-ready tool development and calibration can begin by using these same samples and comparing the response, i.e., gold standards of good and bad dispersion.

  7. GDF v2.0, an enhanced version of GDF

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos

    2007-12-01

    An improved version of the function estimation program GDF is presented. The main enhancements of the new version include: multi-output function estimation, the capability of defining custom functions in the grammar, and selection of the error function. The new version has been evaluated on a series of classification and regression datasets that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them on 5 (out of 10) datasets. Program summary: Title of program: GDF v2.0. Catalogue identifier: ADXC_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 98 147. No. of bytes in distributed program, including test data, etc.: 2 040 684. Distribution format: tar.gz. Programming language: GNU C++. Computer: The program is designed to be portable to all systems running the GNU C++ compiler. Operating system: Linux, Solaris, FreeBSD. RAM: 200 000 bytes. Classification: 4.9. Does the new version supersede the previous version?: Yes. Nature of problem: The technique of function estimation tries to discover, from a series of input data, a functional form that best describes them. This can be performed with the use of parametric models whose parameters can adapt according to the input data. Solution method: Functional forms are created by genetic programming as approximations for the symbolic regression problem. Reasons for new version: The GDF package was extended in order to be more flexible and user-customizable than the old package. The user can extend the package by defining custom error functions, and can extend the grammar of the package by adding new functions to the function repertoire. Also, the new version can perform function estimation for multi-output functions, and it can be used for classification problems. Summary of revisions: The following features have been added to the package GDF: Multi-output function approximation: the package can now approximate any function f: R^n → R^m; this feature also gives the package the capability of performing classification, not only regression. User-defined functions can be added to the repertoire of the grammar, extending the regression capabilities of the package; this feature is limited to 3 functions, but this number can easily be increased. Capability of selecting the error function: apart from the mean square error, the package now offers other error functions, such as the mean absolute error and the maximum square error; user-defined error functions can also be added to the set of error functions. More verbose output: the main program displays more information to the user, as well as the default values of the parameters; the package also gives the user the capability to define an output file where the output of the gdf program for the testing set will be stored after the termination of the process. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depends on the training data.

  8. Tools for distributed application management

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Cooper, Robert; Wood, Mark; Birman, Kenneth P.

    1990-01-01

    Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.

  9. Star formation history: Modeling of visual binaries

    NASA Astrophysics Data System (ADS)

    Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.

    2018-05-01

    Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
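
    To make the pairing scenarios concrete, here is a minimal Monte Carlo sketch of two of them (random pairing and primary-constrained pairing), with component masses drawn from a Salpeter power-law IMF; the mass limits and the flat mass-ratio distribution are illustrative assumptions, not the paper's actual choices:

        import numpy as np

        rng = np.random.default_rng(42)

        def salpeter_masses(n, alpha=2.35, m_min=0.1, m_max=100.0):
            # Inverse-transform sampling of a Salpeter power-law IMF, dN/dm ~ m^-alpha.
            u = rng.uniform(size=n)
            a, b = m_min ** (1.0 - alpha), m_max ** (1.0 - alpha)
            return (a + u * (b - a)) ** (1.0 / (1.0 - alpha))

        def random_pairing(n):
            # Both components drawn independently from the IMF.
            m_a, m_b = salpeter_masses(n), salpeter_masses(n)
            return np.maximum(m_a, m_b), np.minimum(m_a, m_b)

        def primary_constrained_pairing(n):
            # Primary from the IMF; secondary from a flat mass-ratio distribution.
            m1 = salpeter_masses(n)
            q = rng.uniform(0.01, 1.0, size=n)
            return m1, q * m1

        m1, m2 = random_pairing(100000)
        print("random pairing:              median q =", np.median(m2 / m1))
        m1, m2 = primary_constrained_pairing(100000)
        print("primary-constrained pairing: median q =", np.median(m2 / m1))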

  10. Tools for distributed application management

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Wood, Mark; Cooper, Robert; Birman, Kenneth P.

    1990-01-01

    Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system is described: a collection of tools for constructing distributed application management software. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when pre-existing, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.

  11. Functional brain networks develop from a "local to distributed" organization.

    PubMed

    Fair, Damien A; Cohen, Alexander L; Power, Jonathan D; Dosenbach, Nico U F; Church, Jessica A; Miezin, Francis M; Schlaggar, Bradley L; Petersen, Steven E

    2009-05-01

    The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward 'segregation' (a general decrease in correlation strength) between regions close in anatomical space and 'integration' (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to graph-theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more "distributed" architecture in young adults. We argue that this "local to distributed" developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing "small-world"-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.
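
    For readers unfamiliar with the graph metrics mentioned, here is a toy sketch (using the networkx library on a synthetic small-world graph, not on rs-fcMRI data) of the clustering coefficient, average path length, and modularity-based community detection:

        import networkx as nx

        # Synthetic stand-in for a thresholded functional connectivity graph.
        G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)

        # "Small-world"-type metrics named in the abstract.
        print("average clustering coefficient:", nx.average_clustering(G))
        print("average shortest path length  :", nx.average_shortest_path_length(G))

        # Community detection by modularity optimization (a greedy variant).
        communities = nx.algorithms.community.greedy_modularity_communities(G)
        print("number of communities         :", len(communities))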

  12. Electrostatic analyzer measurements of ionospheric thermal ion populations

    DOE PAGES

    Fernandes, P. A.; Lynch, K. A.

    2016-07-09

    Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermal ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about the measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.

  13. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with uncertain probability distributions, which are defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  14. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of the objects of a virtual data center, including: a level-distribution model of the software-defined infrastructure of the virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for the containerization of virtual network functions in the virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
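
    As context for the last point, the Karmarkar-Karp largest-differencing heuristic balances a set of loads across two partitions; a minimal sketch (the VNF loads and the two-host setting are illustrative assumptions, and this version returns the achieved imbalance rather than the assignment):

        import heapq

        def karmarkar_karp(loads):
            # Largest-differencing method: repeatedly replace the two largest
            # loads by their difference; the last remaining value is the load
            # imbalance between the two resulting partitions.
            heap = [-x for x in loads]  # max-heap via negation
            heapq.heapify(heap)
            while len(heap) > 1:
                a = -heapq.heappop(heap)
                b = -heapq.heappop(heap)
                heapq.heappush(heap, -(a - b))
            return -heap[0]

        # Hypothetical CPU demands of five virtual network functions.
        print(karmarkar_karp([8, 7, 6, 5, 4]))  # prints 2; the optimum here is 0

    Being a heuristic, it can miss the optimum (here {8, 7} vs. {6, 5, 4} balances exactly), which is presumably one motivation for generalizing it rather than using it as-is.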

  15. Polarized structure functions in a constituent quark scenario

    NASA Astrophysics Data System (ADS)

    Scopetta, Sergio; Vento, Vicente; Traini, Marco

    1998-12-01

    Using a simple picture of the constituent quark as a composite system of point-like partons, we construct the polarized parton distributions by a convolution between constituent quark momentum distributions and constituent quark structure functions. Using unpolarized data to fix the parameters, we achieve good agreement with the polarization experiments for the proton, but not for the neutron. By relaxing our assumptions for the sea distributions, we define new quark functions for the polarized case, which reproduce the proton data well and are in better agreement with the neutron data. When our results are compared with similar calculations using non-composite constituent quarks, the agreement of the present scheme with experiment is impressive. We conclude that, also in the polarized case, DIS data are consistent with a low-energy scenario dominated by composite constituents of the nucleon.

  16. The tensor distribution function.

    PubMed

    Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M

    2009-01-01

    Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
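
    In standard notation, the model described here represents the displacement probability as a TDF-weighted mixture of Gaussians; a sketch (with D a symmetric positive definite tensor, P(D) the TDF, and t the effective diffusion time; conventions may differ from the paper's):

        p(\mathbf{r}) = \int_{D \in \mathrm{Sym}^{+}(3)} P(D)\,
            \frac{1}{\sqrt{(4\pi t)^{3} |D|}}\,
            \exp\!\left(-\frac{\mathbf{r}^{\mathsf{T}} D^{-1} \mathbf{r}}{4t}\right) dD,
        \qquad
        \mathrm{ODF}(\mathbf{u}) \propto \int_{0}^{\infty} p(r\mathbf{u})\, r^{2}\, dr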

  17. Muscle glycogen and cell function--Location, location, location.

    PubMed

    Ørtenblad, N; Nielsen, J

    2015-12-01

    The importance of glycogen, as a fuel during exercise, is a fundamental concept in exercise physiology. The use of electron microscopy has revealed that glycogen is not evenly distributed in skeletal muscle fibers, but rather localized in distinct pools. In this review, we present the available evidence regarding the subcellular localization of glycogen in skeletal muscle and discuss this from the perspective of skeletal muscle fiber function. The distribution of glycogen in the defined pools within the skeletal muscle varies depending on exercise intensity, fiber phenotype, training status, and immobilization. Furthermore, these defined pools may serve specific functions in the cell. Specifically, reduced levels of these pools of glycogen are associated with reduced SR Ca(2+) release, muscle relaxation rate, and membrane excitability. Collectively, the available literature strongly demonstrates that the subcellular localization of glycogen has to be considered to fully understand the role of glycogen metabolism and signaling in skeletal muscle function. Here, we propose that the effect of low muscle glycogen on excitation-contraction coupling may serve as a built-in mechanism, which links the energetic state of the muscle fiber to energy utilization. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Application of Maxent Multivariate Analysis to Define Climate-Change Effects on Species Distributions and Changes

    DTIC Science & Technology

    2014-09-01

    (Fragmentary record; recoverable text:) This report applies statistical multivariate analysis (Maxent) to define the current and projected future range probability for species of interest to Army land managers. Citation residue: Ecological Modelling, Volume 200, Issues 1-2, pp. 1-19; Buhlmann, Kurt A., Thomas S.B. Akre, John B. Iverson, Deno Karapatakis, Russell A... Figure 4 caption: RCW omission rate and predicted area as a function of the cumulative threshold.

  19. Chemical Reactions in Turbulent Mixing Flows. Revision.

    DTIC Science & Technology

    1983-08-02

    (Fragmentary record; recoverable nomenclature: d, jet diameter; F2, fluorine; H2, hydrogen; HF, hydrogen fluoride; I(y), instantaneous fluorescence intensity distribution; L-s, flame length measured from the virtual origin of the turbulent region; flame length at high Reynolds number; LIF, laser-induced fluorescence; N2, nitrogen; PI, product thickness.) "...mixing is attained as a function of the equivalence ratio. For small values of the equivalence ratio f, the flame length - defined here as the..."

  20. Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.

    PubMed

    Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø

    2003-10-07

    The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum. The alpha-spectrum is computed and interpreted for two cases: first, for optimal states of a skeleton representation of core metabolism that include transcriptional regulation, and second, for human red blood cell metabolism under various physiological, non-optimal conditions.
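
    The optimization described maps directly onto a pair of linear programs per pathway; a minimal sketch with scipy (the matrix P, whose columns are extreme pathways, and the flux vector v are toy values, not a real network):

        import numpy as np
        from scipy.optimize import linprog

        # Columns of P are extreme pathways; v is a steady-state flux distribution
        # assumed to lie in the cone they span (illustrative numbers only).
        P = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 2.0]])
        v = np.array([2.0, 1.0, 3.0])

        def alpha_spectrum(P, v):
            # For each pathway i, minimize and maximize its weighting alpha_i
            # subject to P @ alpha == v and alpha >= 0.
            n = P.shape[1]
            spectrum = []
            for i in range(n):
                c = np.zeros(n)
                c[i] = 1.0
                lo = linprog(c, A_eq=P, b_eq=v, bounds=[(0, None)] * n)
                hi = linprog(-c, A_eq=P, b_eq=v, bounds=[(0, None)] * n)
                spectrum.append((lo.fun, -hi.fun))
            return spectrum

        for i, (a_min, a_max) in enumerate(alpha_spectrum(P, v)):
            print(f"pathway {i}: alpha in [{a_min:.3f}, {a_max:.3f}]")

    Each pathway i thus receives an interval [alpha_i_min, alpha_i_max]; a pathway whose maximum weighting is zero cannot participate in the reconstruction at all.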

  1. Evaluating the assumption of power-law late time scaling of breakthrough curves in highly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele

    2017-04-01

    Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plots is used. This presentation discusses the results of a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted on double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood to represent parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaling modeling solutions.
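
    A minimal sketch of one such statistical check: an MLE power-law fit above a threshold plus a log-likelihood comparison against a lognormal. This is only a rough illustration in the spirit of Clauset-style tail tests, not the exact tools used in the study; a rigorous comparison would fit a lognormal truncated to the same tail and use, e.g., a Vuong likelihood-ratio statistic.

        import numpy as np
        from scipy import stats

        def powerlaw_tail_fit(x, xmin):
            # MLE for a continuous power law p(x) = ((alpha-1)/xmin)*(x/xmin)**-alpha, x >= xmin.
            tail = x[x >= xmin]
            alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
            loglik = tail.size * np.log((alpha - 1.0) / xmin) - alpha * np.sum(np.log(tail / xmin))
            return alpha, loglik, tail

        rng = np.random.default_rng(1)
        x = rng.lognormal(mean=0.0, sigma=1.5, size=5000)  # synthetic tailed data
        xmin = np.quantile(x, 0.9)                         # fit the upper tail only

        alpha, ll_pl, tail = powerlaw_tail_fit(x, xmin)
        shape, loc, scale = stats.lognorm.fit(tail, floc=0)
        ll_ln = stats.lognorm.logpdf(tail, shape, loc, scale).sum()

        print("fitted power-law slope:", round(alpha, 2))
        print("log-likelihood, power law vs lognormal:", round(ll_pl, 1), "vs", round(ll_ln, 1))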

  2. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; executing the experiment, including distributed feedback control; and finally collecting, archiving, analyzing and presenting all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced, and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  3. Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble

    NASA Astrophysics Data System (ADS)

    Bicci, Alberto

    2016-12-01

    In the domain of so-called econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and the price formation process of a given stock by the grand-canonical Gibbs ensemble for the bid and ask orders. Applying Bose-Einstein statistics to this ensemble then allows us to derive the distribution of the sell and buy orders as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are functions of the minimum bid, maximum ask and closing prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator defined empirically in the technical analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well-defined validations against real data, giving a falsifiable character to the model.
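
    One way to transcribe the central claim (a sketch; the energy ε(p) assigned to a price level and the normalization are modeling choices not fixed by the abstract) is a Bose-Einstein occupation for the mean number of resting orders at price p on a given side of the book,

        n(p) = \frac{1}{e^{\beta\,(\varepsilon(p) - \mu)} - 1}

    with a separate inverse temperature β fitted for the bid side and for the ask side; the difference between the two temperatures is the quantity related here to the VAO indicator.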

  4. EMD-WVD time-frequency distribution for analysis of multi-component signals

    NASA Astrophysics Data System (ADS)

    Chai, Yunzi; Zhang, Xudong

    2016-10-01

    A time-frequency distribution (TFD) is a two-dimensional function that indicates the time-varying frequency content of a one-dimensional signal. The Wigner-Ville distribution (WVD) is an important and effective time-frequency analysis method that can efficiently show the characteristics of a mono-component signal. However, a major drawback is the extra cross-terms that appear when multi-component signals are analyzed by the WVD. In order to eliminate the cross-terms, we first decompose signals into single-frequency components, intrinsic mode functions (IMFs), using empirical mode decomposition (EMD), and then use the WVD to analyze each single IMF. In this paper, we define this new time-frequency distribution as EMD-WVD. The experimental results show that the proposed time-frequency method can solve the cross-term problem effectively and improve the accuracy of WVD time-frequency analysis.
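
    A minimal sketch of the pipeline, assuming the third-party PyEMD package (pip install EMD-signal) for the decomposition step; the discrete WVD below uses simplified scaling and frequency conventions:

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD  # third-party package, assumed available

        def wvd(x):
            # Discrete Wigner-Ville distribution via the analytic signal.
            z = hilbert(x)
            n = len(z)
            W = np.zeros((n, n))
            for t in range(n):
                lag = min(t, n - 1 - t)             # largest symmetric lag at time t
                m = np.arange(-lag, lag + 1)
                acf = z[t + m] * np.conj(z[t - m])  # instantaneous autocorrelation
                kernel = np.zeros(n, dtype=complex)
                kernel[m % n] = acf
                W[t] = np.fft.fft(kernel).real      # FFT over the lag variable
            return W

        fs = 256
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 60 * t)  # two-component signal

        imfs = EMD().emd(x)                # near-mono-component modes
        W = sum(wvd(imf) for imf in imfs)  # EMD-WVD: cross-terms suppressed
        print(W.shape)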

  5. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  6. Hierarchical resilience with lightweight threads.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Kyle Bruce

    2011-10-01

    This paper proposes a methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One of the trends in high-performance computing codes seems to be a trend toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona StarSs system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).

  7. Species, functional groups, and thresholds in ecological resilience

    USGS Publications Warehouse

    Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris

    2012-01-01

    The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.

  8. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties, like explicit representations in terms of hypergeometric functions, differential equations, and recurrence relations, are derived.

  9. Nonstandard Analysis and Shock Wave Jump Conditions in a One-Dimensional Compressible Gas

    NASA Technical Reports Server (NTRS)

    Baty, Roy S.; Farassat, Fereidoun; Hargreaves, John

    2007-01-01

    Nonstandard analysis is a relatively new area of mathematics in which infinitesimal numbers can be defined and manipulated rigorously like real numbers. This report presents a fairly comprehensive tutorial on nonstandard analysis for physicists and engineers with many examples applicable to generalized functions. To demonstrate the power of the subject, the problem of shock wave jump conditions is studied for a one-dimensional compressible gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. To use conservation laws, smooth pre-distributions of the Dirac delta measure are applied whose supports are contained within the shock thickness. Furthermore, smooth pre-distributions of the Heaviside function are applied which vary from zero to one across the shock wave. It is shown that if the equations of motion are expressed in nonconservative form then the relationships between the jump functions for the flow parameters may be found unambiguously. The analysis yields the classical Rankine-Hugoniot jump conditions for an inviscid shock wave. Moreover, non-monotonic entropy jump conditions are obtained for both inviscid and viscous flows. The report shows that products of generalized functions may be defined consistently using nonstandard analysis; however, physically meaningful products of generalized functions must be determined from the physics of the problem and not the mathematical form of the governing equations.
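
    A standard concrete choice of such smooth pre-distributions (an illustration; not necessarily the mollifier used in the report), with ε an infinitesimal width parameter:

        H_\varepsilon(x) = \frac{1}{1 + e^{-x/\varepsilon}},
        \qquad
        \delta_\varepsilon(x) = H_\varepsilon'(x)
            = \frac{1}{4\varepsilon}\,\operatorname{sech}^{2}\!\left(\frac{x}{2\varepsilon}\right)

    Here H_ε rises smoothly from zero to one over an interval of infinitesimal width, and δ_ε integrates to one with its support effectively confined to the shock thickness.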

  10. Modeling Hawaiian Ecosystem Degradation due to Invasive Plants under Current and Future Climates

    PubMed Central

    Vorsino, Adam E.; Fortini, Lucas B.; Amidon, Fred A.; Miller, Stephen E.; Jacobi, James D.; Price, Jonathan P.; Gon, Sam 'Ohukani'ohi'a; Koob, Gregory A.

    2014-01-01

    Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models with <0.7 niche overlap (Warrens I) and relatively discriminative distributions (Area Under the Curve >0.8; True Skill Statistic >0.75) as evaluated per species. Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions. PMID:24805254

  11. Modeling Hawaiian ecosystem degradation due to invasive plants under current and future climates.

    PubMed

    Vorsino, Adam E; Fortini, Lucas B; Amidon, Fred A; Miller, Stephen E; Jacobi, James D; Price, Jonathan P; Gon, Sam 'ohukani'ohi'a; Koob, Gregory A

    2014-01-01

    Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models with <0.7 niche overlap (Warrens I) and relatively discriminative distributions (Area Under the Curve >0.8; True Skill Statistic >0.75) as evaluated per species. Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions.

  12. Avalanches and generalized memory associativity in a network model for conscious and unconscious mental functioning

    NASA Astrophysics Data System (ADS)

    Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft

    2018-01-01

    We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and the view that consciousness is related to symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight regarding basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region which is activated during memory retrieval. This allows the qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the nonextensive statistical mechanics formalism may indeed be better suited to model the complex networks which constitute brain and mental structure.

  13. Distribution of neurons in functional areas of the mouse cerebral cortex reveals quantitatively different cortical zones

    PubMed Central

    Herculano-Houzel, Suzana; Watson, Charles; Paxinos, George

    2013-01-01

    How are neurons distributed along the cortical surface and across functional areas? Here we use the isotropic fractionator (Herculano-Houzel and Lent, 2005) to analyze the distribution of neurons across the entire isocortex of the mouse, divided into 18 functional areas defined anatomically. We find that the number of neurons underneath a surface area (the N/A ratio) varies 4.5-fold across functional areas and neuronal density varies 3.2-fold. The face area of S1 contains the most neurons, followed by motor cortex and the primary visual cortex. Remarkably, while the distribution of neurons across functional areas does not accompany the distribution of surface area, it mirrors closely the distribution of cortical volumes—with the exception of the visual areas, which hold more neurons than expected for their volume. Across the non-visual cortex, the volume of individual functional areas is a shared linear function of their number of neurons, while in the visual areas, neuronal densities are much higher than in all other areas. In contrast, the 18 functional areas cluster into three different zones according to the relationship between the N/A ratio and cortical thickness and neuronal density: these three clusters can be called visual, sensory, and, possibly, associative. These findings are remarkably similar to those in the human cerebral cortex (Ribeiro et al., 2013) and suggest that, like the human cerebral cortex, the mouse cerebral cortex comprises two zones that differ in how neurons form the cortical volume, and three zones that differ in how neurons are distributed underneath the cortical surface, possibly in relation to local differences in connectivity through the white matter. Our results suggest that beyond the developmental divide into visual and non-visual cortex, functional areas initially share a common distribution of neurons along the parenchyma that become delimited into functional areas according to the pattern of connectivity established later. PMID:24155697

  14. Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes

    NASA Astrophysics Data System (ADS)

    González Arenas, Zochil; Barci, Daniel G.

    2012-12-01

    Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.

  15. Simulation and optimization of faceted structure for illumination

    NASA Astrophysics Data System (ADS)

    Liu, Lihong; Engel, Thierry; Flury, Manuel

    2016-04-01

    The re-direction of incoherent light using a surface containing only facets with specific angular values is proposed. A new photometric approach is adopted since the size of each facet is large in comparison with the wavelength. A reflective configuration is employed to avoid the dispersion problems of materials. The irradiance distribution of the reflected beam is determined by the angular position of each facet. In order to obtain the specific irradiance distribution, the angular position of each facet is optimized using Zemax OpticStudio 15 software. A detector is placed in the direction which is perpendicular to the reflected beam. According to the incoherent irradiance distribution on the detector, a merit function needs to be defined to pilot the optimization process. The two dimensional angular position of each facet is defined as a variable which is optimized within a specified varying range. Because the merit function needs to be updated, a macro program is carried out to update this function within Zemax. In order to reduce the complexity of the manual operation, an automatic optimization approach is established. Zemax is in charge of performing the optimization task and sending back the irradiance data to Matlab for further analysis. Several simulation results are given for the verification of the optimization method. The simulation results are compared to those obtained with the LightTools software in order to verify our optimization method.

  16. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  17. Equilibration in the time-dependent Hartree-Fock approach probed with the Wigner distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.

    2011-09-15

    By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral 16O + 16O and 96Zr + 132Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in a dramatic way necessary to obtain complete equilibration.

  18. ATTITUDE FILTERING ON SO(3)

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2005-01-01

    A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.

  19. Data compression and genomes: a two-dimensional life domain map.

    PubMed

    Menconi, Giulia; Benci, Vieri; Buiatti, Marcello

    2008-07-21

    We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of a Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of different domains of life (Archaea, Bacteria and Eukarya). We shall focus on the distribution function of the complexity of non-coding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space where the axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are presented.
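
    A minimal sketch of the idea, approximating the information content per nucleotide with zlib (a Lempel-Ziv family compressor, not the specific algorithm of the paper) and computing the two distribution coordinates with scipy; the synthetic sequences stand in for real non-coding regions:

        import zlib
        import numpy as np
        from scipy import stats

        def complexity(seq):
            # Compressed size in bits per nucleotide as a proxy for
            # information content.
            raw = seq.encode("ascii")
            return 8.0 * len(zlib.compress(raw, 9)) / len(raw)

        rng = np.random.default_rng(0)
        # Hypothetical stand-ins for non-coding regions: random sequences
        # of varying base composition.
        regions = []
        for w in rng.uniform(0.05, 0.45, size=50):
            bases = rng.choice(list("ACGT"), size=2000, p=[w, 0.5 - w, 0.5 - w, w])
            regions.append("".join(bases))

        c = np.array([complexity(r) for r in regions])
        print("skewness:", stats.skew(c), " kurtosis:", stats.kurtosis(c))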

  20. Modeling Hawaiian ecosystem degradation due to invasive plants under current and future climates

    USGS Publications Warehouse

    Vorsino, Adam E.; Fortini, Lucas B.; Amidon, Fred A.; Miller, Stephen E.; Jacobi, James D.; Price, Jonathan P.; `Ohukani`ohi`a Gon, Sam; Koob, Gregory A.

    2014-01-01

    Occupation of native ecosystems by invasive plant species alters their structure and/or function. In Hawaii, a subset of introduced plants is regarded as extremely harmful due to competitive ability, ecosystem modification, and biogeochemical habitat degradation. By controlling this subset of highly invasive ecosystem modifiers, conservation managers could significantly reduce native ecosystem degradation. To assess the invasibility of vulnerable native ecosystems, we selected a proxy subset of these invasive plants and developed robust ensemble species distribution models to define their respective potential distributions. The combinations of all species models using both binary and continuous habitat suitability projections resulted in estimates of species richness and diversity that were subsequently used to define an invasibility metric. The invasibility metric was defined from species distribution models with <0.7 niche overlap (Warrens I) and relatively discriminative distributions (Area Under the Curve >0.8; True Skill Statistic >0.75) as evaluated per species. Invasibility was further projected onto a 2100 Hawaii regional climate change scenario to assess the change in potential habitat degradation. The distribution defined by the invasibility metric delineates areas of known and potential invasibility under current climate conditions and, when projected into the future, estimates potential reductions in native ecosystem extent due to climate-driven invasive incursion. We have provided the code used to develop these metrics to facilitate their wider use (Code S1). This work will help determine the vulnerability of native-dominated ecosystems to the combined threats of climate change and invasive species, and thus help prioritize ecosystem and species management actions.

  1. Small angle neutron scattering study of polyelectrolyte brushes grafted to well-defined gold nanoparticle interfaces.

    PubMed

    Jia, Haidong; Grillo, Isabelle; Titmuss, Simon

    2010-05-18

    Small angle neutron scattering (SANS) has been used to study the conformations, and response to added salt, of a polyelectrolyte layer grafted to the interfaces of well-defined gold nanoparticles. The polyelectrolyte layer is prepared at a constant coverage by grafting thiol-functionalized polystyrene (M(w) = 53k) to gold nanoparticles of well-defined interfacial curvature (R(c) = 26.5 nm) followed by a soft-sulfonation of 38% of the segments to sodium polystyrene sulfonate (NaPSS). The SANS profiles can be fit by Fermi-Dirac distributions that are consistent with a Gaussian distribution but are better described by a parabolic distribution plus an exponential tail, particularly in the high salt regime. These distributions are consistent with the predictions and measurements for osmotic and salted brushes at interfaces of low curvature. When the concentration of added salt exceeds the concentration of counterions inside the brush, there is a salt-induced deswelling, but even at the highest salt concentration the brush remains significantly swollen due to a short-ranged excluded volume interaction. This is responsible for the observed resistance to aggregation of these comparatively high concentration polyelectrolyte stabilized gold nanoparticle dispersions even in the presence of a high concentration of added salt.

  2. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
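
    Reading the definition as the mean of a normal distribution truncated at its p-th quantile (an assumption; the report's exact convention may differ), the closed form E[X | X > z_p] = mu + sigma * phi(z_p) / (1 - p) can be checked against simulation:

        import numpy as np
        from scipy.stats import norm

        def extreme_mean(p, mu=0.0, sigma=1.0):
            # Mean of a normal distribution truncated below at its p-th quantile:
            # E[X | X > z_p] = mu + sigma * phi(z_p) / (1 - p).
            z = norm.ppf(p)
            return mu + sigma * norm.pdf(z) / (1.0 - p)

        p = 0.95
        x = np.random.default_rng(0).standard_normal(1_000_000)
        print("closed form:", extreme_mean(p))
        print("simulated  :", x[x > norm.ppf(p)].mean())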

  3. Intensity Modulated Radiation Treatment of Prostate Cancer Guided by High Field MR Spectroscopic Imaging

    DTIC Science & Technology

    2005-05-01

    (Fragmentary record; recoverable text:) The treatment plan is constructed with incorporation of the nonuniform dose prescription, and the functional unit density distribution in a sensitive structure is also considered. The effective dose at a voxel is defined in terms of the calculated dose D(i) in voxel i, the prescription dose D0(i), and the nonuniform functional unit density distribution (Eq. (8) of the report).

  4. A database system to support image algorithm evaluation

    NASA Technical Reports Server (NTRS)

    Lien, Y. E.

    1977-01-01

    The design of an interactive image database system, IMDB, is presented, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions and can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.

  5. Assessing hail risk for a building portfolio by generating stochastic events

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie

    2015-04-01

    Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events were reported in recent years, among them the July 2011 event, which cost the Aargauer public insurance company (north-western Switzerland) around 125 million EUR. This study presents the new developments in a stochastic model which aims at evaluating the risk for a building portfolio. Thanks to insurance and meteorological radar data of the 2011 Aargauer event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with 6 random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, …) as well as the distribution of the Gaussian parameters are inferred from two radar intensity maps, one from the aforementioned event and a second from an event that occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data. The simulation parameters are then adjusted by trial and error, in order to get the best reproduction of the expected distributions. The value of the mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
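
    A minimal sketch of the event generator described: a sum of Gaussian intensity bumps, each with the six random parameters named in the abstract. The parameter distributions are illustrative placeholders, and the general event-shape constraint and the dependence on previously placed bumps are omitted:

        import numpy as np

        rng = np.random.default_rng(7)

        def gaussian_bump(X, Y, x0, y0, amp, sd, ecc, theta):
            # Anisotropic 2D Gaussian: sd along the major axis, sd*ecc along the minor.
            c, s = np.cos(theta), np.sin(theta)
            u = c * (X - x0) + s * (Y - y0)
            v = -s * (X - x0) + c * (Y - y0)
            return amp * np.exp(-0.5 * ((u / sd) ** 2 + (v / (sd * ecc)) ** 2))

        def stochastic_event(nx=200, ny=200, n_bumps=12):
            # Sum of Gaussian functions with six random parameters each.
            X, Y = np.meshgrid(np.arange(nx), np.arange(ny))
            field = np.zeros((ny, nx))
            for _ in range(n_bumps):
                field += gaussian_bump(
                    X, Y,
                    x0=rng.uniform(0, nx), y0=rng.uniform(0, ny),  # location
                    amp=rng.uniform(1.0, 5.0),      # maximum hailstone size (cm)
                    sd=rng.uniform(5.0, 20.0),      # footprint scale (grid cells)
                    ecc=rng.uniform(0.3, 1.0),      # eccentricity
                    theta=rng.uniform(0.0, np.pi))  # orientation
            return field

        event = stochastic_event()
        print("peak simulated intensity:", event.max())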

  6. 29 CFR 1630.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... “auxiliary aids or services” (as defined by 42 U.S.C. 12103(1)); (iv) Learned behavioral or adaptive... involuntary leave, termination, exclusion for failure to meet a qualification standard, harassment, or denial... number of employees available among whom the performance of that job function can be distributed; and/or...

  7. Rényi entropies characterizing the shape and the extension of the phase space representation of quantum wave functions in disordered systems.

    PubMed

    Varga, Imre; Pipek, János

    2003-08-01

    We discuss some properties of the generalized entropies, called Rényi entropies, and their application to the case of continuous distributions. In particular, it is shown that these measures of complexity can be divergent; however, their differences are free from these divergences, thus enabling them to be good candidates for the description of the extension and the shape of continuous distributions. We apply this formalism to the projection of wave functions onto the coherent state basis, i.e., to the Husimi representation. We also show how the localization properties of the Husimi distribution on average can be reconstructed from its marginal distributions that are calculated in position and momentum space in the case when the phase space has no structure, i.e., no classical limit can be defined. Numerical simulations on a one-dimensional disordered system corroborate our expectations.
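
    As a numerical illustration of the divergence-cancellation property, a minimal sketch assuming a standard normal density discretized on grids of spacing Δ: each discretized Rényi entropy S_q = ln(Σ_i p_i^q)/(1 − q) grows like ln(1/Δ) under refinement, while a difference such as S_1/2 − S_2 converges to a finite, resolution-free measure of shape.

```python
import numpy as np

def discrete_renyi(pmf, q):
    """Renyi entropy of order q (q != 1) of a discrete distribution."""
    return np.log(np.sum(pmf ** q)) / (1.0 - q)

# Standard normal density discretized at two resolutions.
for dx in (1e-2, 1e-4):
    x = np.arange(-10.0, 10.0, dx)
    pmf = np.exp(-x**2 / 2) * dx          # unnormalized bin probabilities
    pmf /= pmf.sum()
    s_half, s_two = discrete_renyi(pmf, 0.5), discrete_renyi(pmf, 2.0)
    # Each entropy diverges like ln(1/dx) as the grid is refined, but the
    # difference converges: a divergence-free descriptor of the shape.
    print(f"dx={dx:g}:  S_1/2={s_half:.3f}  S_2={s_two:.3f}  "
          f"diff={s_half - s_two:.4f}")
```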

  8. Generalized local emission tomography

    DOEpatents

    Katsevich, Alexander J.

    1998-01-01

    Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for locating the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are inputted to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ^(Φ), knowing pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are inputted to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.

  9. Synthetic lipids and their role in defining macromolecular assemblies.

    PubMed

    Parrill, Abby L

    2015-10-01

    Lipids have a variety of physiological roles, ranging from structural and biophysical contributions to membrane functions to signaling contributions in normal and abnormal physiology. This review highlights some of the contributions made by Robert Bittman to our understanding of lipid assemblies through the production of synthetic lipid analogs in the sterol, sphingolipid, and glycolipid classes. His contributions have included the development of a fluorescent cholesterol analog that shows strong functional analogies to cholesterol that has allowed live imaging of cholesterol distribution in living systems, to stereospecific synthetic approaches to both sphingolipid and glycolipid analogs crucial in defining the structure-activity relationships of lipid biological targets. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Static shape control for flexible structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identifying and estimating the control laws of distributed systems characterized in terms of infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least-squares superposition of shape functions obtained from the structural model.

  11. Maps on statistical manifolds exactly reduced from the Perron-Frobenius equations for solvable chaotic maps

    NASA Astrophysics Data System (ADS)

    Goto, Shin-itiro; Umeno, Ken

    2018-03-01

    Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps defined on a subset of the real line, whose invariant probability distribution is the Cauchy distribution with some parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is shown that the derived maps can be characterized accordingly. Also, with a symplectic structure induced from the statistical structure, symplectic and information-geometric aspects of the derived maps are discussed.
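
    A quick empirical check of the statistical picture, using the classical member of this family, the map T(x) = (x − 1/x)/2, which leaves the standard Cauchy distribution invariant (a well-known special case; the generalized family maps Cauchy parameters among themselves):

```python
import numpy as np

rng = np.random.default_rng(1)

def boole_half(x):
    """T(x) = (x - 1/x)/2: the standard Cauchy density is invariant
    under this map, so applying it leaves the statistics unchanged."""
    return 0.5 * (x - 1.0 / x)

y = boole_half(rng.standard_cauchy(1_000_000))

# Compare the empirical CDF of the image with the standard Cauchy CDF.
for t in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"t={t:+.1f}  empirical={np.mean(y < t):.4f}  "
          f"Cauchy={0.5 + np.arctan(t) / np.pi:.4f}")
```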

  12. Features of the use of time-frequency distributions for controlling the mixture-producing aggregate

    NASA Astrophysics Data System (ADS)

    Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.

    2018-05-01

    The paper presents and substantiates information on the filtering properties of the mixing unit as a part of the mixture-producing aggregate. Relevant theoretical data concerning a channel transfer function of the mixing unit and multidimensional material flow signals are adduced. Ordinary one-dimensional material flow signals are defined in terms of time-frequency distributions of Cohen's class operating with Gabor wavelet functions. Two time-frequency signal representations are described in the paper to show how one can solve control problems as applied to mixture-producing systems: the so-called Rihaczek and Wigner-Ville distributions. In particular, the latter exhibits the low-pass filtering properties that are present in practically any low-pass element of a physical system.

  13. Density-functional theory based on the electron distribution on the energy coordinate

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideaki

    2018-03-01

    We developed an electronic density functional theory utilizing a novel electron distribution n(ε) as a basic variable to compute the ground state energy of a system. n(ε) is obtained by projecting the electron density n(r), defined on the space coordinate r, onto the energy coordinate ε specified with the external potential υ_ext(r) of interest. It was demonstrated that the Kohn-Sham equation can also be formulated with the exchange-correlation functional E_xc[n(ε)] that employs the density n(ε) as an argument. It turned out that an exchange functional proposed in our preliminary development suffices to describe properly the potential energies of several types of chemical bonds, with accuracies comparable to those of the corresponding functional based on the local density approximation. As a remarkable feature of the distribution n(ε), it inherently involves the spatially non-local information of the exchange hole at the bond dissociation limit, in contrast to conventional approximate functionals. By taking advantage of this property we also developed a prototype of the static correlation functional E_sc, including no empirical parameters, which showed marked improvements in describing the dissociations of covalent bonds in H2, C2H4 and CH4 molecules.

  14. A concise introduction to Colombeau generalized functions and their applications in classical electrodynamics

    NASA Astrophysics Data System (ADS)

    Gsponer, Andre

    2009-01-01

    The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the self-energy of a point electric charge is worked out in detail: the Coulomb potential and field are defined as Colombeau generalized functions, and integrals of nonlinear expressions corresponding to products of distributions (such as the square of the Coulomb field and the square of the delta function) are calculated. Finally, the methods introduced in Gsponer (2007 Eur. J. Phys. 28 267, 2007 Eur. J. Phys. 28 1021 and 2007 Eur. J. Phys. 28 1241), to deal with point-like singularities in classical electrodynamics are confirmed.

  15. A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis

    2007-01-01

    In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and ground-based actions result in achieving these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events; the other is a trajectory constraint minimization function, by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives, such as separation assurance and flow management. The underlying hypothesis is that preserving the trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self-separation. The paper illustrates the concept through hypothetical scenarios involving conflict resolution and flow management. It presents a functional analysis of the interaction and information flow between the functions. It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and minimize its constraints. In this framework flexibility is defined in terms of robustness and adaptability to disturbances, and the impact of constraints is illustrated through analysis of a trajectory solution space with limited degrees of freedom and in simple constraint situations involving meeting multiple times of arrival and resolving a conflict.

  16. The 120V 20A PWM switch for applications in high power distribution

    NASA Astrophysics Data System (ADS)

    Borelli, V.; Nimal, W.

    1989-08-01

    A 20 A/120 VDC Pulse Width Modulation (PWM) Solid State Power Controller (SSPC), developed under ESA contract to be used in the power distribution system of Columbus, is described. The general characteristics are discussed and the project specification is defined. The benefits of a PWM solution over a more conventional approach for the specific application considered are presented. An introduction to the SSPC characteristics and a functional description are given.

  17. On the distribution of local dissipation scales in turbulent flows

    NASA Astrophysics Data System (ADS)

    May, Ian; Morshed, Khandakar; Venayagamoorthy, Karan; Dasi, Lakshmi

    2014-11-01

    Universality of dissipation scales in turbulence relies on self-similar scaling and large-scale independence. We show that the probability density function of dissipation scales, Q(η), is analytically defined by the two-point correlation function and the Reynolds number (Re). We also present a new analytical form for the two-point correlation function for the dissipation scales through a generalized definition of a directional Taylor microscale. Comparison of Q(η) predicted within this framework with published DNS data shows excellent agreement. It is shown that for finite Re no single similarity law exists, even for the case of homogeneous isotropic turbulence. Instead a family of scalings is presented, defined by Re and a dimensionless local inhomogeneity parameter based on the spatial gradient of the rms velocity. For moderate-Re inhomogeneous flows, we note a strong directional dependence of Q(η) dictated by the principal Reynolds stresses. It is shown that the mode of the distribution Q(η) shifts significantly to sub-Kolmogorov scales along the inhomogeneous directions, as in wall-bounded turbulence. This work extends the classical Kolmogorov theory to finite-Re homogeneous isotropic turbulence as well as to the case of inhomogeneous anisotropic turbulence.

  18. Estimate of uncertainties in polarized parton distributions

    NASA Astrophysics Data System (ADS)

    Miyama, M.; Goto, Y.; Hirai, M.; Kobayashi, H.; Kumano, S.; Morii, T.; Saito, N.; Shibata, T.-A.; Yamanishi, T.

    2001-10-01

    From \\chi^2 analysis of polarized deep inelastic scattering data, we determined polarized parton distribution functions (Y. Goto et al. (AAC), Phys. Rev. D 62, 34017 (2000).). In order to clarify the reliability of the obtained distributions, we should estimate uncertainties of the distributions. In this talk, we discuss the pol-PDF uncertainties by using a Hessian method. A Hessian matrix H_ij is given by second derivatives of the \\chi^2, and the error matrix \\varepsilon_ij is defined as the inverse matrix of H_ij. Using the error matrix, we calculate the error of a function F by (δ F)^2 = sum_i,j fracpartial Fpartial ai \\varepsilon_ij fracpartial Fpartial aj , where a_i,j are the parameters in the \\chi^2 analysis. Using this method, we show the uncertainties of the pol-PDF, structure functions g_1, and spin asymmetries A_1. Furthermore, we show a role of future experiments such as the RHIC-Spin. An important purpose of planned experiments in the near future is to determine the polarized gluon distribution function Δ g (x) in detail. We reanalyze the pol-PDF uncertainties including the gluon fake data which are expected to be given by the upcoming experiments. From this analysis, we discuss how much the uncertainties of Δ g (x) can be improved by such measurements.

  19. Coupling fine-scale root and canopy structure using ground-based remote sensing

    Treesearch

    Brady Hardiman; Christopher Gough; John Butnor; Gil Bohrer; Matteo Detto; Peter Curtis

    2017-01-01

    Ecosystem physical structure, defined by the quantity and spatial distribution of biomass, influences a range of ecosystem functions. Remote sensing tools permit the non-destructive characterization of canopy and root features, potentially providing opportunities to link above- and belowground structure at fine spatial resolution in...

  20. Sampling functions for geophysics

    NASA Technical Reports Server (NTRS)

    Giacaglia, G. E. O.; Lunquist, C. A.

    1972-01-01

    A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1) squared sampling points on a sphere is given, for which the (N + 1) squared spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
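
    A minimal numerical sketch of the spanning claim, with randomly scattered points standing in for the paper's orderly distribution (an assumption) and scipy's sph_harm supplying the spherical harmonics: if the (N + 1)² × (N + 1)² matrix of harmonics evaluated at the points is nonsingular, the corresponding sampling functions span the same linear manifold.

```python
import numpy as np
from scipy.special import sph_harm

N = 6                                   # maximum degree
M = (N + 1) ** 2                        # basis functions = sampling points

rng = np.random.default_rng(2)
phi = np.arccos(rng.uniform(-1, 1, M))  # polar angles, area-uniform
theta = rng.uniform(0, 2 * np.pi, M)    # azimuths

# A[k, j] = Y_{n_j, m_j} at point k, for all (n, m) with n <= N.
A = np.empty((M, M), dtype=complex)
col = 0
for n in range(N + 1):
    for m in range(-n, n + 1):
        A[:, col] = sph_harm(m, n, theta, phi)
        col += 1

# A nonsingular A means the point evaluations and the harmonics are related
# by an invertible linear transform, so both span the same M-dim manifold.
print("rank:", np.linalg.matrix_rank(A), "of", M)
print("condition number: %.2e" % np.linalg.cond(A))
```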

  1. Thermal equilibrium and statistical thermometers in special relativity.

    PubMed

    Cubero, David; Casado-Pascual, Jesús; Dunkel, Jörn; Talkner, Peter; Hänggi, Peter

    2007-10-26

    There is an intense debate in the recent literature about the correct generalization of Maxwell's velocity distribution in special relativity. The most frequently discussed candidate distributions include the Jüttner function as well as modifications thereof. Here we report results from fully relativistic one-dimensional molecular dynamics simulations that resolve the ambiguity. The numerical evidence unequivocally favors the Jüttner distribution. Moreover, our simulations illustrate that the concept of "thermal equilibrium" extends naturally to special relativity only if a many-particle system is spatially confined. They make evident that "temperature" can be statistically defined and measured in an observer frame independent way.
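
    For orientation, a small sketch comparing the one-dimensional Jüttner density, f(p) ∝ exp(−γ(p)mc²/k_BT) with γ(p) = √(1 + (p/mc)²), against its nonrelativistic Maxwell limit (units with m = c = k_B = 1 assumed):

```python
import numpy as np

theta = 0.5                        # k_B T / (m c^2): moderately relativistic
p = np.linspace(-8.0, 8.0, 4001)
dp = p[1] - p[0]
gamma = np.sqrt(1.0 + p**2)        # Lorentz factor, m = c = 1

juttner = np.exp(-gamma / theta)
juttner /= juttner.sum() * dp      # grid normalization

maxwell = np.exp(-p**2 / (2 * theta))
maxwell /= maxwell.sum() * dp      # nonrelativistic limit

# The densities agree for |p| << 1 and separate in the tails -- the regime
# that the relativistic molecular dynamics simulations discriminate.
for pi in (0.0, 1.0, 3.0):
    i = int(np.argmin(np.abs(p - pi)))
    print(f"p={pi}:  Juttner={juttner[i]:.4e}  Maxwell={maxwell[i]:.4e}")
```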

  2. Structure of LiPs ground and excited states

    NASA Astrophysics Data System (ADS)

    Bressanini, Dario

    2018-01-01

    The lithium atom in its ground state can bind positronium (Ps), forming LiPs, an electronically stable system. In this study we use the fixed-node diffusion Monte Carlo method to perform a detailed investigation of the internal structure of LiPs, establishing to what extent it could be described by smaller interacting subsystems. To study the internal structure of positronic systems we propose a way to analyze the particle distribution functions: we first order the particle-nucleus distances, from the closest to the farthest. We then bin the ordered distances, obtaining, for LiPs, five distribution functions that we call sorted distribution functions. We used them to show that Ps is a quite well-defined entity inside LiPs: the positron is forming positronium not only when it is far away from the nucleus, but also when it is in the same region of space occupied by the 2s electrons. Hence, it is not correct to describe LiPs as positronium "orbiting" around a lithium atom, as sometimes has been done, since the positron penetrates the electronic distribution and can be found close to the nucleus.
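
    The sorted-distribution construction is simple to state in code; a toy sketch with Gaussian clouds standing in for the actual fixed-node Monte Carlo walkers (illustration only, no physics):

```python
import numpy as np

rng = np.random.default_rng(3)

def sorted_distribution_functions(samples, bins=60, r_max=4.0):
    """samples: (n_config, n_particles, 3) positions, nucleus at origin.
    Returns one normalized histogram per sorted rank: rank 0 is the
    closest particle, rank 1 the next closest, and so on."""
    r = np.linalg.norm(samples, axis=-1)    # particle-nucleus distances
    r.sort(axis=1)                          # order closest to farthest
    edges = np.linspace(0.0, r_max, bins + 1)
    hists = [np.histogram(r[:, k], bins=edges, density=True)[0]
             for k in range(samples.shape[1])]
    return hists, edges

# Toy stand-in for the Monte Carlo walkers: five particles drawn from
# isotropic Gaussian clouds of increasing width (no physics intended).
widths = np.array([0.4, 0.7, 1.0, 1.4, 2.0])
samples = rng.normal(size=(20000, 5, 3)) * widths[None, :, None]

hists, edges = sorted_distribution_functions(samples)
centers = 0.5 * (edges[:-1] + edges[1:])
for k, h in enumerate(hists):
    print(f"rank {k}: most probable distance ~ {centers[np.argmax(h)]:.2f}")
```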

  3. The Application of Nonstandard Analysis to the Study of Inviscid Shock Wave Jump Conditions

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Baty, R. S.

    1998-01-01

    The use of conservation laws in nonconservative form for deriving shock jump conditions by Schwartz distribution theory leads to ambiguous products of generalized functions. Nonstandard analysis is used to define a class of Heaviside functions where the jump from zero to one occurs on an infinitesimal interval. These Heaviside functions differ by their microstructure near x = 0, i.e., by the nature of the rise within the infinitesimal interval it is shown that the conservation laws in nonconservative form can relate the different Heaviside functions used to define jumps in different flow parameters. There are no mathematical or logical ambiguities in the derivation of the jump conditions. An important result is that the microstructure of the Heaviside function of the jump in entropy has a positive peak greater than one within the infinitesimal interval where the jump occurs. This phenomena is known from more sophisticated studies of the structure of shock waves using viscous fluid assumption. However, the present analysis is simpler and more direct.

  4. nth-Nearest-neighbor distribution functions of an interacting fluid from the pair correlation function: a hierarchical approach.

    PubMed

    Bhattacharjee, Biplab

    2003-04-01

    The paper presents a general formalism for the nth-nearest-neighbor distribution (NND) of identical interacting particles in a fluid confined in a ν-dimensional space. The nth-NND functions, W(n,r) (for n=1,2,3,…), in a fluid are obtained hierarchically in terms of the pair correlation function and W(n-1,r) alone. The radial distribution function (RDF) profiles obtained from the molecular dynamics (MD) simulation of a Lennard-Jones (LJ) fluid are used to illustrate the results. It is demonstrated that the collective structural information contained in the maxima and minima of the RDF profiles, being resolved in terms of individual NND functions, may provide more insights about the microscopic neighborhood structure around a reference particle in a fluid. Representative comparison between the results obtained from the formalism and the MD simulation data shows good agreement. Apart from quantities such as the nth-NND functions and nth-nearest-neighbor distances, the average neighbor population number is defined. These quantities are evaluated for the LJ model system and interesting density dependence of the microscopic neighborhood shell structures is discussed in terms of them. The relevance of the NND functions in various phenomena is also pointed out.

  5. nth-nearest-neighbor distribution functions of an interacting fluid from the pair correlation function: A hierarchical approach

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Biplab

    2003-04-01

    The paper presents a general formalism for the nth-nearest-neighbor distribution (NND) of identical interacting particles in a fluid confined in a ν-dimensional space. The nth-NND functions, W(n,r¯) (for n=1,2,3,…), in a fluid are obtained hierarchically in terms of the pair correlation function and W(n-1,r¯) alone. The radial distribution function (RDF) profiles obtained from the molecular dynamics (MD) simulation of a Lennard-Jones (LJ) fluid are used to illustrate the results. It is demonstrated that the collective structural information contained in the maxima and minima of the RDF profiles, being resolved in terms of individual NND functions, may provide more insights about the microscopic neighborhood structure around a reference particle in a fluid. Representative comparison between the results obtained from the formalism and the MD simulation data shows good agreement. Apart from quantities such as the nth-NND functions and nth-nearest-neighbor distances, the average neighbor population number is defined. These quantities are evaluated for the LJ model system and interesting density dependence of the microscopic neighborhood shell structures is discussed in terms of them. The relevance of the NND functions in various phenomena is also pointed out.

  6. Formation and distribution of fragments in the spontaneous fission of 240Pu

    DOE PAGES

    Sadhukhan, Jhilam; Zhang, Chunli; Nazarewicz, Witold; ...

    2017-12-18

    We use the stochastic Langevin framework to simulate the nuclear evolution after the system tunnels through the multidimensional potential barrier. For a representative sample of different initial configurations along the outer turning-point line, we define effective fission paths by computing a large number of Langevin trajectories. We extract the relative contribution of each such path to the fragment distribution. We then use nucleon localization functions along effective fission pathways to analyze the characteristics of prefragments at prescission configurations.

  7. Pair distribution function study and mechanical behavior of as-cast and structurally relaxed Zr-based bulk metallic glasses

    NASA Astrophysics Data System (ADS)

    Fan, Cang; Liaw, P. K.; Wilson, T. W.; Choo, H.; Gao, Y. F.; Liu, C. T.; Proffen, Th.; Richardson, J. W.

    2006-12-01

    Contrary to reported results on structural relaxation inducing brittleness in amorphous alloys, the authors found that structural relaxation actually caused an increase in the strength of Zr55Cu35Al10 bulk metallic glass (BMG) without changing the plasticity. Three-dimensional models were rebuilt for the as-cast and structurally relaxed BMGs by reverse Monte Carlo (RMC) simulations based on the pair distribution function (PDF) measured by neutron scattering. Only a small portion of the atom pairs was found to change to more dense packing. The concept of free volume was defined based on the PDF and RMC studies, and the mechanism of the mechanical behavior was discussed.

  8. Theoretical study of the dependence of single impurity Anderson model on various parameters within distributional exact diagonalization method

    NASA Astrophysics Data System (ADS)

    Syaina, L. P.; Majidi, M. A.

    2018-04-01

    The single impurity Anderson model describes a system consisting of non-interacting conduction electrons coupled with a localized orbital having strongly interacting electrons at a particular site. This model has been proven successful in explaining the phenomenon of metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little has been explored theoretically on how the model properties gradually evolve as functions of hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose to do a theoretical study on those aspects of a single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of some conduction-electron energy levels with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled electron energy distribution using the Lehmann representation. Later, we extract the corresponding self-energy of each distribution, then average over all the distributions and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of those controllable parameters, and discuss our results in connection with the criteria of the occurrence of metal-insulator transition in this system.

  9. Prospects for AGN Science using the ART-XC on the SRG Mission

    NASA Technical Reports Server (NTRS)

    Swartz, Douglas A.; Elsner, Ronald F.; Gubarev, Mikhail V.; O'Dell, Stephen L.; Ramsey, Brian D.; Bonamente, Massimiliano

    2012-01-01

    The enhanced hard X-ray sensitivity provided by the Astronomical Roentgen Telescope to the Spectrum Roentgen Gamma mission facilitates the detection of heavily obscured and other hard-spectrum cosmic X-ray sources. The SRG all-sky survey will obtain large, statistically-well-defined samples of active galactic nuclei (AGN) including a significant population of local heavily-obscured AGN. In anticipation of the SRG all-sky survey, we investigate the prospects for refining the bright end of the AGN luminosity function and determination of the local black hole mass function and comparing the spatial distribution of AGN with large-scale structure defined by galaxy clusters and groups. Particular emphasis is placed on studies of the deep survey Ecliptic Pole regions.

  10. Second-order Boltzmann equation: gauge dependence and gauge invariance

    NASA Astrophysics Data System (ADS)

    Naruko, Atsushi; Pitrou, Cyril; Koyama, Kazuya; Sasaki, Misao

    2013-08-01

    In the context of cosmological perturbation theory, we derive the second-order Boltzmann equation describing the evolution of the distribution function of radiation without a specific gauge choice. The essential steps in deriving the Boltzmann equation are revisited and extended given this more general framework: (i) the polarization of light is incorporated in this formalism by using a tensor-valued distribution function; (ii) the importance of a choice of the tetrad field to define the local inertial frame in the description of the distribution function is emphasized; (iii) we perform a separation between temperature and spectral distortion, both for the intensity and polarization for the first time; (iv) the gauge dependence of all perturbed quantities that enter the Boltzmann equation is derived, and this enables us to check the correctness of the perturbed Boltzmann equation by explicitly showing its gauge-invariance for both intensity and polarization. We finally discuss several implications of the gauge dependence for the observed temperature.

  11. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
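
    This is not the paper's Brownian-bridge statistic, but a minimal sketch of the modeling idea: a Poisson process whose intensity drops to a lower constant (quiescence), scanned with a two-rate likelihood ratio against the stationary alternative.

```python
import numpy as np

rng = np.random.default_rng(4)

def llr_scan(times, T):
    """Maximized log-likelihood ratio: one constant rate on [0, T] versus
    a piecewise constant intensity that changes at some event time s."""
    n = len(times)
    best = 0.0
    for s in times[1:-1]:
        n1 = int(np.searchsorted(times, s))      # events before s
        n2 = n - n1
        l1, l2, l0 = n1 / s, n2 / (T - s), n / T
        ll_alt = (n1 * np.log(l1) - l1 * s) + (n2 * np.log(l2) - l2 * (T - s))
        ll_null = n * np.log(l0) - l0 * T
        best = max(best, ll_alt - ll_null)
    return best

# Synthetic catalog: 2.0 events/day for 100 days, then quiescence at 0.5.
t1 = np.cumsum(rng.exponential(1 / 2.0, 500)); t1 = t1[t1 < 100]
t2 = 100 + np.cumsum(rng.exponential(1 / 0.5, 100)); t2 = t2[t2 < 150]
times = np.concatenate([t1, t2])
print("LLR statistic:", round(llr_scan(times, 150.0), 2))
```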

  12. Dynamic data driven bidirectional reflectance distribution function measurement system

    NASA Astrophysics Data System (ADS)

    Nauyoks, Stephen E.; Freda, Sam; Marciniak, Michael A.

    2014-09-01

    The bidirectional reflectance distribution function (BRDF) is a fitted distribution function that defines the scatter of light off a surface. The BRDF depends on the directions of both the incident and scattered light. Because of the vastness of the measurement space of all possible incident and reflected directions, the calculation of BRDF is usually performed using a minimal amount of measured data. This may lead to poor fits and uncertainty in certain regions of incidence or reflection. A dynamic data driven application system (DDDAS) is a concept that uses an algorithm on collected data to influence the collection space of future data acquisition. The authors propose a DDD-BRDF algorithm that fits BRDF data as it is being acquired and uses on-the-fly fittings of various BRDF models to adjust the potential measurement space. In doing so, the aim is to find the best model to fit a surface and the best global fit of the BRDF with a minimum amount of collection space.

  13. TMD parton distributions based on three-body decay functions in NLL order of QCD

    NASA Astrophysics Data System (ADS)

    Tanaka, Hidekazu

    2015-04-01

    Three-body decay functions in space-like parton branches are implemented to evaluate transverse-momentum-dependent (TMD) parton distribution functions in the next-to-leading logarithmic (NLL) order of quantum chromodynamics (QCD). Interference contributions due to the next-to-leading-order terms are taken into account for the evaluation of the transverse momenta in initial state parton radiations. Some properties of the decay functions are also examined. As an example, the calculated results are compared with those evaluated by an algorithm proposed in [M. A. Kimber, A. D. Martin, and M. G. Ryskin, Eur. Phys. J. C 12, 655 (2000)], [M. A. Kimber, A. D. Martin, and M. G. Ryskin, Phys. Rev. D 63, 11402 (2001)], [G. Watt, A. D. Martin, and M. G. Ryskin, Eur. Phys. J. C 31, 73 (2003)], and [A. D. Martin, M. G. Ryskin, and G. Watt, Eur. Phys. J. C 66, 167 (2010)], in which the TMD parton distributions are defined based on the k_t-factorization method with angular ordering conditions due to interference effects.

  14. Solvation of Na^+ in water from first-principles molecular dynamics

    NASA Astrophysics Data System (ADS)

    White, J. A.; Schwegler, E.; Galli, G.; Gygi, F.

    2000-03-01

    We have carried out ab initio molecular dynamics (MD) simulations of the Na^+ ion in water with an MD cell containing a single alkali ion and 53 water molecules. The electron-electron and electron-ion interactions were modeled by density functional theory with a generalized gradient approximation for the exchange-correlation functional. The computed radial distribution functions, coordination numbers, and angular distributions are consistent with available experimental data. The first solvation shell contains 5.2±0.6 water molecules, with some waters occasionally exchanging with those of the second shell. The computed Na^+ hydration number is larger than that from calculations for water clusters surrounding an Na^+ ion, but is consistent with that derived from x-ray measurements. Our results also indicate that the first hydration shell is better defined for Na^+ than for K^+ [1], as indicated by the first minimum in the Na-O pair distribution function. [1] L.M. Ramaniah, M. Bernasconi, and M. Parrinello, J. Chem. Phys. 111, 1587 (1999). This work was performed for DOE under contract W-7405-ENG-48.
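
    For context, a minimal sketch of how a radial distribution function and running coordination number are computed from coordinates; the snapshot below is random (an assumption purely for illustration), whereas the study averages over the ab initio MD trajectory.

```python
import numpy as np

def rdf_and_coordination(ion, oxygens, box, bins=100, r_max=6.0):
    """g(r) between one ion and oxygen sites under periodic boundaries,
    plus the running coordination number n(r); the first minimum of g(r)
    delimits the first solvation shell."""
    d = oxygens - ion
    d -= box * np.round(d / box)             # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    edges = np.linspace(0.0, r_max, bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    rho = len(oxygens) / box**3              # number density
    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = counts / (rho * shells)
    n = np.cumsum(counts)                    # running coordination number
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, g, n

# Toy snapshot: 53 "oxygens" placed at random in an 11.7 A box; a real
# calculation would accumulate this histogram over every MD frame.
rng = np.random.default_rng(5)
box = 11.7
centers, g, n = rdf_and_coordination(np.full(3, box / 2),
                                     rng.uniform(0, box, (53, 3)), box)
print("n(r) at 3.2 A (a typical Na-O first minimum):",
      int(n[np.searchsorted(centers, 3.2)]))
```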

  15. 26 CFR 1.401(a)(9)-6 - Required minimum distributions for defined benefit plans and annuity contracts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Required minimum distributions for defined...-Sharing, Stock Bonus Plans, Etc. § 1.401(a)(9)-6 Required minimum distributions for defined benefit plans and annuity contracts. Q-1. How must distributions under a defined benefit plan be paid in order to...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.

  17. Space Station Environmental Control/Life Support System engineering

    NASA Technical Reports Server (NTRS)

    Miller, C. W.; Heppner, D. B.

    1985-01-01

    The present paper is concerned with a systems engineering study which has provided an understanding of the overall Space Station ECLSS (Environmental Control and Life Support System). ECLSS/functional partitioning is considered along with function criticality, technology alternatives, a technology description, single thread systems, Space Station architectures, ECLSS distribution, mechanical schematics per space station, and Space Station ECLSS characteristics. Attention is given to trade studies and system synergism. The Space Station functional description had been defined by NASA. The ECLSS will utilize technologies which embody regenerative concepts to minimize the use of expendables.

  18. Pacific Yew: A Facultative Riparian Conifer with an Uncertain Future

    Treesearch

    Stanley Scher; Bert Schwarzschild

    1989-01-01

    Increasing demands for Pacific yew bark, a source of an anticancer agent, have generated interest in defining the yew resource and in exploring strategies to conserve this species. The distribution, riparian requirements and ecosystem functions of yew populations in coastal and inland forests of northern California are outlined and alternative approaches to conserving...

  19. Airframe materials for HSR

    NASA Technical Reports Server (NTRS)

    Bales, Thomas T.

    1992-01-01

    Vugraphs are presented to show the use of refractory materials for the skin of the High speed Civil Transport (HSCT). Examples are given of skin temperature ranges, failure mode weight distribution, tensile properties as a function of temperature, and components to be constructed from composite materials. The responsibilities of various aircraft companies for specific aircraft components are defined.

  20. A Distributed System for Learning Programming On-Line

    ERIC Educational Resources Information Center

    Verdu, Elena; Regueras, Luisa M.; Verdu, Maria J.; Leal, Jose P.; de Castro, Juan P.; Queiros, Ricardo

    2012-01-01

    Several Web-based on-line judges or on-line programming trainers have been developed in order to allow students to train their programming skills. However, their pedagogical functionalities in the learning of programming have not been clearly defined. EduJudge is a project which aims to integrate the "UVA On-line Judge", an existing…

  1. Annular feed air breathing fuel cell stack

    DOEpatents

    Wilson, Mahlon S.; Neutzler, Jay K.

    1997-01-01

    A stack of polymer electrolyte fuel cells is formed from a plurality of unit cells where each unit cell includes fuel cell components defining a periphery and distributed along a common axis, where the fuel cell components include a polymer electrolyte membrane, an anode and a cathode contacting opposite sides of the membrane, and fuel and oxygen flow fields contacting the anode and the cathode, respectively, wherein the components define an annular region therethrough along the axis. A fuel distribution manifold within the annular region is connected to deliver fuel to the fuel flow field in each of the unit cells. The fuel distribution manifold is formed from a hydrophilic-like material to redistribute water produced by fuel and oxygen reacting at the cathode. In a particular embodiment, a single bolt through the annular region clamps the unit cells together. In another embodiment, separator plates between individual unit cells have an extended radial dimension to function as cooling fins for maintaining the operating temperature of the fuel cell stack.

  2. Complex Interdependence Regulates Heterotypic Transcription Factor Distribution and Coordinates Cardiogenesis.

    PubMed

    Luna-Zurita, Luis; Stirnimann, Christian U; Glatt, Sebastian; Kaynak, Bogac L; Thomas, Sean; Baudin, Florence; Samee, Md Abul Hassan; He, Daniel; Small, Eric M; Mileikovsky, Maria; Nagy, Andras; Holloway, Alisha K; Pollard, Katherine S; Müller, Christoph W; Bruneau, Benoit G

    2016-02-25

    Transcription factors (TFs) are thought to function with partners to achieve specificity and precise quantitative outputs. In the developing heart, heterotypic TF interactions, such as between the T-box TF TBX5 and the homeodomain TF NKX2-5, have been proposed as a mechanism for human congenital heart defects. We report extensive and complex interdependent genomic occupancy of TBX5, NKX2-5, and the zinc finger TF GATA4 coordinately controlling cardiac gene expression, differentiation, and morphogenesis. Interdependent binding serves not only to co-regulate gene expression but also to prevent TFs from distributing to ectopic loci and activate lineage-inappropriate genes. We define preferential motif arrangements for TBX5 and NKX2-5 cooperative binding sites, supported at the atomic level by their co-crystal structure bound to DNA, revealing a direct interaction between the two factors and induced DNA bending. Complex interdependent binding mechanisms reveal tightly regulated TF genomic distribution and define a combinatorial logic for heterotypic TF regulation of differentiation. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Properties of field functionals and characterization of local functionals

    NASA Astrophysics Data System (ADS)

    Brouder, Christian; Dang, Nguyen Viet; Laurent-Gengoux, Camille; Rejzner, Kasia

    2018-02-01

    Functionals (i.e., functions of functions) are widely used in quantum field theory and solid-state physics. In this paper, functionals are given a rigorous mathematical framework and their main properties are described. The choice of the proper space of test functions (smooth functions) and of the relevant concept of differential (Bastiani differential) are discussed. The relation between the multiple derivatives of a functional and the corresponding distributions is described in detail. It is proved that, in a neighborhood of every test function, the support of a smooth functional is uniformly compactly supported and the order of the corresponding distribution is uniformly bounded. Relying on a recent work by Dabrowski, several spaces of functionals are furnished with a complete and nuclear topology. In view of physical applications, it is shown that most formal manipulations can be given a rigorous meaning. A new concept of local functionals is proposed and two characterizations of them are given: the first one uses the additivity (or Hammerstein) property, the second one is a variant of Peetre's theorem. Finally, the first step of a cohomological approach to quantum field theory is carried out by proving a global Poincaré lemma and defining multi-vector fields and graded functionals within our framework.

  4. Finding Mount Everest and handling voids.

    PubMed

    Storch, Tobias

    2011-01-01

    Evolutionary algorithms (EAs) are randomized search heuristics that solve problems successfully in many cases. Their behavior is often described in terms of strategies to find a high location on Earth's surface. Unfortunately, many digital elevation models describing it contain void elements. These are elements not assigned an elevation. Therefore, we design and analyze simple EAs with different strategies to handle such partially defined functions. They are experimentally investigated on a dataset describing the elevation of Earth's surface. The largest value found by an EA within a certain runtime is measured, and the median over a few runs is computed and compared for the different EAs. For the dataset, the distribution of void elements seems to be neither random nor adversarial. They are so-called semirandomly distributed. To deepen our understanding of the behavior of the different EAs, they are theoretically considered on well-known pseudo-Boolean functions transferred to partially defined ones. These modifications are also performed in a semirandom way. The typical runtime until an optimum is found by an EA is analyzed, namely bounded from above and below, and compared for the different EAs. We figure out that for the random model it is a good strategy to assume that a void element has a worse function value than all previous elements. Whereas for the adversary model it is a good strategy to assume that a void element has the best function value of all previous elements.
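
    A minimal sketch of the two void-handling strategies on a partially defined OneMax, standing in for the elevation data; the 'semirandom' void set here is a deterministic hash, an assumption for illustration only.

```python
import zlib
import numpy as np

rng = np.random.default_rng(6)
N = 60

def onemax_partial(x):
    """OneMax, but undefined ('void') on ~30% of all points; voidness is
    a fixed, deterministic property of each point, like missing entries
    in an elevation model."""
    if x.all():
        return N                    # keep the optimum defined in the sketch
    if zlib.crc32(x.tobytes()) % 10 < 3:
        return None
    return int(x.sum())

def one_plus_one_ea(void_value, max_iters=200_000):
    """(1+1) EA with standard bit mutation. A void offspring is assigned
    void_value(parent fitness): worst-case strategy -> -1, best-case
    strategy -> the parent's own fitness."""
    x = (rng.random(N) < 0.5).astype(np.uint8)
    f = onemax_partial(x)
    f = -1 if f is None else f
    for it in range(max_iters):
        y = x.copy()
        flip = rng.random(N) < 1.0 / N
        y[flip] ^= 1
        fy = onemax_partial(y)
        fy = void_value(f) if fy is None else fy
        if fy >= f:
            x, f = y, fy
        if f >= N:
            return it               # iterations until the optimum
    return max_iters

print("void = worst value:", one_plus_one_ea(lambda f: -1))
print("void = best value :", one_plus_one_ea(lambda f: f))
```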

  5. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the work load on the available resources.

  6. [What do functionally defined populations contribute to the explanation of regional differences in medical care?].

    PubMed

    Graf von Stillfried, D; Czihal, T

    2014-02-01

    Geographic variation in health care is increasingly subject to analysis and health policy aiming at the suitable allocation of resources and the reduction of unwarranted variation for the patient populations concerned. As in the case of area-level indicators, in most cases populations are geographically defined. The concept of geographically defined populations, however, may be self-limiting with respect to identifying the potential for improvement. As an alternative, we explored how a functional definition of populations would support defining the scope for reducing unwarranted geographical variations. Given that patients in Germany have virtually no limits in accessing physicians of their choice, we adapted a method that has been developed in the United States to create virtual networks of physicians based on commonly treated patients. Using the physician claims data under statutory insurance, which covers 90% of the population, we defined 43,006 populations (and networks) in 2010. We found that there is considerable variation between the populations in terms of their risk structure and the share of the primary care practice in the total services provided. Moreover, there are marked differences in the size and structure of networks between cities, densely populated regions, and rural regions. We analyzed the variation for two area-level indicators: the proportion of diabetics with at least one HbA1c test per year, and the proportion of patients with low back pain undergoing computed tomography and/or magnetic resonance imaging. Variation at the level of functionally defined populations proved to be larger than for geographically defined populations. The pattern of distribution gives evidence on the degree to which consensus targets could be reached and which networks need to be addressed in order to reduce unwarranted regional variation. The concept of functionally defined populations needs to be further developed before implementation.

  7. White Matter Connectivity of the Thalamus Delineates the Functional Architecture of Competing Thalamocortical Systems

    PubMed Central

    O'Muircheartaigh, Jonathan; Keller, Simon S.; Barker, Gareth J.; Richardson, Mark P.

    2015-01-01

    There is an increasing awareness of the involvement of thalamic connectivity on higher level cortical functioning in the human brain. This is reflected by the influence of thalamic stimulation on cortical activity and behavior as well as apparently cortical lesion syndromes occurring as a function of small thalamic insults. Here, we attempt to noninvasively test the correspondence of structural and functional connectivity of the human thalamus using diffusion-weighted and resting-state functional MRI. Using a large sample of 102 adults, we apply tensor independent component analysis to diffusion MRI tractography data to blindly parcellate bilateral thalamus according to diffusion tractography-defined structural connectivity. Using resting-state functional MRI collected in the same subjects, we show that the resulting structurally defined thalamic regions map to spatially distinct, and anatomically predictable, whole-brain functional networks in the same subjects. Although there was significant variability in the functional connectivity patterns, the resulting 51 structural and functional patterns could broadly be reduced to a subset of 7 similar core network types. These networks were distinct from typical cortical resting-state networks. Importantly, these networks were distributed across the brain and, in a subset, map extremely well to known thalamocortico-basal-ganglial loops. PMID:25899706

  8. A Collaborative Knowledge Plane for Autonomic Networks

    NASA Astrophysics Data System (ADS)

    Mbaye, Maïssa; Krief, Francine

    Autonomic networking aims to give network components self-managing capabilities. Several autonomic architectures have been proposed. Each of these architectures includes a sort of knowledge plane, which is very important to mimic an autonomic behavior. The knowledge plane has a central role for self-functions by providing suitable knowledge to equipment, and needs to learn new strategies for more accuracy. However, defining the knowledge plane's architecture is still a challenge for researchers, especially defining the way cognitive supports interact with each other in the knowledge plane and implementing them. The decision-making process depends on these interactions between the reasoning and learning parts of the knowledge plane. In this paper we propose a knowledge plane architecture based on the machine learning (inductive logic programming) paradigm and a situated view to deal with distributed environments. This architecture is focused on two self-functions that include all other self-functions: self-adaptation and self-organization. Case studies are given and implemented.

  9. Photon path distribution and optical responses of turbid media: theoretical analysis based on the microscopic Beer-Lambert law.

    PubMed

    Tsuchiya, Y

    2001-08-01

    A concise theoretical treatment has been developed to describe the optical responses of a highly scattering inhomogeneous medium using functions of the photon path distribution (PPD). The treatment is based on the microscopic Beer-Lambert law and has been found to yield a complete set of optical responses for time- and frequency-domain measurements. The PPD is defined for possible photons having a total zigzag pathlength of l between the points of light input and detection. Such a distribution is independent of the absorption properties of the medium and can be uniquely determined for the medium under quantification. Therefore, the PPD can be calculated with an imaginary reference medium having the same optical properties as the medium under quantification except for the absence of absorption. One of the advantages of this method is that the optical responses, the total attenuation, the mean pathlength, etc. are expressed as functions of the PPD and the absorption distribution.
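
    The central relations lend themselves to a short sketch, assuming a hypothetical gamma-shaped PPD p(l): the detected response is R(μ_a) = ∫ p(l) exp(−μ_a l) dl, and the mean pathlength follows as −d ln R/dμ_a.

```python
import numpy as np

# Hypothetical photon path distribution p(l) of the absorption-free
# reference medium (the gamma-like shape is an assumption for display).
l = np.linspace(0.01, 300.0, 30000)        # total zigzag pathlength, mm
dl = l[1] - l[0]
p = l**2 * np.exp(-l / 20.0)
p /= p.sum() * dl                          # normalize the PPD

def response(mu_a):
    """Microscopic Beer-Lambert: each path of length l contributes
    exp(-mu_a * l); the detected response integrates over the PPD."""
    return float(np.sum(p * np.exp(-mu_a * l)) * dl)

mu, h = 0.01, 1e-6                         # absorption coefficient, 1/mm
attenuation = -np.log(response(mu))
mean_path = -(np.log(response(mu + h)) - np.log(response(mu - h))) / (2 * h)
print(f"total attenuation  A = {attenuation:.3f}")
print(f"mean pathlength  <L> = {mean_path:.1f} mm")
```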

  10. Anti-microbial Functions of Group 3 Innate Lymphoid Cells in Gut-Associated Lymphoid Tissues Are Regulated by G-Protein-Coupled Receptor 183.

    PubMed

    Chu, Coco; Moriyama, Saya; Li, Zhi; Zhou, Lei; Flamar, Anne-Laure; Klose, Christoph S N; Moeller, Jesper B; Putzel, Gregory G; Withers, David R; Sonnenberg, Gregory F; Artis, David

    2018-06-26

    The intestinal tract is constantly exposed to various stimuli. Group 3 innate lymphoid cells (ILC3s) reside in lymphoid organs and in the intestinal tract and are required for immunity to enteric bacterial infection. However, the mechanisms that regulate the ILC3s in vivo remain incompletely defined. Here, we show that GPR183, a chemotactic receptor expressed on murine and human ILC3s, regulates ILC3 migration toward its ligand 7α,25-dihydroxycholesterol (7α,25-OHC) in vitro, and GPR183 deficiency in vivo leads to a disorganized distribution of ILC3s in mesenteric lymph nodes and decreased ILC3 accumulation in the intestine. GPR183 functions intrinsically in ILC3s, and GPR183-deficient mice are more susceptible to enteric bacterial infection. Together, these results reveal a role for the GPR183-7α,25-OHC pathway in regulating the accumulation, distribution, and anti-microbial and tissue-protective functions of ILC3s and define a critical role for this pathway in promoting innate immunity to enteric bacterial infection. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Human intelligence and brain networks

    PubMed Central

    Colom, Roberto; Karama, Sherif; Jung, Rex E.; Haier, Richard J.

    2010-01-01

    Intelligence can be defined as a general mental ability for reasoning, problem solving, and learning. Because of its general nature, intelligence integrates cognitive functions such as perception, attention, memory, language, or planning. On the basis of this definition, intelligence can be reliably measured by standardized tests with obtained scores predicting several broad social outcomes such as educational achievement, job performance, health, and longevity. A detailed understanding of the brain mechanisms underlying this general mental ability could provide significant individual and societal benefits. Structural and functional neuroimaging studies have generally supported a frontoparietal network relevant for intelligence. This same network has also been found to underlie cognitive functions related to perception, short-term memory storage, and language. The distributed nature of this network and its involvement in a wide range of cognitive functions fits well with the integrative nature of intelligence. A new key phase of research is beginning to investigate how functional networks relate to structural networks, with emphasis on how distributed brain areas communicate with each other. PMID:21319494

  12. Crack propagation in functionally graded strip under thermal shock

    NASA Astrophysics Data System (ADS)

    Ivanov, I. V.; Sadowski, T.; Pietras, D.

    2013-09-01

    The thermal shock problem in a strip made of a functionally graded composite with an interpenetrating network micro-structure of Al2O3 and Al is analysed numerically. The material considered here could be used in brake disks or cylinder liners; in both applications it is subjected to thermal shock. The description of the position-dependent properties of the considered functionally graded material is based on experimental data. Continuous functions were constructed for the Young's modulus, thermal expansion coefficient, thermal conductivity and thermal diffusivity and implemented as user-defined material properties in user-defined subroutines of the commercial finite element software ABAQUS™. The distributions inside the strip of the thermal stress and of the residual stress from the manufacturing process are considered. The solution of the transient heat conduction problem for thermal shock is used for crack propagation simulation using the XFEM method. The crack length developed during the thermal shock is the criterion for crack resistance of the different graduation profiles, as a step towards optimization of the composition gradient with respect to thermal shock sensitivity.

  13. Spinal Interneurons and Forelimb Plasticity after Incomplete Cervical Spinal Cord Injury in Adult Rats

    PubMed Central

    Rombola, Angela M.; Rousseau, Celeste A.; Mercier, Lynne M.; Fitzpatrick, Garrett M.; Reier, Paul J.; Fuller, David D.; Lane, Michael A.

    2015-01-01

    Abstract Cervical spinal cord injury (cSCI) disrupts bulbospinal projections to motoneurons controlling the upper limbs, resulting in significant functional impairments. Ongoing clinical and experimental research has revealed several lines of evidence for functional neuroplasticity and recovery of upper extremity function after SCI. The underlying neural substrates, however, have not been thoroughly characterized. The goals of the present study were to map the intraspinal motor circuitry associated with a defined upper extremity muscle, and evaluate chronic changes in the distribution of this circuit following incomplete cSCI. Injured animals received a high cervical (C2) lateral hemisection (Hx), which compromises supraspinal input to ipsilateral spinal motoneurons controlling the upper extremities (forelimb) in the adult rat. A battery of behavioral tests was used to characterize the time course and extent of forelimb motor recovery over a 16 week period post-injury. A retrograde transneuronal tracer – pseudorabies virus – was used to define the motor and pre-motor circuitry controlling the extensor carpi radialis longus (ECRL) muscle in spinal intact and injured animals. In the spinal intact rat, labeling was observed unilaterally within the ECRL motoneuron pool and within spinal interneurons bilaterally distributed within the dorsal horn and intermediate gray matter. No changes in labeling were observed 16 weeks post-injury, despite a moderate degree of recovery of forelimb motor function. These results suggest that recovery of the forelimb function assessed following C2Hx injury does not involve recruitment of new interneurons into the ipsilateral ECRL motor pathway. However, the functional significance of these existing interneurons to motor recovery requires further exploration. PMID:25625912

  14. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
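
    A note on context: the DILI samplers above generalize the preconditioned Crank-Nicolson (pCN) proposal, the prototypical discretization-invariant MCMC move on function space. The following is a minimal sketch of that simpler pCN step, assuming a Gaussian N(0, I) prior; the quadratic log-likelihood is a hypothetical stand-in, not one of the paper's test problems.

    ```python
    # Minimal pCN sketch: the proposal is well defined on function space, so
    # acceptance behaviour does not degrade as the discretization is refined.
    # DILI samplers replace this isotropic move with operator-weighted ones.
    import numpy as np

    rng = np.random.default_rng(0)

    def log_likelihood(u):
        # Hypothetical misfit for a discretized function u (illustration only).
        return -0.5 * np.sum((u - 1.0) ** 2) / 0.1

    def pcn_step(u, beta=0.2):
        # For a N(0, I) prior, the pCN acceptance ratio involves only the
        # likelihood, which is what makes the move dimension-independent.
        proposal = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(u.shape)
        log_alpha = log_likelihood(proposal) - log_likelihood(u)
        return proposal if np.log(rng.uniform()) < log_alpha else u

    u = np.zeros(1000)  # discretization of the unknown function
    for _ in range(5000):
        u = pcn_step(u)
    ```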

  15. The transverse momentum distribution of hadrons within jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Zhong -Bo; Liu, Xiaohui; Ringer, Felix

    We study the transverse momentum distribution of hadrons within jets, where the transverse momentum is defined with respect to the standard jet axis. We consider the case where the jet substructure measurement is performed for an inclusive jet sample pp → jet + X. We demonstrate that this observable provides new opportunities to study transverse momentum dependent fragmentation functions (TMDFFs) which are currently poorly constrained from data, especially for gluons. The factorization of the cross section is obtained within Soft Collinear Effective Theory (SCET), and we show that the relevant TMDFFs are the same as for the more traditional processes of semi-inclusive deep inelastic scattering (SIDIS) and electron-positron annihilation. In contrast to SIDIS, the observable for the in-jet fragmentation does not depend on TMD parton distribution functions, which allows for a cleaner and more direct probe of TMDFFs. We present numerical results and compare to available data from the LHC.

  16. The transverse momentum distribution of hadrons within jets

    DOE PAGES

    Kang, Zhong -Bo; Liu, Xiaohui; Ringer, Felix; ...

    2017-11-13

    We study the transverse momentum distribution of hadrons within jets, where the transverse momentum is defined with respect to the standard jet axis. We consider the case where the jet substructure measurement is performed for an inclusive jet sample pp → jet + X. We demonstrate that this observable provides new opportunities to study transverse momentum dependent fragmentation functions (TMDFFs) which are currently poorly constrained from data, especially for gluons. The factorization of the cross section is obtained within Soft Collinear Effective Theory (SCET), and we show that the relevant TMDFFs are the same as for the more traditional processes of semi-inclusive deep inelastic scattering (SIDIS) and electron-positron annihilation. In contrast to SIDIS, the observable for the in-jet fragmentation does not depend on TMD parton distribution functions, which allows for a cleaner and more direct probe of TMDFFs. We present numerical results and compare to available data from the LHC.

  17. Landau damping of Langmuir twisted waves with kappa distributed electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arshad, Kashif, E-mail: kashif.arshad.butt@gmail.com; Aman-ur-Rehman; Mahmood, Shahzad

    2015-11-15

    The kinetic theory of Landau damping of Langmuir twisted modes is investigated in the presence of orbital angular momentum of the helical (twisted) electric field in plasmas with kappa distributed electrons. The perturbed distribution function and helical electric field are considered to be decomposed in terms of Laguerre-Gaussian mode functions defined in cylindrical geometry. The Vlasov-Poisson equation is obtained and solved analytically to obtain the weak damping rates of the Langmuir twisted waves in a nonthermal plasma. The strong damping effects of the Langmuir twisted waves at wavelengths approaching the Debye length are also obtained by using an exact numerical method and are illustrated graphically. The damping rates of the planar Langmuir waves are found to be larger than those of the twisted Langmuir waves, which is the opposite of the behavior depicted in Fig. 3 of J. T. Mendonça [Phys. Plasmas 19, 112113 (2012)].
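
    For readers unfamiliar with the nonthermal electron model used here, the sketch below evaluates the standard isotropic three-dimensional kappa velocity distribution, which tends to a Maxwellian as κ → ∞. The normalization follows the common convention (e.g., the Summers-Thorne form); the parameter values are illustrative, not taken from this paper.

    ```python
    # Standard isotropic 3-D kappa distribution, with a numerical check that
    # it integrates to the density n over velocity space.
    import numpy as np
    from scipy.special import gamma
    from scipy.integrate import quad

    def f_kappa(v, n=1.0, theta=1.0, kappa=3.0):
        """Kappa distribution of speed v with thermal speed theta."""
        norm = n * gamma(kappa + 1.0) / (
            (np.pi * kappa * theta**2) ** 1.5 * gamma(kappa - 0.5))
        return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

    # Integrating 4*pi*v^2*f(v) over all speeds should recover n = 1.
    density, _ = quad(lambda v: 4.0 * np.pi * v**2 * f_kappa(v), 0.0, np.inf)
    print(density)  # ~1.0
    ```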

  18. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

    release; distribution unlimited. PA Number 412-TW-PA-13395. (Flattened nomenclature table from the source record: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, …) … Radau Pseudospectral Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by …

  19. Developing a Coalition Battle Management Language to Facilitate Interoperability Between Operation CIS, and Simulations in Support of Training and Mission Rehearsal

    DTIC Science & Technology

    2005-06-01

    … virtualisation of distributed computing and data resources such as processing, network bandwidth, and storage capacity, to create a single system … and Simulation (M&S) will be integrated into this heterogeneous SOA. M&S functionality will be available in the form of operational M&S services. One … documents defining net centric warfare, the use of M&S functionality is a common theme. Alberts and Hayes give a good overview of net centric operations.

  20. Bridging Zirconia Nodes within a Metal–Organic Framework via Catalytic Ni-Hydroxo Clusters to Form Heterobimetallic Nanowires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platero-Prats, Ana E.; League, Aaron B.; Bernales, Varinia

    2017-07-24

    Metal-organic frameworks (MOFs), with their well-ordered pore networks and tunable surface chemistries, offer a versatile platform for preparing well-defined nanostructures wherein functionality such as catalysis can be incorporated. We resolved the atomic structure of Ni-oxo species deposited in the MOF NU-1000 through atomic layer deposition using local and long-range structure probes, including X-ray absorption spectroscopy, pair distribution function analysis and difference envelope density analysis, with electron microscopy imaging and computational modeling.

  1. Differential invariants and exact solutions of the Einstein equations

    NASA Astrophysics Data System (ADS)

    Lychagin, Valentin; Yumaguzhin, Valeriy

    2017-06-01

    In this paper (cf. Lychagin and Yumaguzhin, in Anal Math Phys, 2016) a class of totally geodesic solutions for the vacuum Einstein equations is introduced. It consists of Einstein metrics of signature (1,3) such that 2-dimensional distributions, defined by the Weyl tensor, are completely integrable and totally geodesic. A complete and explicit description of the metrics from this class is given. It is shown that these metrics depend on two functions of one variable and one harmonic function.

  2. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software-development estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product to be developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.

  3. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  4. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  5. Local 4/5-law and energy dissipation anomaly in turbulence of incompressible MHD Equations

    NASA Astrophysics Data System (ADS)

    Guo, Shanshan; Tan, Zhong

    2016-12-01

    In this paper, we establish the longitudinal and transverse local energy balance equations for distributional solutions of the incompressible three-dimensional MHD equations. In particular, we find that the functions D_L^ɛ(u,B) and D_T^ɛ(u,B) appearing in the energy balance both converge (in the sense of distributions) to the defect distribution D(u,B) defined in Gao et al. (Acta Math Sci 33:865-871, 2013). Furthermore, we give a simpler form of the defect distribution term, similar to the relation in turbulence theory called the "4/3-law." As a corollary, we show that the analogous "4/5-law" holds in the local sense.

  6. On the existence of a scaling relation in the evolution of cellular systems

    NASA Astrophysics Data System (ADS)

    Fortes, M. A.

    1994-05-01

    A mean field approximation is used to analyze the evolution of the distribution of sizes in systems formed by individual 'cells,' each of which grows or shrinks, in such a way that the total number of cells decreases (e.g. polycrystals, soap froths, precipitate particles in a matrix). The rate of change of the size of a cell is defined by a growth function that depends on the size (x) of the cell and on moments of the size distribution, such as the average size (x̄). Evolutionary equations for the distribution of sizes and of reduced sizes (i.e. x/x̄) are established. The stationary (or steady state) solutions of the equations are obtained for various particular forms of the growth function. A steady state of the reduced size distribution is equivalent to a scaling behavior. It is found that there is an infinity of steady state solutions which form a (continuous) one-parameter family of functions, but they are not, in general, reached from an arbitrary initial state. These properties are at variance with those that can be derived from models based on the von Neumann-Mullins equation.

  7. Guidelines for Implementing Advanced Distribution Management Systems-Requirements for DMS Integration with DERMS and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui; Chen, Chen; Lu, Xiaonan

    2015-08-01

    This guideline focuses on the integration of DMS with DERMS and microgrids connected to the distribution grid by defining generic and fundamental design and implementation principles and strategies. It starts by addressing the current status, objectives, and core functionalities of each system, and then discusses the new challenges and the common principles of DMS design and implementation for integration with DERMS and microgrids to realize enhanced grid operation reliability and quality power delivery to consumers while also achieving the maximum energy economics from the DER and microgrid connections.

  8. Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon

    NASA Astrophysics Data System (ADS)

    Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.

    1997-02-01

    We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M,g_{ab}), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M,g_{ab}) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x in the sense that the difference between this two-point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x,x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ² or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.

  9. The Cluster Variation Method: A Primer for Neuroscientists.

    PubMed

    Maren, Alianna J

    2016-09-30

    Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  10. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount present of any given element; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase at which the samples were obtained is expected to show a Normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of Normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
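
    The mechanism the authors describe is easy to reproduce numerically: a Normally distributed sampling time pushed through a first-order (exponential) elimination law yields a log-normal concentration by definition. The sketch below uses illustrative parameter values only, not the urine data.

    ```python
    # If t ~ Normal and C(t) = C0 * exp(-k*t), then log C is Normal, so C is
    # log-normal by definition.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    t = rng.normal(loc=8.0, scale=2.0, size=10_000)  # stochastic sampling times
    conc = 5.0 * np.exp(-0.3 * t)                    # deterministic first-order decay

    print(stats.normaltest(np.log(conc)).pvalue)  # large p: log C consistent with Normal
    print(stats.skew(conc))                       # positive skew of the raw concentrations
    ```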

  11. The Cluster Variation Method: A Primer for Neuroscientists

    PubMed Central

    Maren, Alianna J.

    2016-01-01

    Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found. PMID:27706022

  12. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  13. Quasi-parton distribution functions: A study in the diquark spectator model

    DOE PAGES

    Gamberg, Leonard; Kang, Zhong -Bo; Vitev, Ivan; ...

    2015-02-12

    A set of quasi-parton distribution functions (quasi-PDFs) have been recently proposed by Ji. Defined as the matrix elements of equal-time spatial correlations, they can be computed on the lattice and should reduce to the standard PDFs when the proton momentum P_z is very large. Since taking the P_z → ∞ limit is not feasible in lattice simulations, it is essential to provide guidance for which values of P_z the quasi-PDFs are good approximations of standard PDFs. Within the framework of the spectator diquark model, we evaluate both the up and down quarks' quasi-PDFs and standard PDFs for all leading-twist distributions (unpolarized distribution f₁, helicity distribution g₁, and transversity distribution h₁). We find that, for intermediate parton momentum fractions x, quasi-PDFs are good approximations to standard PDFs (within 20–30%) when P_z ≳ 1.5–2 GeV. On the other hand, for large x ~ 1 much larger P_z > 4 GeV is necessary to obtain a satisfactory agreement between the two sets. We further test the Soffer positivity bound, and find that it does not hold in general for quasi-PDFs.

  14. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When faced with the problem of estimating a regression model for fuzzy inputs that have been derived from different distributions, this regression model has been termed the 'switching regression model' and it is expressed with …. Here l_i indicates the class number of each independent variable and p is the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, the use of a suggested validity criterion for fuzzy clustering is proposed for defining the optimal class number of independent variables. In the case that the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values, after an optimal membership function suitable for the exponential distribution has been obtained.

  15. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money, and allows the science requirements to be better defined. This study characterized the statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to normal and Largest Extreme Value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
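
    As a rough illustration of the analysis described (not the study's actual data or code), the sketch below fits Normal and largest-extreme-value models to a set of roughness values with SciPy, where gumbel_r is the type I largest-extreme-value distribution, and compares goodness of fit.

    ```python
    # Fit Normal and LEV (Gumbel) models to synthetic stand-in roughness data
    # and compare Kolmogorov-Smirnov goodness of fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    roughness = rng.gumbel(loc=2.0, scale=0.3, size=81)  # stand-in for 81 measurements

    for dist in (stats.norm, stats.gumbel_r):  # gumbel_r = largest extreme value
        params = dist.fit(roughness)
        ks = stats.kstest(roughness, dist.name, args=params)
        print(dist.name, ks.pvalue)  # higher p-value = better fit
    ```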

  16. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, Heung-Rae

    1997-01-01

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.

  17. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. is defined as P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n and analyse their behaviour.
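
    A quick Monte Carlo check of the asymptotic claim is straightforward (a sketch under illustrative matrix sizes, not the paper's exact computation): because w is scale-invariant, one can sample GUE matrices, form w for every eigenvalue, and compare the histogram with the normalized Marčenko-Pastur form √((4 − w)/w)/(2π).

    ```python
    # Empirical check that w = x_1^2 / (mean of x_j^2) for GUE eigenvalues
    # approaches sqrt((4 - w)/w) / (2*pi) on [0, 4].
    import numpy as np

    rng = np.random.default_rng(3)
    n, trials = 200, 50
    ws = []
    for _ in range(trials):
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        x = np.linalg.eigvalsh((a + a.conj().T) / 2.0)  # GUE eigenvalues
        ws.append(x**2 / (np.sum(x**2) / n))            # w for every eigenvalue
    w = np.concatenate(ws)

    hist, edges = np.histogram(w, bins=20, range=(0, 4), density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    theory = np.sqrt((4 - mid) / mid) / (2 * np.pi)
    print(np.abs(hist - theory)[1:].max())  # small, away from the w -> 0 singularity
    ```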

  18. Simulation of Avifauna Distributions Using Remote Sensing

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    2004-01-01

    Remote sensing has proved a fruitful tool for understanding the distribution and functioning of plant communities at multiple scales and for understanding their coupling to bioclimatic and anthropogenic factors. But a similar approach to understanding the distribution and abundance of bird species, as well as many other animal organisms, is lacking. The increasing need for such understanding is evident with the recent examples of threats to human health via avian vector transmission and the increasing emphasis on global conservation biology. From experimental observations we know that species richness tends to track biological or environmental gradients. In this paper, we explore the fundamental idea that the thermal and water-relation environments of birds, as estimated from satellite data and biophysical models, can define the constraints on their occurrences and richness. We develop individual bird energy budget models and use these models to define the climate space niche of birds. Using satellite data assimilation products to drive our models, we disperse a distribution of virtual or actual bird species across the landscape in accordance with the limits expressed by their climate space niche. Here, we focus on the North American summer breeding season and give two examples to illustrate our approach. The first is a tundra-loving bird, e.g. corresponding to the genus Calidris; the second genus example, Myiarchus, corresponds to arid or semi-arid regions. We define these birds in terms of their basic physiology and morphological characteristics, and construct avian energetic simulations to predict their allowable metabolic ranges and climate space limits.

  19. Using hazard functions to assess changes in processing capacity in an attentional cuing paradigm.

    PubMed

    Wenger, Michael J; Gibson, Bradley S

    2004-08-01

    Processing capacity--defined as the relative ability to perform mental work in a unit of time--is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered by both conceptual confusions and the use of measures that are at best only coarsely mapped to the construct. However, more than 25 years ago, J. T. Townsend and F. G. Ashby (1978) suggested that the hazard function on the response time (RT) distribution offered a number of conceptual advantages as a measure of capacity. The present study suggests that a set of statistical techniques, well-known outside the cognitive and perceptual literatures, offers the ability to perform hypothesis tests on RT-distribution hazard functions. These techniques are introduced, and their use is illustrated in application to data from the contingent attentional capture paradigm.
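
    A minimal sketch of the estimator in question (one simple choice, assuming synthetic response times rather than data from the cuing paradigm): the hazard function is h(t) = f(t)/(1 − F(t)), with f estimated by a kernel density and F by the empirical CDF.

    ```python
    # Estimate the RT-distribution hazard function h(t) = f(t) / (1 - F(t)).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    rt = rng.gamma(shape=4.0, scale=80.0, size=2000)  # stand-in RTs, in ms

    kde = stats.gaussian_kde(rt)                                   # density estimate f(t)
    grid = np.linspace(rt.min(), np.percentile(rt, 95), 200)
    survival = 1.0 - np.searchsorted(np.sort(rt), grid) / rt.size  # 1 - F(t)
    hazard = kde(grid) / np.maximum(survival, 1e-9)
    ```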

  20. Solutions to Kuessner's integral equation in unsteady flow using local basis functions

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Halstead, D. W.

    1975-01-01

    The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.

  1. Application of Probabilistic Methods for the Determination of an Economically Robust HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.

    1996-01-01

    This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach to achieving such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
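
    The evaluation criterion reduces to estimating the probability that the objective falls below the target under the noise distributions, which a Monte Carlo sketch makes concrete. All names, distributions, and the objective model below are hypothetical placeholders, not the paper's HSCT economics model.

    ```python
    # Monte Carlo estimate of P(objective <= target) under assumed noise
    # distributions (all values illustrative).
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    fuel_price = rng.normal(1.0, 0.15, n)      # assumed noise variable
    demand = rng.triangular(0.7, 1.0, 1.3, n)  # assumed noise variable

    def objective(fuel, dem):
        # Hypothetical economic objective (e.g., required ticket price).
        return 600.0 * fuel / dem

    target = 650.0
    print(np.mean(objective(fuel_price, demand) <= target))
    ```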

  2. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
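
    The core generalization is the Metropolis-Hastings acceptance ratio with an arbitrary, possibly asymmetric proposal density q. A generic sketch of that step (not the paper's FLIR implementation) looks like this:

    ```python
    # One Metropolis-Hastings step with a general proposal density q.
    # log_q(y, x) is the log density of proposing y from state x.
    import numpy as np

    rng = np.random.default_rng(6)

    def mh_step(x, log_post, propose, log_q):
        y = propose(x)
        log_alpha = (log_post(y) + log_q(x, y)) - (log_post(x) + log_q(y, x))
        return y if np.log(rng.uniform()) < min(0.0, log_alpha) else x

    # Example: symmetric Gaussian random walk, for which log_q cancels.
    x = 0.0
    for _ in range(1000):
        x = mh_step(x,
                    log_post=lambda z: -0.5 * z**2,
                    propose=lambda z: z + 0.5 * rng.standard_normal(),
                    log_q=lambda a, b: 0.0)
    ```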

  3. A Langevin approach to multi-scale modeling

    DOE PAGES

    Hirvijoki, Eero

    2018-04-13

    In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this paper, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. Finally, this allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.
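
    The Langevin observation underlying the method can be illustrated with a one-particle Euler-Maruyama integration of dV = a(V) dt + b(V) dW; the drift and diffusion below are generic Ornstein-Uhlenbeck-like stand-ins, not a plasma collision operator.

    ```python
    # Euler-Maruyama integration of a test-particle Langevin equation.
    import numpy as np

    rng = np.random.default_rng(7)

    def euler_maruyama(v0, drift, diffusion, dt=1e-3, steps=10_000):
        v = v0
        for _ in range(steps):
            v += drift(v) * dt + diffusion(v) * np.sqrt(dt) * rng.standard_normal()
        return v

    # A fast "tail" particle relaxing toward the bulk.
    v_final = euler_maruyama(5.0, drift=lambda v: -v, diffusion=lambda v: 1.0)
    print(v_final)
    ```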

  4. A Langevin approach to multi-scale modeling

    NASA Astrophysics Data System (ADS)

    Hirvijoki, Eero

    2018-04-01

    In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this letter, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. This allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.

  5. A Langevin approach to multi-scale modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirvijoki, Eero

    In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this paper, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. Finally, this allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.

  6. Modeling of Disordered Binary Alloys Under Thermal Forcing: Effect of Nanocrystallite Dissociation on Thermal Expansion of AuCu3

    NASA Astrophysics Data System (ADS)

    Kim, Y. W.; Cress, R. P.

    2016-11-01

    Disordered binary alloys are modeled as a randomly close-packed assembly of nanocrystallites intermixed with randomly positioned atoms, i.e., glassy-state matter. The nanocrystallite size distribution is measured in a simulated macroscopic medium in two dimensions. We have also defined, and measured, the degree of crystallinity as the probability of a particle being a member of nanocrystallites. Both the distribution function and the degree of crystallinity are found to be determined by alloy composition. When heated, the nanocrystallites become smaller in size due to increasing thermal fluctuation. We have modeled this phenomenon as a case of thermal dissociation by means of the law of mass action. The crystallite size distribution function is computed for AuCu3 as a function of temperature by solving some 12 000 coupled algebraic equations for the alloy. The results show that linear thermal expansion of the specimen has contributions from the temperature dependence of the degree of crystallinity, in addition to respective thermal expansions of the nanocrystallites and glassy-state matter.

  7. Consistency criteria for generalized Cuddeford systems

    NASA Astrophysics Data System (ADS)

    Ciotti, Luca; Morganti, Lucia

    2010-01-01

    General criteria to check the positivity of the distribution function (phase-space consistency) of stellar systems of assigned density and anisotropy profile are useful starting points in Jeans-based modelling. Here, we substantially extend previous results, and present the inversion formula and the analytical necessary and sufficient conditions for phase-space consistency of the family of multicomponent Cuddeford spherical systems: the distribution function of each density component of these systems is defined as the sum of an arbitrary number of Cuddeford distribution functions with arbitrary values of the anisotropy radius, but identical angular momentum exponent. The radial trend of anisotropy that can be realized by these models is therefore very general. As a surprising byproduct of our study, we found that the `central cusp-anisotropy theorem' (a necessary condition for consistency relating the values of the central density slope and of the anisotropy parameter) holds not only at the centre but also at all radii in consistent multicomponent generalized Cuddeford systems. This last result suggests that the so-called mass-anisotropy degeneracy could be less severe than what is sometimes feared.

  8. r.randomwalk v1.0, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-09-01

    We introduce r.randomwalk, a flexible and multi-functional open source tool for backward- and forward-analyses of mass movement propagation. r.randomwalk builds on GRASS GIS, the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are: (i) multiple break criteria can be combined to compute an impact indicator score, (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter settings, resulting in an impact indicator index in the range 0-1, (iii) built-in functions for validation and visualization of the results are provided, (iv) observed landslides can be back-analyzed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk (i) for a single event, the Acheron Rock Avalanche in New Zealand, (ii) for landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) for lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.

  9. r.randomwalk v1, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-12-01

    We introduce r.randomwalk, a flexible and multi-functional open-source tool for backward and forward analyses of mass movement propagation. r.randomwalk builds on GRASS GIS (Geographic Resources Analysis Support System - Geographic Information System), the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are (i) multiple break criteria can be combined to compute an impact indicator score; (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter sets, resulting in an impact indicator index in the range 0-1; (iii) built-in functions for validation and visualization of the results are provided; (iv) observed landslides can be back analysed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk for (i) a single event, the Acheron rock avalanche in New Zealand; (ii) landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.

  10. Annular feed air breathing fuel cell stack

    DOEpatents

    Wilson, Mahlon S.

    1996-01-01

    A stack of polymer electrolyte fuel cells is formed from a plurality of unit cells where each unit cell includes fuel cell components defining a periphery and distributed along a common axis, where the fuel cell components include a polymer electrolyte membrane, an anode and a cathode contacting opposite sides of the membrane, and fuel and oxygen flow fields contacting the anode and the cathode, respectively, wherein the components define an annular region therethrough along the axis. A fuel distribution manifold within the annular region is connected to deliver fuel to the fuel flow field in each of the unit cells. In a particular embodiment, a single bolt through the annular region clamps the unit cells together. In another embodiment, separator plates between individual unit cells have an extended radial dimension to function as cooling fins for maintaining the operating temperature of the fuel cell stack.

  11. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
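
    The comparison at the heart of the analysis is an expected-utility computation over gambles with matched expected values. The sketch below uses a generic concave utility for illustration; it is not the animals' empirically measured utility function.

    ```python
    # Expected utility of two equal-mean gambles under a concave utility:
    # the low-variance gamble is preferred, i.e., second-order stochastic
    # dominance for a risk-averse chooser.
    import numpy as np

    def expected_utility(outcomes, probs, u):
        return float(np.dot(probs, u(np.asarray(outcomes, dtype=float))))

    u = np.sqrt  # example concave (risk-averse) utility
    low_var = expected_utility([0.45, 0.55], [0.5, 0.5], u)
    high_var = expected_utility([0.10, 0.90], [0.5, 0.5], u)
    print(low_var > high_var)  # True
    ```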

  12. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and the mass storage management. This issue was addressed in the project of the digital image/video database system that has been designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with the related parameters, and the description of the image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management. They allow devices to be cataloged and device status and network location to be modified. The medium level manages image/video files on a physical basis. It manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.

  13. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.

  14. Middle-high latitude N2O distributions related to the arctic vortex breakup

    NASA Astrophysics Data System (ADS)

    Zhou, L. B.; Zou, H.; Gao, Y. Q.

    2006-03-01

    The relationship of N2O distributions with the Arctic vortex breakup is first analyzed with a probability distribution function (PDF) analysis. The N2O concentration shows different distributions between the early and late vortex breakup years. In the early breakup years, the N2O concentration shows low values and large dispersions after the vortex breakup, which is related to the inhomogeneity in the vertical advection in the middle- and high-latitude lower stratosphere. The horizontal diffusion coefficient (K_yy) shows a larger value accordingly. In the late breakup years, the N2O concentration shows high values and more uniform distributions than in the early years after the vortex breakup, with a smaller vertical advection and K_yy after the vortex breakup. It is found that the N2O distributions are largely affected by the Arctic vortex breakup time, but the dynamically defined vortex breakup time is not the only factor.

  15. Application of the mobility power flow approach to structural response from distributed loading

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1988-01-01

    The problem of the vibration power flow through coupled substructures when one of the substructures is subjected to a distributed load is addressed. In all the work performed thus far, point force excitation was considered. However, in the case of the excitation of an aircraft fuselage, distributed loading on the whole surface of a panel can be as important as the excitation from directly applied forces at defined locations on the structures. Thus using a mobility power flow approach, expressions are developed for the transmission of vibrational power between two coupled plate substructures in an L configuration, with one of the surfaces of one of the plate substructures being subjected to a distributed load. The types of distributed loads that are considered are a force load with an arbitrary function in space and a distributed load similar to that from acoustic excitation.

  16. Node degree distribution in spanning trees

    NASA Astrophysics Data System (ADS)

    Pozrikidis, C.

    2016-03-01

    A method is presented for computing the number of spanning trees that include one link or a specified group of links and exclude another link or a specified group of links, in a network described by a simple graph, in terms of derivatives of the spanning-tree generating function defined with respect to the eigenvalues of the Kirchhoff (weighted Laplacian) matrix. The method is applied to deduce the node degree distribution in a complete or randomized set of spanning trees of an arbitrary network. An important feature of the proposed method is that the explicit construction of spanning trees is not required. It is shown that the node degree distribution in the spanning trees of the complete network is described by the binomial distribution. Numerical results are presented for the node degree distribution in square, triangular, and honeycomb lattices.
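    The generating-function machinery above builds on Kirchhoff's matrix-tree theorem. As a self-contained illustration (not the paper's method for link-constrained counts), the total number of spanning trees of a simple graph can be computed as any cofactor of its Laplacian:

```python
# Minimal sketch of the matrix-tree theorem that underlies the
# spanning-tree generating function: the number of spanning trees
# equals any cofactor of the graph Laplacian (Kirchhoff matrix).
import numpy as np

def spanning_tree_count(adjacency):
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # Delete one row and column and take the determinant (cofactor).
    minor = laplacian[1:, 1:]
    return round(np.linalg.det(minor))

# Complete graph K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
K4 = np.ones((4, 4)) - np.eye(4)
print(spanning_tree_count(K4))  # -> 16
```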

  17. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics, and psychology. Normal variables have a constant value of kurtosis (κ = 3), independently of the values of their two parameters, mean and variance; in fact, the excess kurtosis is defined as κ − 3, so the normal distribution has excess kurtosis zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and of the correlation between them; its range is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
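    A quick Monte Carlo check of the independent case is straightforward: for two independent standard normals, E[(XY)^4]/E[(XY)^2]^2 = 9, so the excess kurtosis of the product is 6, the upper end of the quoted range.

```python
# Monte Carlo sketch of the kurtosis of the product of two normals.
# For independent standard normals the excess kurtosis of X*Y is 6.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
# fisher=True returns excess kurtosis (normal -> 0).
print(kurtosis(x * y, fisher=True))  # close to 6 for this case
```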

  18. Army Civil Affairs Functional Specialists: On the Verge of Extinction

    DTIC Science & Technology

    2012-03-22

    the following six areas: rule of law, economic stability, infrastructure, governance, public health and welfare, and public education and information... as defined in table 1. Rule of law pertains to the fair, competent, and efficient application and... Economic stability pertains to the efficient management (for example, production, distribution, trade, and consumption) of resources, goods

  19. Stretchable Conductive Elastomers for Soldier Biosensing Applications: Final Report

    DTIC Science & Technology

    2016-03-01

    public release; distribution is unlimited. ...the electrical impedance tunability that we required. Representative data for resistance versus volume... Technology Directorate's (VTD) electric field mediated morphing wing research effort. Fig. 5 Resistance values of EEG electrodes as a function of... extend the resistance range of the developed polymer EEG electrodes to potentially provide insight into defining an optimum electrical performance for

  20. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible configuration-interaction open-source shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.

  1. Spacelab Level 4 Programmatic Implementation Assessment Study. Volume 4: Executive summary

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The study objectives of the Spacelab level 4 analysis were defined, along with the most significant results. The approach used in the synthesis and selection of alternate level 4 integration is described; the options included distributed site, lead center, and launch site. Principal characteristics, as well as the functional flow diagrams for each option, are presented and explained.

  2. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h − I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint and (densely) defined on the space of square-integrable functions over the (lower) half-space, with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  3. Interplanetary Radiation and Internal Charging Environment Models for Solar Sails

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Altstatt, Richard L.; Neergaard, Linda F.

    2004-01-01

    A Solar Sail Radiation Environment (SSRE) model has been developed for characterizing the radiation dose and internal charging environments in the solar wind. The SSRE model defines the 0.01 keV to 1 MeV charged particle environment for use in testing the radiation dose vulnerability of candidate solar sail materials and for use in evaluating internal charging effects in the interplanetary environment. Solar wind and energetic particle instruments aboard the Ulysses spacecraft provide the particle data used to derive the environments for the high-inclination 0.5 AU Solar Polar Imager mission and the 1.0 AU L1 solar sail missions. Ulysses is the only spacecraft to sample high-latitude solar wind environments far from the ecliptic plane and is therefore uniquely capable of providing the information necessary for defining radiation environments for the Solar Polar Imager spacecraft. Cold plasma moments are used to derive differential flux spectra based on Kappa distribution functions. Energetic particle flux measurements are used to constrain the high-energy, non-thermal tails of the distribution functions, providing comprehensive electron, proton, and helium spectra from less than 0.01 keV to a few MeV.
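    For reference, a sketch of the isotropic kappa distribution that underlies such spectra; the normalization follows the standard form, and the parameter values below are illustrative, not those of the SSRE model:

```python
# Sketch of an isotropic kappa distribution, the form used to model
# solar-wind particles with non-thermal high-energy tails.
import numpy as np
from scipy.special import gamma

def kappa_distribution(v, kappa, theta):
    # f(v) ~ [1 + v^2/(kappa*theta^2)]^-(kappa+1); theta is the
    # effective thermal speed, and kappa -> inf recovers a Maxwellian.
    norm = gamma(kappa + 1) / (
        (np.pi * kappa) ** 1.5 * theta**3 * gamma(kappa - 0.5))
    return norm * (1 + v**2 / (kappa * theta**2)) ** (-(kappa + 1))

v = np.linspace(0, 5e6, 200)                 # speeds in m/s (illustrative)
f_kappa = kappa_distribution(v, kappa=3.0, theta=1e6)
```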

  4. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a single fragment on the right of the cut, so as to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with an accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
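    A toy simulation of one reading of this mechanism (stated here as an assumption): each new cut at a uniform point u erases every boundary to its right, so a cut survives to time t only if no later cut falls below it, which is exactly the record statistics that brings in the unsigned Stirling numbers of the first kind.

```python
# Toy simulation of the modified fragmentation process (my reading of
# the mechanism, stated as an assumption): at each step a cut is placed
# at a uniformly random point u, every boundary to the right of u is
# erased, and [u, 1] becomes a single regenerated fragment.
import numpy as np

rng = np.random.default_rng(1)

def fragment_counts(steps):
    boundaries = []                # interior cut points in (0, 1)
    counts = []
    for _ in range(steps):
        u = rng.random()
        boundaries = [b for b in boundaries if b < u] + [u]
        counts.append(len(boundaries) + 1)   # fragments = cuts + 1
    return counts

# The fragment number grows roughly like log(t), consistent with
# record dynamics: a cut "survives" only if no later cut beats it.
print(fragment_counts(1000)[-1])
```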

  5. Generalized Quantum Theory and Mathematical Foundations of Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Maroun, Michael Anthony

    This dissertation is divided into two main topics. The first is the generalization of quantum dynamics when the Schrodinger partial differential equation is not defined even in the weak mathematical sense, because the potential function itself is a distribution in the spatial variable, the same variable that is used to define the kinetic energy operator, i.e. the Laplace operator. The procedure is an extension and broadening of the distributional calculus and offers spectral results as an alternative to the only other two known methods to date, namely a) the functional calculi; and b) non-standard analysis. Furthermore, the generalizations of quantum dynamics presented here resolve the time asymmetry paradox created by multi-particle quantum mechanics, the time evolution still being unitary. A consequence is the randomization of phases needed for the fundamental justification of the Pauli master equation. The second topic is foundations of the quantum theory of fields. The title is phrased as ``foundations'' to emphasize that there is no claim of uniqueness but rather a proposal is put forth, which is markedly different from that of constructive or axiomatic field theory. In particular, the space of fields is defined as a space of generalized functions with involutive symmetry maps (the CPT invariance) that affect the topology of the field space. The space of quantum fields is then endowed with the Fréchet property, and interactions change the topology in such a way as to cause some field spaces to be incompatible with others. This is seen in the consequences of the Haag theorem. Various examples and discussions are given that elucidate a new view of the quantum theory of fields and its (lack of) mathematical structure.

  6. Are fractal dimensions of the spatial distribution of mineral deposits meaningful?

    USGS Publications Warehouse

    Raines, G.L.

    2008-01-01

    It has been proposed that the spatial distribution of mineral deposits is bifractal. An implication of this property is that the number of deposits in a permissive area is a function of the shape of the area. This is because the fractal density functions of deposits are dependent on the distance from known deposits. A long thin permissive area with most of the deposits in one end, such as the Alaskan porphyry permissive area, has a major portion of the area far from known deposits and consequently a low density of deposits associated with most of the permissive area. On the other hand, a more equi-dimensioned permissive area, such as the Arizona porphyry permissive area, has a more uniform density of deposits. Another implication of the fractal distribution is that the Poisson assumption typically used for estimating deposit numbers is invalid. Based on datasets of mineral deposits classified by type as inputs, the distributions of many different deposit types are found to have characteristically two fractal dimensions over separate non-overlapping spatial scales in the range of 5-1000 km. In particular, one typically observes a local dimension at spatial scales less than 30-60 km, and a regional dimension at larger spatial scales. The deposit type, geologic setting, and sample size influence the fractal dimensions. The consequence of the geologic setting can be diminished by using deposits classified by type. The crossover point between the two fractal domains is proportional to the median size of the deposit type. A plot of the crossover points for porphyry copper deposits from different geologic domains against median deposit sizes defines linear relationships and identifies regions that are significantly underexplored. Plots of the fractal dimension can also be used to define density functions from which the number of undiscovered deposits can be estimated. This density function is only dependent on the distribution of deposits and is independent of the definition of the permissive area. Density functions for porphyry copper deposits appear to be significantly different for regions in the Andes, Mexico, United States, and western Canada. Consequently, depending on which regional density function is used, quite different estimates of numbers of undiscovered deposits can be obtained. These fractal properties suggest that geologic studies based on mapping at scales of 1:24,000 to 1:100,000 may not recognize processes that are important in the formation of mineral deposits at scales larger than the crossover points at 30-60 km. © 2008 International Association for Mathematical Geology.
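    Fractal dimensions of this kind are typically estimated from the scaling of counts with measurement scale. A generic box-counting sketch for a two-dimensional point pattern follows; this is illustrative only, as the paper's bifractal fits use deposit datasets and distance-based density functions.

```python
# Sketch of a box-counting estimate of fractal dimension for a point
# pattern, the kind of measurement behind the two deposit-scale regimes.
import numpy as np

def box_counting_dimension(points, sizes):
    counts = []
    for s in sizes:
        # Count occupied boxes of side s covering the point set.
        occupied = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(occupied))
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
pts = rng.random((2000, 2))            # uniform points -> dimension near 2
print(box_counting_dimension(pts, sizes=[0.2, 0.1, 0.05, 0.025]))
```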

  7. A Concept for Run-Time Support of the Chapel Language

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages both in computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in the instant report is recognition that a data domain that was invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.

  8. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
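    A minimal sketch of the empirical recipe, assuming a synthetic signal whose variance drifts slowly: split the series into windows short enough to be locally Gaussian and collect the distribution of local variances, over which the local law is then compounded.

```python
# Sketch of the empirical compounding recipe: split a nonstationary
# series into short windows where Gaussian statistics is a fair local
# approximation, and collect the distribution of local variances.
import numpy as np

def local_variances(series, window):
    n = len(series) // window
    chunks = series[: n * window].reshape(n, window)
    return chunks.var(axis=1, ddof=1)

# Synthetic nonstationary signal: Gaussian noise with drifting variance.
rng = np.random.default_rng(2)
t = np.arange(100_000)
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * t / 10_000)
signal = rng.standard_normal(t.size) * sigma

variances = local_variances(signal, window=500)
# The long-horizon (compounded) law is the Gaussian mixed over this
# empirical variance distribution, e.g. via a histogram of `variances`.
```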

  9. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  10. Biomechanical, psychosocial and individual risk factors predicting low back functional impairment among furniture distribution employees

    PubMed Central

    Ferguson, Sue A.; Allread, W. Gary; Burr, Deborah L.; Heaney, Catherine; Marras, William S.

    2013-01-01

    Background: Biomechanical, psychosocial and individual risk factors for low back disorder have been studied extensively; however, few researchers have examined all three types of risk factor together. The objective of this study was to develop a low back disorder risk model for furniture distribution workers using biomechanical, psychosocial and individual risk factors. Methods: This was a prospective study with a six-month follow-up time. There were 454 subjects at 9 furniture distribution facilities enrolled in the study. Biomechanical exposure was evaluated using the American Conference of Governmental Industrial Hygienists (2001) lifting threshold limit values for low back injury risk. Psychosocial and individual risk factors were evaluated via questionnaires. Low back functional status was measured using the lumbar motion monitor. Low back disorder cases were defined as a loss of low back functional performance of −0.14 or more. Findings: There were 92 cases of meaningful loss in low back functional performance and 185 non-cases. A multivariate logistic regression model included baseline functional performance probability, facility, perceived workload, intermediate reach distance, number of exertions above threshold limit values, job tenure, manual material handling, and age, which combined to provide a model sensitivity of 68.5% and specificity of 71.9%. Interpretation: The results of this study indicate which biomechanical, individual and psychosocial risk factors are important, as well as how much of each risk factor is too much, resulting in increased risk of low back disorder among furniture distribution workers. PMID:21955915
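    A schematic of this kind of multivariate logistic model, with sensitivity and specificity read off a confusion matrix; all variable names and data below are placeholders, not the study's measurements.

```python
# Schematic of a multivariate logistic risk model with sensitivity and
# specificity computed from a confusion matrix (synthetic placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
X = rng.standard_normal((277, 4))        # e.g. workload, reach, tenure, age
y = rng.integers(0, 2, size=277)         # case / non-case labels

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
sensitivity = tp / (tp + fn)             # true-positive rate
specificity = tn / (tn + fp)             # true-negative rate
```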

  11. Cole-Davidson dynamics of simple chain models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian

    2008-10-01

    Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions, with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high-frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson function would provide a better frequency-domain description of the associated response function than the KWW function does.
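    For concreteness, the Cole-Davidson response function used in such fits is χ(ω) = (1 + iωτ)^(−β) with 0 < β ≤ 1; a short sketch generating the characteristically skewed Cole-Cole arc follows (sign conventions vary between communities).

```python
# Sketch of the Cole-Davidson response function used in such fits:
# chi(omega) = (1 + 1j*omega*tau)**(-beta), with 0 < beta <= 1.
import numpy as np

def cole_davidson(omega, tau=1.0, beta=0.5):
    return (1.0 + 1j * omega * tau) ** (-beta)

omega = np.logspace(-3, 3, 400)
chi = cole_davidson(omega)
# A Cole-Cole plot graphs the (negative) imaginary part against the
# real part; Cole-Davidson produces a characteristically skewed arc,
# unlike the symmetric semicircle of a single Debye relaxation.
real, imag = chi.real, -chi.imag
```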

  12. Eliciting the Functional Taxonomy from protein annotations and taxa

    PubMed Central

    Falda, Marco; Lavezzo, Enrico; Fontana, Paolo; Bianco, Luca; Berselli, Michele; Formentin, Elide; Toppo, Stefano

    2016-01-01

    The advances of omics technologies have triggered the production of an enormous volume of data coming from thousands of species. Meanwhile, joint international efforts like the Gene Ontology (GO) consortium have worked to provide functional information for a vast amount of proteins. With these data available, we have developed FunTaxIS, a tool that is the first attempt to infer a functional taxonomy (i.e. how functions are distributed over taxa) by combining functional and taxonomic information. FunTaxIS is able to define a taxon-specific functional space by exploiting annotation frequencies in order to establish whether a function can or cannot be used to annotate a certain species. The tool generates constraints between GO terms and taxa and then propagates these relations over the taxonomic tree and the GO graph. Since these constraints nearly cover the whole taxonomy, it is possible to obtain the mapping of a function over the taxonomy. FunTaxIS can be used to make functional comparative analyses among taxa, to detect improper associations between taxa and functions, and to discover how functional knowledge is either distributed or missing. A benchmark test set based on six different model species has been devised to get useful insights into the generated taxonomic rules. PMID:27534507

  13. Non-Fickian dispersion of groundwater age

    PubMed Central

    Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.

    2014-01-01

    We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651

  14. System of HPC content archiving

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Ivashchenko, A.

    2017-12-01

    This work aims to develop a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques and approaches. The main challenge defined at the problem formulation stage, storing a large number of text documents, has to be addressed with functionality such as full-text search and clustering of documents according to their contents. The main system features can be described in terms of a distributed multilevel architecture and the flexibility and interchangeability of components, achieved through the encapsulation of standard functionality in independent executable modules.

  15. Angular Distributions of Discrete Mesoscale Mapping Functions

    NASA Astrophysics Data System (ADS)

    Kroszczyński, Krzysztof

    2015-08-01

    The paper presents the results of analyses of numerical experiments concerning GPS signal propagation delays in the atmosphere and the discrete mapping functions defined on their basis. The delays were determined using data from the mesoscale non-hydrostatic weather model operated in the Centre of Applied Geomatics, Military University of Technology. Special attention was paid to investigating the angular characteristics of GPS slant delays at low elevation angles. The investigation proved that the temporal and spatial variability of the slant delays depends to a large extent on current weather conditions.

  16. Supporting scalability and flexibility in a distributed management platform

    NASA Astrophysics Data System (ADS)

    Jardin, P.

    1996-06-01

    The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. The management of these networks imposes a number of fairly stringent requirements including the partitioning of the network, division of work based on skills and target system types and the ability to adjust the functions to specific operational requirements. This requires the ability to cluster managed resources into domains that are totally defined at runtime based on operator policies. This paper addresses some of the issues that must be addressed in order to add a dynamic dimension to a management solution.

  17. Techniques for the Cellular and Subcellular Localization of Endocannabinoid Receptors and Enzymes in the Mammalian Brain.

    PubMed

    Cristino, Luigia; Imperatore, Roberta; Di Marzo, Vincenzo

    2017-01-01

    This chapter attempts to piece together knowledge about new advanced microscopy techniques to study the neuroanatomical distribution of endocannabinoid receptors and enzymes at the level of cellular and subcellular structures and organelles in the brain. Techniques ranging from light to electron microscopy up to the new advanced LBM, PALM, and STORM super-resolution microscopy will be discussed in the context of their contribution to define the spatial distribution and organization of receptors and enzymes of the endocannabinoid system (ECS), and to better understand ECS brain functions. © 2017 Elsevier Inc. All rights reserved.

  18. Hard exclusive pion electroproduction at backward angles with CLAS

    NASA Astrophysics Data System (ADS)

    Park, K.; Guidal, M.; Gothe, R. W.; Pire, B.; Semenov-Tian-Shansky, K.; Laget, J.-M.; Adhikari, K. P.; Adhikari, S.; Akbar, Z.; Avakian, H.; Ball, J.; Balossino, I.; Baltzell, N. A.; Barion, L.; Battaglieri, M.; Bedlinskiy, I.; Biselli, A. S.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Cao, F. T.; Carman, D. S.; Celentano, A.; Charles, G.; Chetry, T.; Ciullo, G.; Clark, L.; Cole, P. L.; Contalbrigo, M.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Defurne, M.; Deur, A.; Djalali, C.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Eugenio, P.; Fedotov, G.; Fersch, R.; Filippi, A.; Garçon, M.; Ghandilyan, Y.; Gilfoyle, G. P.; Girod, F. X.; Golovatch, E.; Griffioen, K. A.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Hattawy, M.; Heddle, D.; Hicks, K.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jenkins, D.; Johnston, S.; Joo, K.; Kabir, M. L.; Keller, D.; Khachatryan, G.; Khachatryan, M.; Khandaker, M.; Kim, W.; Klein, F. J.; Kubarovsky, V.; Kuhn, S. E.; Lanza, L.; Livingston, K.; MacGregor, I. J. D.; Markov, N.; McKinnon, B.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Munoz Camacho, C.; Nadel-Turonski, P.; Niccolai, S.; Niculescu, G.; Osipenko, M.; Paolone, M.; Paremuzyan, R.; Pasyuk, E.; Phelps, W.; Pogorelko, O.; Poudel, J.; Price, J. W.; Prok, Y.; Protopopescu, D.; Ripani, M.; Rizzo, A.; Rossi, P.; Sabatié, F.; Salgado, C.; Schumacher, R. A.; Sharabian, Y.; Skorodumina, Iu.; Smith, G. D.; Sokhan, D.; Sparveris, N.; Stepanyan, S.; Strakovsky, I. I.; Strauch, S.; Taiuti, M.; Tan, J. A.; Ungaro, M.; Voskanyan, H.; Voutier, E.; Wei, X.; Zachariou, N.; Zhang, J.

    2018-05-01

    We report on the first measurement of cross sections for exclusive deeply virtual pion electroproduction off the proton, ep → e′nπ+, above the resonance region at backward pion center-of-mass angles. The φ*π-dependent cross sections were measured, from which we extracted three combinations of structure functions of the proton. Our results are compatible with calculations based on nucleon-to-pion transition distribution amplitudes (TDAs). These non-perturbative objects are defined as matrix elements of three-quark light-cone operators and characterize partonic correlations with a particular emphasis on the baryon charge distribution inside a nucleon.

  19. Numerical Calculation of Neoclassical Distribution Functions and Current Profiles in Low Collisionality, Axisymmetric Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    B.C. Lyons, S.C. Jardin, and J.J. Ramos

    2012-06-28

    A new code, the Neoclassical Ion-Electron Solver (NIES), has been written to solve for stationary, axisymmetric distribution functions (f) in the conventional banana regime for both ions and electrons, using a set of drift-kinetic equations (DKEs) with linearized Fokker-Planck-Landau collision operators. Solvability conditions on the DKEs determine the relevant non-adiabatic pieces of f (called h). We work in a 4D phase space in which Ψ defines a flux surface, θ is the poloidal angle, v is the total velocity referenced to the mean flow velocity, and λ is the dimensionless magnetic moment parameter. We expand h in finite elements in both v and λ. The Rosenbluth potentials, φ and ψ, which define the integral part of the collision operator, are expanded in Legendre series in cos χ, where χ is the pitch angle, Fourier series in cos θ, and finite elements in v. At each Ψ, we solve a block tridiagonal system for h_i (independent of f_e), then solve another block tridiagonal system for h_e (dependent on f_i). We demonstrate that such a formulation can be accurately and efficiently solved. NIES is coupled to the MHD equilibrium code JSOLVER [J. DeLucia, et al., J. Comput. Phys. 37, pp. 183-204 (1980)], allowing us to work with realistic magnetic geometries. The bootstrap current is calculated as a simple moment of the distribution function. Results are benchmarked against the Sauter analytic formulas and can be used as a kinetic closure for an MHD code (e.g., M3D-C1 [S.C. Jardin, et al., Computational Science & Discovery, 4 (2012)]).

  20. Ensemble theory for slightly deformable granular matter.

    PubMed

    Tejada, Ignacio G

    2014-09-01

    Given a granular system of slightly deformable particles, it is possible to obtain different static and jammed packings subjected to the same macroscopic constraints. These microstates can be compared in a mathematical space defined by the components of the force-moment tensor (i.e. the product of the equivalent stress by the volume of the Voronoi cell). In order to explain the statistical distributions observed there, an athermal ensemble theory can be used. This work proposes a formalism (based on developments of the original theory of Edwards and collaborators) that considers both the internal and the external constraints of the problem. The former give the density of states of the points of this space, and the latter give their statistical weight. The internal constraints are those caused by the intrinsic features of the system (e.g. size distribution, friction, cohesion). They, together with the force-balance condition, determine which local states of equilibrium are possible for a particle. Under the principle of equal a priori probabilities, and when no other constraints are imposed, it can be assumed that particles are equally likely to be found in any one of these local states of equilibrium. Then a flat sampling over all these local states turns into a non-uniform distribution in the force-moment space that can be represented with density-of-states functions. Although these functions can be measured, some of their features are explored in this paper. The external constraints are those macroscopic quantities that define the ensemble and are fixed by the protocol. The force-moment, the volume, the elastic potential energy and the stress are some examples of quantities that can be expressed as functions of the force-moment. The associated ensembles are included in the formalism presented here.

  1. Distribution and photobiology of Symbiodinium types in different light environments for three colour morphs of the coral Madracis pharensis: is there more to it than total irradiance?

    NASA Astrophysics Data System (ADS)

    Frade, P. R.; Englebert, N.; Faria, J.; Visser, P. M.; Bak, R. P. M.

    2008-12-01

    The role of symbiont variation in the photobiology of reef corals was addressed by investigating the links among symbiont genetic diversity, function and ecological distribution in a single host species, Madracis pharensis. Symbiont distribution was studied for two depths (10 and 25 m), two different light habitats (exposed and shaded) and three host colour morphs (brown, purple and green). Two Symbiodinium genotypes were present, as defined by nuclear internal transcribed spacer 2 ribosomal DNA (ITS2-rDNA) variation. Symbiont distribution was depth- and colour morph-dependent. Type B15 occurred predominantly on the deeper reef and in green and purple colonies, while type B7 was present in shallow environments and brown colonies. Different light microhabitats at fixed depths had no effect on symbiont presence. This ecological distribution suggests that symbiont presence is potentially driven by light spectral niches. A reciprocal depth transplantation experiment indicated steady symbiont populations under environment change. Functional parameters such as pigment composition, chlorophyll a fluorescence and cell densities were measured for 25 m and included in multivariate analyses. Most functional variation was explained by two photobiological assemblages that relate to either symbiont identity or light microhabitat, suggesting adaptation and acclimation, respectively. Type B15 occurs with lower cell densities and larger sizes, higher cellular pigment concentrations and higher peridinin to chlorophyll a ratio than type B7. Type B7 relates to a larger xanthophyll-pool size. These unambiguous differences between symbionts can explain their distributional patterns, with type B15 being potentially more adapted to darker or deeper environments than B7. Symbiont cell size may play a central role in the adaptation of coral holobionts to the deeper reef. The existence of functional differences between B-types shows that the clade classification does not necessarily correspond to functional identity. This study supports the use of ITS2 as an ecological and functionally meaningful marker in Symbiodinium.

  2. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
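    A sketch of the comparison described above, assuming synthetic data: fit both a normal and a largest-extreme-value (Gumbel) distribution to 81 roughness values and compare goodness of fit with a Kolmogorov-Smirnov statistic.

```python
# Sketch: fit normal and largest-extreme-value (Gumbel) distributions
# to roughness measurements and compare goodness of fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
roughness = stats.gumbel_r.rvs(loc=5.0, scale=0.8, size=81,
                               random_state=rng)  # 81 sites, synthetic

norm_params = stats.norm.fit(roughness)
gumbel_params = stats.gumbel_r.fit(roughness)

# Kolmogorov-Smirnov statistic as a simple goodness-of-fit measure:
# the smaller statistic indicates the better-fitting distribution.
ks_norm = stats.kstest(roughness, 'norm', args=norm_params)
ks_gumbel = stats.kstest(roughness, 'gumbel_r', args=gumbel_params)
print(ks_norm.statistic, ks_gumbel.statistic)
```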

  3. Bivariate drought frequency analysis using the copula method

    NASA Astrophysics Data System (ADS)

    Mirabbasi, Rasoul; Fakheri-Fard, Ahmad; Dinpashoh, Yagob

    2012-04-01

    Droughts are major natural hazards with significant environmental and economic impacts. In this study, two-dimensional copulas were applied to the analysis of the meteorological drought characteristics of the Sharafkhaneh gauge station, located in the northwest of Iran. Two major drought characteristics, duration and severity, as defined by the standardized precipitation index, were abstracted from observed drought events. Since drought duration and severity exhibited a significant correlation and since they were modeled using different distributions, copulas were used to construct the joint distribution function of the drought characteristics. The parameter of copulas was estimated using the method of the Inference Function for Margins. Several copulas were tested in order to determine the best data fit. According to the error analysis and the tail dependence coefficient, the Galambos copula provided the best fit for the observed drought data. Some bivariate probabilistic properties of droughts, based on the derived copula-based joint distribution, were also investigated. These probabilistic properties can provide useful information for water resource planning and management.
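    To illustrate the construction, the Galambos copula selected by the study has the closed form C(u, v) = uv exp{[(−ln u)^−θ + (−ln v)^−θ]^−1/θ}; given marginal CDF values for duration and severity, the joint non-exceedance probability follows directly (the θ below is illustrative, not the fitted value).

```python
# Sketch of the copula construction: the Galambos copula couples the
# marginal CDFs of drought duration and severity into a joint CDF.
import numpy as np

def galambos_copula(u, v, theta):
    # C(u, v) = u*v*exp{ [(-ln u)^-theta + (-ln v)^-theta]^(-1/theta) }
    x, y = -np.log(u), -np.log(v)
    return u * v * np.exp((x**-theta + y**-theta) ** (-1.0 / theta))

# Joint probability that duration <= d0 AND severity <= s0, given the
# (hypothetical) marginal CDF values F_D(d0) = 0.8 and F_S(s0) = 0.7.
print(galambos_copula(0.8, 0.7, theta=1.5))
```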

  4. Deformation dependence of proton decay rates and angular distributions in a time-dependent approach

    NASA Astrophysics Data System (ADS)

    Carjan, N.; Talou, P.; Strottman, D.

    1998-12-01

    A new, time-dependent approach to proton decay from axially symmetric deformed nuclei is presented. The two-dimensional time-dependent Schrödinger equation for the interaction between the emitted proton and the rest of the nucleus is solved numerically for well-defined initial quasi-stationary proton states. Applied to hypothetical proton emission from excited states in deformed nuclei of 208Pb, this approach shows that the problem cannot be reduced to one dimension. There is in general more than one direction of emission, with wide distributions around them, determined mainly by the quantum numbers of the initial wave function rather than by the potential landscape. The distribution of the "residual" angular momentum and its variation in time play a major role in determining the decay rate. In a couple of cases, no exponential decay was found during the calculated time evolution (2×10^-21 sec), although more than half of the wave function escaped during that time.

  5. A diffusion model of protected population on bilocal habitat with generalized resource

    NASA Astrophysics Data System (ADS)

    Vasilyev, Maxim D.; Trofimtsev, Yuri I.; Vasilyeva, Natalya V.

    2017-11-01

    A model of population distribution in a two-dimensional area divided by an ecological barrier, i.e. the boundary of a natural reserve, is considered. The distribution of the population is governed by diffusion, directed migrations and the areal resource. An exchange of specimens occurs between the two parts of the habitat. The mathematical model is presented in the form of a boundary value problem for a system of non-linear parabolic equations with variable diffusion parameters and growth function. Splitting of space variables, the sweep method and simple iteration methods were used for the numerical solution of the system. A set of programs was coded in Python. Numerical simulation results for the two-dimensional unsteady non-linear problem are analyzed in detail. The influence of migration flow coefficients and of the natural birth/death-ratio functions on the distributions of population densities is investigated. The results of the research make it possible to describe the conditions for the stable and sustainable existence of populations in a bilocal habitat containing protected and non-protected zones.

  6. Basic features of the pion valence-quark distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Lei; Mezrag, Cédric; Moutarde, Hervé

    2014-10-07

    The impulse-approximation expression used hitherto to define the pion's valence-quark distribution function is flawed because it omits contributions from the gluons which bind quarks into the pion. A corrected leading-order expression produces the model-independent result that quarks dressed via the rainbow-ladder truncation, or any practical analogue, carry all the pion's light-front momentum at a characteristic hadronic scale. Corrections to the leading contribution may be divided into two classes, responsible for shifting dressed-quark momentum into glue and sea-quarks. Working with available empirical information, we use an algebraic model to express the principal impact of both classes of corrections. This enables a realistic comparison with experiment that allows us to highlight the basic features of the pion's measurable valence-quark distribution, q_π(x); namely, at a characteristic hadronic scale, q_π(x) ~ (1−x)^2 for x ≳ 0.85, and the valence quarks carry approximately two-thirds of the pion's light-front momentum.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Frank; Detmold, William; Gambhir, Arjun S.

    The role of gluons in the structure of the nucleon and light nuclei is investigated using lattice quantum chromodynamics (QCD) calculations. The first moment of the unpolarised gluon distribution is studied in nuclei up to atomic number A = 3 at quark masses corresponding to pion masses of m_π ∼ 450 and 806 MeV. Nuclear modification of this quantity defines a gluonic analogue of the EMC effect and is constrained to be less than ∼10% in these nuclei. This is consistent with expectations from phenomenological quark distributions and the momentum sum rule. In the deuteron, the combination of gluon distributions corresponding to the b_1 structure function is found to have a small first moment compared with the corresponding momentum fraction. The first moment of the gluon transversity structure function is also investigated in the spin-1 deuteron, where a non-zero signal is observed at m_π ∼ 806 MeV. In conclusion, this is the first indication of gluon contributions to nuclear structure that cannot be associated with an individual nucleon.

  8. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support the rapid development of Grid/Cloud-aware applications, we have developed an API that abstracts distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for a particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.

  9. Kinetic study of ion acoustic twisted waves with kappa distributed electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arshad, Kashif, E-mail: kashif.arshad.butt@gmail.com; Aman-ur-Rehman, E-mail: amansadiq@gmail.com; Mahmood, Shahzad, E-mail: shahzadm100@gmail.com

    2016-05-15

    The kinetic theory of Landau damping of ion acoustic twisted modes is developed in the presence of the orbital angular momentum of a helical (twisted) electric field in plasmas with kappa-distributed electrons and Maxwellian ions. The perturbed distribution function and the helical electric field are decomposed in terms of Laguerre-Gaussian mode functions defined in cylindrical geometry. The Vlasov-Poisson equation is derived and solved analytically to obtain the weak damping rates of the ion acoustic twisted waves in a non-thermal plasma. The strong damping of ion acoustic twisted waves at low values of the electron-to-ion temperature ratio, where the weak-damping theory fails to describe the phenomenon properly, is also obtained by an exact numerical method and illustrated graphically. The Landau damping rates of the twisted ion acoustic wave are discussed for different values of the azimuthal wave number and the non-thermal parameter kappa for electrons.

  10. Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide.

    PubMed

    Li, Wenjin

    2018-02-28

    Transition path ensemble consists of reactive trajectories and possesses all the information necessary for the understanding of the mechanism and dynamics of important condensed phase processes. However, quantitative description of the properties of the transition path ensemble is far from being established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were for the first time estimated in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugated coordinates remained equal. Higher energies were observed to be distributed on several coordinates, which are highly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses on energy distributions provided new insights into the transition path ensemble.

  11. Modulating nanoparticle superlattice structure using proteins with tunable bond distributions

    DOE PAGES

    McMillan, Janet R.; Brodin, Jeffrey D.; Millan, Jaime A.; ...

    2017-01-25

    Here, we investigate the use of proteins with tunable DNA modification distributions to modulate nanoparticle superlattice structure. Using beta-galactosidase (βgal) as a model system, we have employed the orthogonal chemical reactivities of surface amines and thiols to synthesize protein-DNA conjugates with 36 evenly distributed or 8 specifically positioned oligonucleotides. When assembled into crystalline superlattices with AuNPs, we find that the distribution of DNA modifications modulates the favored structure: βgal with uniformly distributed DNA bonding elements results in body-centered cubic crystals, whereas DNA functionalization of the cysteines results in AB2 packing. We probe the role of protein oligonucleotide number and conjugate size in this observation, which reveals the importance of oligonucleotide distribution and number in the observed assembly behavior. These results indicate that proteins with defined DNA-modification patterns are powerful tools to control nanoparticle superlattice architecture, and establish the importance of oligonucleotide distribution in the assembly behavior of protein-DNA conjugates.

  12. SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Wang, L.

    1994-01-01

    SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. 
The standard distribution medium for the Macintosh version is a set of three 3.5 inch Macintosh format diskettes. The standard distribution medium for the UNIX version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.
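    A minimal genetic-algorithm skeleton in the spirit of SPLICER's kernel/representation/fitness separation follows; this is a sketch, not SPLICER's actual API, shown on the "one-max" problem (maximize the number of ones in a bit string).

```python
# Minimal genetic-algorithm sketch: selection, one-point crossover,
# bit-flip mutation, and elitism over a bit-string representation.
import random

def run_ga(fitness, length=20, pop_size=50, generations=100,
           crossover_rate=0.7, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                          # elitism
        while len(next_pop) < pop_size:
            # Truncation selection from the fitter half of the population.
            a, b = random.sample(scored[:pop_size // 2], 2)
            if random.random() < crossover_rate:       # one-point crossover
                cut = random.randrange(1, length)
                a = a[:cut] + b[cut:]
            # Bit-flip mutation (XOR with a rare True flips the bit).
            child = [bit ^ (random.random() < mutation_rate) for bit in a]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# "One-max" fitness module: count the ones in the bit string.
print(run_ga(fitness=sum))
```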

  13. Dependence of Capillary Properties of Contemporary Clinker Bricks on Their Microstructure

    NASA Astrophysics Data System (ADS)

    Wesołowska, Maria; Kaczmarek, Anna

    2017-10-01

    Contemporary clinker bricks are used for the outer layers of walls built from other materials, and for walls that require high durability and aesthetic quality. The intended effect depends not only on the mortar applied but also on the clinker properties. Traditional macroscopic tests do not make it possible to predict clinker behaviour in contact with mortars and the external environment. The basic information for this issue is the open porosity of the material. It defines the material's ability to absorb liquids: rain water (through the face wall surface) and grout from mortar (through the base surface). The main capillary flow occurs in pores with diameters from 300 to 3000 nm. It is possible to determine the pore size distribution using the mercury intrusion porosimetry (MIP) method. The aim of this research is the evaluation of clinker brick capillary properties (initial water absorption and capillary rate) and an analysis of differences in the microstructure of the face and base surfaces of a product. Detailed results show the pore distribution as a function of diameter and define the fraction of pores responsible for capillary flow. Based on the relation between the differential of the volume function and pore diameter, a differential distribution curve was obtained, which helped to determine the dominant diameters. The results show that the face surface of the bricks had the lowest material density and open porosity. In this most-burnt layer, some pores may have been closed by a locally appearing liquid phase during firing; thus, its density is lower compared with the rest of the product.

  14. The stellar orbit distribution in present-day galaxies inferred from the CALIFA survey

    NASA Astrophysics Data System (ADS)

    Zhu, Ling; van de Ven, Glenn; Bosch, Remco van den; Rix, Hans-Walter; Lyubenova, Mariya; Falcón-Barroso, Jesús; Martig, Marie; Mao, Shude; Xu, Dandan; Jin, Yunpeng; Obreja, Aura; Grand, Robert J. J.; Dutton, Aaron A.; Macciò, Andrea V.; Gómez, Facundo A.; Walcher, Jakob C.; García-Benito, Rubén; Zibetti, Stefano; Sánchez, Sebastian F.

    2018-03-01

    Galaxy formation entails the hierarchical assembly of mass, along with the condensation of baryons and the ensuing, self-regulating star formation1,2. The stars form a collisionless system whose orbit distribution retains dynamical memory that can constrain a galaxy's formation history3. The orbits dominated by ordered rotation, with near-maximum circularity λz ≈ 1, are called kinematically cold, and the orbits dominated by random motion, with low circularity λz ≈ 0, are kinematically hot. The fraction of stars on `cold' orbits, compared with the fraction on `hot' orbits, speaks directly to the quiescence or violence of the galaxies' formation histories4,5. Here we present such orbit distributions, derived from stellar kinematic maps through orbit-based modelling for a well-defined, large sample of 300 nearby galaxies. The sample, drawn from the CALIFA survey6, includes the main morphological galaxy types and spans a total stellar mass range from 10^8.7 to 10^11.9 solar masses. Our analysis derives the orbit-circularity distribution as a function of galaxy mass and its volume-averaged total distribution. We find that across most of the considered mass range and across morphological types, there are more stars on `warm' orbits defined as 0.25 ≤ λz ≤ 0.8 than on either `cold' or `hot' orbits. This orbit-based `Hubble diagram' provides a benchmark for galaxy formation simulations in a cosmological context.

  15. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  16. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, H.R.

    1997-11-18

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size determined by the resolution of the projection data, e.g., by the size of each pixel. One of the projections is used as a reference projection, and a fictitious object constrained by this reference projection is arbitrarily defined. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections against those of the unknown structure of the real object, and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes: acquire the four projections from the real data and from the fictitious object, generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for acquiring the four projections from the input real data simply extracts them from the transmitted-intensity data. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.
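    The following sketch conveys the flavour of such projection-matching iteration in a deliberately simplified form: it matches only the row and column projections of a hypothetical object via iterative proportional fitting, whereas the patented method uses four projections and an explicit optimization step:

```python
import numpy as np

rng = np.random.default_rng(9)
# "Real" object (unknown to the algorithm) and its projections. This sketch
# uses only the horizontal and vertical projections; the patented method
# uses four (including the two diagonals) plus an optimization step.
real = rng.random((16, 16))
proj_rows, proj_cols = real.sum(axis=1), real.sum(axis=0)

# Fictitious object: an arbitrary starting distribution, here uniform.
obj = np.ones_like(real)

for _ in range(200):  # iterate until the projections of the fictitious
    obj *= (proj_rows / obj.sum(axis=1))[:, None]   # object match the data
    obj *= proj_cols / obj.sum(axis=0)

print("row-projection error:", np.abs(obj.sum(axis=1) - proj_rows).max())
print("col-projection error:", np.abs(obj.sum(axis=0) - proj_cols).max())
```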

  17. Segregation of face sensitive areas within the fusiform gyrus using global signal regression? A study on amygdala resting-state functional connectivity.

    PubMed

    Kruschwitz, Johann D; Meyer-Lindenberg, Andreas; Veer, Ilya M; Wackerhagen, Carolin; Erk, Susanne; Mohnke, Sebastian; Pöhland, Lydia; Haddad, Leila; Grimm, Oliver; Tost, Heike; Romanczuk-Seiferth, Nina; Heinz, Andreas; Walter, Martin; Walter, Henrik

    2015-10-01

    The application of global signal regression (GSR) to resting-state functional magnetic resonance imaging data and its usefulness is a widely discussed topic. In this article, we report an observation of segregated distribution of amygdala resting-state functional connectivity (rs-FC) within the fusiform gyrus (FFG) as an effect of GSR in a multi-center sample of 276 healthy subjects. Specifically, we observed that amygdala rs-FC was distributed within the FFG as distinct anterior versus posterior clusters delineated by positive versus negative rs-FC polarity when GSR was performed. To characterize this effect in more detail, post hoc analyses revealed the following: first, direct overlays of task-fMRI-derived face-sensitive areas and clusters of positive versus negative amygdala rs-FC showed that the positive amygdala rs-FC cluster corresponded best to the fusiform face area, whereas the occipital face area corresponded to the negative amygdala rs-FC cluster. Second, as expected from a hierarchical face perception model, these amygdala rs-FC defined clusters showed differential rs-FC with other regions of the visual stream. Third, dynamic connectivity analyses revealed that these clusters also differed in the variance across time of their rs-FC to the amygdala. Furthermore, subsample analyses of three independent research sites confirmed the reliability of the GSR effect, as revealed by similar patterns of distinct amygdala rs-FC polarity within the FFG. We discuss the potential of GSR to segregate face-sensitive areas within the FFG and how our results may relate to the functional organization of the face-perception circuit. © 2015 Wiley Periodicals, Inc.

  18. Functional organization of area V2 in the alert macaque.

    PubMed

    Peterhans, E; von der Heydt, R

    1993-05-01

    We studied the relation between anatomical structure and functional properties of cells in area V2 of the macaque. Visual function was assessed in the alert animal during fixation of gaze. Recording sites were reconstructed with respect to cortical lamination and the cytochrome oxidase pattern. We measured orientation and direction selectivity, end-stopping, sensitivity to binocular disparity and ocular dominance, and determined more complex functions like sensitivity to anomalous contours and lines defined by coherent motion. Orientation selectivity was found in all parts of area V2, with high frequencies in the pale and thick stripes of the cytochrome oxidase pattern, and with lower frequency in the thin stripes. Representations of anomalous contours were found in the pale and thick stripes with similar frequencies, but generally not in the thin stripes, which have been thought to process colour. Lines defined by coherent motion were most frequently represented in the thick stripes; they were less frequent in the pale stripes, and (as with anomalous contours) were not found in the thin stripes. Sensitivity to binocular disparity was found in all types of stripes, but more frequently in the thick stripes, where the exclusively binocular neurons were also concentrated. By contrast, no segregation was found for direction selectivity and end-stopping. All neuronal properties were distributed evenly across cortical laminae. We conclude that mechanisms for figure-ground segregation involve the pale and the thick stripes of the cytochrome oxidase pattern, perhaps with greater emphasis on 'shape from motion' and 'stereoscopic depth' in the thick stripes, while more elementary neuronal properties are distributed almost evenly across the stripe pattern.

  19. Risk and utility in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Cohen, Morrel H.; Natoli, Vincent D.

    2003-06-01

    Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
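    A minimal numerical sketch of these two criteria, assuming (as in the abstract's analytic case) a Gaussian yield distribution; the goal, mean, volatility, and the exponential utility u(y) = 1 - exp(-a*y) are illustrative choices, not the authors' calibration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical portfolio: annualized yield is Gaussian with mean mu, std sigma.
mu, sigma = 0.07, 0.15
goal = 0.04  # pre-established investment goal (4% yield)

# Risk as defined in the abstract: probability of failing to meet the goal.
risk = norm.cdf(goal, loc=mu, scale=sigma)

# Utility: expectation of a utility function with positive and decreasing
# marginal value, here u(y) = 1 - exp(-a*y), estimated by Monte Carlo.
a = 3.0
rng = np.random.default_rng(1)
yields = rng.normal(mu, sigma, 1_000_000)
utility = np.mean(1.0 - np.exp(-a * yields))

print(f"risk of missing goal: {risk:.3f}, expected utility: {utility:.4f}")
```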

  20. Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio

    The purpose of this study is development of a supporting system for teacher's design of lesson plan. Especially design of lesson plan which relates to the new subject "Information Study" is supported. In this study, we developed a system which generates teaching and learning activity sequences by interlinking lesson's activities corresponding to the various conditions according to the user's input. Because user's input is multiple information, there will be caused contradiction which the system should solve. This multiobjective optimization problem is resolved by Distributed Genetic Algorithms, in which some fitness functions are defined with reference models on lesson, thinking and teaching style. From results of various experiments, effectivity and validity of the proposed methods and reference models were verified; on the other hand, some future works on reference models and evaluation functions were also pointed out.

  1. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange over the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations, which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as the better model when the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. Overall, the distribution of intertrade durations follows a Weibull form for small durations, crossing over to a power-law tail with an asymptotic tail exponent close to 3.
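    A hedged sketch of the distribution comparison on synthetic durations (real inputs would be the normalized intertrade durations): fit a Weibull by maximum likelihood and compare its log-likelihood against an exponential baseline, the q -> 1 limit of the q-exponential:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical normalized intertrade durations; real data would come from
# the limit order book. A Weibull shape < 1 mimics bursty trading.
durations = rng.weibull(0.7, size=50_000)

# Maximum likelihood fit of a Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(durations, floc=0)
print(f"Weibull MLE: shape={shape:.3f}, scale={scale:.3f}")

# Compare log-likelihoods against an exponential fit (q-exponential, q -> 1).
loglik_weibull = np.sum(stats.weibull_min.logpdf(durations, shape, 0, scale))
loglik_exp = np.sum(stats.expon.logpdf(durations, scale=durations.mean()))
print(f"logL Weibull {loglik_weibull:.0f} vs exponential {loglik_exp:.0f}")
```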

  2. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    PubMed

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  3. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  4. A radially resolved kinetic model for nonlocal electron ripple diffusion losses in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott

    A relatively simple radially resolved kinetic model is applied to the ripple diffusion problem for electrons in tokamaks. The distribution function f(r,v) is defined on a two-dimensional grid, where r is the radial coordinate and v is the velocity coordinate. Particle transport in the radial direction is from ripple and banana diffusion, and transport in the velocity direction is described by the Fokker-Planck equation. Particles and energy are replaced by source functions that are adjusted to maintain a constant central density and temperature. The relaxed profiles of f(r,v) show that the electron distribution function at the wall contains suprathermal electrons, diffused from the interior, that enhance ripple transport. The transport at the periphery is therefore nonlocal. The energy replacement times from the computational model are close to the experimental replacement times for tokamak discharges in the compilation by Pfeiffer and Waltz [Nucl. Fusion 19, 51 (1979)].

  5. Theory of a general class of dissipative processes.

    NASA Technical Reports Server (NTRS)

    Hale, J. K.; Lasalle, J. P.; Slemrod, M.

    1972-01-01

    Development of a theory of periodic processes that is of sufficient generality to be applied to systems defined by partial differential equations (distributed parameter systems) and functional differential equations of the retarded and neutral type (hereditary systems), as well as to systems arising in the theory of elasticity. In particular, an attempt is made to develop a meaningful general theory of dissipative periodic systems with a wide range of applications.

  6. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

    A way to make financial models effective is to let them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
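    For reference, the Kaniadakis deformed exponential commonly takes the form exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ), which recovers exp(x) as κ -> 0 and decays as a power law for large |x|, hence the fat tails. A small sketch (the κ value is illustrative):

```python
import numpy as np

def kaniadakis_exp(x, kappa=0.25):
    """Kaniadakis deformed exponential exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k).
    Recovers the ordinary exponential as kappa -> 0; decays as a power law
    ~ |x|^(-1/kappa) for large negative arguments, producing fat tails."""
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(kaniadakis_exp(x, 0.25))   # heavier tails than...
print(np.exp(x))                 # ...the standard exponential
```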

  7. Thermodynamics of rock forming crystalline solutions

    NASA Technical Reports Server (NTRS)

    Saxena, S. K.

    1971-01-01

    Analysis of phase diagrams and cation distributions within crystalline solutions as means of obtaining thermodynamic data on rock forming crystalline solutions is discussed along with some aspects of partitioning of elements in coexisting phases. Crystalline solutions, components in a silicate mineral, and chemical potentials of these components were defined. Examples were given for calculating thermodynamic mixing functions in the CaW04-SrW04, olivine-chloride solution, and orthopyroxene systems.

  8. BRDF profile of Tyvek and its implementation in the Geant4 simulation toolkit.

    PubMed

    Nozka, Libor; Pech, Miroslav; Hiklova, Helena; Mandat, Dusan; Hrabovsky, Miroslav; Schovanek, Petr; Palatka, Miroslav

    2011-02-28

    Diffuse and specular characteristics of the Tyvek 1025-BL material are reported with respect to their implementation in the Geant4 Monte Carlo simulation toolkit. This toolkit incorporates the UNIFIED model. Coefficients defined by the UNIFIED model were calculated from the bidirectional reflectance distribution function (BRDF) profiles measured with a scatterometer for several angles of incidence. Results were amended with profile measurements made by a profilometer.

  9. Random field assessment of nanoscopic inhomogeneity of bone

    PubMed Central

    Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu

    2010-01-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as the mean, standard deviation, and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of the elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated by simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of the elastic modulus in lamellae of bone. Indeed, the generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
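    A minimal sketch of sampling such a random field, assuming a 1-D line of positions, an exponential covariance with a chosen correlation length, and illustrative (not measured) modulus values; correlated samples are obtained from white noise via a Cholesky factor of the covariance matrix:

```python
import numpy as np

# Sample a 1-D Gaussian random field of elastic moduli with exponential
# covariance C(d) = sigma^2 * exp(-d / L), where L is the correlation length.
n = 200
x = np.linspace(0.0, 10.0, n)              # positions across a lamella (um)
mean_E, sigma, corr_len = 20.0, 2.0, 1.5   # GPa, GPa, um (illustrative)

d = np.abs(x[:, None] - x[None, :])        # pairwise distances
cov = sigma**2 * np.exp(-d / corr_len)

# Cholesky factorization turns white noise into correlated samples.
Lc = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
rng = np.random.default_rng(3)
field = mean_E + Lc @ rng.standard_normal(n)
print(field[:5])
```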

  10. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1,2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in the northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
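    As a stand-in for the paper's EDF-based measure, the one-sample Kolmogorov-Smirnov statistic illustrates the idea: compare the empirical distribution function of (here synthetic) observations against the modelled distribution and flag failure when they diverge:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical setup: a climate model predicts observations follow N(0, 1),
# but the real system has drifted to N(0.8, 1).
observations = rng.normal(0.8, 1.0, size=60)

# Goodness of fit via the empirical distribution function: the one-sample
# Kolmogorov-Smirnov statistic measures divergence between the EDF of the
# observations and the modelled distribution.
stat, p_value = stats.kstest(observations, stats.norm(0, 1).cdf)
print(f"KS statistic {stat:.3f}, p-value {p_value:.4f}")
if p_value < 0.05:
    print("Prediction failure detected: observations diverge from the model.")
```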

  11. MyDas, an Extensible Java DAS Server

    PubMed Central

    Jimenez, Rafael C.; Quinn, Antony F.; Jenkinson, Andrew M.; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details. PMID:23028496

  12. Forebody and afterbody solutions of the Navier-Stokes equations for supersonic flow over blunt bodies in a generalized orthogonal coordinate system

    NASA Technical Reports Server (NTRS)

    Gnoffo, P. A.

    1978-01-01

    A coordinate transformation, which can approximate many different two-dimensional and axisymmetric body shapes with an analytic function, is used as a basis for solving the Navier-Stokes equations for the purpose of predicting 0 deg angle-of-attack supersonic flow fields. The transformation defines a curvilinear, orthogonal coordinate system in which coordinate lines are perpendicular to the body and the body is defined by one coordinate line. This system is mapped into a rectangular computational domain in which the governing flow field equations are solved numerically. Advantages of this technique are that the specification of boundary conditions is simplified and, most importantly, that the entire flow field can be obtained, including flow in the wake. Good agreement has been obtained with experimental data for pressure distributions, density distributions, and heat transfer over spheres and cylinders in supersonic flow. Approximations to the Viking aeroshell and to a candidate Jupiter probe are presented, and flow fields over these shapes are calculated.

  13. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
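    A toy sketch of the MLMC idea used here (the level function and sample allocation are invented stand-ins, not the paper's nonlocal-equation solver): the estimator telescopes E[Q_L] into a coarse-level mean plus level-difference corrections, each estimated with independent Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(5)

def quantity_of_interest(theta, level):
    """Hypothetical level-l approximation of a functional of the solution.
    Finer levels (larger l) are more accurate and more expensive; here a
    cheap stand-in whose bias decays like 2^-l."""
    return np.sin(theta) + 2.0 ** (-level) * np.cos(3 * theta)

def mlmc_estimate(max_level, samples_per_level):
    """Telescoping MLMC estimator: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    with each expectation estimated independently by Monte Carlo."""
    total = 0.0
    for level, n in zip(range(max_level + 1), samples_per_level):
        theta = rng.normal(0.0, 1.0, n)   # random input parameter
        if level == 0:
            total += quantity_of_interest(theta, 0).mean()
        else:
            diff = quantity_of_interest(theta, level) - quantity_of_interest(theta, level - 1)
            total += diff.mean()          # corrections shrink with level,
    return total                          # so few samples are needed there

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate(max_level=3, samples_per_level=[100_000, 10_000, 1_000, 100]))
```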

  14. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  15. MyDas, an extensible Java DAS server.

    PubMed

    Salazar, Gustavo A; García, Leyla J; Jones, Philip; Jimenez, Rafael C; Quinn, Antony F; Jenkinson, Andrew M; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details.

  16. Drosophila germ granules are structured and contain homotypic mRNA clusters

    PubMed Central

    Trcek, Tatjana; Grosch, Markus; York, Andrew; Shroff, Hari; Lionnet, Timothée; Lehmann, Ruth

    2015-01-01

    Germ granules, specialized ribonucleoprotein particles, are a hallmark of all germ cells. In Drosophila, an estimated 200 mRNAs are enriched in the germ plasm, and some of these have important, often conserved roles in germ cell formation, specification, survival and migration. How mRNAs are spatially distributed within a germ granule and whether their position defines functional properties is unclear. Here we show, using single-molecule FISH and structured illumination microscopy, a super-resolution approach, that mRNAs are spatially organized within the granule whereas core germ plasm proteins are distributed evenly throughout the granule. Multiple copies of single mRNAs organize into ‘homotypic clusters' that occupy defined positions within the center or periphery of the granule. This organization, which is maintained during embryogenesis and independent of the translational or degradation activity of mRNAs, reveals new regulatory mechanisms for germ plasm mRNAs that may be applicable to other mRNA granules. PMID:26242323

  17. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    PubMed

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined reside on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than approximation with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
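    A hedged sketch of the projection-then-approximate scheme on synthetic data lying on a 3-dimensional linear manifold: the projection (here PCA, one possible choice) is fitted from a sparse subsample, and a neural network is trained over the projected space; all sizes and architectures are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
# Hypothetical data: 100-dimensional points that actually live on a
# 3-dimensional linear manifold, with a scalar target function defined on it.
latent = rng.uniform(-1, 1, size=(5000, 3))
A = rng.standard_normal((3, 100))
X = latent @ A                       # embed into the high-dimensional space
y = np.sin(latent[:, 0]) + latent[:, 1] * latent[:, 2]

# Learn the low-dimensional projection from a sparse subsample only
# (the abstract's point: the projection can rest on limited data).
pca = PCA(n_components=3).fit(X[rng.choice(len(X), 200, replace=False)])
X_low = pca.transform(X)

# Approximate the function over the projected space instead of the raw space.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_low[:4000], y[:4000])
print("held-out R^2:", net.score(X_low[4000:], y[4000:]))
```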

  18. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performances of the weighted Kaplan-Meier estimate on finite samples exceed that of the usual Kaplan-Meier estimate. A case study is also presented.
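    For context, a bare-bones product-limit (Kaplan-Meier) estimator on a toy right-censored sample; the adaptive weighting proposed in the paper is not reproduced here:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit (Kaplan-Meier) estimate of the survival function from
    right-censored data; `observed` is True where the event was seen."""
    times = np.asarray(times, float)
    observed = np.asarray(observed, bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    n = len(times)
    survival = 1.0
    steps = []
    for i, (t, seen) in enumerate(zip(times, observed)):
        at_risk = n - i
        if seen:  # an event (not censored): multiply in (1 - 1/at_risk)
            survival *= 1.0 - 1.0 / at_risk
            steps.append((t, survival))
    return steps

# Toy right-censored sample: observed=0 marks a censored time.
times    = [2, 3, 3, 5, 8, 9, 12]
observed = [1, 1, 0, 1, 0, 1, 1]
for t, s in kaplan_meier(times, observed):
    print(f"t={t}: S(t)={s:.3f}")
```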

  19. Two stochastic models useful in petroleum exploration

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1972-01-01

    A model of the petroleum exploration process that empirically tests the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described; it is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of the areal extent, geographic location, and shape of oil deposits.

  20. Distributions and motions of nearby stars defined by objective prism surveys and Hipparcos data

    NASA Technical Reports Server (NTRS)

    Hemenway, P. D.; Lee, J. T.; Upgren, A. R.

    1997-01-01

    Material and objective prism spectral classification work is used to determine the space density distribution of nearby common stars to the limits of objective prism spectral surveys. The aim is to extend the knowledge of the local densities of specific spectral types from a radius of 25 pc from the sun, as limited in the Gliese catalog of nearby stars, to 50 pc or more. Future plans for the application of these results to studies of the kinematic and dynamical properties of stars in the solar neighborhood as a function of their physical properties and ages are described.

  1. Lattice QCD Studies of Transverse Momentum-Dependent Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Engelhardt, M.; Musch, B.; Hägler, P.; Negele, J.; Schäfer, A.

    2015-09-01

    Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped gauge link. Such a definition opens the possibility of evaluating TMDs within lattice QCD. By parametrizing the aforementioned matrix elements in terms of invariant amplitudes, the problem can be cast in a Lorentz frame suited for the lattice calculation. Results for selected TMD observables are presented, including a particular focus on their dependence on a Collins-Soper-type evolution parameter, which quantifies proximity of the staple-shaped gauge links to the light cone.

  2. Rainbow Fourier Transform

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail D.; Cairns, Brian; Mishchenko, Michael I.

    2012-01-01

    We present a novel technique for remote sensing of cloud droplet size distributions. Polarized reflectances in the scattering angle range between 135deg and 165deg exhibit a sharply defined rainbow structure, the shape of which is determined mostly by single scattering properties of cloud particles, and therefore, can be modeled using the Mie theory. Fitting the observed rainbow with such a model (computed for a parameterized family of particle size distributions) has been used for cloud droplet size retrievals. We discovered that the relationship between the rainbow structures and the corresponding particle size distributions is deeper than it had been commonly understood. In fact, the Mie theory-derived polarized reflectance as a function of reduced scattering angle (in the rainbow angular range) and the (monodisperse) particle radius appears to be a proxy to a kernel of an integral transform (similar to the sine Fourier transform on the positive semi-axis). This approach, called the rainbow Fourier transform (RFT), allows us to accurately retrieve the shape of the droplet size distribution by the application of the corresponding inverse transform to the observed polarized rainbow. While the basis functions of the proxy-transform are not exactly orthogonal in the finite angular range, this procedure needs to be complemented by a simple regression technique, which removes the retrieval artifacts. This non-parametric approach does not require any a priori knowledge of the droplet size distribution functional shape and is computationally fast (no look-up tables, no fitting, computations are the same as for the forward modeling).

  3. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

    We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing the changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which refers to the value immediately prior to each target event. The distribution of the background is a symmetric function whose center corresponds to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, tending toward a decreased b value. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in a range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.

  4. On residual stresses and homeostasis: an elastic theory of functional adaptation in living matter.

    PubMed

    Ciarletta, P; Destrade, M; Gower, A L

    2016-04-26

    Living matter can functionally adapt to external physical factors by developing internal tensions, easily revealed by cutting experiments. Nonetheless, residual stresses intrinsically have a complex spatial distribution, and destructive techniques cannot be used to identify a natural stress-free configuration. This work proposes a novel elastic theory of pre-stressed materials. Imposing physical compatibility and symmetry arguments, we define a new class of free energies explicitly depending on the internal stresses. This theory is finally applied to the study of arterial remodelling, proving its potential for the non-destructive determination of the residual tensions within biological materials.

  5. Deconvolution Methods for Multi-Detectors

    DTIC Science & Technology

    1989-08-30

    in [7]. We will say sometimes that the family of distributions μ1, …, μm is strongly coprime. It might be useful to explain why (4) is called a […] form g in the variable λ, given by […]. Given a family of m entire holomorphic functions f1, …, fm, its zero set Z is defined […]. Recall the coefficients gi are holomorphic in both z and t. Let F be the vector-valued holomorphic function F := (f1, …, fm) […]

  6. Determining the parameters of Weibull function to estimate the wind power potential in conditions of limited source meteorological data

    NASA Astrophysics Data System (ADS)

    Fetisova, Yu. A.; Ermolenko, B. V.; Ermolenko, G. V.; Kiseleva, S. V.

    2017-04-01

    We studied the information basis for the assessment of wind power potential on the territory of Russia. We describe a methodology for determining the parameters of the Weibull function, which represents the probability density of wind speeds at a defined reference height above the surface of the earth, using the available data on the average speed at this height and its frequency by gradations. Unlike graphical methods, applying the least-squares method to determine these parameters allows a statistical assessment of how well the Weibull formula approximates the empirical histograms. Computer-aided analysis of the statistical data showed that, at a fixed point where the wind speed changes with height, the range of variation of the parameters of the Weibull distribution curve is relatively small, the sensitivity of the function to parameter changes is quite low, and the influence of such changes on the shape of the speed distribution curves is negligible. Taking this into consideration, we propose and mathematically verify a methodology for determining the speed parameters of the Weibull function at other heights from the parameters computed at a reference height, which is known or defined by the average wind speed or the roughness coefficient of the underlying terrain. We give examples of the practical application of the suggested methodology in the development of the Atlas of Renewable Energy Resources in Russia under conditions of scarce source meteorological data. The proposed methodology may, to some extent, solve the problem of missing information on the vertical profile of wind speed frequencies, given the wide assortment of wind turbines on the global market with different wind-wheel axis heights and performance characteristics; as a result, it can become a powerful tool for the effective selection of equipment when designing a power supply system at a given location.
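    A minimal sketch of the least-squares determination of the Weibull parameters from speed gradations, using the standard linearization ln(-ln(1 - F)) = k ln v - k ln c on an invented histogram:

```python
import numpy as np

# Hypothetical wind-speed histogram: gradation upper bounds (m/s) and the
# relative frequency of speeds falling in each gradation.
v_upper = np.array([2, 4, 6, 8, 10, 12, 14], dtype=float)
freq = np.array([0.12, 0.24, 0.26, 0.18, 0.11, 0.06, 0.03])

# Empirical CDF at the gradation bounds.
F = np.cumsum(freq)
F = np.clip(F, None, 0.999)  # avoid log(0) at the last bin

# The Weibull CDF F(v) = 1 - exp(-(v/c)^k) linearizes to
#   ln(-ln(1 - F)) = k * ln(v) - k * ln(c),
# so (k, c) follow from an ordinary least-squares line fit.
y = np.log(-np.log(1.0 - F))
xs = np.log(v_upper)
k, intercept = np.polyfit(xs, y, 1)
c = np.exp(-intercept / k)
print(f"Weibull shape k = {k:.2f}, scale c = {c:.2f} m/s")
```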

  7. Centralized versus distributed propulsion

    NASA Technical Reports Server (NTRS)

    Clark, J. P.

    1982-01-01

    The functions and requirements of auxiliary propulsion systems are reviewed. None of the three major tasks (attitude control, stationkeeping, and shape control) can be performed by a collection of thrusters at a single central location. If a centralized system is defined as a collection of separated clusters, made up of the minimum number of propulsion units, then such a system can provide attitude control and stationkeeping for most vehicles. A distributed propulsion system is characterized by more numerous propulsion units in a regularly distributed arrangement. Various proposed large space systems are reviewed and it is concluded that centralized auxiliary propulsion is best suited to vehicles with a relatively rigid core. These vehicles may carry a number of flexible or movable appendages. A second group, consisting of one or more large flexible flat plates, may need distributed propulsion for shape control. There is a third group, consisting of vehicles built up from multiple shuttle launches, which may be forced into a distributed system because of the need to add additional propulsion units as the vehicles grow. The effects of distributed propulsion on a beam-like structure were examined. The deflection of the structure under both translational and rotational thrusts is shown as a function of the number of equally spaced thrusters. When two thrusters only are used it is shown that location is an important parameter. The possibility of using distributed propulsion to achieve minimum overall system weight is also examined. Finally, an examination of the active damping by distributed propulsion is described.

  8. Probability and the changing shape of response distributions for orientation.

    PubMed

    Anderson, Britt

    2014-11-18

    Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
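    A quick numerical illustration of the mixture argument: a scale mixture of two zero-mean Gaussian components (illustrative variances, not fitted tuning functions) shows positive excess kurtosis even though each component is itself normal:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(8)
# A scale mixture of two zero-mean Gaussians (e.g. tuning functions whose
# variances differ across the population) is leptokurtic even though each
# component is normal: precise responses mix with occasional large errors.
narrow = rng.normal(0.0, 2.0, 50_000)   # degrees of orientation error
wide = rng.normal(0.0, 12.0, 10_000)
errors = np.concatenate([narrow, wide])

print("excess kurtosis, single Gaussian:", kurtosis(narrow).round(2))
print("excess kurtosis, mixture:       ", kurtosis(errors).round(2))
```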

  9. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, a D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study, whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.

  10. Space station automation of common module power management and distribution

    NASA Technical Reports Server (NTRS)

    Miller, W.; Jones, E.; Ashworth, B.; Riedesel, J.; Myers, C.; Freeman, K.; Steele, D.; Palmer, R.; Walsh, R.; Gohring, J.

    1989-01-01

    The purpose is to automate a breadboard-level Power Management and Distribution (PMAD) system which possesses many functional characteristics of a specified Space Station power system. The automation system was built upon a 20 kHz ac source with redundant power buses. Two power distribution control units furnish power to six load centers, which in turn enable load circuits based upon a system-generated schedule. The progress in building this specified autonomous system is described. Automation of the Space Station Module PMAD was accomplished by segmenting the complete task into the following four independent tasks: (1) develop a detailed approach for PMAD automation; (2) define the software and hardware elements of automation; (3) develop the automation system for the PMAD breadboard; and (4) select an appropriate host processing environment.

  11. Energy loss analysis of an integrated space power distribution system

    NASA Technical Reports Server (NTRS)

    Kankam, M. D.; Ribeiro, P. F.

    1992-01-01

    The results of studies related to conceptual topologies of an integrated, utility-like space power system are described. The system topologies are comparatively analyzed by considering their transmission energy losses as functions of, mainly, distribution voltage level and load composition. The analysis is expedited by use of the Distribution System Analysis and Simulation (DSAS) software. This computer program, recently developed by the Electric Power Research Institute (EPRI), uses improved load models to solve the power flow within the system. However, present shortcomings of the software with regard to space applications, and the incompletely defined characteristics of a space power system, make the results applicable only to the fundamental trends in the energy losses of the topologies studied. The accounting included here for the effects of the various parameters on system performance can constitute part of a planning tool for a space power distribution system.

  12. A development framework for artificial intelligence based distributed operations support systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1990-01-01

    Advanced automation is required to reduce costly human operations support requirements for complex space-based and ground control systems. Existing knowledge-based technologies have been used successfully to automate individual operations tasks. Considerably less progress has been made in integrating and coordinating multiple operations applications into unified intelligent support systems. To fill this gap, SOCIAL, a tool set for developing Distributed Artificial Intelligence (DAI) systems, is being constructed. SOCIAL consists of three primary language-based components defining: models of interprocess communication across heterogeneous platforms; models for interprocess coordination, concurrency control, and fault management; and models for accessing heterogeneous information resources. DAI application subsystems, either new or existing, will access these distributed services non-intrusively, via high-level message-based protocols. SOCIAL will reduce the complexity of distributed communications, control, and integration, enabling developers to concentrate on the design and functionality of the target DAI system itself.

  13. Representations and uses of light distribution functions

    NASA Astrophysics Data System (ADS)

    Lalonde, Paul Albert

    1998-11-01

    At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representations of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, these data require some form of interpolation to use. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms. The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to evaluate efficiently some integrals that appear in shading computations, which allows fast, accurate computation of local shading. It can represent light fields and is used to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.

  14. Energetic investigation of the adsorption process of CH4, C2H6 and N2 on activated carbon: Numerical and statistical physics treatment

    NASA Astrophysics Data System (ADS)

    Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb

    2014-01-01

    The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E), which represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method based on an adequate algorithm was used, with the Langmuir model adopted as the local adsorption isotherm. This model is developed using the grand canonical ensemble, which allows the physico-chemical parameters involved in the adsorption process to be defined. The AED function is estimated by a normal Gaussian function. The method is applied to the adsorption isotherms of nitrogen, methane, and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the behaviour of the gas molecules during the adsorption process and gives new physical interpretations at the microscopic level.

  15. Methods and limitations in radar target imagery

    NASA Astrophysics Data System (ADS)

    Bertrand, P.

    An analytical examination of the reflectivity of radar targets is presented for the two-dimensional case of flat targets. A complex backscattering coefficient is defined relating the amplitude and phase of the received field to those of the emitted field. The coefficient depends on the frequency of the emitted signal and the orientation of the target with respect to the transmitter. The target reflection is modeled in terms of a density of illuminated, colored points independent from one another. The target therefore is represented as an infinite family of densities indexed by the observation angle. Attention is given to the reflectivity parameters and their distribution function, and to the joint distribution function for the color, position, and directivity of bright points. It is shown that a fundamental ambiguity exists between the localization of the illuminated points and the determination of their directivity and color.

  16. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
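    A compact sketch of subset simulation for a small failure probability, assuming standard normal inputs and a linear limit state for which the exact answer is known; the conditional levels are populated here with a simple random-walk Metropolis step rather than the component-wise sampler usually used:

```python
import numpy as np

def subset_simulation(g, dim, n_samples=2000, p0=0.1, seed=6):
    """Minimal subset simulation sketch: estimate P[g(X) <= 0] for standard
    normal X via a chain of intermediate failure events, each of conditional
    probability ~p0, populated by Metropolis resampling from the seeds."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))
    gx = np.apply_along_axis(g, 1, x)
    p_f, n_keep = 1.0, int(p0 * n_samples)
    for _ in range(20):                    # cap the number of levels
        idx = np.argsort(gx)[:n_keep]      # samples closest to failure
        threshold = gx[idx[-1]]
        if threshold <= 0:                 # reached the true failure domain
            return p_f * np.mean(gx <= 0)
        p_f *= p0
        chains = n_samples // n_keep       # grow each seed into a short chain
        x_new, g_new = [], []
        for xs, gs in zip(x[idx], gx[idx]):
            cur_x, cur_g = xs.copy(), gs
            for _ in range(chains):
                cand = cur_x + 0.8 * rng.standard_normal(dim)
                # Metropolis ratio for a standard normal target,
                # restricted to the current intermediate event g <= threshold.
                if rng.random() < np.exp(0.5 * (cur_x @ cur_x - cand @ cand)):
                    g_cand = g(cand)
                    if g_cand <= threshold:
                        cur_x, cur_g = cand, g_cand
                x_new.append(cur_x.copy())
                g_new.append(cur_g)
        x, gx = np.array(x_new), np.array(g_new)
    return p_f * np.mean(gx <= 0)

# Example: linear limit state g(x) = beta - x1; exact P_f = Phi(-3.5) ~ 2.3e-4.
pf = subset_simulation(lambda x: 3.5 - x[0], dim=2)
print(f"estimated P_f ~ {pf:.1e}")
```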

  17. A new type of simplified fuzzy rule-based system

    NASA Astrophysics Data System (ADS)

    Angelov, Plamen; Yager, Ronald

    2012-02-01

    Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems have dominated, namely the Mamdani and Takagi-Sugeno types. They use the same type of scalar fuzzy sets, defined per input variable, in their antecedent part, aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative that defines the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB system goes further in conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers an alternative, non-parametric form of the rule antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations or scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud from the density distribution of the data associated with that Cloud. Contrast this with clustering, a parametric data-space decomposition/partitioning in which the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype, ignoring all the data that form that cluster or approximating their distribution. The proposed new approach takes into account, fully and exactly, the spatial distribution and similarity of all the real data through an innovative and much simplified form of the antecedent part. In this paper, we provide several numerical examples aiming to illustrate the concept.

  18. BRAIN NETWORKS. Correlated gene expression supports synchronous activity in brain networks.

    PubMed

    Richiardi, Jonas; Altmann, Andre; Milazzo, Anna-Clare; Chang, Catie; Chakravarty, M Mallar; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Bromberg, Uli; Büchel, Christian; Conrod, Patricia; Fauth-Bühler, Mira; Flor, Herta; Frouin, Vincent; Gallinat, Jürgen; Garavan, Hugh; Gowland, Penny; Heinz, Andreas; Lemaître, Hervé; Mann, Karl F; Martinot, Jean-Luc; Nees, Frauke; Paus, Tomáš; Pausova, Zdenka; Rietschel, Marcella; Robbins, Trevor W; Smolka, Michael N; Spanagel, Rainer; Ströhle, Andreas; Schumann, Gunter; Hawrylycz, Mike; Poline, Jean-Baptiste; Greicius, Michael D

    2015-06-12

    During rest, brain activity is synchronized between different regions widely distributed throughout the brain, forming functional networks. However, the molecular mechanisms supporting functional connectivity remain undefined. We show that functional brain networks defined with resting-state functional magnetic resonance imaging can be recapitulated by using measures of correlated gene expression in a post mortem brain tissue data set. The set of 136 genes we identify is significantly enriched for ion channels. Polymorphisms in this set of genes significantly affect resting-state functional connectivity in a large sample of healthy adolescents. Expression levels of these genes are also significantly associated with axonal connectivity in the mouse. The results provide convergent, multimodal evidence that resting-state functional networks correlate with the orchestrated activity of dozens of genes linked to ion channel activity and synaptic function. Copyright © 2015, American Association for the Advancement of Science.

  19. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [about 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.

  20. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [approximately 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.

  1. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers when applying distributed parameter-myocardial blood flow between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
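
    To make the Fermi-model step concrete, here is a hedged sketch of the usual constrained-deconvolution recipe: the tissue impulse response is restricted to a Fermi function, the myocardial curve is modeled as the arterial input function (AIF) convolved with that response, and flow is read off the response amplitude. The synthetic input function, parameter values, and variable names are illustrative assumptions only, not the study's data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 60.0, 1.0)                 # time in seconds (assumption)
aif = np.exp(-(t - 10.0) ** 2 / 8.0)        # synthetic arterial input function

def fermi_response(t, F, tau0, k):
    # Fermi-shaped tissue impulse response; MBF is proportional to R(0) ~ F.
    return F / (1.0 + np.exp((t - tau0) / k))

def tissue_model(t, F, tau0, k):
    # Myocardial signal = AIF convolved with the impulse response.
    r = fermi_response(t, F, tau0, k)
    return np.convolve(aif, r)[: len(t)] * (t[1] - t[0])

true = (1.2, 15.0, 4.0)
tissue = tissue_model(t, *true) + 0.01 * np.random.default_rng(0).normal(size=t.size)
popt, _ = curve_fit(tissue_model, t, tissue, p0=(1.0, 10.0, 3.0))
print("estimated amplitude F (proportional to MBF):", popt[0])
```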

  2. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, (2) we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
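
    A minimal sketch of a Gaussian-copula error likelihood in the spirit described above, under the assumption of empirical marginals and a correlation matrix estimated from past errors; the authors' estimator may differ in detail, and a full likelihood would also add the marginal log-densities.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

past_errors = np.random.default_rng(0).standard_t(df=4, size=(500, 3))  # stand-in data

def empirical_cdf(sample, x):
    # Rank-based CDF estimate, kept strictly inside (0, 1).
    return (np.searchsorted(np.sort(sample), x, side="right") + 0.5) / (len(sample) + 1)

def copula_log_likelihood(errors, past_errors):
    # 1. Map each error dimension to (0,1) through its empirical marginal CDF.
    u = np.array([empirical_cdf(past_errors[:, j], errors[:, j])
                  for j in range(errors.shape[1])]).T
    # 2. Transform to normal scores.
    z = norm.ppf(u)
    # 3. Correlation of the past errors' normal scores defines the copula.
    z_past = norm.ppf(np.array([empirical_cdf(past_errors[:, j], past_errors[:, j])
                                for j in range(past_errors.shape[1])]).T)
    R = np.corrcoef(z_past, rowvar=False)
    # 4. Gaussian copula log-density: MVN(z; 0, R) minus independent N(0,1) margins.
    return (multivariate_normal(mean=np.zeros(R.shape[0]), cov=R).logpdf(z)
            - norm.logpdf(z).sum(axis=1)).sum()

new_errors = np.random.default_rng(1).standard_t(df=4, size=(50, 3))
print(copula_log_likelihood(new_errors, past_errors))
```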

  3. Journal of Naval Science. Volume 2, Number 1

    DTIC Science & Technology

    1976-01-01

    has defined a probability distribution function which fits this type of data and forms the basis for statistical analysis of test results (see...Conditions to Assess the Performance of Fire-Resistant Fluids’. Wear, 28 (1974) 29. J.N.S., Vol. 2, No. 1 APPENDIX A Analysis of Fatigue Test Data...used to produce the impulse response and the equipment required for the analysis is relatively simple. The methods that must be used to produce

  4. The Chairmanship of the Joint Chiefs of Staff 1949-2016

    DTIC Science & Technology

    2016-01-01

    Distribution Office, Office of the Chief of Naval Operations; the Marine Corps History Division; the Air Force Historical Studies Office; the Pentagon... Studies Group, which came to play a significant role in defining JCS positions but was less successful in winning OSD's approval of them...force structure issues. To assist in performing this function, it recommended additional staff support for him in the studies, analysis, and gaming

  5. The Richness of Task-Evoked Hemodynamic Responses Defines a Pseudohierarchy of Functionally Meaningful Brain Networks

    PubMed Central

    Orban, Pierre; Doyon, Julien; Petrides, Michael; Mennes, Maarten; Hoge, Richard; Bellec, Pierre

    2015-01-01

    Functional magnetic resonance imaging can measure distributed and subtle variations in brain responses associated with task performance. However, it is unclear whether the rich variety of responses observed across the brain is functionally meaningful and consistent across individuals. Here, we used a multivariate clustering approach that grouped brain regions into clusters based on the similarity of their task-evoked temporal responses at the individual level, and then established the spatial consistency of these individual clusters at the group level. We observed a stable pseudohierarchy of task-evoked networks in the context of a delayed sequential motor task, where the fractionation of networks was driven by a gradient of involvement in motor sequence preparation versus execution. In line with theories about higher-level cognitive functioning, this gradient evolved in a rostro-caudal manner in the frontal lobe. In addition, parcellations in the cerebellum and basal ganglia matched with known anatomical territories and fiber pathways with the cerebral cortex. These findings demonstrate that subtle variations in brain responses associated with task performance are systematic enough across subjects to define a pseudohierarchy of task-evoked networks. Such networks capture meaningful functional features of brain organization as shaped by a given cognitive context. PMID:24729172

  6. Within-genome evolution of REPINs: a new family of miniature mobile DNA in bacteria.

    PubMed

    Bertels, Frederic; Rainey, Paul B

    2011-06-01

    Repetitive sequences are a conserved feature of many bacterial genomes. While first reported almost thirty years ago, and frequently exploited for genotyping purposes, little is known about their origin, maintenance, or processes affecting the dynamics of within-genome evolution. Here, beginning with analysis of the diversity and abundance of short oligonucleotide sequences in the genome of Pseudomonas fluorescens SBW25, we show that over-represented short sequences define three distinct groups (GI, GII, and GIII) of repetitive extragenic palindromic (REP) sequences. Patterns of REP distribution suggest that closely linked REP sequences form a functional replicative unit: REP doublets are over-represented, randomly distributed in extragenic space, and more highly conserved than singlets. In addition, doublets are organized as inverted repeats, which together with intervening spacer sequences are predicted to form hairpin structures in ssDNA or mRNA. We refer to these newly defined entities as REPINs (REP doublets forming hairpins) and identify short reads from population sequencing that reveal putative transposition intermediates. The proximal relationship between GI, GII, and GIII REPINs and specific REP-associated tyrosine transposases (RAYTs), combined with features of the putative transposition intermediate, suggests a mechanism for within-genome dissemination. Analysis of the distribution of REPs in a range of RAYT-containing bacterial genomes, including Escherichia coli K-12 and Nostoc punctiforme, show that REPINs are a widely distributed, but hitherto unrecognized, family of miniature non-autonomous mobile DNA.

  7. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified storage protocol declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  8. First lattice QCD study of the gluonic structure of light nuclei

    NASA Astrophysics Data System (ADS)

    Winter, Frank; Detmold, William; Gambhir, Arjun S.; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Wagman, Michael L.; Nplqcd Collaboration

    2017-11-01

    The role of gluons in the structure of the nucleon and light nuclei is investigated using lattice quantum chromodynamics (QCD) calculations. The first moment of the unpolarized gluon distribution is studied in nuclei up to atomic number A = 3 at quark masses corresponding to pion masses of m_π ~ 450 and 806 MeV. Nuclear modification of this quantity defines a gluonic analogue of the EMC effect and is constrained to be less than ~10% in these nuclei. This is consistent with expectations from phenomenological quark distributions and the momentum sum rule. In the deuteron, the combination of gluon distributions corresponding to the b_1 structure function is found to have a small first moment compared with the corresponding momentum fraction. The first moment of the gluon transversity structure function is also investigated in the spin-1 deuteron, where a nonzero signal is observed at m_π ~ 806 MeV. This is the first indication of gluon contributions to nuclear structure that cannot be associated with an individual nucleon.

  9. Ionic Size Effects: Generalized Boltzmann Distributions, Counterion Stratification, and Modified Debye Length.

    PubMed

    Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao

    2013-10-01

    Near a charged surface, counterions of different valences and sizes cluster; and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of Debye length due to the effect of ionic sizes.
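
    A hedged sketch of one common variant of such generalized Boltzmann distributions, the Bikerman/Borukhov lattice-gas form, in which a shared steric denominator makes counterion concentrations saturate near a highly charged surface instead of diverging. Units and parameter values are illustrative assumptions.

```python
import numpy as np

kT = 1.0    # thermal energy (reduced units, assumption)
e = 1.0     # elementary charge (reduced units)
a3 = 0.1    # ionic volume, taken equal for all species here (assumption)
bulk = {"cation(+1)": (1.0, +1), "anion(-1)": (1.0, -1)}  # (bulk conc., valence)

def generalized_boltzmann(phi, bulk, a3):
    """Ion concentrations vs. electrostatic potential with steric saturation."""
    boltz = {name: cb * np.exp(-z * e * phi / kT) for name, (cb, z) in bulk.items()}
    # Steric denominator couples all species through excluded volume:
    # c_i = c_i^b exp(-z_i e phi/kT) / (1 + sum_j a^3 c_j^b (exp(-z_j e phi/kT) - 1))
    denom = 1.0 + sum(a3 * (boltz[n] - cb) for n, (cb, _) in bulk.items())
    return {name: c / denom for name, c in boltz.items()}

phi = np.linspace(-5, 0, 6)  # strongly negative potential near the surface
for name, c in generalized_boltzmann(phi, bulk, a3).items():
    print(name, np.round(c, 3))  # cation saturates below the close-packing 1/a3
```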

  10. Ionic Size Effects: Generalized Boltzmann Distributions, Counterion Stratification, and Modified Debye Length

    PubMed Central

    Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao

    2013-01-01

    Near a charged surface, counterions of different valences and sizes cluster; and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of Debye length due to the effect of ionic sizes. PMID:24465094

  11. A Collaboration Network Model Of Cytokine-Protein Network

    NASA Astrophysics Data System (ADS)

    Zou, Sheng-Rong; Zhou, Ta; Peng, Yu-Jing; Guo, Zhong-Wei; Gu, Chang-Gui; He, Da-Ren

    2008-03-01

    Complex networks provide us a new view for the investigation of immune systems. We collect data through the STRING database and present a network description with a cooperation network model. The cytokine-protein network model we consider is constituted by two kinds of nodes: one is immune cytokine types, which can be regarded as collaboration acts; the other is protein types, which can be regarded as collaboration actors. From the act degree distribution, which can be well described by typical SPL (shifted power law) functions [1], we find that HRAS, TNFRSF13C, S100A8, S100A1, MAPK8, S100A7, LIF, CCL4, and CXCL13 are highly collaborated with other proteins. This reveals that these mediators are important in the cytokine-protein network for regulating immune activity. A dyad in the collaboration network can be defined as two proteins that appear in one cytokine collaboration relationship. The dyad act degree distribution can also be well described by typical SPL functions. [1] Assortativity and act degree distribution of some collaboration networks, Hui Chang, Bei-Bei Su, Yue-Ping Zhou, Daren He, Physica A, 383 (2007) 687-702
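
    A hedged sketch of fitting the shifted-power-law (SPL) form used above, P(k) proportional to (k + alpha)^(-gamma), to a degree histogram; the data below are synthetic stand-ins, not the STRING-derived network.

```python
import numpy as np
from scipy.optimize import curve_fit

def spl(k, c, alpha, gamma):
    # Shifted power law: P(k) ~ (k + alpha)^(-gamma).
    return c * (k + alpha) ** (-gamma)

k = np.arange(1, 50)
# Synthetic degree counts with multiplicative noise (illustrative assumption).
counts = spl(k, 500.0, 3.0, 2.2) * np.exp(
    0.05 * np.random.default_rng(0).normal(size=k.size))

popt, _ = curve_fit(spl, k, counts, p0=(100.0, 1.0, 2.0))
print("fitted alpha, gamma:", popt[1], popt[2])
```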

  12. First lattice QCD study of the gluonic structure of light nuclei

    DOE PAGES

    Winter, Frank; Detmold, William; Gambhir, Arjun S.; ...

    2017-11-28

    The role of gluons in the structure of the nucleon and light nuclei is investigated using lattice quantum chromodynamics (QCD) calculations. The first moment of the unpolarised gluon distribution is studied in nuclei up to atomic number A = 3 at quark masses corresponding to pion masses of m_π ~ 450 and 806 MeV. Nuclear modification of this quantity defines a gluonic analogue of the EMC effect and is constrained to be less than ~10% in these nuclei. This is consistent with expectations from phenomenological quark distributions and the momentum sum rule. In the deuteron, the combination of gluon distributions corresponding to the b_1 structure function is found to have a small first moment compared with the corresponding momentum fraction. The first moment of the gluon transversity structure function is also investigated in the spin-1 deuteron, where a non-zero signal is observed at m_π ~ 806 MeV. In conclusion, this is the first indication of gluon contributions to nuclear structure that cannot be associated with an individual nucleon.

  13. Free-form geometric modeling by integrating parametric and implicit PDEs.

    PubMed

    Du, Haixia; Qin, Hong

    2007-01-01

    Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.

  14. Matching the quasiparton distribution in a momentum subtraction scheme

    NASA Astrophysics Data System (ADS)

    Stewart, Iain W.; Zhao, Yong

    2018-03-01

    The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization-independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the MS-bar scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the MS-bar PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum P_z → ∞, unlike other schemes. This direct approach therefore has the potential to improve the accuracy of converting quasidistribution lattice calculations to collinear distributions.

  15. Establishing Functional Relationships between Abiotic Environment, Macrophyte Coverage, Resource Gradients and the Distribution of Mytilus trossulus in a Brackish Non-Tidal Environment.

    PubMed

    Kotta, Jonne; Oganjan, Katarina; Lauringson, Velda; Pärnoja, Merli; Kaasik, Ants; Rohtla, Liisa; Kotta, Ilmar; Orav-Kotta, Helen

    2015-01-01

    Benthic suspension-feeding mussels are an important functional guild in coastal and estuarine ecosystems. To date we lack information on how various environmental gradients and biotic interactions separately and interactively shape the distribution patterns of mussels in non-tidal environments. In contrast to tidal environments, mussels inhabit solely the subtidal zone in non-tidal waterbodies, and thereby the driving factors for mussel populations are expected to differ from those in tidal areas. In the present study, we used boosted regression tree modelling (BRT), an ensemble method combining statistical techniques and machine learning, in order to explain the distribution and biomass of the suspension-feeding mussel Mytilus trossulus in the non-tidal Baltic Sea. BRT models suggested that (1) distribution patterns of M. trossulus are largely driven by separate effects of direct environmental gradients and partly by interactive effects of resource gradients with direct environmental gradients. (2) Within its suitable habitat range, however, resource gradients had an important role in shaping the biomass distribution of M. trossulus. (3) Contrary to tidal areas, mussels were not competitively superior over macrophytes, with patterns indicating either facilitative interactions between mussels and macrophytes or co-variance due to a common stressor. To conclude, direct environmental gradients seem to define the distribution pattern of M. trossulus, and within the favourable distribution range, resource gradients in interaction with direct environmental gradients are expected to set the biomass level of mussels.
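
    A hedged sketch of the BRT step on synthetic data: fit a gradient-boosted tree ensemble to a response driven by separate and interactive gradient effects, then inspect a partial dependence. Predictor names, hyperparameters, and data are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))   # e.g. salinity, depth, chlorophyll-a (assumed)
# Response with separate and interactive gradient effects plus noise.
y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2] \
    + 0.1 * rng.normal(size=500)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.5).fit(X, y)
pd_result = partial_dependence(brt, X, features=[0])
print(pd_result["average"][0][:5])   # modelled response along the first gradient
```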

  16. Establishing Functional Relationships between Abiotic Environment, Macrophyte Coverage, Resource Gradients and the Distribution of Mytilus trossulus in a Brackish Non-Tidal Environment

    PubMed Central

    Kotta, Jonne; Oganjan, Katarina; Lauringson, Velda; Pärnoja, Merli; Kaasik, Ants; Rohtla, Liisa; Kotta, Ilmar; Orav-Kotta, Helen

    2015-01-01

    Benthic suspension-feeding mussels are an important functional guild in coastal and estuarine ecosystems. To date we lack information on how various environmental gradients and biotic interactions separately and interactively shape the distribution patterns of mussels in non-tidal environments. In contrast to tidal environments, mussels inhabit solely the subtidal zone in non-tidal waterbodies, and thereby the driving factors for mussel populations are expected to differ from those in tidal areas. In the present study, we used boosted regression tree modelling (BRT), an ensemble method combining statistical techniques and machine learning, in order to explain the distribution and biomass of the suspension-feeding mussel Mytilus trossulus in the non-tidal Baltic Sea. BRT models suggested that (1) distribution patterns of M. trossulus are largely driven by separate effects of direct environmental gradients and partly by interactive effects of resource gradients with direct environmental gradients. (2) Within its suitable habitat range, however, resource gradients had an important role in shaping the biomass distribution of M. trossulus. (3) Contrary to tidal areas, mussels were not competitively superior over macrophytes, with patterns indicating either facilitative interactions between mussels and macrophytes or co-variance due to a common stressor. To conclude, direct environmental gradients seem to define the distribution pattern of M. trossulus, and within the favourable distribution range, resource gradients in interaction with direct environmental gradients are expected to set the biomass level of mussels. PMID:26317668

  17. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ħ → 0, the classical dynamics is returned. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ⁺, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here and found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we return a finite-dimensional matrix propagator. We present this connection explicitly for several examples.

  18. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
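
    A minimal sketch of the wavelet-regularization idea: expand a noisy lag profile in a wavelet basis, shrink the detail coefficients, and reconstruct. Simple universal soft thresholding stands in here for the paper's Bayesian shrinkage; the profile and settings are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

t = np.linspace(0, 1, 128)
true_lag = np.exp(-((t - 0.3) / 0.1) ** 2)   # smooth exposure-effect profile
noisy = true_lag + 0.15 * np.random.default_rng(0).normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)          # discrete wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale, finest level
thr = sigma * np.sqrt(2 * np.log(t.size))             # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(shrunk, "db4")                # regularized lag function
print(np.round(estimate[:5], 3))
```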

  19. Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.

    PubMed

    Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi

    2007-09-01

    A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.

  20. Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence

    NASA Astrophysics Data System (ADS)

    Hatch, David R.

    This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition (POD) of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient (ITG) driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data set: the 5-D (plus time) gyrokinetic distribution function.
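
    A minimal sketch of the POD machinery on a toy field: snapshots are stacked into a matrix whose SVD yields orthogonal modes ranked by energy. The real application decomposes the 5-D gyrokinetic distribution function; this 1-D example is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
# 200 snapshots of a drifting two-mode field plus noise (stand-in data).
snapshots = np.array([np.sin(x + 0.1 * k) + 0.3 * np.sin(3 * x - 0.2 * k)
                      + 0.05 * rng.normal(size=x.size) for k in range(200)])

mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first two POD modes:", energy[:2].sum())
# Vt[0], Vt[1] are the dominant spatial modes; U[:, i] * s[i] their amplitudes.
```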

  1. Neurofilament protein defines regional patterns of cortical organization in the macaque monkey visual system: a quantitative immunohistochemical analysis

    NASA Technical Reports Server (NTRS)

    Hof, P. R.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)

    1995-01-01

    Visual function in monkeys is subserved at the cortical level by a large number of areas defined by their specific physiological properties and connectivity patterns. For most of these cortical fields, a precise index of their degree of anatomical specialization has not yet been defined, although many regional patterns have been described using Nissl or myelin stains. In the present study, an attempt has been made to elucidate the regional characteristics, and to varying degrees boundaries, of several visual cortical areas in the macaque monkey using an antibody to neurofilament protein (SMI32). This antibody labels a subset of pyramidal neurons with highly specific regional and laminar distribution patterns in the cerebral cortex. Based on the staining patterns and regional quantitative analysis, as many as 28 cortical fields were reliably identified. Each field had a homogeneous distribution of labeled neurons, except area V1, where increases in layer IVB cell and in Meynert cell counts paralleled the increase in the degree of eccentricity in the visual field representation. Within the occipitotemporal pathway, areas V3 and V4 and fields in the inferior temporal cortex were characterized by a distinct population of neurofilament-rich neurons in layers II-IIIa, whereas areas located in the parietal cortex and part of the occipitoparietal pathway had a consistent population of large labeled neurons in layer Va. The mediotemporal areas MT and MST displayed a distinct population of densely labeled neurons in layer VI. Quantitative analysis of the laminar distribution of the labeled neurons demonstrated that the visual cortical areas could be grouped in four hierarchical levels based on the ratio of neuron counts between infragranular and supragranular layers, with the first (areas V1, V2, V3, and V3A) and third (temporal and parietal regions) levels characterized by low ratios and the second (areas MT, MST, and V4) and fourth (frontal regions) levels characterized by high to very high ratios. Such density trends may correspond to differential representation of corticocortically (and corticosubcortically) projecting neurons at several functional steps in the integration of the visual stimuli. In this context, it is possible that neurofilament protein is crucial for the unique capacity of certain subsets of neurons to perform the highly precise mapping functions of the monkey visual system.

  2. Streamline integration as a method for two-dimensional elliptic grid generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach.

  3. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Mantry, Sonny; Petriello, Frank

    2010-05-01

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h ≫ p_T ≫ Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.

  4. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

    We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far, only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e., median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes, defined by the asymptotic approach to zero of the distance decay function, was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherits effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
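
    A hedged sketch of the variable logistic decay idea: the curve at each population location is shaped by the median and SD of that location's own population-to-provider distances, so no global impedance coefficient is needed. The exact functional form in the paper may differ; the one below is an assumption.

```python
import numpy as np

def variable_decay(d, distances_to_providers):
    """Logistic distance-decay weight shaped by a location's own distances."""
    med = np.median(distances_to_providers)
    sd = np.std(distances_to_providers)
    # Weight 0.5 at the median distance; spread controlled by the SD.
    return 1.0 / (1.0 + np.exp((d - med) / (sd + 1e-12)))

# One location's population-to-provider distances in km (illustrative).
dists = np.array([2.0, 3.5, 5.0, 8.0, 12.0, 20.0])
for d in (2.0, 6.0, 15.0):
    print(d, round(variable_decay(d, dists), 3))  # decays toward 0 at large d
```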

  5. Orphan and gene related CpG Islands follow power-law-like distributions in several genomes: evidence of function-related and taxonomy-related modes of distribution.

    PubMed

    Tsiagkas, Giannis; Nikolaou, Christoforos; Almirantis, Yannis

    2014-12-01

    CpG Islands (CGIs) are compositionally defined short genomic stretches, which have been studied in the human, mouse, and chicken genomes and later in several others. Initially, they were assigned the role of transcriptional regulation of protein-coding genes, especially house-keeping ones, while more recently evidence has been found that they are involved in several other functions as well, which might include regulation of the expression of RNA genes, DNA replication, etc. Here, an investigation of their distributional characteristics in a variety of genomes is undertaken, both for whole CGI populations and for CGI subsets that lie away from known genes (gene-unrelated or "orphan" CGIs). In both cases power-law-like linearity in double logarithmic scale is found. An evolutionary model, initially put forward for the explanation of a similar pattern found in gene populations, is implemented. It includes segmental duplication events and eliminations of most of the duplicated CGIs, while a moderate rate of non-duplicated CGI eliminations is also applied in some cases. Simulations reproduce all the main features of the observed inter-CGI chromosomal size distributions. Our results on the power-law-like linearity found in orphan CGI populations suggest that the observed distributional pattern is independent of the analogous pattern that protein-coding segments were reported to follow. The power-law-like patterns in the genomic distributions of CGIs described herein are found to be compatible with several other features of the composition, abundance or functional role of CGIs reported in the current literature across several genomes, on the basis of the proposed evolutionary model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. To this end, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other variability sources. To capture the stochasticity of the calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined based on the Normal and Logistic distributions.

  7. Glass polymorphism in amorphous germanium probed by first-principles computer simulations

    NASA Astrophysics Data System (ADS)

    Mancini, G.; Celino, M.; Iesari, F.; Di Cicco, A.

    2016-01-01

    The low-density (LDA) to high-density (HDA) transformation in amorphous Ge at high pressure is studied by first-principles molecular dynamics simulations in the framework of density functional theory. Previous experiments are accurately reproduced, including the presence of a well-defined LDA-HDA transition above 8 GPa. The LDA-HDA density increase is found to be about 14%. Pair and bond-angle distributions are obtained in the 0-16 GPa pressure range and allowed a detailed analysis of the transition. The local fourfold coordination is transformed into an average HDA sixfold coordination associated with different local geometries, as confirmed by coordination number analysis and the shape of the bond-angle distributions.

  8. A study of a space communication system for the control and monitoring of the electric distribution system. Volume 2: Supporting data and analyses

    NASA Technical Reports Server (NTRS)

    Vaisnys, A.

    1980-01-01

    It is technically feasible to design a satellite communication system to serve the United States electric utility industry's needs relative to load management, real-time operations management, and remote meter reading, and to determine the costs of the various elements of the system. The functions associated with distribution automation and control, together with the communication system requirements, are defined. Factors related to formulating viable communication concepts, the relationship of various design factors to utility operating practices, and the results of the cost analysis are discussed. The system concept and several ways in which the concept could be integrated into the utility industry are described.

  9. Integrated in silico analyses of regulatory and metabolic networks of Synechococcus sp. PCC 7002 reveal relationships between gene centrality and essentiality

    DOE PAGES

    Song, Hyun-Seob; McClure, Ryan S.; Bernstein, Hans C.; ...

    2015-03-27

    Cyanobacteria dynamically relay environmental inputs to intracellular adaptations through a coordinated adjustment of photosynthetic efficiency and carbon processing rates. The output of such adaptations is reflected through changes in transcriptional patterns and metabolic flux distributions that ultimately define growth strategy. To address interrelationships between metabolism and regulation, we performed integrative analyses of metabolic and gene co-expression networks in a model cyanobacterium, Synechococcus sp. PCC 7002. Centrality analyses using the gene co-expression network identified a set of key genes, which were defined here as 'topologically important.' Parallel in silico gene knock-out simulations, using the genome-scale metabolic network, classified what we termed as 'functionally important' genes, deletion of which affected growth or metabolism. A strong positive correlation was observed between topologically and functionally important genes. Functionally important genes exhibited variable levels of topological centrality; however, the majority of topologically central genes were found to be functionally essential for growth. Subsequent functional enrichment analysis revealed that both functionally and topologically important genes in Synechococcus sp. PCC 7002 are predominantly associated with translation and energy metabolism, two cellular processes critical for growth. This research demonstrates how synergistic network-level analyses can be used for reconciliation of metabolic and gene expression data to uncover fundamental biological principles.

  10. Integrated in silico analyses of regulatory and metabolic networks of Synechococcus sp. PCC 7002 reveal relationships between gene centrality and essentiality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; McClure, Ryan S.; Bernstein, Hans C.

    Cyanobacteria dynamically relay environmental inputs to intracellular adaptations through a coordinated adjustment of photosynthetic efficiency and carbon processing rates. The output of such adaptations is reflected through changes in transcriptional patterns and metabolic flux distributions that ultimately define growth strategy. To address interrelationships between metabolism and regulation, we performed integrative analyses of metabolic and gene co-expression networks in a model cyanobacterium, Synechococcus sp. PCC 7002. Centrality analyses using the gene co-expression network identified a set of key genes, which were defined here as 'topologically important.' Parallel in silico gene knock-out simulations, using the genome-scale metabolic network, classified what we termed as 'functionally important' genes, deletion of which affected growth or metabolism. A strong positive correlation was observed between topologically and functionally important genes. Functionally important genes exhibited variable levels of topological centrality; however, the majority of topologically central genes were found to be functionally essential for growth. Subsequent functional enrichment analysis revealed that both functionally and topologically important genes in Synechococcus sp. PCC 7002 are predominantly associated with translation and energy metabolism, two cellular processes critical for growth. This research demonstrates how synergistic network-level analyses can be used for reconciliation of metabolic and gene expression data to uncover fundamental biological principles.

  11. Shining light on neurons--elucidation of neuronal functions by photostimulation.

    PubMed

    Eder, Matthias; Zieglgänsberger, Walter; Dodt, Hans-Ulrich

    2004-01-01

    Many neuronal functions can be elucidated by techniques that allow for a precise stimulation of defined regions of a neuron and its afferents. Photolytic release of neurotransmitters from 'caged' derivatives in the vicinity of visualized neurons in living brain slices meets this requirement. This technique allows the study of the subcellular distribution and properties of functional native neurotransmitter receptors. These are prerequisites for a detailed analysis of the expression and spatial specificity of synaptic plasticity. Photostimulation can further be used to rapidly map the synaptic connectivity between nearby and, more importantly, distant cells in a neuronal network. Here we give a personal review of some of the technical aspects of photostimulation and recent findings, which illustrate the advantages of this technique.

  12. Wave drag as the objective function in transonic fighter wing optimization

    NASA Technical Reports Server (NTRS)

    Phillips, P. S.

    1984-01-01

    The original computational method for determining wave drag in a three dimensional transonic analysis method was replaced by a wave drag formula based on the loss in momentum across an isentropic shock. This formula was used as the objective function in a numerical optimization procedure to reduce the wave drag of a fighter wing at transonic maneuver conditions. The optimization procedure minimized wave drag through modifications to the wing section contours defined by a wing profile shape function. A significant reduction in wave drag was achieved while maintaining a high lift coefficient. Comparisons of the pressure distributions for the initial and optimized wing geometries showed significant reductions in the leading-edge peaks and shock strength across the span.
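
    A hedged sketch of the optimization loop only: shape-function coefficients are adjusted to minimize a wave-drag objective subject to a lift constraint. The quadratic drag surrogate and linear lift model below are toy assumptions; in the study both come from the transonic analysis code.

```python
import numpy as np
from scipy.optimize import minimize

def wave_drag(c):
    # Toy surrogate standing in for the momentum-loss wave-drag formula.
    return 0.01 + 0.5 * c[0] ** 2 + 0.2 * (c[1] - 0.1) ** 2

def lift_coefficient(c):
    # Toy linear lift model (assumption).
    return 0.9 + 0.5 * c[1] - 0.1 * c[0]

# Minimize wave drag while keeping the lift coefficient at or above 0.9.
res = minimize(wave_drag, x0=np.array([0.2, 0.3]),
               constraints=[{"type": "ineq",
                             "fun": lambda c: lift_coefficient(c) - 0.9}])
print("optimized shape coefficients:", res.x, " drag:", wave_drag(res.x))
```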

  13. Equivalent peak resolution: characterization of the extent of separation for two components based on their relative peak overlap.

    PubMed

    Dvořák, Martin; Svobodová, Jana; Dubský, Pavel; Riesová, Martina; Vigh, Gyula; Gaš, Bohuslav

    2015-03-01

    Although the classical formula of peak resolution was derived to characterize the extent of separation only for Gaussian peaks of equal areas, it is often used even when the peaks follow non-Gaussian distributions and/or have unequal areas. This practice can result in misleading information about the extent of separation in terms of the severity of peak overlap. We propose here the use of the equivalent peak resolution value, a term based on relative peak overlap, to characterize the extent of separation that had been achieved. The definition of equivalent peak resolution is not constrained either by the form(s) of the concentration distribution function(s) of the peaks (Gaussian or non-Gaussian) or the relative area of the peaks. The equivalent peak resolution value and the classically defined peak resolution value are numerically identical when the separated peaks are Gaussian and have identical areas and SDs. Using our new freeware program, Resolution Analyzer, one can calculate both the classically defined and the equivalent peak resolution values. With the help of this tool, we demonstrate here that the classical peak resolution values mischaracterize the extent of peak overlap even when the peaks are Gaussian but have different areas. We show that under ideal conditions of the separation process, the relative peak overlap value is easily accessible by fitting the overall peak profile as the sum of two Gaussian functions. The applicability of the new approach is demonstrated on real separations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
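
    A hedged sketch contrasting the classical resolution formula with a relative-overlap measure for two Gaussian peaks of unequal areas; the overlap definition below (shared area relative to the smaller peak) is an illustrative assumption, not necessarily the paper's exact equivalent-resolution definition.

```python
import numpy as np

t = np.linspace(0, 20, 20001)
dt = t[1] - t[0]

def gaussian_peak(t, area, mu, sigma):
    return area * np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p1 = gaussian_peak(t, area=1.0, mu=9.0, sigma=0.5)
p2 = gaussian_peak(t, area=0.1, mu=10.5, sigma=0.5)   # 10x smaller area

# Classical resolution, Rs = dt_peaks / (2 * (sigma1 + sigma2)) for 4-sigma widths;
# note it ignores the area ratio entirely.
rs_classic = (10.5 - 9.0) / (2 * (0.5 + 0.5))

# Relative overlap: shared area as a fraction of the smaller peak's area.
overlap = (np.minimum(p1, p2).sum() * dt) / min(p1.sum() * dt, p2.sum() * dt)
print("classical Rs:", rs_classic, " relative overlap:", round(overlap, 4))
```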

  14. Functional diversification of taste cells in vertebrates

    PubMed Central

    Matsumoto, Ichiro; Ohmoto, Makoto; Abe, Keiko

    2012-01-01

    Tastes are senses resulting from the activation of taste cells distributed in oral epithelia. Sweet, umami, bitter, sour, and salty tastes are called the five “basic” tastes, but why five, and why these five? In this review, we dissect the peripheral gustatory system in vertebrates from molecular and cellular perspectives. Recent behavioral and molecular genetic studies have revealed the nature of functional taste receptors and cells and show that different taste qualities are accounted for by the activation of different subsets of taste cells. Based on this concept, the diversity of basic tastes should be defined by the diversity of taste cells in taste buds, which varies among species. PMID:23085625

  15. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces grinding, saving both time and money, and allows the science requirements to be better defined. In this study, various materials are polished from a fine grind to a fine polish. Each sample's RMS surface roughness is measured at 81 locations in a 9x9 square grid using a Zygo white-light interferometer at regular intervals during the polishing process. Each data set is fit with various standard distributions and tested for goodness of fit. We show that the skew in the RMS data changes as a function of polishing time.
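
    A minimal Python sketch of the fitting step described above, assuming hypothetical RMS data in place of the 81 Zygo measurements: several candidate distributions are fit, checked with a Kolmogorov-Smirnov test, and the sample skew is reported.

```python
# Fit candidate distributions to RMS roughness data, test goodness of fit,
# and track the skew of the sample. The data array is a synthetic stand-in.
import numpy as np
from scipy import stats

rms = np.random.lognormal(mean=1.0, sigma=0.3, size=81)  # 81 grid measurements

for dist_name in ["norm", "lognorm", "gamma", "weibull_min"]:
    dist = getattr(stats, dist_name)
    params = dist.fit(rms)
    ks_stat, p_value = stats.kstest(rms, dist_name, args=params)
    print(f"{dist_name:12s} KS={ks_stat:.3f} p={p_value:.3f}")

print("sample skew:", stats.skew(rms))
```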

  16. Spatial analysis of storm depths from an Arizona raingage network

    NASA Technical Reports Server (NTRS)

    Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodriguez-Iturbe, I.

    1986-01-01

    Eight years of summer rainstorm observations from a dense network of 93 raingages operated by the U.S. Department of Agriculture, Agricultural Research Service, in the 150 km² Walnut Gulch experimental catchment near Tucson, Arizona, are analyzed. Storms are defined by the total depths collected at each raingage during the noon-to-noon period for which there was depth recorded at any of the gages. For each of the resulting 428 storm days, the gage depths are interpolated onto a dense grid and the resulting random field is analyzed to obtain moments, isohyetal plots, the spatial correlation function, the variance function, and the spatial distribution of storm depth.
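
    The gridding-and-statistics step could look roughly like the following Python sketch; the gage coordinates and depths are invented stand-ins for the 93-gage network, and the spatial correlation estimate is deliberately crude.

```python
# Interpolate gage depths onto a grid and estimate an isotropic spatial
# correlation function from gage pairs binned by separation distance.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
xy = rng.uniform(0, 12, size=(93, 2))      # hypothetical gage locations (km)
depth = rng.gamma(2.0, 4.0, size=93)       # one storm day's depths (mm)

gx, gy = np.mgrid[0:12:100j, 0:12:100j]
field = griddata(xy, depth, (gx, gy), method="cubic")   # gridded random field

# Crude correlation-vs-distance estimate from raw gage pairs
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
a = depth - depth.mean()
for lo in range(0, 10, 2):
    i, j = np.where((d > lo) & (d <= lo + 2))
    print(f"{lo}-{lo+2} km: rho={np.mean(a[i] * a[j]) / np.var(depth):.2f}")
```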

  17. A generalization of algebraic surface drawing

    NASA Technical Reports Server (NTRS)

    Blinn, J. F.

    1982-01-01

    An implicit surface is a mathematical description of three-dimensional space defined in terms of all points which satisfy some equation F(x, y, z) = 0. This form is ideal for space-shaded picture drawing, where the coordinates are substituted for x and y and the equation is solved for z. A new algorithm is presented which is applicable to functional forms other than first- and second-order polynomials, such as the summation of several Gaussian density distributions. The algorithm was created in order to model electron density maps of molecular structures, but is shown to be capable of generating shapes of esthetic interest.
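
    A small Python sketch of the core idea, under our own choice of atom centers, weights, and threshold: F(x, y, z) is a sum of Gaussian densities minus a threshold, and for each screen pixel the nearest root in z is located along the viewing ray.

```python
# "Blobby" implicit surface: F(x, y, z) = sum of Gaussian densities - threshold.
# For a pixel (x, y), scan in z for a sign change and refine the root.
import numpy as np
from scipy.optimize import brentq

atoms = [((0.0, 0.0, 0.0), 1.0), ((1.2, 0.0, 0.3), 0.8)]  # (center, weight)
THRESHOLD = 0.25

def F(x, y, z):
    density = sum(w * np.exp(-((x - cx)**2 + (y - cy)**2 + (z - cz)**2))
                  for (cx, cy, cz), w in atoms)
    return density - THRESHOLD

def trace(x, y, z_near=-3.0, z_far=3.0, steps=200):
    """Return the nearest surface crossing in z for pixel (x, y), if any."""
    zs = np.linspace(z_near, z_far, steps)
    vals = [F(x, y, z) for z in zs]
    for z0, z1, f0, f1 in zip(zs, zs[1:], vals, vals[1:]):
        if f0 * f1 < 0:                       # sign change brackets a root
            return brentq(lambda z: F(x, y, z), z0, z1)
    return None

print(trace(0.5, 0.0))
```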

  18. From statistics of regular tree-like graphs to distribution function and gyration radius of branched polymers

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander Y.; Nechaev, Sergei K.

    2015-08-01

    We consider a flexible branched polymer with quenched branch structure and show that its conformational entropy as a function of its gyration radius R, at large R, obeys, in the scaling sense, ΔS ~ R²/(a²L), with a the bond length (or Kuhn segment) and L defined as an average spanning distance. We show that this estimate is valid up to at most a logarithmic correction for any tree. We do so by explicitly computing the largest eigenvalues of Kramers matrices for both regular and 'sparse' three-branched trees, uncovering on the way their peculiar mathematical properties.

  19. Partial knowledge, entropy, and estimation

    PubMed Central

    MacQueen, James; Marschak, Jacob

    1975-01-01

    In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p ≡ (p_1, ..., p_n) by maximizing entropy H(p) ≡ −Σ p_i log p_i, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) It is “conservative,” and therefore good, to maximize “uncertainty,” as (uniquely) represented by the entropy parameter. (β) One should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when “complete ignorance” is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds. PMID:16578733
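
    The constrained maximization is easy to sketch numerically. The following Python example, with an invented support and mean constraint, maximizes H(p) subject to normalization and a known-mean constraint.

```python
# Maximum-entropy estimation: maximize H(p) = -sum p_i log p_i subject to
# sum p_i = 1 and a known mean (the "partial knowledge"). Values are made up.
import numpy as np
from scipy.optimize import minimize

values = np.arange(1, 6)      # support of the distribution
known_mean = 2.3              # the partial-knowledge constraint

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ values - known_mean},
]
res = minimize(neg_entropy, x0=np.full(5, 0.2), bounds=[(0, 1)] * 5,
               constraints=constraints)
print("maximum-entropy estimate:", res.x.round(4))
```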

  20. Distributed traffic signal control using fuzzy logic

    NASA Technical Reports Server (NTRS)

    Chiu, Stephen

    1992-01-01

    We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.
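
    A toy Python sketch of the flavor of such rules (not the paper's actual rule base): one timing parameter, cycle time, is adjusted from a hypothetical queue-length measurement, with the per-cycle change capped to a small fraction of the current value as described above.

```python
# Fuzzy adjustment of cycle time from a local queue measurement.
import numpy as np

def ramp_down(x, a, b):   # membership 1 below a, falling to 0 at b
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def ramp_up(x, a, b):     # membership 0 below a, rising to 1 at b
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def tri(x, a, b, c):      # triangular membership peaking at b
    return float(max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def adjust_cycle_time(cycle, queue, max_frac=0.1):
    short = ramp_down(queue, 0, 10)    # rule: short queue  -> shorten cycle
    medium = tri(queue, 5, 12, 20)     # rule: medium queue -> hold cycle
    long_ = ramp_up(queue, 15, 25)     # rule: long queue   -> lengthen cycle
    total = short + medium + long_
    action = (long_ - short) / total if total else 0.0  # defuzzified, in [-1, 1]
    return cycle * (1.0 + max_frac * action)            # capped smooth change

print(adjust_cycle_time(60.0, queue=18))   # longish queue -> cycle grows a bit
```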

  1. Property Values Associated with the Failure of Individual Links in a System with Multiple Weak and Strong Links.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.

  2. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Douglass, Anne R.; Cerniglia, Mark C.; Sparling, Lynn C.; Nielsen, J. Eric

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of characterizing the observed variability. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High (low) potential vorticity at 300 hPa indicates that the tropopause is low (high), and the identification of these two groups is made to account for the dynamic variability. Conditional probability distribution functions are used to define the statistics of the ozone distribution from both observations and a three-dimensional model simulation using winds from the Goddard Earth Observing System Data Assimilation System for transport. Ozone data sets include ozonesonde observations from northern midlatitude stations (1991-96) and midlatitude observations made by the Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) (1994-1998). The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause (approximately 380K). The probability distribution functions are similar for the two data sources, despite differences in horizontal and vertical resolution and spatial and temporal sampling. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. Results show that during summer, much of the observed variability is explained by the height of the tropopause. During the winter and spring, when the tropopause fluctuations are larger, less of the variability is explained by tropopause height. This suggests that more mixing occurs during these seasons. During all seasons, there is a transition zone near the tropopause that contains air characteristic of both the troposphere and the stratosphere. The relevance of the results to the assessment of the environmental impact of aircraft effluent is also discussed.

  3. Preparation and development of block copolypeptide vesicles and hydrogels for biological and medical applications.

    PubMed

    Deming, Timothy J

    2014-01-01

    There have been many recent advances in the controlled polymerization of α-amino acid-N-carboxyanhydride (NCA) monomers into well-defined block copolypeptides. Transition metal initiating systems allow block copolypeptide synthesis with excellent control over number and lengths of block segments, chain length distribution, and chain-end functionality. Using this and other methods, block copolypeptides of controlled dimensions have been prepared and their self-assembly into organized structures studied by many research groups. The ability of well-defined block copolypeptides to assemble into supramolecular copolypeptide vesicles and hydrogels has led to the development of these materials for use in biological and medical applications. These assemblies have been found to possess unique properties that are derived from the amino acid building blocks and ordered conformations of the polypeptide segments. Recent work on the incorporation of active and stimulus-responsive functionality in these materials has tremendously increased their potential for use in biological and medical studies. © 2014 Wiley Periodicals, Inc.

  4. Partitioning of Function in a Distributed Graphics System.

    DTIC Science & Technology

    1985-03-01

    Interface specification (VDI) is yet another graphics standardization effort of ANSI committee X3H33 [7]. As shown in figure 2-2, the Virtual Device... VDI specification could be realized in a real device, or at least a "black box" which the user treats as a hardware device. The device drivers would... be written by the manufacturer of the graphics device, instead of the author of the graphics system. Since the VDI specification is precisely defined

  5. Principal components analysis of the photoresponse nonuniformity of a matrix detector.

    PubMed

    Ferrero, Alejandro; Alda, Javier; Campos, Joaquín; López-Alonso, Jose Manuel; Pons, Alicia

    2007-01-01

    Principal component analysis is used to identify and quantify spatial distributions of relative photoresponse as a function of exposure time for a visible CCD array. The analysis shows a simple way to define an invariant photoresponse nonuniformity, and compares it with the definition of this invariant pattern as the one obtained for long exposure times. Experimental data of radiant exposure, from levels of irradiance obtained in a stable and well-controlled environment, are used.
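
    A hedged Python sketch of the approach, on synthetic frames rather than real CCD data: flat-field images taken at several exposure times are stacked and decomposed with PCA, whose leading component approximates the exposure-invariant nonuniformity pattern.

```python
# PCA of a stack of flat-field frames at several exposure times; the first
# component should capture the common (invariant) spatial response pattern.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
fixed_pattern = rng.normal(0, 0.01, size=64 * 64)        # invariant PRNU map
frames = []
for t_exp in np.linspace(0.1, 1.0, 10):
    signal = t_exp * (1.0 + fixed_pattern)               # response ~ exposure
    frames.append(signal + rng.normal(0, 0.002, 64 * 64))  # plus read noise

pca = PCA(n_components=3)
scores = pca.fit_transform(np.array(frames))
print("explained variance ratios:", pca.explained_variance_ratio_.round(4))
```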

  6. Development of preliminary design concept for multifunction display and control system for Orbiter crew station. Task 3: Concept analysis

    NASA Technical Reports Server (NTRS)

    Spiger, R. J.; Farrell, R. J.; Holcomb, G. A.

    1982-01-01

    The access schema developed to access both individual switch functions and automated or semiautomated procedures for the orbital maneuvering system and the electrical power distribution and control system is discussed, and the operation of the system is described. Feasibility tests and analyses used to define display parameters and to select applicable hardware choices for use in such a system are presented and the results are discussed.

  7. Current state of the mass storage system reference model

    NASA Technical Reports Server (NTRS)

    Coyne, Robert

    1993-01-01

    IEEE SSSWG was chartered in May 1990 to abstract the hardware and software components of existing and emerging storage systems and to define the software interfaces between these components. The immediate goal is the decomposition of a storage system into interoperable functional modules which vendors can offer as separate commercial products. The ultimate goal is to develop interoperable standards which define the software interfaces, and in the distributed case, the associated protocols to each of the architectural modules in the model. The topics are presented in viewgraph form and include the following: IEEE SSSWG organization; IEEE SSSWG subcommittees & chairs; IEEE standards activity board; layered view of the reference model; layered access to storage services; IEEE SSSWG emphasis; and features for MSSRM version 5.

  8. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers with powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  9. The International VEGA "Venus-Halley" (1984-1986) Experiment: Description and Scientific Objectives

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Venus-Halley (Vega) project will provide a unique opportunity to combine a mission over Venus with a transfer flight to Halley's comet. This project is based on three research goals: (1) to study the surface of Venus; (2) to study the air circulation on Venus and its meteorological parameters; and (3) to study Halley's comet. The objective of the study of Halley's comet is to: determine the physical characteristics of its nucleus; define the structure and dynamics of the coma around the nucleus; define the gas composition near the nucleus; investigate the dust particle distribution as a function of mass at various distances from the nucleus; and investigate the solar wind interaction with the atmosphere and ionosphere of the comet.

  10. Variola virus topoisomerase: DNA cleavage specificity and distribution of sites in Poxvirus genomes.

    PubMed

    Minkah, Nana; Hwang, Young; Perry, Kay; Van Duyne, Gregory D; Hendrickson, Robert; Lefkowitz, Elliot J; Hannenhalli, Sridhar; Bushman, Frederic D

    2007-08-15

    Topoisomerase enzymes regulate superhelical tension in DNA resulting from transcription, replication, repair, and other molecular transactions. Poxviruses encode an unusual type IB topoisomerase that acts only at conserved DNA sequences containing the core pentanucleotide 5'-(T/C)CCTT-3'. In X-ray structures of the variola virus topoisomerase bound to DNA, protein-DNA contacts were found to extend beyond the core pentanucleotide, indicating that the full recognition site has not yet been fully defined in functional studies. Here we report quantitation of DNA cleavage rates for an optimized 13 bp site and for all possible single base substitutions (40 total sites), with the goals of understanding the molecular mechanism of recognition and mapping topoisomerase sites in poxvirus genome sequences. The data allow a precise definition of enzyme-DNA interactions and the energetic contributions of each. We then used the resulting "action matrix" to show that favorable topoisomerase sites are distributed all along the length of poxvirus DNA sequences, consistent with a requirement for local release of superhelical tension in constrained topological domains. In orthopox genomes, an additional central cluster of sites was also evident. A negative correlation of predicted topoisomerase sites was seen relative to early terminators, but no correlation was seen with early or late promoters. These data define the full variola virus topoisomerase recognition site and provide a new window on topoisomerase function in vivo.
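
    The scanning step can be sketched as follows in Python; the action-matrix values below are random placeholders, not the measured cleavage-rate contributions, so only the mechanics of scoring 13 bp windows are illustrated.

```python
# Scan a sequence with a position-specific "action matrix" of per-base
# contributions and rank candidate 13 bp topoisomerase sites by total score.
import numpy as np

BASES = "ACGT"
SITE_LEN = 13
rng = np.random.default_rng(0)
action_matrix = rng.normal(0, 1, size=(SITE_LEN, 4))  # placeholder values

def site_score(seq):
    """Sum per-position contributions for one 13 bp window."""
    return sum(action_matrix[i, BASES.index(b)] for i, b in enumerate(seq))

def scan(genome, top=3):
    scores = [(site_score(genome[i:i + SITE_LEN]), i)
              for i in range(len(genome) - SITE_LEN + 1)]
    return sorted(scores, reverse=True)[:top]   # best-scoring positions

genome = "".join(rng.choice(list(BASES), size=500))   # random test sequence
print(scan(genome))
```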

  11. Analysis of meteorological droughts and dry spells in semiarid regions: a comparative analysis of probability distribution functions in the Segura Basin (SE Spain)

    NASA Astrophysics Data System (ADS)

    Pérez-Sánchez, Julio; Senent-Aparicio, Javier

    2017-08-01

    Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment and whose consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin located in eastern Spain, marked by the strongly seasonal nature of rainfall at these latitudes. A daily precipitation data set was utilized for 29 weather stations during a period of 20 years (1993-2013). Furthermore, four sets of dry spell length (complete series, monthly maximum, seasonal maximum, and annual maximum) are used and simulated for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offer a good adjustment for all the weather stations, with the Wakeby distribution providing the best results, with a mean p value of 0.9424 for the Kolmogorov-Smirnov test (0.2 significance level). Maps of dry spell duration for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with periods of daily rainfall of less than 0.1 mm increasing in the eastern third of the basin, in the proximity of the Mediterranean slope.
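
    The distribution-fitting step can be sketched in Python as follows. SciPy has no Wakeby implementation, so GEV and generalized Pareto stand in among the candidates, and the annual-maximum spell lengths are invented.

```python
# Fit candidate distributions to annual maximum dry-spell lengths, rank them
# by the Kolmogorov-Smirnov statistic, and read off a 25-year return level.
import numpy as np
from scipy import stats

annual_max_spell = np.array([34, 41, 29, 55, 38, 61, 47, 33, 52, 44,
                             39, 58, 36, 49, 42, 31, 57, 45, 40, 50])  # days

best = None
for name in ["genextreme", "genpareto", "weibull_min", "lognorm"]:
    dist = getattr(stats, name)
    params = dist.fit(annual_max_spell)
    ks, p = stats.kstest(annual_max_spell, name, args=params)
    best = min(best or (ks, name, params), (ks, name, params))
    print(f"{name:12s} KS={ks:.3f} p={p:.3f}")

ks, name, params = best
# Dry-spell length for a 25-year return period from the best-fitting model:
print(name, "25-yr spell:", getattr(stats, name).ppf(1 - 1 / 25, *params))
```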

  12. Random field assessment of nanoscopic inhomogeneity of bone.

    PubMed

    Dong, X Neil; Luo, Qing; Sparkman, Daniel M; Millwater, Harry R; Wang, Xiaodu

    2010-12-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic details. Copyright © 2010 Elsevier Inc. All rights reserved.
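
    A minimal sketch of the random-field construction, reduced to one dimension and with invented parameter values: a Gaussian field with exponential covariance C(d) = s² exp(−d/Lc) is generated by Cholesky factorization, and the correlation at one correlation length is checked empirically.

```python
# Generate a Gaussian random field of elastic modulus with exponential
# covariance via Cholesky factorization of the covariance matrix.
import numpy as np

n, spacing = 200, 0.05                 # sample points along a lamella (um)
mean_E, std_E, Lc = 22.0, 3.0, 0.6     # GPa, GPa, um (hypothetical values)

x = np.arange(n) * spacing
d = np.abs(x[:, None] - x[None, :])            # pairwise distances
cov = std_E**2 * np.exp(-d / Lc)               # exponential covariance matrix
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
E = mean_E + L @ np.random.standard_normal(n)  # one realization of the field

# Empirical check: correlation one Lc apart should be roughly 1/e
lag = int(Lc / spacing)
print(np.corrcoef(E[:-lag], E[lag:])[0, 1])
```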

  13. The role of root distribution in eco-hydrological modeling in semi-arid regions

    NASA Astrophysics Data System (ADS)

    Sivandran, G.; Bras, R. L.

    2010-12-01

    In semi-arid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Niche separation, through rooting strategies, is one manner in which different species coexist. At present, land surface models prescribe rooting profiles as a function only of the plant functional type of interest, with no consideration of the soil texture or rainfall regime of the region being modeled. These models do not incorporate the ability of vegetation to dynamically alter its rooting strategies in response to transient changes in environmental forcings and therefore tend to underestimate the resilience of many of these ecosystems. A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two vertical root distribution schemes: (i) Static - a temporally invariant root distribution; and (ii) Dynamic - a temporally variable allocation of assimilated carbon at any depth within the root zone in order to minimize the soil moisture-induced stress on the vegetation. The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semi-arid Walnut Gulch Experimental Watershed in Arizona. For the static root distribution scheme, a series of simulations was carried out varying the shape of the rooting profile. The optimal distribution for the simulation was defined as the root distribution with the maximum mean transpiration over a 200 year period. This optimal distribution was determined for 5 soil textures and 2 plant functional types, and the results varied from case to case. The dynamic rooting simulations allow vegetation the freedom to adjust the allocation of assimilated carbon to different rooting depths in response to changes in stress caused by the redistribution and uptake of soil moisture. The results obtained from these experiments elucidate the strong link between plant functional type, soil texture, and climate, and highlight the potential errors in the modeling of hydrologic fluxes that arise from imposing a static root profile.

  14. Ascl1 controls the number and distribution of astrocytes and oligodendrocytes in the gray matter and white matter of the spinal cord

    PubMed Central

    Vue, Tou Yia; Kim, Euiseok J.; Parras, Carlos M.; Guillemot, Francois; Johnson, Jane E.

    2014-01-01

    Glia constitute the majority of cells in the mammalian central nervous system and are crucial for neurological function. However, there is an incomplete understanding of the molecular control of glial cell development. We find that the transcription factor Ascl1 (Mash1), which is best known for its role in neurogenesis, also functions in both astrocyte and oligodendrocyte lineages arising in the mouse spinal cord at late embryonic stages. Clonal fate mapping in vivo reveals heterogeneity in Ascl1-expressing glial progenitors and shows that Ascl1 defines cells that are restricted to either gray matter (GM) or white matter (WM) as astrocytes or oligodendrocytes. Conditional deletion of Ascl1 post-neurogenesis shows that Ascl1 is required during oligodendrogenesis for generating the correct numbers of WM but not GM oligodendrocyte precursor cells, whereas during astrocytogenesis Ascl1 functions in balancing the number of dorsal GM protoplasmic astrocytes with dorsal WM fibrous astrocytes. Thus, in addition to its function in neurogenesis, Ascl1 marks glial progenitors and controls the number and distribution of astrocytes and oligodendrocytes in the GM and WM of the spinal cord. PMID:25249462

  15. Simulation of speckle patterns with pre-defined correlation distributions.

    PubMed

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S

    2016-03-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
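
    The square relationship is easy to demonstrate numerically. In the following Python sketch (our own construction, not the authors' code), two pupil fields with field correlation c are propagated by an FFT, and the correlation of the resulting speckle intensities comes out close to c².

```python
# Two speckle intensity patterns whose underlying pupil fields are correlated
# with coefficient c show an intensity correlation close to c**2.
import numpy as np

def speckle(field):
    """Coherent imaging: far-field intensity of a random pupil field."""
    return np.abs(np.fft.fft2(field)) ** 2

n, c = 256, 0.7
rng = np.random.default_rng(1)
shared = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
indep = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

I1 = speckle(shared)
I2 = speckle(c * shared + np.sqrt(1 - c**2) * indep)   # field correlation = c

corr = np.corrcoef(I1.ravel(), I2.ravel())[0, 1]
print(f"intensity correlation {corr:.3f} vs c^2 = {c**2:.3f}")
```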

  16. Simulation of speckle patterns with pre-defined correlation distributions

    PubMed Central

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S.

    2016-01-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques. PMID:27231589

  17. Normal and abnormal tissue identification system and method for medical images such as digital mammograms

    NASA Technical Reports Server (NTRS)

    Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)

    2001-01-01

    A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
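
    A hedged sketch of the general pipeline in Python, with a random placeholder image and an arbitrary percentile threshold standing in for the optimal statistical test of the patent:

```python
# Wavelet expansion of an image into subspaces, then thresholding of a detail
# subspace chosen to match the target detection resolution.
import numpy as np
import pywt

image = np.random.rand(256, 256)                  # placeholder for a mammogram
coeffs = pywt.wavedec2(image, "db4", level=3)     # multiresolution expansion

# Pick one detail subspace commensurate with the detection resolution range
subspace = np.abs(coeffs[1][0])                   # level-3 horizontal details

# Threshold chosen from the empirical distribution of subspace values; a
# 99.9th percentile here stands in for the optimal statistical test
threshold = np.percentile(subspace, 99.9)
suspicious = subspace > threshold                 # binary output regions
print("flagged sectors:", int(suspicious.sum()))
```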

  18. Flock Foraging Efficiency in Relation to Food Sensing Ability and Distribution: a Simulation Study

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Hee

    2013-08-01

    Flocking may be an advantageous strategy for acquiring food resources. The degree of advantage is related to two factors: the ability of flock members to detect food resources and patterns of food distribution in the environment. To understand foraging efficiency as a function of these factors, I constructed a two-dimensional (2D) flocking model incorporating the two factors. At the start of the simulation, food particles were heterogeneously distributed. The heterogeneity, H, was characterized as a value ranging from 0.0 to 1.0. For each flock member, food sensing ability was defined by two variables: sensing distance, R and sensing angle, θ. Foraging efficiency of a flock was defined as the time, τ, required for a flock to consume all the available food resources. Simulation results showed that flock foraging is most efficient when individuals had an intermediate sensing ability (R = 60), but decreased for low (R < 60) and high (R > 60) sensing ability. When R > 60, patterns in foraging efficiency with increasing sensing distance and food resource aggregation were less consistent. This inconsistency was due to instability of the flock and a higher rate of individuals failing to capture target food resources. In addition, I briefly discuss the benefits obtained by foraging in flocks from an evolutionary perspective.

  19. The underlying pathway structure of biochemical reaction networks

    PubMed Central

    Schilling, Christophe H.; Palsson, Bernhard O.

    1998-01-01

    Bioinformatics is yielding extensive, and in some cases complete, genetic and biochemical information about individual cell types and cellular processes, providing the composition of living cells and the molecular structure of its components. These components together perform integrated cellular functions that now need to be analyzed. In particular, the functional definition of biochemical pathways and their role in the context of the whole cell is lacking. In this study, we show how the mass balance constraints that govern the function of biochemical reaction networks lead to the translation of this problem into the realm of linear algebra. The functional capabilities of biochemical reaction networks, and thus the choices that cells can make, are reflected in the null space of their stoichiometric matrix. The null space is spanned by a finite number of basis vectors. We present an algorithm for the synthesis of a set of basis vectors for spanning the null space of the stoichiometric matrix, in which these basis vectors represent the underlying biochemical pathways that are fundamental to the corresponding biochemical reaction network. In other words, all possible flux distributions achievable by a defined set of biochemical reactions are represented by a linear combination of these basis pathways. These basis pathways thus represent the underlying pathway structure of the defined biochemical reaction network. This development is significant from a fundamental and conceptual standpoint because it yields a holistic definition of biochemical pathways in contrast to definitions that have arisen from the historical development of our knowledge about biochemical processes. Additionally, this new conceptual framework will be important in defining, characterizing, and studying biochemical pathways from the rapidly growing information on cellular function. PMID:9539712
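
    The core computation is a null-space calculation. The following Python sketch uses an invented three-reaction toy network to show that the basis returned by scipy spans exactly the steady-state flux distributions S·v = 0.

```python
# The null space of a stoichiometric matrix S (reactions as columns) spans
# all steady-state flux distributions satisfying S @ v = 0.
import numpy as np
from scipy.linalg import null_space

# rows = metabolites A, B; columns = reactions (A in, A -> B, B out)
S = np.array([[1, -1,  0],
              [0,  1, -1]])

basis = null_space(S)            # orthonormal basis vectors of the null space
print(basis.round(3))

# Any steady-state flux distribution is a linear combination of the columns:
v = basis @ np.array([2.0])
print("S @ v =", S @ v)          # ~0, confirming steady state
```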

  20. The EOSDIS Products Usability for Disaster Response.

    NASA Astrophysics Data System (ADS)

    Kafle, D. N.; Wanchoo, L.; Won, Y. I.; Michael, K.

    2016-12-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) is a key core capability in NASA's Earth Science Data System Program. The EOSDIS science operations are performed within a distributed system of interconnected nodes: the Science Investigator-led Processing Systems (SIPS) and the distributed, discipline-specific Earth science Distributed Active Archive Centers (DAACs), which have specific responsibilities for the production, archiving, and distribution of Earth science data products. NASA also established the Land, Atmosphere Near real-time Capability for EOS (LANCE) program, through which near real-time (NRT) products are produced and distributed within a latency of no more than 3 hours. These data, including NRT, have been widely used by scientists and researchers for studying Earth system science, climate change, natural variability, and enhanced climate predictions, including disaster assessments. The Subcommittee on Disaster Reduction (SDR) has defined 15 major types of disasters, such as flood, hurricane, earthquake, volcano, tsunami, etc. The focus of the study is to categorize both NRT and standard data products based on applicability to the SDR-defined disaster types. This will identify which datasets from current NASA satellite missions/instruments are best suited for disaster response. The distribution metrics of the products that have been used for studying selected disasters that have occurred over the last 5 years will be analyzed, including volume, number of files, number of users, user domains, user country, etc. This data usage analysis will provide information to the data centers' staff that can help them develop the functionality and allocate the resources needed for enhanced access and timely availability of the data products that are critical for time-sensitive analyses.

  1. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  2. Fly's eye condenser based on chirped microlens arrays

    NASA Astrophysics Data System (ADS)

    Wippermann, Frank C.; Zeitner, Uwe-D.; Dannberg, Peter; Bräuer, Andreas; Sinzinger, Stefan

    2007-09-01

    Lens array arrangements are commonly used for shaping almost arbitrary input intensity distributions into a top-hat. The setup usually consists of a Fourier lens and two identical regular microlens arrays - often referred to as a tandem lens array - where the second one is placed in the focal plane of the first microlenses. Due to the periodic structure of regular arrays, the output intensity distribution is modulated by equidistant sharp intensity peaks which disturb the homogeneity. These equidistantly located intensity peaks can be suppressed by using a chirped and therefore non-periodic microlens array. A far-field speckle pattern with more densely and irregularly located intensity peaks results, leading to improved homogeneity of the intensity distribution. In contrast to stochastic arrays, chirped arrays consist of individually shaped lenses defined by a parametric description of each cell's optical function, which can be derived completely from analytical functions. This makes it possible to build tandem array setups that achieve a far-field intensity distribution with a top-hat envelope. We propose a new concept for fly's eye condensers incorporating a chirped tandem microlens array for the generation of a top-hat far-field intensity distribution with improved homogenization under coherent illumination. The setup is compatible with reflow of photoresist as the fabrication technique, since plane substrates accommodating the arrays are used. Considerations for the design of the chirped microlens arrays, design rules, wave-optical simulations, and measurements of the far-field intensity distributions are presented.

  3. Whole Brain Functional Connectivity Pattern Homogeneity Mapping.

    PubMed

    Wang, Lijie; Xu, Jinping; Wang, Chao; Wang, Jiaojian

    2018-01-01

    Mounting evidence has demonstrated that the functions of a brain region are determined by its external functional connectivity patterns. However, how to characterize the voxel-wise similarity of whole brain functional connectivity patterns is still largely unknown. In this study, we introduced a new method called functional connectivity homogeneity (FcHo) to delineate the voxel-wise similarity of whole brain functional connectivity patterns. FcHo was defined by measuring the similarity of the whole brain functional connectivity patterns of a given voxel with those of its nearest 26 neighbors using Kendall's coefficient of concordance (KCC). The robustness of this method was tested in four independent datasets selected from a large repository of MRI data. Furthermore, FcHo mapping results were further validated using the nearest 18 and six neighbors and intra-subject reproducibility with each subject scanned two times. We also compared FcHo distribution patterns with local regional homogeneity (ReHo) to identify the similarities and differences of the two methods. Finally, the FcHo method was used to identify the differences in whole brain functional connectivity patterns between professional Chinese chess players and novices to test its application. FcHo mapping consistently revealed that high FcHo was mainly distributed in association cortex including the parietal lobe, frontal lobe, occipital lobe, and default mode network (DMN) related areas, whereas low FcHo was mainly found in unimodal cortex including primary visual cortex, sensorimotor cortex, the paracentral lobule, and the supplementary motor area. These results were further supported by analyses of the nearest 18 and six neighbors and intra-subject similarity. Moreover, FcHo showed both similar and different whole brain distribution patterns compared to ReHo. Finally, we demonstrated that FcHo can effectively identify whole brain functional connectivity pattern differences between professional Chinese chess players and novices. Our findings indicate that FcHo is a reliable method to delineate whole brain functional connectivity pattern similarity and may provide a new way to study functional organization and to reveal the neuropathological basis of brain disorders.
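
    The KCC computation at the heart of FcHo can be sketched as follows in Python, using random stand-ins for the 27 whole-brain connectivity profiles (center voxel plus 26 neighbors):

```python
# Kendall's coefficient of concordance (KCC) across connectivity profiles:
# W = 12 S / (m^2 (n^3 - n)), with m raters and n ranked items (no ties).
import numpy as np
from scipy.stats import rankdata

def kendalls_w(profiles):
    """profiles: (m, n) array, m raters (27 voxels) x n targets (brain voxels)."""
    m, n = profiles.shape
    ranks = np.apply_along_axis(rankdata, 1, profiles)
    R = ranks.sum(axis=0)                    # column rank sums
    S = ((R - R.mean()) ** 2).sum()
    return 12 * S / (m ** 2 * (n ** 3 - n))

# 27 connectivity profiles (center voxel + 26 neighbors), 5000 target voxels
profiles = np.random.rand(27, 5000)
print("FcHo (KCC):", kendalls_w(profiles))
```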

  4. The interval testing procedure: A general framework for inference in functional data analysis.

    PubMed

    Pini, Alessia; Vantini, Simone

    2016-09-01

    We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data. We show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamical features involved with cerebral aneurysm pathology. The ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.

  5. Compound Poisson Law for Hitting Times to Periodic Orbits in Two-Dimensional Hyperbolic Systems

    NASA Astrophysics Data System (ADS)

    Carney, Meagan; Nicol, Matthew; Zhang, Hong-Kun

    2017-11-01

    We show that a compound Poisson distribution holds for scaled exceedances of observables φ uniquely maximized at a periodic point ζ in a variety of two-dimensional hyperbolic dynamical systems with singularities (M, T, μ), including the billiard maps of Sinai dispersing billiards in both the finite and infinite horizon case. The observable we consider is of the form φ(z) = −ln d(z, ζ), where d is a metric defined in terms of the stable and unstable foliations. The compound Poisson process we obtain is a Pólya-Aeppli distribution of index θ. We calculate θ in terms of the derivative of the map T. Furthermore, if we define M_n = max{φ, ..., φ∘T^n} and u_n(τ) by lim_{n→∞} nμ(φ > u_n(τ)) = τ, the maximal process satisfies an extreme value law of the form μ(M_n ≤ u_n) = e^{−θτ}. These results generalize to a broader class of functions maximized at ζ, though the formulas regarding the parameters in the distribution need to be modified.

  6. Evolution simulation of lightning discharge based on a magnetohydrodynamics method

    NASA Astrophysics Data System (ADS)

    Fusheng, WANG; Xiangteng, MA; Han, CHEN; Yao, ZHANG

    2018-07-01

    In order to address the load problem of aircraft lightning strikes, lightning channel evolution is simulated under the key physical parameters of aircraft lightning current component C. A numerical model of the discharge channel is established based on magnetohydrodynamics (MHD) and implemented in the FLUENT software. With the aid of user-defined functions and a user-defined scalar, the Lorentz force, Joule heating, and material parameters of an air thermal plasma are added. A three-dimensional lightning arc channel is simulated and the evolution of the arc in space is obtained. The results show that the temperature distribution of the lightning channel is symmetrical and that the hottest region occurs at the center of the lightning channel. The distributions of potential and current density are obtained, showing that the difference in electric potential, or energy, between two points tends to make the arc channel develop downwards. The arc channel expands on the anode surface due to stagnation of the thermal plasma, and impingement on the copper plate occurs when the arc channel comes into contact with the anode plate.

  7. Origin and Functions of Tissue Macrophages

    PubMed Central

    Epelman, Slava; Lavine, Kory J.; Randolph, Gwendalyn J.

    2015-01-01

    Macrophages are distributed in tissues throughout the body and contribute to both homeostasis and disease. Recently, it has become evident that most adult tissue macrophages originate during embryonic development and not from circulating monocytes. Each tissue has its own composition of embryonically derived and adult-derived macrophages, but it is unclear whether macrophages of distinct origins are functionally interchangeable or have unique roles at steady state. This new understanding also prompts reconsideration of the function of circulating monocytes. Classical Ly6chi monocytes patrol the extravascular space in resting organs, and Ly6clo nonclassical monocytes patrol the vasculature. Inflammation triggers monocytes to differentiate into macrophages, but whether resident and newly recruited macrophages possess similar functions during inflammation is unclear. Here, we define the tools used for identifying the complex origin of tissue macrophages and discuss the relative contributions of tissue niche versus ontological origin to the regulation of macrophage functions during steady state and inflammation. PMID:25035951

  8. Worldwide seismicity in view of non-extensive statistical physics

    NASA Astrophysics Data System (ADS)

    Chochlaki, Kaliopi; Vallianatos, Filippos; Michas, George

    2014-05-01

    In the present work we study the distribution of worldwide shallow seismic events that occurred from 1981 to 2011, extracted from the CMT catalog, with magnitude equal to or greater than Mw 5.0. Our analysis is based on the subdivision of the Earth's surface into seismic zones that are homogeneous with regard to seismic activity and the orientation of the predominant stress field. To this end we use the Flinn-Engdahl regionalization (Flinn and Engdahl, 1965), which consists of 50 seismic zones, as modified by Lombardi and Marzocchi (2007), who grouped the 50 FE zones into larger tectonically homogeneous ones using the cumulative moment tensor method. As a result, Lombardi and Marzocchi (2007) reduced the initial 50 regions to 39, to which we apply the non-extensive statistical physics approach. Non-extensive statistical physics seems to be the most adequate and promising methodological tool for analyzing complex systems, such as the Earth's interior. In this frame, we introduce the q-exponential formulation as the expression of the probability distribution function that maximizes the Sq entropy as defined by Tsallis (1988). In the present work we analyze the interevent time distribution between successive earthquakes with a q-exponential function in each of the seismic zones defined by Lombardi and Marzocchi (2007), confirming the importance of long-range interactions and the existence of a power-law approximation in the distribution of interevent times. Our findings support the ideas of universality within the Tsallis approach to describing the Earth's seismicity and present strong evidence of temporal clustering of seismic activity in each of the tectonic zones analyzed. Our analysis as applied to worldwide seismicity with magnitude equal to or greater than Mw 5.5 and 6.0 is presented, and the dependence of our results on the cut-off magnitude is discussed. This research has been funded by the European Union (European Social Fund) and Greek national resources under the framework of the "THALES Program: SEISMO FEAR HELLARC" project of the "Education & Lifelong Learning" Operational Programme.
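
    A sketch of the q-exponential fitting step, under our own parameterization of the survival function (which reduces to the ordinary exponential as q → 1) and with synthetic interevent times:

```python
# Fit a q-exponential to interevent times via the empirical survival function.
import numpy as np
from scipy.optimize import curve_fit

def q_exp_survival(t, q, t0):
    """Survival function P(>t) of a q-exponential, valid for 1 < q < 2."""
    return (1.0 + (q - 1.0) * t / t0) ** ((2.0 - q) / (1.0 - q))

times = np.sort(np.random.pareto(2.5, 2000) * 10.0)   # stand-in interevent times
survival = 1.0 - np.arange(1, times.size + 1) / times.size

popt, _ = curve_fit(q_exp_survival, times, survival, p0=[1.2, 5.0],
                    bounds=([1.0001, 0.1], [1.999, 1e3]))
print("fitted q = %.3f, t0 = %.2f" % tuple(popt))
```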

  9. Exploring image data assimilation in the prospect of high-resolution satellite oceanic observations

    NASA Astrophysics Data System (ADS)

    Durán Moro, Marina; Brankart, Jean-Michel; Brasseur, Pierre; Verron, Jacques

    2017-07-01

    Satellite sensors increasingly provide high-resolution (HR) observations of the ocean. They supply observations of sea surface height (SSH) and of tracers of the dynamics such as sea surface salinity (SSS) and sea surface temperature (SST). In particular, the Surface Water Ocean Topography (SWOT) mission will provide measurements of the surface ocean topography at very high resolution, delivering unprecedented information on the meso-scale and submeso-scale dynamics. This study investigates the feasibility of using these measurements to reconstruct meso-scale features simulated by numerical models, in particular along the vertical dimension. A methodology to reconstruct three-dimensional (3D) multivariate meso-scale scenes is developed using a HR numerical model of the Solomon Sea region. An inverse problem is defined in the framework of a twin experiment in which synthetic observations are used. A true state is chosen among the 3D multivariate states and is considered as a reference state. In order to correct a first guess of this true state, a two-step analysis is carried out. A probability distribution of the first guess is defined and updated at each step of the analysis: (i) the first step applies the analysis scheme of a reduced-order Kalman filter to update the first-guess probability distribution using the SSH observation; (ii) the second step minimizes a cost function using observations of HR image structure, and a new probability distribution is estimated. The analysis is extended to the vertical dimension using 3D multivariate empirical orthogonal functions (EOFs), and the probabilistic approach allows the update of the probability distribution through the two-step analysis. Experiments show that the proposed technique succeeds in correcting a multivariate state using meso-scale and submeso-scale information contained in HR SSH and image structure observations. It also demonstrates how surface information can be used to reconstruct the ocean state below the surface.

  10. Graphs for information security control in software defined networks

    NASA Astrophysics Data System (ADS)

    Grusho, Alexander A.; Abaev, Pavel O.; Shorgin, Sergey Ya.; Timonina, Elena E.

    2017-07-01

    Information security control in software defined networks (SDN) is connected with execution of the security policy rules regulating information access and protection against distribution of malicious code and harmful influences. The paper offers a representation of a security policy in the form of a hierarchical structure which, in the case of distribution of resources for the solution of tasks, defines graphs of admissible interactions in a network. These graphs define the commutation tables of switches via the SDN controller.

  11. Developmental distribution of the plasma membrane-enriched proteome in the maize primary root growth zone.

    PubMed

    Zhang, Zhe; Voothuluru, Priyamvada; Yamaguchi, Mineo; Sharp, Robert E; Peck, Scott C

    2013-01-01

    Within the growth zone of the maize primary root, there are well-defined patterns of spatial and temporal organization of cell division and elongation. However, the processes underlying this organization remain poorly understood. To gain additional insights into the differences amongst the defined regions, we performed a proteomic analysis focusing on fractions enriched for plasma membrane (PM) proteins. The PM is the interface between the plant cell and the apoplast and/or extracellular space. As such, it is a key structure involved in the exchange of nutrients and other molecules as well as in the integration of signals that regulate growth and development. Despite the important functions of PM-localized proteins in mediating these processes, a full understanding of dynamic changes in PM proteomes is often impeded by their low abundance relative to total proteins. Using a relatively simple strategy of treating microsomal fractions with Brij-58 detergent to enrich for PM proteins, we compared the developmental distribution of proteins within the root growth zone, which revealed a number of previously known as well as novel proteins with interesting patterns of abundance. For instance, the quantitative proteomic analysis detected a gradient of PM aquaporin proteins similar to that previously reported using immunoblot analyses, confirming the veracity of this strategy. Cellulose synthases increased in abundance with increasing distance from the root apex, consistent with expected locations of cell wall deposition. The similar distribution pattern for Brittle-stalk-2-like protein suggests that this protein may also have cell wall-related functions. These results show that the simplified PM enrichment method previously demonstrated in Arabidopsis can be successfully applied to completely unrelated plant tissues and provide insights into differences in the PM proteome throughout the growth and development zones of the maize primary root.

  12. Framework for event-based semidistributed modeling that unifies the SCS-CN method, VIC, PDM, and TOPMODEL

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-09-01

    Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions called VICx, PDMx, and TOPMODELx also are extended with a spatial description of the runoff concept of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
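
    For reference, the classical SCS-CN runoff curve that the framework generalizes can be written in a few lines; the helper name and curve number are ours for illustration.

```python
# Classical SCS-CN runoff curve: Q = (P - Ia)^2 / (P - Ia + S) for P > Ia,
# else Q = 0, with potential retention S from the curve number CN and
# initial abstraction Ia = 0.2 S.
def scs_cn_runoff(P, CN, lam=0.2):
    """Event runoff depth Q (mm) from storm rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0        # potential retention in mm
    Ia = lam * S                    # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

for P in (10, 25, 50, 100):         # storm totals in mm
    print(P, "mm rain ->", round(scs_cn_runoff(P, CN=75), 1), "mm runoff")
```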

  13. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.

  14. Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Sahraei, S.

    2016-12-01

    Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
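
    A minimal sketch of the alternative archiving rule explored here: round each objective to the desired precision, then keep only non-dominated rounded points. The precision vector and the toy objective values are assumptions for illustration, not the study's test problems.

      import numpy as np

      def dominates(a, b):
          """True if a dominates b (minimization of all objectives)."""
          return np.all(a <= b) and np.any(a < b)

      def archive_with_rounding(solutions, precision):
          """Keep non-dominated solutions after rounding objectives to 'precision'."""
          archive = []
          for f in solutions:
              r = np.round(np.asarray(f) / precision) * precision
              if any(dominates(a, r) or np.array_equal(a, r) for a in archive):
                  continue                  # dominated, or duplicate at this precision
              archive = [a for a in archive if not dominates(r, a)]
              archive.append(r)
          return archive

      rng = np.random.default_rng(0)
      candidates = rng.uniform(0, 1, size=(200, 2))   # toy bi-objective values
      precision = np.array([0.05, 0.05])              # desired precision per objective
      print(len(archive_with_rounding(candidates, precision)), "archived solutions")

    Because rounding happens once per candidate, the archive never stores two solutions closer than the desired precision, mimicking the epsilon grid without the per-comparison box arithmetic.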

  15. A New Method for Nonlinear and Nonstationary Time Series Analysis and Its Application to the Earthquake and Building Response Records

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    1999-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed. The key part of the method is the Empirical Mode Decomposition method with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. An example of the application of this method to earthquake and building response records will be given. The results indicate that low frequency components, totally missed by the Fourier analysis, are clearly identified by the new method. Comparisons with Wavelet and window Fourier analysis show the new method offers much better temporal and frequency resolutions.
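
    A bare-bones sketch of the sifting idea behind Empirical Mode Decomposition: subtract the mean of the upper and lower extrema envelopes until an IMF-like component remains. The fixed sift count, endpoint handling, and the two-tone test signal are crude assumptions; the full method uses proper stopping criteria and repeats the extraction on the residue.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def sift_once(x, t):
          """One sifting step: subtract the mean of the extrema envelopes."""
          maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
          minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
          if len(maxima) < 2 or len(minima) < 2:
              return None                   # no oscillation left: this is the residue
          upper = CubicSpline(t[maxima], x[maxima])(t)
          lower = CubicSpline(t[minima], x[minima])(t)
          return x - 0.5 * (upper + lower)

      def extract_imf(x, t, n_sift=10):
          """Crude IMF extraction: a fixed number of sifting passes."""
          h = x
          for _ in range(n_sift):
              h_new = sift_once(h, t)
              if h_new is None:
                  break
              h = h_new
          return h

      t = np.linspace(0, 1, 1000)
      x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
      imf1 = extract_imf(x, t)              # should roughly isolate the 40 Hz tone
      print("correlation with 40 Hz tone:",
            round(float(np.corrcoef(imf1, np.sin(2 * np.pi * 40 * t))[0, 1]), 3))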

  16. The Vertebrate Brain, Evidence of Its Modular Organization and Operating System: Insights into the Brain's Basic Units of Structure, Function, and Operation and How They Influence Neuronal Signaling and Behavior.

    PubMed

    Baslow, Morris H

    2011-01-01

    The human brain is a complex organ made up of neurons and several other cell types, and whose role is processing information for use in eliciting behaviors. However, the composition of its repeating cellular units for both structure and function remains unresolved. Based on recent descriptions of the brain's physiological "operating system", a function of the tri-cellular metabolism of N-acetylaspartate (NAA) and N-acetylaspartylglutamate (NAAG) for supply of energy, and on the nature of "neuronal words and languages" for intercellular communication, insights into the brain's modular structural and functional units have been gained. In this article, it is proposed that the basic structural unit in brain is defined by its physiological operating system, and that it consists of a single neuron, and one or more astrocytes, oligodendrocytes, and vascular system endothelial cells. It is also proposed that the basic functional unit in the brain is defined by how neurons communicate, and consists of two neurons and their interconnecting dendritic-synaptic-dendritic field. Since a functional unit is composed of two neurons, it requires two structural units to form a functional unit. Thus, the brain can be envisioned as being made up of the three-dimensional stacking and intertwining of myriad structural units which results not only in its gross structure, but also in producing a uniform distribution of binary functional units. Since the physiological NAA-NAAG operating system for supply of energy is repeated in every structural unit, it is positioned to control global brain function.

  17. Within-Genome Evolution of REPINs: a New Family of Miniature Mobile DNA in Bacteria

    PubMed Central

    Bertels, Frederic; Rainey, Paul B.

    2011-01-01

    Repetitive sequences are a conserved feature of many bacterial genomes. Although they were first reported almost thirty years ago, and are frequently exploited for genotyping purposes, little is known about their origin, maintenance, or processes affecting the dynamics of within-genome evolution. Here, beginning with analysis of the diversity and abundance of short oligonucleotide sequences in the genome of Pseudomonas fluorescens SBW25, we show that over-represented short sequences define three distinct groups (GI, GII, and GIII) of repetitive extragenic palindromic (REP) sequences. Patterns of REP distribution suggest that closely linked REP sequences form a functional replicative unit: REP doublets are over-represented, randomly distributed in extragenic space, and more highly conserved than singlets. In addition, doublets are organized as inverted repeats, which together with intervening spacer sequences are predicted to form hairpin structures in ssDNA or mRNA. We refer to these newly defined entities as REPINs (REP doublets forming hairpins) and identify short reads from population sequencing that reveal putative transposition intermediates. The proximal relationship between GI, GII, and GIII REPINs and specific REP-associated tyrosine transposases (RAYTs), combined with features of the putative transposition intermediate, suggests a mechanism for within-genome dissemination. Analysis of the distribution of REPs in a range of RAYT-containing bacterial genomes, including Escherichia coli K-12 and Nostoc punctiforme, shows that REPINs are a widely distributed, but hitherto unrecognized, family of miniature non-autonomous mobile DNA. PMID:21698139

  18. Cell lineage distribution atlas of the human stomach reveals heterogeneous gland populations in the gastric antrum

    PubMed Central

    Choi, Eunyoung; Roland, Joseph T.; Barlow, Brittney J.; O’Neal, Ryan; Rich, Amy E.; Nam, Ki Taek; Shi, Chanjuan; Goldenring, James R.

    2014-01-01

    Objective The glands of the stomach body and antral mucosa contain a complex compendium of cell lineages. In lower mammals, the distribution of oxyntic glands and antral glands defines the anatomical regions within the stomach. We examined in detail the distribution of the full range of cell lineages within the human stomach. Design We determined the distribution of gastric gland cell lineages with specific immunocytochemical markers in entire stomach specimens from three non-obese organ donors. Results The anatomical body and antrum of the human stomach were defined by the presence of ghrelin and gastrin cells, respectively. Concentrations of somatostatin cells were observed in the proximal stomach. Parietal cells were seen in all glands of the body of stomach as well as in over 50% of antral glands. MIST1-expressing chief cells were predominantly observed in the body, although individual glands of the antrum also showed MIST1-expressing chief cells. While classically described antral glands were observed with gastrin cells and deep antral mucous cells without any parietal cells, we also observed a substantial population of mixed-type glands containing both parietal cells and G cells throughout the antrum. Conclusions Enteroendocrine cells show distinct patterns of localization in the human stomach. The existence of antral glands with mixed cell lineages indicates that human antral glands may be functionally chimeric, with glands assembled from multiple distinct stem cell populations. PMID:24488499

  19. Bowel habit reference values and abnormalities in young Iranian healthy adults.

    PubMed

    Adibi, Peyman; Behzad, Ebrahim; Pirzadeh, Shahryar; Mohseni, Masood

    2007-08-01

    The purpose of this study was to estimate the prevalence of self-reported, ROME II-defined constipation and determine the average defecation frequency and stool types in the Iranian population. A self-reported questionnaire was distributed to 1045 participants, including items intended to identify the presence of ROME II-defined functional constipation and the dominant form of stool based on the Bristol Scale. The weekly mean bowel movement frequency in men and women was 12.5 +/- 7.3 and 13.8 +/- 8.0, respectively (p < 0.05). A total of 87.4% of participants had a stool frequency of between 3 and 21 times per week. The prevalence of functional constipation was 32.9%, whereas only 9.6% of participants reported themselves to be constipated (level of agreement kappa = 0.21, 95% confidence interval: 0.15 to 0.27). Soft or formed stool was reported in 75.7% of individuals. Functional constipation is common in the Iranian population, but its diagnosis cannot rely on subjective patient complaints. Despite a higher average bowel frequency, the previously reported normal range of defecation frequency can be applied to the Iranian population.

  20. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
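
    For the delayed-failure construction with aleatory uncertainty (items (iv)-(v)), a sampling-based approximation looks schematically like the sketch below; the exponential precursor times, the log-normal delays, and the simple one-WL/one-SL ordering convention are all illustrative assumptions rather than the report's distributions.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200_000

      # Assumed precursor occurrence times (the CDFs of item (i)) for one WL, one SL
      t_prec_wl = rng.exponential(5.0, n)
      t_prec_sl = rng.exponential(8.0, n)

      # Aleatory uncertainty in the precursor-to-failure delay (assumed log-normal)
      delay_wl = rng.lognormal(mean=0.0, sigma=0.5, size=n)
      delay_sl = rng.lognormal(mean=0.5, sigma=0.5, size=n)

      t_fail_wl = t_prec_wl + delay_wl      # actual failure = precursor + delay
      t_fail_sl = t_prec_sl + delay_sl

      # For this toy two-link system, take LOAS as the SL failing before the WL
      print(f"sampling-based PLOAS estimate: {(t_fail_sl < t_fail_wl).mean():.4f}")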

  1. Space shuttle solid rocket booster recovery system definition, volume 1

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
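
    The Monte Carlo logic described here, random draws of the critical water-impact parameters compared against component strength, can be sketched as below; the normal and log-normal distributions, the dynamic-pressure load proxy, and every numerical value are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 500_000

      # Assumed distributions for impact velocity and component strength capability
      impact_velocity = rng.normal(25.0, 4.0, n)                 # m/s at splashdown
      load = 0.5 * 1000.0 * impact_velocity**2 * 1e-3            # dynamic-pressure proxy (kPa)
      strength = rng.lognormal(mean=np.log(400.0), sigma=0.15, size=n)  # capability (kPa)

      failures = load > strength
      print(f"estimated component failure probability: {failures.mean():.4f}")

      # Failure probability as a function of impact velocity (binned)
      bins = np.linspace(10, 40, 7)
      idx = np.digitize(impact_velocity, bins)
      for k in range(1, len(bins)):
          sel = idx == k
          if sel.any():
              print(f"{bins[k-1]:4.0f}-{bins[k]:4.0f} m/s: P_fail = {failures[sel].mean():.3f}")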

  2. Factorization method of quadratic template

    NASA Astrophysics Data System (ADS)

    Kotyrba, Martin

    2017-07-01

    Multiplication of two numbers is a one-way function in mathematics. Any attempt to decompose the product back into its factors is called factorization. There are many methods, such as Fermat's factorization, Dixon's method, the quadratic sieve and GNFS, which use sophisticated techniques for fast factorization. All the above methods use the same basic formula, differing only in its use. This article discusses a newly designed factorization method. Effective implementation of this method in programs is not the goal here; the article only presents the method and clearly defines its properties.
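
    Since the abstract invokes Fermat's factorization as a baseline, a minimal working version is given below; the test numbers are arbitrary assumptions. The method searches for an x with x² − n a perfect square y², giving n = (x − y)(x + y).

      from math import isqrt

      def fermat_factor(n):
          """Fermat's factorization for odd composite n: find n = a*b via x^2 - n = y^2."""
          assert n % 2 == 1, "method applies to odd n"
          x = isqrt(n)
          if x * x < n:
              x += 1
          while True:
              y2 = x * x - n
              y = isqrt(y2)
              if y * y == y2:
                  return x - y, x + y
              x += 1

      print(fermat_factor(5959))    # (59, 101)
      print(fermat_factor(10403))   # (101, 103)

    The search is fast when the two factors are close to sqrt(n), which is exactly the regime Fermat's idea exploits.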

  3. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beamshaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.

  4. Selectivity by host plants affects the distribution of arbuscular mycorrhizal fungi: evidence from ITS rDNA sequence metadata.

    PubMed

    Yang, Haishui; Zang, Yanyan; Yuan, Yongge; Tang, Jianjun; Chen, Xin

    2012-04-12

    Arbuscular mycorrhizal fungi (AMF) can form obligate symbioses with the vast majority of land plants, and AMF distribution patterns have received increasing attention from researchers. At the local scale, the distribution of AMF is well documented. Studies at large scales, however, are limited because intensive sampling is difficult. Here, we used ITS rDNA sequence metadata obtained from public databases to study the distribution of AMF at continental and global scales. We also used these sequence metadata to investigate whether host plant is the main factor that affects the distribution of AMF at large scales. We defined 305 ITS virtual taxa (ITS-VTs) among all sequences of the Glomeromycota by using a comprehensive maximum likelihood phylogenetic analysis. Each host taxonomic order averaged about 53% specific ITS-VTs, and approximately 60% of the ITS-VTs were host specific. Those ITS-VTs with wide host range showed wide geographic distribution. Most ITS-VTs occurred in only one type of host functional group. The distributions of most ITS-VTs were limited across ecosystems, continents, biogeographical realms, and climatic zones. Non-metric multidimensional scaling analysis (NMDS) showed that AMF community composition differed among functional groups of hosts, and among ecosystems, continents, biogeographical realms, and climatic zones. The Mantel test showed that AMF community composition was significantly correlated with plant community composition among ecosystems, continents, biogeographical realms, and climatic zones. The structural equation modeling (SEM) showed that the effects of ecosystem, continent, biogeographical realm, and climatic zone on AMF distribution were mainly indirect, but host plants had strong direct effects on AMF. The distribution of AMF as indicated by ITS rDNA sequences showed a pattern of high endemism at large scales. This pattern indicates high specificity of AMF for host at different scales (plant taxonomic order and functional group) and high selectivity from host plants for AMF. The effects of ecosystemic, biogeographical, continental and climatic factors on AMF distribution might be mediated by host plants.

  5. Complete anatomy of $\overline{B}_d \to \overline{K}^{*0} (\to K\pi)\,\ell^+\ell^-$ and its angular distribution

    NASA Astrophysics Data System (ADS)

    Matias, J.; Mescia, F.; Ramon, M.; Virto, J.

    2012-04-01

    We present a complete and optimal set of observables for the exclusive 4-body $\overline{B}$ meson decay $\overline{B}_d \to \overline{K}^{*0} (\to K\pi)\,\ell^+\ell^-$ in the low dilepton mass region, which contains a maximal number of clean observables. This basis of observables is built in a systematic way. We show that all the previously defined observables, and any observable that one can construct, can be expressed as a function of this basis. This set of observables contains all the information that can be extracted from the angular distribution in the cleanest possible way. We provide explicit expressions for the full and the uniangular distributions in terms of this basis. The conclusions presented here can be easily extended to the large-$q^2$ region. We study the sensitivity of the observables to right-handed currents and scalars. Finally, we present for the first time all the symmetries of the full distribution including massive terms and scalar contributions.

  6. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    PubMed

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.

  7. Posterior consistency in conditional distribution estimation

    PubMed Central

    Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.

    2014-01-01

    A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858

  8. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas which is modeled using a log-normal function while the second component is attributed to condensed structures modeled using a simple power-law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a radial density profile truncated at cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c / (1 + (r/r_0)²)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl) flatten for steeper density profiles and have a power-law asymptote at low and high mass surface densities and a well defined maximum. The power index γ of the asymptote Σ^(−γ) of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n − 1) − 1 (spheres) or by γ = n/(n − 1) − 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
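
    The sphere case can be checked numerically: sample impact parameters uniformly over the projected cloud area, integrate the truncated profile along each line of sight, and histogram the resulting mass surface densities. The profile parameters below are illustrative assumptions; this sketch is a consistency check, not the paper's analytical PDF.

      import numpy as np
      from scipy.integrate import quad

      rho_c, r0, n, r_cl = 1.0, 0.1, 2.0, 1.0   # assumed profile parameters

      def rho(r):
          return rho_c / (1.0 + (r / r0) ** 2) ** (n / 2.0)

      def sigma(b):
          """Mass surface density through a truncated sphere at impact parameter b."""
          zmax = np.sqrt(r_cl**2 - b**2)
          return 2.0 * quad(lambda z: rho(np.sqrt(b**2 + z**2)), 0.0, zmax)[0]

      # Uniform sampling over the projected area: b^2 uniform on [0, r_cl^2]
      rng = np.random.default_rng(2)
      b = np.sqrt(rng.uniform(0.0, r_cl**2, 2000))
      Sig = np.array([sigma(bi) for bi in b])

      hist, edges = np.histogram(np.log(Sig), bins=12, density=True)
      for lo, hi, h in zip(edges[:-1], edges[1:], hist):
          print(f"ln(Sigma) in [{lo:6.2f}, {hi:6.2f}): P = {h:.3f}")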

  9. Corneal Power Distribution and Functional Optical Zone Following Small Incision Lenticule Extraction for Myopia.

    PubMed

    Qian, Yishan; Huang, Jia; Zhou, Xingtao; Hanna, Rewais Benjamin

    2015-08-01

    To evaluate corneal power distribution using the ray tracing method (corneal power) in eyes undergoing small incision lenticule extraction (SMILE) surgery and compare the functional optical zone with two lenticular sizes. This retrospective study evaluated 128 patients who underwent SMILE for the correction of myopia and astigmatism with a lenticular diameter of 6.5 mm (the 6.5-mm group) and 6.2 mm (the 6.2-mm group). The data include refraction, correction, and corneal power obtained via a Scheimpflug camera from the pupil center to 8 mm. The surgically induced changes in corneal power (Δcorneal power) were compared to correction and Δrefraction. The functional optical zone was defined as the largest ring diameter when the difference between the ring power and the pupil center power was 1.50 diopters or less. The functional optical zone was compared between the two lenticular diameter groups. Corneal power distribution was measured by the ray tracing method. In the 6.5-mm group (n=100), Δcorneal power at 5 mm showed the smallest difference from Δrefraction and Δcorneal power at 0 mm exhibited the smallest difference from correction. In the 6.2-mm group (n=28), Δcorneal power at 2 mm displayed the lowest dissimilarity from Δrefraction and Δcorneal power at 4 mm demonstrated the lowest dissimilarity from correction. There was no significant difference between the mean postoperative functional optical zones of the two groups when their spherical equivalents were matched. Total corneal refractive power can be used in the evaluation of surgically induced changes following SMILE. A lenticular diameter of 6.2 mm should be recommended for patients with high myopia because there is no functional difference in the optical zone. Copyright 2015, SLACK Incorporated.

  10. Simulation of net infiltration and potential recharge using a distributed-parameter watershed model of the Death Valley region, Nevada and California

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Flint, Lorraine E.

    2003-01-01

    This report presents the development and application of the distributed-parameter watershed model, INFILv3, for estimating the temporal and spatial distribution of net infiltration and potential recharge in the Death Valley region, Nevada and California. The estimates of net infiltration quantify the downward drainage of water across the lower boundary of the root zone and are used to indicate potential recharge under variable climate conditions and drainage basin characteristics. Spatial variability in recharge in the Death Valley region likely is high owing to large differences in precipitation, potential evapotranspiration, bedrock permeability, soil thickness, vegetation characteristics, and contributions to recharge along active stream channels. The quantity and spatial distribution of recharge representing the effects of variable climatic conditions and drainage basin characteristics on recharge are needed to reduce uncertainty in modeling ground-water flow. The U.S. Geological Survey, in cooperation with the Department of Energy, developed a regional saturated-zone ground-water flow model of the Death Valley regional ground-water flow system to help evaluate the current hydrogeologic system and the potential effects of natural or human-induced changes. Although previous estimates of recharge have been made for most areas of the Death Valley region, including the area defined by the boundary of the Death Valley regional ground-water flow system, the uncertainty of these estimates is high, and the spatial and temporal variability of the recharge in these basins has not been quantified. To estimate the magnitude and distribution of potential recharge in response to variable climate and spatially varying drainage basin characteristics, the INFILv3 model uses a daily water-balance model of the root zone with a primarily deterministic representation of the processes controlling net infiltration and potential recharge. The daily water balance includes precipitation (as either rain or snow), snow accumulation, sublimation, snowmelt, infiltration into the root zone, evapotranspiration, drainage, water content change throughout the root-zone profile (represented as a 6-layered system), runoff (defined as excess rainfall and snowmelt) and surface water run-on (defined as runoff that is routed downstream), and net infiltration (simulated as drainage from the bottom root-zone layer). Potential evapotranspiration is simulated using an hourly solar radiation model to simulate daily net radiation, and daily evapotranspiration is simulated as an empirical function of root zone water content and potential evapotranspiration. The model uses daily climate records of precipitation and air temperature from a regionally distributed network of 132 climate stations and a spatially distributed representation of drainage basin characteristics defined by topography, geology, soils, and vegetation to simulate daily net infiltration at all locations, including stream channels with intermittent streamflow in response to runoff from rain and snowmelt. The temporal distribution of daily, monthly, and annual net infiltration can be used to evaluate the potential effect of future climatic conditions on potential recharge. The INFILv3 model inputs representing drainage basin characteristics were developed using a geographic information system (GIS) to define a set of spatially distributed input parameters uniquely assigned to each grid cell of the INFILv3 model grid. 
The model grid, which was defined by a digital elevation model (DEM) of the Death Valley region, consists of 1,252,418 model grid cells with a uniform grid cell dimension of 278.5 meters in the north-south and east-west directions. The elevation values from the DEM were used with monthly regression models developed from the daily climate data to estimate the spatial distribution of daily precipitation and air temperature. The elevation values were also used to simulate atmosp

  11. Global biodiversity, stoichiometry and ecosystem function responses to human-induced C-N-P imbalances.

    PubMed

    Carnicer, Jofre; Sardans, Jordi; Stefanescu, Constantí; Ubach, Andreu; Bartrons, Mireia; Asensio, Dolores; Peñuelas, Josep

    2015-01-01

    Global change analyses usually consider biodiversity as a global asset that needs to be preserved. Biodiversity is frequently analysed mainly as a response variable affected by diverse environmental drivers. However, recent studies highlight that gradients of biodiversity are associated with gradual changes in the distribution of key dominant functional groups characterized by distinctive traits and stoichiometry, which in turn often define the rates of ecosystem processes and nutrient cycling. Moreover, pervasive links have been reported between biodiversity, food web structure, ecosystem function and species stoichiometry. Here we review current global stoichiometric gradients and how future distributional shifts in key functional groups may in turn influence basic ecosystem functions (production, nutrient cycling, decomposition) and therefore could exert a feedback effect on stoichiometric gradients. The C-N-P stoichiometry of most primary producers (phytoplankton, algae, plants) has been linked to functional trait continua (i.e. to major axes of phenotypic variation observed in inter-specific analyses of multiple traits). In contrast, the C-N-P stoichiometry of higher-level consumers remains less precisely quantified in many taxonomic groups. We show that significant links are observed between trait continua across trophic levels. In spite of recent advances, the future reciprocal feedbacks between key functional groups, biodiversity and ecosystem functions remain largely uncertain. The reported evidence, however, highlights the key role of stoichiometric traits and suggests the need of a progressive shift towards an ecosystemic and stoichiometric perspective in global biodiversity analyses. Copyright © 2014 Elsevier GmbH. All rights reserved.

  12. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
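
    A sketch of the distribution fitting underlying this procedure: maximum likelihood estimation of a tapered Pareto from amplitude data. The survival function form S(x) = (x_t/x)^β exp((x_t − x)/x_c) follows the usual seismological convention, and the synthetic sample and starting values are assumptions; the trick of taking the minimum of a pure Pareto draw and an exponentially tapered draw reproduces exactly this survival function.

      import numpy as np
      from scipy.optimize import minimize

      def neg_loglik(params, x, xt):
          """Negative log-likelihood of a tapered Pareto with threshold xt."""
          beta, xc = params
          if beta <= 0 or xc <= 0:
              return np.inf
          logS = beta * np.log(xt / x) + (xt - x) / xc
          return -np.sum(logS + np.log(beta / x + 1.0 / xc))

      # Synthetic 'tsunami amplitude' sample (assumed): true beta = 1, corner = 2
      rng = np.random.default_rng(3)
      xt = 0.1
      pareto = xt * (1 + rng.pareto(1.0, 5000))   # pure Pareto tail above xt
      taper = xt + rng.exponential(2.0, 5000)     # exponential taper
      x = np.minimum(pareto, taper)               # tapered Pareto sample

      fit = minimize(neg_loglik, x0=[0.5, 1.0], args=(x, xt), method="Nelder-Mead")
      beta_hat, xc_hat = fit.x
      print(f"beta = {beta_hat:.3f}, corner parameter = {xc_hat:.3f}")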

  13. Bidirectional reflection functions from surface bump maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabral, B.; Max, N.; Springmeyer, R.

    1987-04-29

    The Torrance-Sparrow model for calculating bidirectional reflection functions contains a geometrical attenuation factor to account for shadowing and occlusions in a hypothetical distribution of grooves on a rough surface. Using an efficient table-based method for determining the shadows and occlusions, we calculate the geometric attenuation factor for surfaces defined by a specific table of bump heights. Diffuse and glossy specular reflection of the environment can be handled in a unified manner by using an integral of the bidirectional reflection function times the environmental illumination, over the hemisphere of solid angle above a surface. We present a method of estimating the integral, by expanding the bidirectional reflection coefficient in spherical harmonics, and show how the coefficients in this expansion can be determined efficiently by reorganizing our geometric attenuation calculation.

  14. Tmd Factorization and Evolution for Tmd Correlation Functions

    NASA Astrophysics Data System (ADS)

    Mert Aybat, S.; Rogers, Ted C.

    We discuss the application of transverse momentum dependent (TMD) factorization theorems to phenomenology. Our treatment relies on recent extensions of the Collins-Soper-Sterman (CSS) formalism. Emphasis is placed on the importance of using well-defined TMD parton distribution functions (PDFs) and fragmentation functions (FFs) in calculating the evolution of these objects. We explain how parametrizations of unpolarized TMDs can be obtained from currently existing fixed-scale Gaussian fits and previous implementations of the CSS formalism in the Drell-Yan process, and provide some examples. We also emphasize the importance of agreed-upon definitions for having an unambiguous prescription for calculating higher orders in the hard part, and provide examples of higher order calculations. We end with a discussion of strategies for extending the phenomenological applications of TMD factorization to situations beyond the unpolarized case.

  15. Weakly Nonergodic Dynamics in the Gross-Pitaevskii Lattice

    NASA Astrophysics Data System (ADS)

    Mithun, Thudiyangal; Kati, Yagmur; Danieli, Carlo; Flach, Sergej

    2018-05-01

    The microcanonical Gross-Pitaevskii (also known as the semiclassical Bose-Hubbard) lattice model dynamics is characterized by a pair of energy and norm densities. The grand canonical Gibbs distribution fails to describe a part of the density space, due to the boundedness of its kinetic energy spectrum. We define Poincaré equilibrium manifolds and compute the statistics of microcanonical excursion times off them. The tails of the distribution functions quantify the proximity of the many-body dynamics to a weakly nonergodic phase, which occurs when the average excursion time is infinite. We find that a crossover to weakly nonergodic dynamics takes place inside the non-Gibbs phase, being unnoticed by the largest Lyapunov exponent. In the ergodic part of the non-Gibbs phase, the Gibbs distribution should be replaced by an unknown modified one. We relate our findings to the corresponding integrable limit, close to which the actions are interacting through a short range coupling network.

  16. Functional and topological characteristics of mammalian regulatory domains

    PubMed Central

    Symmons, Orsolya; Uslu, Veli Vural; Tsujimura, Taro; Ruf, Sandra; Nassari, Sonya; Schwarzer, Wibke; Ettwiller, Laurence; Spitz, François

    2014-01-01

    Long-range regulatory interactions play an important role in shaping gene-expression programs. However, the genomic features that organize these activities are still poorly characterized. We conducted a large operational analysis to chart the distribution of gene regulatory activities along the mouse genome, using hundreds of insertions of a regulatory sensor. We found that enhancers distribute their activities along broad regions and not in a gene-centric manner, defining large regulatory domains. Remarkably, these domains correlate strongly with the recently described TADs, which partition the genome into distinct self-interacting blocks. Different features, including specific repeats and CTCF-binding sites, correlate with the transition zones separating regulatory domains, and may help to further organize promiscuously distributed regulatory influences within large domains. These findings support a model of genomic organization where TADs confine regulatory activities to specific but large regulatory domains, contributing to the establishment of specific gene expression profiles. PMID:24398455

  17. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as Silicon Graphics machines. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/Reality Engine.

  18. Flame surface statistics of constant-pressure turbulent expanding premixed flames

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2014-04-01

    In this paper we investigate the local flame surface statistics of constant-pressure turbulent expanding flames. First the statistics of the local length ratio are experimentally determined from high-speed planar Mie scattering images of spherically expanding flames, with the length ratio on the measurement plane, at predefined equiangular sectors, defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming isotropic distribution of such flame segments, we then convolve suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. It is found that both the length-ratio and area-ratio pdfs are near log-normally distributed and show self-similar behavior with increasing radius. Near log-normality and the rather intermittent behavior of the flame-length ratio suggest similarity with dissipation rate quantities, which stimulates multifractal analysis.

  19. Screening method based on walking plantar impulse for detecting musculoskeletal senescence and injury.

    PubMed

    Fan, Yifang; Fan, Yubo; Li, Zhiyu; Newman, Tony; Lv, Changsheng; Zhou, Yi

    2013-01-01

    No consensus has been reached on how musculoskeletal system injuries or aging can be explained by a walking plantar impulse. We standardize the plantar impulse by defining a principal axis of plantar impulse. Based upon this standardized plantar impulse, two indexes are presented: the plantar pressure record time series and the plantar-impulse distribution along the principal axis of plantar impulse. These indexes are applied to analyze the plantar impulse collected by plantar pressure plates from three sources: Achilles tendon ruptures; elderly people (ages 62-71); and young people (ages 19-23). Our findings reveal that the plantar impulse distribution curves for Achilles tendon ruptures change irregularly as the subjects' walking speed changes. Compared with the distribution curves of the young subjects, the elderly subjects show a significant difference in the phalanges' plantar pressure record time series. This verifies our hypothesis that the plantar impulse can function as a means to assess and evaluate musculoskeletal system injuries and aging.

  20. Distribution of Quantum Coherence in Multipartite Systems

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Chandrashekar; Parthasarathy, Manikandan; Jambulingam, Segar; Byrnes, Tim

    2016-04-01

    The distribution of coherence in multipartite systems is examined. We use a new coherence measure with entropic nature and metric properties, based on the quantum Jensen-Shannon divergence. The metric property allows for the coherence to be decomposed into various contributions, which arise from local and intrinsic coherences. We find that there are trade-off relations between the various contributions of coherence, as a function of parameters of the quantum state. In bipartite systems the coherence resides on individual sites or is distributed among the sites, which contribute in a complementary way. In more complex systems, the characteristics of the coherence can display more subtle changes with respect to the parameters of the quantum state. In the case of the XXZ Heisenberg model, the coherence changes from a monogamous to a polygamous nature. This allows us to define the shareability of coherence, leading to monogamy relations for coherence.
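
    The quantum Jensen-Shannon divergence between a state ρ and its fully dephased (diagonal) counterpart gives a metric-flavoured coherence quantifier of the kind used above. The single-qubit numpy sketch below is an illustration under that convention, not the paper's full multipartite decomposition.

      import numpy as np

      def von_neumann_entropy(rho):
          """S(rho) = -Tr(rho log2 rho), via eigenvalues for numerical stability."""
          w = np.linalg.eigvalsh(rho)
          w = w[w > 1e-12]
          return float(-np.sum(w * np.log2(w)))

      def qjsd_coherence(rho):
          """Square root of the quantum Jensen-Shannon divergence between rho
          and its diagonal (incoherent) part."""
          sigma = np.diag(np.diag(rho))          # dephased state
          m = 0.5 * (rho + sigma)
          qjsd = von_neumann_entropy(m) - 0.5 * (
              von_neumann_entropy(rho) + von_neumann_entropy(sigma))
          return np.sqrt(max(qjsd, 0.0))

      plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|, maximally coherent qubit
      print(f"coherence of |+><+|: {qjsd_coherence(plus):.4f}")
      print(f"coherence of maximally mixed state: {qjsd_coherence(np.eye(2) / 2):.4f}")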

  1. "Phase capture" in the perception of interpolated shape: cue combination and the influence function.

    PubMed

    Levi, Dennis M; Wing-Hong Li, Roger; Klein, Stanley A

    2003-09-01

    This study was concerned with what stimulus information observers use to judge the shape of simple objects. We used a string of four Gabor patches to define a contour. A fifth, center patch served as a test pattern. The observers' task was to judge the location of the test pattern relative to the contour. The contour was either a straight line, or an arc with positive or negative curvature (the radius of curvature was either 2 or 6 deg). We asked whether phase shifts in the inner or outer pairs of patches distributed along the contour influence the perceived shape. That is, we measured the phase shift influence function. We found that shifting the inner patches of the string by 0.25 cycle results in almost complete phase capture (attraction) at the smallest separation (2 lambda), and the capture effect falls off rapidly with separation. A 0.25 cycle shift of the outer pair of patches has a much smaller effect, in the opposite direction (repulsion). In our experiments, the contour is defined by two cues--the cue provided by the Gabor carrier (the 'feature' cue) and that defined by the Gaussian envelope (the 'envelope' cue). Our phase shift influence function can be thought of as a cue combination task. An ideal observer would weight the cues by the inverse variance of the two cues. The variance in each of these cues predicts the main features of our results quite accurately.
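
    The ideal-observer rule mentioned here, weighting each cue by its inverse variance, is a one-liner; the estimates and variances below are assumptions for illustration, standing in for the 'feature' (carrier) and 'envelope' cues.

      import numpy as np

      def combine_cues(estimates, variances):
          """Inverse-variance weighted cue combination (the ideal-observer rule)."""
          inv_var = 1.0 / np.asarray(variances, dtype=float)
          weights = inv_var / inv_var.sum()
          combined = float(np.dot(weights, estimates))
          combined_var = 1.0 / inv_var.sum()    # always below the smallest cue variance
          return combined, combined_var, weights

      # Assumed: a precise carrier cue and a four-times-noisier envelope cue
      pos, var, w = combine_cues(estimates=[0.25, 0.05], variances=[0.01, 0.04])
      print(f"combined position = {pos:.3f}, variance = {var:.4f}, weights = {w}")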

  2. The Galaxy Color-Magnitude Diagram in the Local Universe from GALEX and SDSS Data

    NASA Astrophysics Data System (ADS)

    Wyder, T. K.; GALEX Science Team

    2005-12-01

    We present the relative density of galaxies in the local universe as a function of their r-band absolute magnitudes and ultraviolet minus r-band colors. The Sloan Digital Sky Survey (SDSS) main galaxy sample selected in the r-band was matched with a sample of galaxies from the Galaxy Evolution Explorer (GALEX) Medium Imaging Survey in both the far-UV (FUV) and near-UV (NUV) bands. Similar to previous optical studies, the distribution of galaxies in (FUV-r) and (NUV-r) is bimodal with well-defined blue and red sequences. We compare the distribution of galaxies in these colors with both the D4000 index measured from the SDSS spectra as well as the SDSS (u-r) color.

  3. On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources

    NASA Astrophysics Data System (ADS)

    Savina, Tatiana; Akinyemi, Lanre; Savin, Avital

    2018-01-01

    A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface, which separates two fluids sandwiched between two plates. The fluids have different viscosities. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves are defined by the initial shape of the free boundary.

  4. System design considerations for a production-grade, ESR-based x-ray lithography beamline

    NASA Astrophysics Data System (ADS)

    Kovacs, Stephen; Melore, Dan; Cerrina, Franco; Cole, Richard K.

    1991-08-01

    As electron storage ring (ESR) based x-ray lithography technology moves closer to becoming an industrial reality, more and more attention has been devoted to studying problem areas related to its application in the production environment. A principal component is the x-ray lithography beamline (XLBL) and its associated design requirements. The XLBL, an x-ray radiation transport system, is one of the three major subunits in the ESR-based x-ray lithography system (XLS) and has a pivotal role in defining performance characteristics of the entire XLS. Its major functions are to transport the synchrotron orbital radiation (SOR) to the lithography target area with defined efficiency and to modify SOR into the spectral distribution defined by the lithography process window. These functions must be performed reliably in order to satisfy the required high production rate and ensure 0.25 micron resolution lithography conditions. In this paper the authors attempt to answer some specific questions that arise during the formulation of an XLBL system design. Three principal issues that are essential to formulating a design are (1) radiation transport efficiency, (2) x-ray optical configurations in the beamline, and (3) beamline system configurations. Some practical solutions to these problem areas are presented, and the effects of these parameters on lithography production rate are examined.

  5. Elastic fibres are broadly distributed in tendon and highly localized around tenocytes

    PubMed Central

    Grant, Tyler M; Thompson, Mark S; Urban, Jill; Yu, Jing

    2013-01-01

    Elastic fibres have the unique ability to withstand large deformations and are found in numerous tissues, but their organization and structure have not been well defined in tendon. The objective of this study was to characterize the organization of elastic fibres in tendon to understand their function. Immunohistochemistry was used to visualize elastic fibres in bovine flexor tendon with fibrillin-1, fibrillin-2 and elastin antibodies. Elastic fibres were broadly distributed throughout tendon, and highly localized longitudinally around groups of cells and transversely between collagen fascicles. The close interaction of elastic fibres and cells suggests that elastic fibres are part of the pericellular matrix and therefore affect the mechanical environment of tenocytes. Fibres present between fascicles are likely part of the endotenon sheath, which enhances sliding between adjacent collagen bundles. These results demonstrate that elastic fibres are highly localized in tendon and may play an important role in cellular function and contribute to the tissue mechanics of the endotenon sheath. PMID:23587025

  6. Histone chaperones: assisting histone traffic and nucleosome dynamics.

    PubMed

    Gurard-Levin, Zachary A; Quivy, Jean-Pierre; Almouzni, Geneviève

    2014-01-01

    The functional organization of eukaryotic DNA into chromatin uses histones as components of its building block, the nucleosome. Histone chaperones, which are proteins that escort histones throughout their cellular life, are key actors in all facets of histone metabolism; they regulate the supply and dynamics of histones at chromatin for its assembly and disassembly. Histone chaperones can also participate in the distribution of histone variants, thereby defining distinct chromatin landscapes of importance for genome function, stability, and cell identity. Here, we discuss our current knowledge of the known histone chaperones and their histone partners, focusing on histone H3 and its variants. We then place them into an escort network that distributes these histones in various deposition pathways. Through their distinct interfaces, we show how they affect dynamics during DNA replication, DNA damage, and transcription, and how they maintain genome integrity. Finally, we discuss the importance of histone chaperones during development and describe how misregulation of the histone flow can link to disease.

  7. Distribution of lifetimes for coronal soft X-ray bright points

    NASA Technical Reports Server (NTRS)

    Golub, L.; Krieger, A. S.; Vaiana, G. S.

    1976-01-01

    The lifetime 'spectrum' of X-ray bright points (XBPs) is measured for a sample of 300 such features using soft X-ray images obtained with the S-054 X-ray spectrographic telescope aboard Skylab. 'Spectrum' here denotes a function giving the relative number of XBPs as a function of lifetime. The results indicate that a two-lifetime exponential can be fit to the decay curves of XBPs, that the spectrum is heavily weighted toward short lifetimes, and that the number of features lasting 20 to 30 hr or more is greater than expected. A short-lived component with an average lifetime of about 8 hr and a long-lived 1.5-day component are consistently found along with a few features lasting 50 hr or more. An examination of differences among the components shows that features lasting 2 days or less have a broad heliocentric-latitude distribution while nearly all the longer-lived features are observed within 30 deg of the solar equator.
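
    The two-lifetime exponential decomposition can be reproduced with a standard nonlinear fit, as in the sketch below; the synthetic lifetime sample (8 h and 36 h components) is an assumption standing in for the Skylab XBP data.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_exp(t, a, tau1, b, tau2):
          """Two-component exponential lifetime spectrum."""
          return a * np.exp(-t / tau1) + b * np.exp(-t / tau2)

      # Synthetic sample: short-lived (~8 h) and long-lived (~1.5 day) components
      rng = np.random.default_rng(5)
      lifetimes = np.concatenate([rng.exponential(8.0, 240),
                                  rng.exponential(36.0, 60)])
      counts, edges = np.histogram(lifetimes, bins=np.arange(0, 72, 4))
      t_mid = 0.5 * (edges[:-1] + edges[1:])

      p, _ = curve_fit(two_exp, t_mid, counts, p0=[100, 5, 10, 30], maxfev=20000)
      print(f"tau_short = {p[1]:.1f} h, tau_long = {p[3]:.1f} h")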

  8. Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control

    NASA Astrophysics Data System (ADS)

    Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel

    2014-08-01

    We present advances on Spline based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions which are defined on the SH subaperture array. This approach allows high accuracy through the possible use of nonlinear basis functions and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, which is a 2 stage procedure based on decomposing the sensor domain into sub-domains each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction while obtaining a reconstruction accuracy for simulated E-ELT turbulences within 1% of the global method's accuracy. Further, a generalization of the methodology is proposed making direct use of SH intensity measurements which leads to an improved accuracy of the reconstruction compared to centroid algorithms using spatial gradients.

  9. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined through maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with a reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  10. FATE-HD: A spatially and temporally explicit integrated model for predicting vegetation structure and diversity at regional scale

    PubMed Central

    Boulangeat, Isabelle; Georges, Damien; Thuiller, Wilfried

    2014-01-01

    During the last decade, despite strenuous efforts to develop new models and compare different approaches, few conclusions have been drawn on their ability to provide robust biodiversity projections in an environmental change context. The recurring suggestions are that models should explicitly (i) include spatiotemporal dynamics; (ii) consider multiple species in interactions; and (iii) account for the processes shaping biodiversity distribution. This paper presents a biodiversity model (FATE-HD) that meets this challenge at regional scale by combining phenomenological and process-based approaches and using well-defined plant functional groups. FATE-HD has been tested and validated in a French National Park, demonstrating its ability to simulate vegetation dynamics, structure and diversity in response to disturbances and climate change. The analysis demonstrated the importance of considering biotic interactions, spatio-temporal dynamics, and disturbances in addition to abiotic drivers to simulate vegetation dynamics. The distribution of pioneer trees was particularly improved, as were all undergrowth functional groups. PMID:24214499

  11. Influence of Different Defects in Vertically Aligned Carbon Nanotubes on TiO2 Nanoparticle Formation through Atomic Layer Deposition.

    PubMed

    Acauan, Luiz; Dias, Anna C; Pereira, Marcelo B; Horowitz, Flavio; Bergmann, Carlos P

    2016-06-29

    The chemical inertness of carbon nanotubes (CNT) requires some degree of "defect engineering" for controlled deposition of metal oxides through atomic layer deposition (ALD). The type, quantity, and distribution of such defects rule the deposition rate and define the growth behavior. In this work, we employed ALD to grow titanium oxide (TiO2) on vertically aligned carbon nanotubes (VACNT). The effects of nitrogen doping and oxygen plasma pretreatment of the CNT on the morphology and total amount of TiO2 were systematically studied using transmission electron microscopy, Raman spectroscopy, and thermogravimetric analysis. The induced chemical changes for each functionalization route were identified by X-ray photoelectron and Raman spectroscopies. The TiO2 mass fractions deposited with the same number of cycles for the pristine CNT, nitrogen-doped CNT, and plasma-treated CNT were 8, 47, and 80%, respectively. We demonstrate that TiO2 nucleation depends mainly on the surface incorporation of heteroatoms and their distribution rather than on the structural defects that govern the growth behavior. Therefore, selecting the best way to functionalize CNT will allow us to tailor TiO2 distribution and hence fabricate complex heterostructures.

  12. STOCK: Structure mapper and online coarse-graining kit for molecular simulations

    DOE PAGES

    Bevc, Staš; Junghans, Christoph; Praprotnik, Matej

    2015-03-15

    We present a web toolkit STructure mapper and Online Coarse-graining Kit for setting up coarse-grained molecular simulations. The kit consists of two tools: structure mapping and Boltzmann inversion tools. The aim of the first tool is to define a molecular mapping from high, e.g. all-atom, to low, i.e. coarse-grained, resolution. Using a graphical user interface it generates input files, which are compatible with standard coarse-graining packages, e.g. VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations preserving the structural properties, e.g. radial distribution functions, of the underlying higher resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.
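
    To make the Boltzmann inversion tool concrete, here is a minimal sketch of direct Boltzmann inversion with toy RDF values (not the STOCK implementation, which runs server-side; iterative refinement, as in VOTCA, would further update the potential against the RDF measured from a coarse-grained run):

    ```python
    import numpy as np

    kB_T = 2.494  # kJ/mol at 300 K

    def boltzmann_inversion(g_r):
        """Direct Boltzmann inversion: U(r) = -kT * ln g(r)."""
        g = np.clip(np.asarray(g_r, dtype=float), 1e-10, None)  # avoid log(0)
        return -kB_T * np.log(g)

    r = np.linspace(0.2, 1.2, 6)                 # nm
    g_r = [0.0, 0.3, 1.8, 1.1, 0.95, 1.0]        # toy target RDF
    print(boltzmann_inversion(g_r))              # effective pair potential U(r)
    ```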

  13. An exactly solvable coarse-grained model for species diversity

    NASA Astrophysics Data System (ADS)

    Suweis, Samir; Rinaldo, Andrea; Maritan, Amos

    2012-07-01

    We present novel analytical results concerning ecosystem species diversity that stem from a proposed coarse-grained neutral model based on birth-death processes. The relevance of the problem lies in the urgency for understanding and synthesizing both theoretical results from ecological neutral theory and empirical evidence on species diversity preservation. The neutral model of biodiversity deals with ecosystems at the same trophic level, where per capita vital rates are assumed to be species independent. Closed-form analytical solutions for the neutral theory are obtained within a coarse-grained model, where the only input is the species persistence time distribution. Our results pertain to: the probability distribution function of the number of species in the ecosystem, both in transient and in stationary states; the n-point connected time correlation function; and the survival probability, defined as the distribution of time spans to local extinction for a species randomly sampled from the community. Analytical predictions are also tested on empirical data from an estuarine fish ecosystem. We find that emerging properties of the ecosystem are very robust and do not depend on specific details of the model, with implications for biodiversity and conservation biology.
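
    The closed-form results are the paper's contribution; as a rough numerical companion, the toy simulation below treats species as an immigration-extinction process whose only input is a persistence-time distribution (exponential here, purely for illustration), and checks the stationary species richness:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def species_count_trajectory(t_max, arrival_rate, lifetime_sampler):
        """Toy coarse-grained neutral dynamics: species originate as a Poisson
        process and disappear after a random persistence time. Returns the
        number of extant species at integer times."""
        inter = rng.exponential(1.0 / arrival_rate, size=int(3 * arrival_rate * t_max))
        arrivals = np.cumsum(inter)
        arrivals = arrivals[arrivals < t_max]
        deaths = arrivals + lifetime_sampler(len(arrivals))
        times = np.arange(t_max)
        return np.array([np.sum((arrivals <= t) & (deaths > t)) for t in times])

    # Example: exponentially distributed persistence times with mean 10.
    traj = species_count_trajectory(t_max=500, arrival_rate=2.0,
                                    lifetime_sampler=lambda n: rng.exponential(10.0, n))
    print("mean species richness:", traj[100:].mean())  # ~ arrival_rate * mean lifetime = 20
    ```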

  14. Flood impacts on a water distribution network

    NASA Astrophysics Data System (ADS)

    Arrighi, Chiara; Tarani, Fabio; Vicario, Enrico; Castelli, Fabio

    2017-12-01

    Floods cause damage to people, buildings and infrastructures. Water distribution systems are particularly exposed, since water treatment plants are often located next to the rivers. Failure of the system leads to both direct losses, for instance damage to equipment and pipework contamination, and indirect impact, since it may lead to service disruption and thus affect populations far from the event through the functional dependencies of the network. In this work, we present an analysis of direct and indirect damages on a drinking water supply system, considering the hazard of riverine flooding as well as the exposure and vulnerability of active system components. The method is based on interweaving, through a semi-automated GIS procedure, a flood model and an EPANET-based pipe network model with a pressure-driven demand approach, which is needed when modelling water distribution networks in highly off-design conditions. Impact measures are defined and estimated so as to quantify service outage and potential pipe contamination. The method is applied to the water supply system of the city of Florence, Italy, serving approximately 380 000 inhabitants. The evaluation of flood impact on the water distribution network is carried out for different events with assigned recurrence intervals. Vulnerable elements exposed to the flood are identified and analysed in order to estimate their residual functionality and to simulate failure scenarios. Results show that in the worst failure scenario (no residual functionality of the lifting station and a 500-year flood), 420 km of pipework would require disinfection with an estimated cost of EUR 21 million, which is about 0.5 % of the direct flood losses evaluated for buildings and contents. Moreover, if flood impacts on the water distribution network are considered, the population affected by the flood is up to 3 times the population directly flooded.
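
    A full analysis requires a hydraulic solver with pressure-driven demand (the paper couples a flood model with EPANET); the fragment below, using the networkx library and an invented four-node network, is only a topological sketch of the kind of service-outage measure described, counting the population disconnected from the source when flooded components fail:

    ```python
    import networkx as nx

    # Hypothetical mini-network: 'wtp' stands in for the treatment/lifting
    # station; node names and populations are illustrative, not from the
    # Florence model.
    G = nx.Graph()
    G.add_edges_from([("wtp", "a"), ("a", "b"), ("b", "c"), ("a", "d")])
    population = {"a": 40_000, "b": 90_000, "c": 60_000, "d": 30_000}

    def affected_population(graph, failed_nodes):
        """Population left without service when `failed_nodes` are removed,
        i.e. demand nodes no longer connected to the source 'wtp'."""
        g = graph.copy()
        g.remove_nodes_from(failed_nodes)
        reachable = nx.node_connected_component(g, "wtp") if "wtp" in g else set()
        return sum(pop for node, pop in population.items() if node not in reachable)

    print(affected_population(G, failed_nodes=["a"]))  # all demand nodes cut off
    ```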

  15. Ill-defined causes of death in Brazil: a redistribution method based on the investigation of such causes

    PubMed Central

    França, Elisabeth; Teixeira, Renato; Ishitani, Lenice; Duncan, Bruce Bartholow; Cortez-Escalante, Juan José; de Morais, Otaliba Libânio; Szwarcwald, Célia Landman

    2014-01-01

    OBJECTIVE To propose a method of redistributing ill-defined causes of death (IDCD) based on the investigation of such causes. METHODS In 2010, an evaluation of the results of investigating the causes of death classified as IDCD, in accordance with chapter 18 of the International Classification of Diseases (ICD-10), by the Mortality Information System was performed. The redistribution coefficients were calculated according to the proportional distribution of ill-defined causes reclassified after investigation into any chapter of the ICD-10 except chapter 18, and were used to redistribute, by sex and age, the ill-defined causes that were not investigated or remained ill-defined. The IDCD redistribution coefficient was compared with two usual methods of redistribution: a) the total redistribution coefficient, based on the proportional distribution of all the defined causes originally notified, and b) the non-external redistribution coefficient, similar to the previous one but excluding external causes. RESULTS Of the 97,314 deaths from ill-defined causes reported in 2010, 30.3% were investigated, and 65.5% of those were reclassified as defined causes after the investigation. Endocrine diseases, mental disorders, and maternal causes had a higher representation among the reclassified ill-defined causes, in contrast to infectious diseases, neoplasms, and genitourinary diseases, which had higher proportions among the defined causes reported. External causes represented 9.3% of the reclassified ill-defined causes. The correction of mortality rates by the total redistribution coefficient and the non-external redistribution coefficient increased the magnitude of the rates by a relatively similar factor for most causes, in contrast to the IDCD redistribution coefficient, which corrected the different causes of death with differentiated weights. CONCLUSIONS The proportional distribution of causes among the ill-defined causes reclassified after investigation was not similar to the original distribution of defined causes. Therefore, the redistribution of the remaining ill-defined causes based on the investigation allows for more appropriate estimates of the mortality risk due to specific causes. PMID:25210826

  16. Ill-defined causes of death in Brazil: a redistribution method based on the investigation of such causes.

    PubMed

    França, Elisabeth; Teixeira, Renato; Ishitani, Lenice; Duncan, Bruce Bartholow; Cortez-Escalante, Juan José; Morais Neto, Otaliba Libânio de; Szwarcwald, Célia Landman

    2014-08-01

    OBJECTIVE To propose a method of redistributing ill-defined causes of death (IDCD) based on the investigation of such causes. METHODS In 2010, an evaluation of the results of investigating the causes of death classified as IDCD, in accordance with chapter 18 of the International Classification of Diseases (ICD-10), by the Mortality Information System was performed. The redistribution coefficients were calculated according to the proportional distribution of ill-defined causes reclassified after investigation into any chapter of the ICD-10 except chapter 18, and were used to redistribute, by sex and age, the ill-defined causes that were not investigated or remained ill-defined. The IDCD redistribution coefficient was compared with two usual methods of redistribution: a) the total redistribution coefficient, based on the proportional distribution of all the defined causes originally notified, and b) the non-external redistribution coefficient, similar to the previous one but excluding external causes. RESULTS Of the 97,314 deaths from ill-defined causes reported in 2010, 30.3% were investigated, and 65.5% of those were reclassified as defined causes after the investigation. Endocrine diseases, mental disorders, and maternal causes had a higher representation among the reclassified ill-defined causes, in contrast to infectious diseases, neoplasms, and genitourinary diseases, which had higher proportions among the defined causes reported. External causes represented 9.3% of the reclassified ill-defined causes. The correction of mortality rates by the total redistribution coefficient and the non-external redistribution coefficient increased the magnitude of the rates by a relatively similar factor for most causes, in contrast to the IDCD redistribution coefficient, which corrected the different causes of death with differentiated weights. CONCLUSIONS The proportional distribution of causes among the ill-defined causes reclassified after investigation was not similar to the original distribution of defined causes. Therefore, the redistribution of the remaining ill-defined causes based on the investigation allows for more appropriate estimates of the mortality risk due to specific causes.
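
    The arithmetic of the proposed coefficient is simple to state in code. The sketch below uses invented counts (not the 2010 Brazilian data) to contrast the IDCD redistribution coefficient, built from investigated-and-reclassified deaths, with the total redistribution coefficient built from the originally defined causes:

    ```python
    # Illustrative counts only: deaths originally assigned to defined causes,
    # and ill-defined deaths reclassified after investigation.
    defined_original = {"infectious": 900, "neoplasms": 1500, "circulatory": 3000}
    reclassified = {"infectious": 60, "neoplasms": 40, "circulatory": 300}

    def redistribution_coefficients(counts):
        total = sum(counts.values())
        return {cause: n / total for cause, n in counts.items()}

    # IDCD coefficient: proportions among investigated-and-reclassified deaths;
    # total coefficient: proportions among all originally defined causes.
    idcd_coef = redistribution_coefficients(reclassified)
    total_coef = redistribution_coefficients(defined_original)

    remaining_ill_defined = 1000  # ill-defined deaths never investigated
    for cause in defined_original:
        corrected = defined_original[cause] + remaining_ill_defined * idcd_coef[cause]
        print(cause, round(corrected))
    ```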

  17. Rényi entropy, abundance distribution, and the equivalence of ensembles.

    PubMed

    Mora, Thierry; Walczak, Aleksandra M

    2016-05-01

    Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
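
    For reference, the Rényi entropy used throughout this mapping is straightforward to compute from a normalized abundance distribution; a minimal implementation (with the Shannon limit at q = 1 handled explicitly) follows:

    ```python
    import numpy as np

    def renyi_entropy(abundances, q):
        """Renyi entropy of order q: H_q = log(sum_i p_i^q) / (1 - q);
        the q -> 1 limit is the Shannon entropy."""
        p = np.asarray(abundances, dtype=float)
        p = p[p > 0] / p.sum()
        if np.isclose(q, 1.0):
            return -np.sum(p * np.log(p))  # Shannon limit
        return np.log(np.sum(p ** q)) / (1.0 - q)

    abundances = [500, 200, 100, 50, 10, 5]  # toy species abundances
    for q in (0.0, 0.5, 1.0, 2.0):           # q = 0 gives log(species richness)
        print(q, renyi_entropy(abundances, q))
    ```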

  18. Green polymer chemistry: The role of Candida antarctica lipase B in polymer functionalization

    NASA Astrophysics Data System (ADS)

    Castano Gil, Yenni Marcela

    The synthesis of functional polymers with well-defined structure, end-group fidelity and physico-chemical properties useful for biomedical applications has proven challenging. Chemo-enzymatic methods are an alternative strategy to increase the diversity of functional groups in polymeric materials. Specifically, enzyme-catalyzed polymer functionalization carried out under solventless conditions is a great advancement in the design of green processes for biomedical applications, where the toxicity of solvents and catalyst residues needs to be considered. Enzymes offer several distinct advantages, including high efficiency, catalyst recyclability, and mild reaction conditions. This research aimed to precisely functionalize polymers using two methods: enzyme-catalyzed functionalization via polymerization and chemo-enzymatic functionalization of pre-made polymers for drug delivery. In the first method, well-defined poly(caprolactone)s were generated using alkyne-based initiating systems catalyzed by CALB. Propargyl alcohol and 4-dibenzocyclooctynol (DIBO) were shown to efficiently initiate the ring-opening polymerization of epsilon-caprolactone under metal-free conditions and yielded polymers with Mn of about 4 to 24 kDa and relatively narrow molecular mass distribution. In the second methodology, we present quantitative enzyme-catalyzed transesterification of vinyl esters and ethyl esters with poly(ethylene glycol)s (PEGs) that will serve as building blocks for dendrimer synthesis, followed by introducing a new process for the exclusive gamma-conjugation of folic acid. Specifically, fluorescein-acrylate was enzymatically conjugated with PEG. Additionally, halo-ester functionalized PEGs were successfully prepared by the transesterification of alkyl halo-esters with PEGs. 1H and 13C NMR spectroscopy, SEC and MALDI-ToF mass spectrometry confirmed the structure and purity of the products.

  19. Femtosecond movies of water near interfaces at sub-Angstrom resolution

    NASA Astrophysics Data System (ADS)

    Coridan, Robert; Hwee Lai, Ghee; Schmidt, Nathan; Abbamonte, Peter; Wong, Gerard C. L.

    2010-03-01

    The behavior of liquid water near interfaces with nanoscopic variations in chemistry influences a broad range of phenomena in biology. Using inelastic x-ray scattering (IXS) data from 3rd-generation synchrotron x-ray sources, we reconstruct the Green's function of liquid water, which describes the Å-scale spatial and femtosecond-scale temporal evolution of density fluctuations. We extend this response function formalism to reconstruct the evolution of hydration structures near dynamic surfaces with different charge distributions, in order to define more precisely the molecular signature of hydrophilicity and hydrophobicity. Moreover, we investigate modifications to surface hydration structures and dynamics as the sizes of hydrophilic and hydrophobic patches are varied.

  20. On the Impact of Local Taxes in a Set Cover Game

    NASA Astrophysics Data System (ADS)

    Escoffier, Bruno; Gourvès, Laurent; Monnot, Jérôme

    Given a collection C of weighted subsets of a ground set E, the set cover problem is to find a minimum-weight subcollection of C which covers all elements of E. We study a strategic game defined upon this classical optimization problem. Every element of E is a player which chooses one set of C in which it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function having the following features: it can be implemented in a distributed manner, the existence of an equilibrium is guaranteed, and the social cost of these equilibria is minimized.
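
    As an illustration of the game dynamics (not the tax function designed in the paper), the sketch below uses a proportional "fair share" tax, under which each player pays the weight of its chosen set divided by the number of players using it, and runs best-response dynamics on a three-element toy instance until an equilibrium is reached:

    ```python
    # Toy instance: each set maps to (elements it covers, weight).
    sets = {"S1": ({"a", "b"}, 4.0), "S2": ({"b", "c"}, 3.0), "S3": ({"a", "c"}, 5.0)}

    def cost(player, choice, assignment):
        """Fair-share tax: weight of the chosen set divided by its users,
        counting the player itself if it were to pick `choice`."""
        _, weight = sets[choice]
        sharers = sum(1 for p, s in assignment.items() if s == choice)
        if assignment.get(player) != choice:
            sharers += 1
        return weight / sharers

    assignment = {"a": "S1", "b": "S1", "c": "S2"}
    changed = True
    while changed:  # best-response dynamics; converges (potential game)
        changed = False
        for player in assignment:
            options = [s for s, (members, _) in sets.items() if player in members]
            best = min(options, key=lambda s: cost(player, s, assignment))
            if cost(player, best, assignment) < cost(player, assignment[player], assignment):
                assignment[player] = best
                changed = True
    print(assignment)  # an equilibrium assignment of players to sets
    ```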

  1. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantry, Sonny; Petriello, Frank

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h >> p_T >> Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.

  2. Delineating ecological regions in marine systems: Integrating physical structure and community composition to inform spatial management in the eastern Bering Sea

    NASA Astrophysics Data System (ADS)

    Baker, Matthew R.; Hollowed, Anne B.

    2014-11-01

    Characterizing spatial structure and delineating meaningful spatial boundaries have useful applications to understanding regional dynamics in marine systems, and are integral to ecosystem approaches to fisheries management. Physical structure and drivers combine with biological responses and interactions to organize marine systems in unique ways at multiple scales. We apply multivariate statistical methods to define spatially coherent ecological units or ecoregions in the eastern Bering Sea. We also illustrate a practical approach to integrate data on species distribution, habitat structure and physical forcing mechanisms to distinguish areas with distinct biogeography as one means to define management units in large marine ecosystems. We use random forests to quantify the relative importance of habitat and environmental variables to the distribution of individual species, and to quantify shifts in multispecies assemblages or community composition along environmental gradients. Threshold shifts in community composition are used to identify regions with distinct physical and biological attributes, and to evaluate the relative importance of predictor variables to determining regional boundaries. Depth, bottom temperature and frontal boundaries were dominant factors delineating distinct biological communities in this system, with a latitudinal divide at approximately 60°N. Our results indicate that distinct climatic periods will shift habitat gradients and that dynamic physical variables such as temperature and stratification are important to understanding temporal stability of ecoregion boundaries. We note distinct distribution patterns among functional guilds and also evidence for resource partitioning among individual species within each guild. By integrating physical and biological data to determine spatial patterns in community composition, we partition ecosystems along ecologically significant gradients. This may provide a basis for defining spatial management units or serve as a baseline index for analyses of structural shifts in the physical environment, species abundance and distribution, and community dynamics over time.
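
    A minimal sketch of the random forest step, using scikit-learn on synthetic data with invented predictor names (depth, bottom temperature, latitude), shows how variable importances quantify the contribution of each environmental driver to a species' distribution:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([
        rng.uniform(20, 200, n),   # depth (m)
        rng.uniform(-1, 6, n),     # bottom temperature (deg C)
        rng.uniform(56, 62, n),    # latitude (deg N)
    ])
    # Synthetic presence/absence driven mostly by depth and temperature.
    presence = ((X[:, 0] < 120) & (X[:, 1] > 1)).astype(int)

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, presence)
    for name, imp in zip(["depth", "bottom_temp", "latitude"], rf.feature_importances_):
        print(f"{name}: {imp:.2f}")  # latitude should rank near zero here
    ```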

  3. A Perikinetochoric Ring Defined by MCAK and Aurora-B as a Novel Centromere Domain

    PubMed Central

    Parra, María Teresa; Gómez, Rocío; Viera, Alberto; Page, Jesús; Calvente, Adela; Wordeman, Linda; Rufas, Julio S; Suja, José A

    2006-01-01

    Mitotic Centromere-Associated Kinesin (MCAK) is a member of the kinesin-13 subfamily of kinesin-related proteins. In mitosis, this microtubule-depolymerising kinesin seems to be implicated in chromosome segregation and in the correction of improper kinetochore-microtubule interactions, and its activity is regulated by the Aurora-B kinase. However, there are no published data on its behaviour and function during mammalian meiosis. We have analysed by immunofluorescence in squashed mouse spermatocytes, the distribution and possible function of MCAK, together with Aurora-B, during both meiotic divisions. Our results demonstrate that MCAK and Aurora-B colocalise at the inner domain of metaphase I centromeres. Thus, MCAK shows a “cone”-like three-dimensional distribution beneath and surrounding the closely associated sister kinetochores. During the second meiotic division, MCAK and Aurora-B also colocalise at the inner centromere domain as a band that joins sister kinetochores, but only during prometaphase II in unattached chromosomes. During chromosome congression to the metaphase II plate, MCAK relocalises and appears as a ring below each sister kinetochore. Aurora-B also relocalises to appear as a ring surrounding and beneath kinetochores but during late metaphase II. Our results demonstrate that the redistribution of MCAK at prometaphase II/metaphase II centromeres depends on tension across the centromere and/or on the interaction of microtubules with kinetochores. We propose that the perikinetochoric rings of MCAK and Aurora-B define a novel transient centromere domain at least in mouse chromosomes during meiosis. We discuss the possible functions of MCAK at the inner centromere domain and at the perikinetochoric ring during both meiotic divisions. PMID:16741559

  4. The function of neurocognitive networks. Comment on “Understanding brain networks and brain organization” by Pessoa

    NASA Astrophysics Data System (ADS)

    Bressler, Steven L.

    2014-09-01

    Pessoa [5] has performed a valuable service by reviewing the extant literature on brain networks and making a number of interesting proposals about their cognitive function. The term function is at the core of understanding the brain networks of cognition, or neurocognitive networks (NCNs) [1]. The great Russian neuropsychologist, Luria [4], defined brain function as the common task executed by a distributed brain network of complex dynamic structures united by the demands of cognition. Casting Luria in a modern light, we can say that function emerges from the interactions of brain regions in NCNs as they dynamically self-organize according to cognitive demands. Pessoa rightly details the mapping between brain function and structure, emphasizing both its pluripotency (one structure having multiple functions) and degeneracy (many structures having the same function). However, he fails to consider the potential importance of a one-to-one mapping between NCNs and function. If NCNs are uniquely composed of specific collections of brain areas, then each NCN has a unique function determined by that composition.

  5. Avalanche Analysis from Multielectrode Ensemble Recordings in Cat, Monkey, and Human Cerebral Cortex during Wakefulness and Sleep

    PubMed Central

    Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain

    2012-01-01

    Self-organized critical states are found in many natural systems, from earthquakes to forest fires; they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor cortex and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs) and in particular LFP negative peaks (nLFPs) among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cats, monkey, and human, as well as for different brain states of wakefulness and sleep. We also tested other alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053
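
    A simplified stand-in for the statistical comparison described above: fit a continuous power law and a shifted exponential to avalanche sizes by maximum likelihood and compare log-likelihoods (here on synthetic exponential data, where the exponential should and does win; x_min is fixed for brevity):

    ```python
    import numpy as np

    def loglik_powerlaw(x, xmin):
        x = x[x >= xmin]
        alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))  # MLE exponent
        ll = len(x) * np.log((alpha - 1) / xmin) - alpha * np.sum(np.log(x / xmin))
        return alpha, ll

    def loglik_exponential(x, xmin):
        x = x[x >= xmin]
        lam = 1.0 / np.mean(x - xmin)                    # MLE rate
        ll = len(x) * np.log(lam) - lam * np.sum(x - xmin)
        return lam, ll

    rng = np.random.default_rng(0)
    sizes = rng.exponential(5.0, 10_000) + 1.0           # exponential-like "avalanches"
    alpha, ll_pl = loglik_powerlaw(sizes, xmin=1.0)
    lam, ll_exp = loglik_exponential(sizes, xmin=1.0)
    print("power law: alpha", round(alpha, 2), "logL", round(ll_pl))
    print("exponential: lambda", round(lam, 2), "logL", round(ll_exp))
    ```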

  6. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS) (Differential Evolution Adaptive Metropolis), combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions, L1-L4 (Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM)), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, the serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary for different likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, and L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides the posterior distribution of model parameters credibly and can therefore be employed for further applications.
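
    To make the informal/formal distinction concrete, here are minimal implementations of one likelihood from each category on a toy hydrograph: the Nash-Sutcliffe efficiency (informal, L1-type) and an i.i.d. Gaussian log-likelihood (formal, L5-type, the standard least-squares connection). The exact functional forms used in the study may differ.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NS = 1 - SSE / variance of observations (1 is a perfect fit)."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def gaussian_loglik(obs, sim, sigma):
        """Formal likelihood: i.i.d. Gaussian residuals with known sigma."""
        res = np.asarray(obs) - np.asarray(sim)
        n = len(res)
        return -0.5 * n * np.log(2 * np.pi * sigma ** 2) - np.sum(res ** 2) / (2 * sigma ** 2)

    obs = np.array([10.0, 35.0, 80.0, 42.0, 15.0])  # toy hydrograph (m^3/s)
    sim = np.array([12.0, 30.0, 75.0, 45.0, 18.0])
    print("NS:", nash_sutcliffe(obs, sim))
    print("logL:", gaussian_loglik(obs, sim, sigma=4.0))
    ```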

  7. Adaptive forest management for drinking water protection under climate change

    NASA Astrophysics Data System (ADS)

    Koeck, R.; Hochbichler, E.

    2012-04-01

    Drinking water resources drawn from forested catchment areas are prominent in providing water supply on our planet. Although source waters stemming from forested watersheds generally have fewer water quality problems than those stemming from agriculturally used watersheds, it has to be guaranteed that the forest stands meet high standards regarding their water protection functionality. To fulfil these standards, forest management concepts have to be applied that are adaptive to the specific forest site conditions and to climate change scenarios. In the past century, forest management in the alpine area of Austria was mainly based on the cultivation of Norway spruce, thereby neglecting specific forest site conditions, which in many cases resulted in highly vulnerable mono-species forest stands. The GIS-based forest hydrotope model (FoHyM) provides a framework for forest management that defines the most crucial parameters in a spatially explicit form. FoHyM stratifies the spacious drinking water protection catchments into forest hydrotopes, which are operational units for forest management. The primary information layer of FoHyM is the potential natural forest community, which adequately reflects the specific forest site conditions regarding geology, soil types, elevation above sea level, exposition (aspect), and inclination, and hence defines the specific forest hydrotopes. For each forest hydrotope, the adequate tree species composition and forest stand structure for drinking water protection functionality were deduced, based on the plant-sociological information provided by FoHyM. The most important overall purpose of the related elaboration of adaptive forest management concepts and measures was the improvement of forest stand stability, which can be seen as the crucial parameter for drinking water protection. Only stable forest stands can protect the fragile soil and humus layers and hence prevent erosion processes that could endanger the water resources. Forest stands formed by a tree species set that conforms to the potential natural forest community are more stable than the currently widespread mono-species Norway spruce plantations, especially in times of climate change, when, for example, bark beetle infestations threaten spruce with increased intensity. FoHyM also provides the relevant ecological boundary conditions for any estimation of climate change adaptations. The adaptation of the tree species distribution within each forest hydrotope to climate change conditions was accomplished by integrating climate change scenarios and estimating the eco-physiological characteristics of the related tree species. Hence it was possible to define, for each forest hydrotope, the tree species distribution related to a specific climate change scenario. The silvicultural concepts and measures to accomplish the defined tree species distribution and forest stand structure for each forest hydrotope were defined and elaborated by taking into account the specific requirements of drinking water protection areas, which comprised, for example, the prohibition of clear-cutting and the application of continuous-cover forest management concepts. The overall purpose of these adaptive silvicultural concepts and techniques based on the application of FoHyM was the improvement of the water protection functionality of forest stands within drinking water protection zones.

  8. Zinc in Cellular Regulation: The Nature and Significance of "Zinc Signals".

    PubMed

    Maret, Wolfgang

    2017-10-31

    In the last decade, we witnessed discoveries that established Zn2+ as a second major signalling metal ion in the transmission of information within cells and in communication between cells. Together with Ca2+ and Mg2+, Zn2+ covers biological regulation with redox-inert metal ions over many orders of magnitude in concentrations. The regulatory functions of zinc ions, together with their functions as a cofactor in about three thousand zinc metalloproteins, impact virtually all aspects of cell biology. This article attempts to define the regulatory functions of zinc ions, and focuses on the nature of zinc signals and zinc signalling in pathways where zinc ions are either extracellular stimuli or intracellular messengers. These pathways interact with Ca2+, redox, and phosphorylation signalling. The regulatory functions of zinc require a complex system of precise homeostatic control for transients, subcellular distribution and traffic, organellar homeostasis, and vesicular storage and exocytosis of zinc ions.

  9. Radar sea reflection for low-e targets

    NASA Astrophysics Data System (ADS)

    Chow, Winston C.; Groves, Gordon W.

    1998-09-01

    Modeling of radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The specular deviation angle as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived. The distribution of the specular deviation angle as a function of position on the mean sea surface is described.

  10. Numerical calculation of the neoclassical electron distribution function in an axisymmetric torus

    NASA Astrophysics Data System (ADS)

    Lyons, B. C.; Jardin, S. C.; Ramos, J. J.

    2011-10-01

    We solve for a stationary, axisymmetric electron distribution function (fe) in a torus using a drift-kinetic equation (DKE) with the complete Landau collision operator. All terms are kept to the gyroradius and collisionality orders relevant to high-temperature tokamaks (i.e., the neoclassical banana regime for electrons). A solubility condition on the DKE determines the non-Maxwellian pieces of fe (called fNMe) to all relevant orders. We work in a 4D phase space (ψ, θ, v, λ), where ψ defines a flux surface, θ is the poloidal angle, v is the total velocity, and λ is the pitch angle parameter. We expand fNMe in finite elements in both v and λ. The Rosenbluth potentials, Φ and Ψ, which define the collision operator, are expanded in Legendre series in cos χ, where χ is the pitch angle, Fourier series in cos θ, and finite elements in v. At each ψ, we solve a block tridiagonal system for fNMe, Φ, and Ψ simultaneously, resulting in a neoclassical fe for the entire torus. Our goal is to demonstrate that such a formulation can be accurately and efficiently solved numerically. Results will be compared to other codes (e.g., NCLASS, NEO) and could be used as a kinetic closure for an MHD code (e.g., M3D-C1). Supported by the DOE SCGF and DOE Contract # DE-AC02-09CH11466. Based on analytic work by Ramos, PoP 17, 082502 (2010).

  11. Improving Department of Defense Global Distribution Performance Through Network Analysis

    DTIC Science & Technology

    2016-06-01

    network performance increase. Subject terms: supply chain metrics, distribution networks, requisition shipping time, strategic distribution database. "...peace and war" (p. 4). USTRANSCOM's Metrics and Analysis Branch defines, develops, tracks, and maintains outcomes-based supply chain metrics to... (2014a, p. 8). The Joint Staff defines a TDD (time-definite delivery) standard as the maximum number of days the supply chain can take to deliver requisitioned materiel.

  12. Advances in Light-Front QCD: Supersymmetric Properties of Hadron Physics from Light-Front Holography and Superconformal Algebra

    NASA Astrophysics Data System (ADS)

    Brodsky, Stanley J.

    2017-05-01

    A remarkable feature of QCD is that the mass scale κ which controls color confinement and light-quark hadron mass scales does not appear explicitly in the QCD Lagrangian. However, de Alfaro, Fubini, and Furlan have shown that a mass scale can appear in the equations of motion without affecting the conformal invariance of the action if one adds a term to the Hamiltonian proportional to the dilatation operator or the special conformal operator. If one applies the same procedure to the light-front Hamiltonian, it leads uniquely to a confinement potential κ^4 ζ^2 for mesons, where ζ^2 is the LF radial variable conjugate to the q q̄ invariant mass. The same result, including spin terms, is obtained using light-front holography (the duality between the front form and AdS_5, the space of isometries of the conformal group) if one modifies the action of AdS_5 by the dilaton e^{κ^2 z^2} in the fifth dimension z. When one generalizes this procedure using superconformal algebra, the resulting light-front eigensolutions predict a unified Regge spectroscopy of mesons, baryons, and tetraquarks, including remarkable supersymmetric relations between the masses of mesons and baryons of the same parity. One also predicts observables such as hadron structure functions, transverse momentum distributions, and the distribution amplitudes defined from the hadronic light-front wavefunctions. The mass scale κ underlying confinement and hadron masses can be connected to the parameter Λ_{MS-bar} in the QCD running coupling by matching the nonperturbative dynamics to the perturbative QCD regime. The result is an effective coupling α_s(Q^2) defined at all momenta. The matching of the high and low momentum transfer regimes determines a scale Q_0 which sets the interface between perturbative and nonperturbative hadron dynamics. The use of Q_0 to resolve the factorization scale uncertainty for structure functions and distribution amplitudes, in combination with the principle of maximal conformality for setting the renormalization scales, can greatly improve the precision of perturbative QCD predictions for collider phenomenology. The absence of vacuum excitations of the causal, frame-independent front-form vacuum has important consequences for the cosmological constant. I also discuss evidence that the antishadowing of nuclear structure functions is non-universal, i.e., flavor dependent, and why shadowing and antishadowing phenomena may be incompatible with sum rules for nuclear parton distribution functions.

  13. Distribution, Community Composition, and Potential Metabolic Activity of Bacterioplankton in an Urbanized Mediterranean Sea Coastal Zone.

    PubMed

    Richa, Kumari; Balestra, Cecilia; Piredda, Roberta; Benes, Vladimir; Borra, Marco; Passarelli, Augusto; Margiotta, Francesca; Saggiomo, Maria; Biffali, Elio; Sanges, Remo; Scanlan, David J; Casotti, Raffaella

    2017-09-01

    Bacterioplankton are fundamental components of marine ecosystems and influence the entire biosphere by contributing to the global biogeochemical cycles of key elements. Yet, there is a significant gap in knowledge about their diversity and specific activities, as well as environmental factors that shape their community composition and function. Here, the distribution and diversity of surface bacterioplankton along the coastline of the Gulf of Naples (GON; Italy) were investigated using flow cytometry coupled with high-throughput sequencing of the 16S rRNA gene. Heterotrophic bacteria numerically dominated the bacterioplankton and comprised mainly Alphaproteobacteria, Gammaproteobacteria, and Bacteroidetes. Distinct communities occupied river-influenced, coastal, and offshore sites, as indicated by Bray-Curtis dissimilarity, distance metric (UniFrac), linear discriminant analysis effect size (LEfSe), and multivariate analyses. The heterogeneity in diversity and community composition was mainly due to salinity and changes in environmental conditions across sites, as defined by nutrient and chlorophyll a concentrations. Bacterioplankton communities were composed of a few dominant taxa and a large proportion (92%) of rare taxa (here defined as operational taxonomic units [OTUs] accounting for <0.1% of the total sequence abundance), the majority of which were unique to each site. The relationship between 16S rRNA and the 16S rRNA gene, i.e., between potential metabolic activity and abundance, was positive for the whole community. However, analysis of individual OTUs revealed high rRNA-to-rRNA gene ratios for most (71.6% ± 16.7%) of the rare taxa, suggesting that these low-abundance organisms were potentially active and hence might be playing an important role in ecosystem diversity and functioning in the GON. IMPORTANCE The study of bacterioplankton in coastal zones is of critical importance, considering that these areas are highly productive and anthropogenically impacted. Their richness and evenness, as well as their potential activity, are very important to assess ecosystem health and functioning. Here, we investigated bacterial distribution, community composition, and potential metabolic activity in the GON, which is an ideal test site due to its heterogeneous environment characterized by a complex hydrodynamics and terrestrial inputs of varied quantities and quality. Our study demonstrates that bacterioplankton communities in this region are highly diverse and strongly regulated by a combination of different environmental factors leading to their heterogeneous distribution, with the rare taxa contributing to a major proportion of diversity and shifts in community composition and potentially holding a key role in ecosystem functioning. Copyright © 2017 American Society for Microbiology.

  14. Distribution, Community Composition, and Potential Metabolic Activity of Bacterioplankton in an Urbanized Mediterranean Sea Coastal Zone

    PubMed Central

    Richa, Kumari; Balestra, Cecilia; Piredda, Roberta; Benes, Vladimir; Borra, Marco; Passarelli, Augusto; Margiotta, Francesca; Saggiomo, Maria; Biffali, Elio; Sanges, Remo; Scanlan, David J.

    2017-01-01

    ABSTRACT Bacterioplankton are fundamental components of marine ecosystems and influence the entire biosphere by contributing to the global biogeochemical cycles of key elements. Yet, there is a significant gap in knowledge about their diversity and specific activities, as well as environmental factors that shape their community composition and function. Here, the distribution and diversity of surface bacterioplankton along the coastline of the Gulf of Naples (GON; Italy) were investigated using flow cytometry coupled with high-throughput sequencing of the 16S rRNA gene. Heterotrophic bacteria numerically dominated the bacterioplankton and comprised mainly Alphaproteobacteria, Gammaproteobacteria, and Bacteroidetes. Distinct communities occupied river-influenced, coastal, and offshore sites, as indicated by Bray-Curtis dissimilarity, distance metric (UniFrac), linear discriminant analysis effect size (LEfSe), and multivariate analyses. The heterogeneity in diversity and community composition was mainly due to salinity and changes in environmental conditions across sites, as defined by nutrient and chlorophyll a concentrations. Bacterioplankton communities were composed of a few dominant taxa and a large proportion (92%) of rare taxa (here defined as operational taxonomic units [OTUs] accounting for <0.1% of the total sequence abundance), the majority of which were unique to each site. The relationship between 16S rRNA and the 16S rRNA gene, i.e., between potential metabolic activity and abundance, was positive for the whole community. However, analysis of individual OTUs revealed high rRNA-to-rRNA gene ratios for most (71.6% ± 16.7%) of the rare taxa, suggesting that these low-abundance organisms were potentially active and hence might be playing an important role in ecosystem diversity and functioning in the GON. IMPORTANCE The study of bacterioplankton in coastal zones is of critical importance, considering that these areas are highly productive and anthropogenically impacted. Their richness and evenness, as well as their potential activity, are very important to assess ecosystem health and functioning. Here, we investigated bacterial distribution, community composition, and potential metabolic activity in the GON, which is an ideal test site due to its heterogeneous environment characterized by a complex hydrodynamics and terrestrial inputs of varied quantities and quality. Our study demonstrates that bacterioplankton communities in this region are highly diverse and strongly regulated by a combination of different environmental factors leading to their heterogeneous distribution, with the rare taxa contributing to a major proportion of diversity and shifts in community composition and potentially holding a key role in ecosystem functioning. PMID:28667110

  15. Advances in Light-Front QCD: Supersymmetric Properties of Hadron Physics from Light-Front Holography and Superconformal Algebra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.

    A remarkable feature of QCD is that the mass scale κ which controls color confinement and light-quark hadron mass scales does not appear explicitly in the QCD Lagrangian. However, de Alfaro, Fubini, and Furlan have shown that a mass scale can appear in the equations of motion without affecting the conformal invariance of the action if one adds a term to the Hamiltonian proportional to the dilatation operator or the special conformal operator. If one applies the same procedure to the light-front Hamiltonian, it leads uniquely to a confinement potential κ^4 ζ^2 for mesons, where ζ^2 is the LF radial variable conjugate to the q q̄ invariant mass. The same result, including spin terms, is obtained using light-front holography (the duality between the front form and AdS_5, the space of isometries of the conformal group) if one modifies the action of AdS_5 by the dilaton e^{κ^2 z^2} in the fifth dimension z. When one generalizes this procedure using superconformal algebra, the resulting light-front eigensolutions predict a unified Regge spectroscopy of mesons, baryons, and tetraquarks, including remarkable supersymmetric relations between the masses of mesons and baryons of the same parity. One also predicts observables such as hadron structure functions, transverse momentum distributions, and the distribution amplitudes defined from the hadronic light-front wavefunctions. The mass scale κ underlying confinement and hadron masses can be connected to the parameter Λ_{MS-bar} in the QCD running coupling by matching the nonperturbative dynamics to the perturbative QCD regime. The result is an effective coupling α_s(Q^2) defined at all momenta. The matching of the high and low momentum transfer regimes determines a scale Q_0 which sets the interface between perturbative and nonperturbative hadron dynamics. The use of Q_0 to resolve the factorization scale uncertainty for structure functions and distribution amplitudes, in combination with the principle of maximal conformality for setting the renormalization scales, can greatly improve the precision of perturbative QCD predictions for collider phenomenology. The absence of vacuum excitations of the causal, frame-independent front-form vacuum has important consequences for the cosmological constant. In conclusion, I also discuss evidence that the antishadowing of nuclear structure functions is non-universal, i.e., flavor dependent, and why shadowing and antishadowing phenomena may be incompatible with sum rules for nuclear parton distribution functions.

  16. Advances in Light-Front QCD: Supersymmetric Properties of Hadron Physics from Light-Front Holography and Superconformal Algebra

    DOE PAGES

    Brodsky, Stanley J.

    2017-04-19

    A remarkable feature of QCD is that the mass scale κ which controls color confinement and light-quark hadron mass scales does not appear explicitly in the QCD Lagrangian. However, de Alfaro, Fubini, and Furlan have shown that a mass scale can appear in the equations of motion without affecting the conformal invariance of the action if one adds a term to the Hamiltonian proportional to the dilatation operator or the special conformal operator. If one applies the same procedure to the light-front Hamiltonian, it leads uniquely to a confinement potential κ^4 ζ^2 for mesons, where ζ^2 is the LF radial variable conjugate to the q q̄ invariant mass. The same result, including spin terms, is obtained using light-front holography (the duality between the front form and AdS_5, the space of isometries of the conformal group) if one modifies the action of AdS_5 by the dilaton e^{κ^2 z^2} in the fifth dimension z. When one generalizes this procedure using superconformal algebra, the resulting light-front eigensolutions predict a unified Regge spectroscopy of mesons, baryons, and tetraquarks, including remarkable supersymmetric relations between the masses of mesons and baryons of the same parity. One also predicts observables such as hadron structure functions, transverse momentum distributions, and the distribution amplitudes defined from the hadronic light-front wavefunctions. The mass scale κ underlying confinement and hadron masses can be connected to the parameter Λ_{MS-bar} in the QCD running coupling by matching the nonperturbative dynamics to the perturbative QCD regime. The result is an effective coupling α_s(Q^2) defined at all momenta. The matching of the high and low momentum transfer regimes determines a scale Q_0 which sets the interface between perturbative and nonperturbative hadron dynamics. The use of Q_0 to resolve the factorization scale uncertainty for structure functions and distribution amplitudes, in combination with the principle of maximal conformality for setting the renormalization scales, can greatly improve the precision of perturbative QCD predictions for collider phenomenology. The absence of vacuum excitations of the causal, frame-independent front-form vacuum has important consequences for the cosmological constant. In conclusion, I also discuss evidence that the antishadowing of nuclear structure functions is non-universal, i.e., flavor dependent, and why shadowing and antishadowing phenomena may be incompatible with sum rules for nuclear parton distribution functions.

  17. Tensor distribution function

    NASA Astrophysics Data System (ADS)

    Leow, Alex D.; Zhu, Siwei

    2008-03-01

    Diffusion-weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive-definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the orientation distribution function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single-fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive-definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
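
    The final step described above has a convenient closed form: for a single Gaussian tensor D, radially integrating the displacement profile gives ODF(u) proportional to |D|^{-1/2} (u^T D^{-1} u)^{-3/2}, so the ODF of a TDF that is a discrete mixture is just the weighted sum. A sketch for a two-tensor crossing (illustrative eigenvalues, unnormalized ODF):

    ```python
    import numpy as np

    def odf(u, tensors, weights):
        """Unnormalized ODF of a discrete Gaussian-tensor mixture along u."""
        u = u / np.linalg.norm(u)
        val = 0.0
        for D, w in zip(tensors, weights):
            Dinv = np.linalg.inv(D)
            val += w * (u @ Dinv @ u) ** -1.5 / np.sqrt(np.linalg.det(D))
        return val

    # Two crossing fiber populations modeled as prolate tensors along x and y.
    D1 = np.diag([1.7e-3, 0.2e-3, 0.2e-3])
    D2 = np.diag([0.2e-3, 1.7e-3, 0.2e-3])
    for direction in ([1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]):
        print(direction, odf(np.array(direction), [D1, D2], [0.5, 0.5]))
    ```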

  18. Analysis of extreme precipitation characteristics in low mountain areas based on three-dimensional copulas—taking Kuandian County as an example

    NASA Astrophysics Data System (ADS)

    Wang, Cailin; Ren, Xuehui; Li, Ying

    2017-04-01

    We defined the threshold of extreme precipitation using detrended fluctuation analysis based on daily precipitation during 1955-2013 in Kuandian County, Liaoning Province. Three-dimensional copulas were introduced to analyze the characteristics of four extreme precipitation factors: the annual number of extreme precipitation days, the extreme precipitation amount, the annual average extreme precipitation intensity, and the contribution rate of extreme precipitation. The results show that (1) the threshold is 95.0 mm, extreme precipitation events generally occur 1-2 times a year, the average extreme precipitation intensity is 100-150 mm, and the extreme precipitation amount is 100-270 mm, accounting for 10 to 37% of annual precipitation. (2) The generalized extreme value distribution, extreme value distribution, and generalized Pareto distribution are suitable for fitting the distribution function of each extreme precipitation factor. The Ali-Mikhail-Haq (AMH) copula function reflects the joint characteristics of the extreme precipitation factors. (3) The return periods of the three types show significant synchronicity, and the joint return period and co-occurrence return period have a long delay when the return period of the single factor is long; this reflects the inseparability of the extreme precipitation factors. The co-occurrence return period is longer than both the single-factor and joint return periods. (4) Single-factor fitting only reflects information on individual extreme precipitation factors and ignores the relationships between them. Three-dimensional copulas capture the joint information of the extreme precipitation factors and are closer to the actual situation. The copula function is potentially widely applicable to the multiple factors of extreme precipitation.
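
    For the copula step, a bivariate sketch (the paper works with the trivariate AMH copula; the 2-D formulas are shown here for brevity) illustrates how joint ("or") and co-occurrence ("and") return periods are obtained from marginal non-exceedance probabilities:

    ```python
    def amh_copula(u, v, theta):
        """Ali-Mikhail-Haq copula, theta in [-1, 1)."""
        return u * v / (1.0 - theta * (1.0 - u) * (1.0 - v))

    def return_periods(u, v, theta, mu=1.0):
        """mu = average inter-arrival time of events (years); u, v are marginal
        CDF values, e.g. from fitted GEV distributions."""
        c = amh_copula(u, v, theta)
        t_or = mu / (1.0 - c)            # at least one variable exceeds
        t_and = mu / (1.0 - u - v + c)   # both variables exceed
        return t_or, t_and

    t_or, t_and = return_periods(u=0.98, v=0.95, theta=0.6)
    print(f"joint 'or' return period: {t_or:.1f} yr, co-occurrence: {t_and:.1f} yr")
    ```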

  19. Vlasov simulations of electron acceleration by radio frequency heating near the upper hybrid layer

    NASA Astrophysics Data System (ADS)

    Najmi, A.; Eliasson, B.; Shao, X.; Milikh, G.; Sharma, A. S.; Papadopoulos, K.

    2017-10-01

    It is shown, using a combination of Vlasov and test-particle simulations, that the electron distribution function resulting from energization due to upper hybrid (UH) plasma turbulence depends critically on the closeness of the pump wave to the double resonance, defined as ω ≈ ωUH ≈ nωce, where n is an integer. For pump frequencies away from the double resonance, the electron distribution function is very close to Maxwellian, while as the pump frequency approaches the double resonance, it develops a high-energy tail. The simulations show turbulence involving coupling between lower hybrid (LH) and UH waves, followed by excitation of electron Bernstein (EB) modes. For the particular case of a pump with frequency between n = 3 and n = 4, the EB modes cover the range from the first to the fifth mode. The simulations show that when the injected wave frequency is between the 3rd and 4th harmonics of the electron cyclotron frequency, bulk electron heating occurs due to the interaction between the electrons and large-amplitude EB waves, primarily on the first EB branch, leading to an essentially thermal distribution. On the other hand, when the frequency is slightly above the 4th electron cyclotron harmonic, the resonant interaction is predominantly due to the UH branch and leads to a further acceleration of high-velocity electrons and a distribution function with a suprathermal tail of energetic electrons. The results are consistent with ionospheric experiments and relevant to the production of artificial ionospheric plasma layers.

  20. Length distributions of nanowires: Effects of surface diffusion versus nucleation delay

    NASA Astrophysics Data System (ADS)

    Dubrovskii, Vladimir G.

    2017-04-01

    It is often thought that ensembles of semiconductor nanowires are uniform in length due to the initial organization of the growth seeds, such as lithographically defined droplets or holes in the substrate. However, several recent works have demonstrated that most nanowire length distributions are broader than Poissonian. Herein, we consider theoretically the length distributions of non-interacting nanowires that grow by collecting material over the entire length of their sidewalls and with a delay in the nucleation of the very first nanowire monolayer. The obtained analytic length distribution is controlled by two parameters that describe the strength of surface diffusion and the nanowire nucleation rate. We show how the distribution changes from the symmetrical Polya shape without the nucleation delay to a much broader and asymmetrical one for longer delays. In the continuum limit (for tall enough nanowires), the length distribution is given by a power law times an incomplete gamma function. We discuss interesting scaling properties of this solution and give a recipe for analyzing and tailoring the experimental length histograms of nanowires, which should work for a wide range of material systems and growth conditions.

  1. Distribution network design under demand uncertainty using genetic algorithm and Monte Carlo simulation approach: a case study in pharmaceutical industry

    NASA Astrophysics Data System (ADS)

    Izadi, Arman; Kimiagari, Ali Mohammad

    2014-01-01

    Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and locations of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To account for the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, with a 14% reduction in total supply chain costs as the outcome. Moreover, it imposes the least cost variation caused by fluctuations in customer demand (such as epidemic disease outbreaks in some areas of the country) on the logistical system. It is noteworthy that this research was done in one of the largest pharmaceutical distribution firms in Iran.
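
    A minimal sketch of the scenario-based robustness measure described above: Monte Carlo demand scenarios are generated, the total cost of each candidate network configuration is evaluated per scenario, and the coefficient of variation of cost serves as the risk measure. The cost model, demand distribution, and configurations are hypothetical stand-ins for the paper's model and genetic algorithm search.

      import numpy as np

      rng = np.random.default_rng(42)

      def total_cost(open_dcs, demand, fixed=1000.0, unit=2.0):
          # Toy cost: fixed cost per open DC plus a linear serving cost (hypothetical).
          return fixed * len(open_dcs) + unit * demand.sum() / max(len(open_dcs), 1)

      # Monte Carlo demand scenarios for 5 customer zones (assumed lognormal).
      scenarios = rng.lognormal(mean=3.0, sigma=0.4, size=(500, 5))

      for config in ([0, 1], [0, 1, 2], [0, 1, 2, 3]):
          costs = np.array([total_cost(config, d) for d in scenarios])
          cv = costs.std() / costs.mean()  # coefficient of variation = risk measure
          print(f"DCs open: {config}  mean cost: {costs.mean():8.1f}  CV: {cv:.3f}")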

  2. Distribution network design under demand uncertainty using genetic algorithm and Monte Carlo simulation approach: a case study in pharmaceutical industry

    NASA Astrophysics Data System (ADS)

    Izadi, Arman; Kimiagari, Ali Mohammad

    2014-05-01

    Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and locations of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To account for the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, with a 14% reduction in total supply chain costs as the outcome. Moreover, it imposes the least cost variation caused by fluctuations in customer demand (such as epidemic disease outbreaks in some areas of the country) on the logistical system. It is noteworthy that this research was done in one of the largest pharmaceutical distribution firms in Iran.

  3. Ascl1 controls the number and distribution of astrocytes and oligodendrocytes in the gray matter and white matter of the spinal cord.

    PubMed

    Vue, Tou Yia; Kim, Euiseok J; Parras, Carlos M; Guillemot, Francois; Johnson, Jane E

    2014-10-01

    Glia constitute the majority of cells in the mammalian central nervous system and are crucial for neurological function. However, there is an incomplete understanding of the molecular control of glial cell development. We find that the transcription factor Ascl1 (Mash1), which is best known for its role in neurogenesis, also functions in both astrocyte and oligodendrocyte lineages arising in the mouse spinal cord at late embryonic stages. Clonal fate mapping in vivo reveals heterogeneity in Ascl1-expressing glial progenitors and shows that Ascl1 defines cells that are restricted to either gray matter (GM) or white matter (WM) as astrocytes or oligodendrocytes. Conditional deletion of Ascl1 post-neurogenesis shows that Ascl1 is required during oligodendrogenesis for generating the correct numbers of WM but not GM oligodendrocyte precursor cells, whereas during astrocytogenesis Ascl1 functions in balancing the number of dorsal GM protoplasmic astrocytes with dorsal WM fibrous astrocytes. Thus, in addition to its function in neurogenesis, Ascl1 marks glial progenitors and controls the number and distribution of astrocytes and oligodendrocytes in the GM and WM of the spinal cord. © 2014. Published by The Company of Biologists Ltd.

  4. C-parameter distribution at N3LL′ including power corrections

    DOE PAGES

    Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; ...

    2015-05-15

    We compute the e⁺e⁻ C-parameter distribution using soft-collinear effective theory, with resummation of the most singular partonic terms to next-to-next-to-next-to-leading-log-prime (N3LL′) accuracy. This includes the known fixed-order QCD results up to O(α_s³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far-tail regions. Additionally, we treat hadronization effects using a field-theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar scheme to a short-distance "Rgap" scheme to define the leading power-correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power-correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.

  5. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that the finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first, we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
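
    A minimal sketch of the continuous-time rescaling test that the paper adapts: ISIs are rescaled through the model's conditional intensity and compared against the unit exponential distribution with a KS test. The intensity below is an assumed inhomogeneous-Poisson example, not the paper's GLM, and the spikes are simulated so that the model is exactly correct.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Assumed conditional intensity (Hz) on a fine time grid (seconds).
      t = np.linspace(0.0, 100.0, 100_001)
      lam = 20.0 + 10.0 * np.sin(2 * np.pi * t / 10.0)

      # Simulate spikes by thinning a homogeneous Poisson process.
      lam_max = lam.max()
      cand = np.cumsum(rng.exponential(1.0 / lam_max, size=5000))
      cand = cand[cand < t[-1]]
      keep = rng.random(cand.size) < np.interp(cand, t, lam) / lam_max
      spikes = cand[keep]

      # Rescale each ISI: tau_i = integral of lambda over the interval.
      Lam = np.concatenate([[0.0], np.cumsum(lam[:-1] * np.diff(t))])
      taus = np.diff(np.interp(spikes, t, Lam))

      # Under a correct model, taus ~ Exponential(1); the KS test checks this.
      print(stats.kstest(taus, "expon"))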

  6. A Bayesian alternative for multi-objective ecohydrological model specification

    NASA Astrophysics Data System (ADS)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical, and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling, with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework, based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on a single objective (streamflow or LAI) and on multiple objectives (streamflow and LAI), with different emphases defined via the prior distributions of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters, based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
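
    A minimal sketch of the idea that priors on error-model parameters play the role of multi-objective weights: a random-walk Metropolis sampler over one model parameter plus two error standard deviations, one per objective (streamflow-like and LAI-like). The model, data, and flat priors are hypothetical stand-ins for HYMOD/BGM.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "observations" for two objectives from a known parameter k = 0.5.
      x = np.linspace(0, 10, 50)
      q_obs = np.exp(-0.5 * x) + rng.normal(0, 0.05, x.size)          # streamflow-like
      lai_obs = 1 - np.exp(-0.5 * x) + rng.normal(0, 0.10, x.size)    # LAI-like

      def log_post(k, s_q, s_lai):
          if not (0 < k < 2 and 0 < s_q < 1 and 0 < s_lai < 1):
              return -np.inf  # flat priors on bounded ranges
          r_q = q_obs - np.exp(-k * x)
          r_l = lai_obs - (1 - np.exp(-k * x))
          # Gaussian likelihoods; the error scales s_q, s_lai set the relative
          # emphasis of the two objectives, as ad hoc weights would.
          ll = -0.5 * np.sum(r_q**2) / s_q**2 - x.size * np.log(s_q)
          ll += -0.5 * np.sum(r_l**2) / s_lai**2 - x.size * np.log(s_lai)
          return ll

      theta = np.array([1.0, 0.2, 0.2])
      lp = log_post(*theta)
      samples = []
      for _ in range(20_000):  # random-walk Metropolis
          prop = theta + rng.normal(0, [0.05, 0.01, 0.01])
          lp_prop = log_post(*prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta[0])
      print("posterior mean k:", np.mean(samples[5000:]))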

  7. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  8. Radionuclide distribution dynamics in skeletons of beagles fed 90Sr: Correlation with injected 226Ra and 239Pu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parks, N.J.

    Data for the bone-by-bone redistribution of 90Sr in the beagle skeleton are reported for a period of 4000 d following a midgestation-to-540-d-exposure by ingestion. The partitioned clearance model (PCM) that was originally developed to describe bone-by-bone radionuclide redistribution of 226Ra after eight semimonthly injections at ages 435-535 d has been fitted to the 90Sr data. The parameter estimates for the PCM that describe the distribution and clearance of 226Ra after deposition on surfaces following injection and analogous parameter estimates for 90Sr after uniform deposition in the skeleton as a function of Ca mass are given. Fractional compact bone masses per bone group (m_i,COM) are also predicted by the model and compared to measured values; a high degree of correlation (r = 0.84) is found. Bone groups for which the agreement between the model and experimental values of m_i,COM was poor had tissue-to-calcium weight ratios about 1.5 times those for bones that agreed well. Metabolically defined surface in the PCM is the initial activity fraction per Ca fraction in a given skeletal component for intravenously injected alkaline earth (S_ae) radionuclides; comparisons are made to similarly defined surface (S_act) values from 239Pu injection studies. The patterns of S_ae and S_act distribution throughout the skeleton are similar.

  9. An inverse inviscid method for the design of quasi-three dimensional rotating turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Bonataki, E.; Chaviaropoulos, P.; Papailiou, K. D.

    1991-01-01

    A new inverse inviscid method suitable for the design of rotating blade sections lying on an arbitrary axisymmetric stream-surface with varying streamtube width is presented. The geometry of the axisymmetric stream-surface and the streamtube width variation with meridional distance, the number of blades, the inlet flow conditions, the rotational speed, and the suction and pressure side velocity distributions as functions of the normalized arc-length are given. The flow is considered irrotational in the absolute frame of reference and compressible. The output of the computation is the blade section that satisfies the above data. The method solves the flow equations on a (φ₁, ψ) potential-function/streamfunction plane for the velocity modulus W and the flow angle β; the blade section shape can then be obtained as part of the physical plane geometry by integrating the flow angle distribution along streamlines. The (φ₁, ψ) plane is defined so that the monotonic behavior of the potential function is guaranteed, even in cases with high peripheral velocities. The method is validated on a rotating turbine case and used to design new blades. To obtain a closed blade, a set of closure conditions was developed and applied.

  10. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties, such as the degree distribution and the presence (or absence) of a hierarchical structure, are not robust and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures, we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution. PMID:19738902
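
    A minimal sketch of the adaptive wiring rule described above, under an assumed parametrization: nodes carry noisy scalar internal states, and edges are progressively rewired toward pairs of nodes with similar states. The threshold, state dynamics, and sizes are hypothetical.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 60
      state = rng.random(n)  # internal state of each active brain region

      # Start from a random edge set.
      edges = set()
      while len(edges) < 2 * n:
          i, j = rng.integers(0, n, 2)
          if i != j:
              edges.add((min(i, j), max(i, j)))

      for _ in range(2000):
          state = np.clip(state + 0.01 * rng.standard_normal(n), 0, 1)
          i, j = rng.integers(0, n, 2)
          if i != j and abs(state[i] - state[j]) < 0.05:   # similar states attract
              old = sorted(edges)[rng.integers(len(edges))]  # drop a random edge
              edges.discard(old)
              edges.add((min(i, j), max(i, j)))

      deg = np.zeros(n, dtype=int)
      for i, j in edges:
          deg[i] += 1
          deg[j] += 1
      print("degree histogram:", np.bincount(deg))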

  11. Cutting Edge: c-Maf Is Required for Regulatory T Cells To Adopt RORγt+ and Follicular Phenotypes.

    PubMed

    Wheaton, Joshua D; Yeh, Chen-Hao; Ciofani, Maria

    2017-12-15

    Regulatory T cells (Tregs) adopt specialized phenotypes defined by coexpression of lineage-defining transcription factors, such as RORγt, Bcl-6, or PPARγ, alongside Foxp3. These Treg subsets have unique tissue distributions and diverse roles in maintaining organismal homeostasis. However, despite extensive functional characterization, the factors driving Treg specialization are largely unknown. In this article, we show that c-Maf is a critical transcription factor regulating this process in mice, essential for generation of both RORγt+ Tregs and T follicular regulatory cells, but not for adipose-resident Tregs. c-Maf appears to function primarily in Treg specialization, because IL-10 production, expression of other effector molecules, and general immune homeostasis are not c-Maf dependent. As in other T cells, c-Maf is induced in Tregs by IL-6 and TGF-β, suggesting that a combination of inflammatory and tolerogenic signals promote c-Maf expression. Therefore, c-Maf is a novel regulator of Treg specialization, which may integrate disparate signals to facilitate environmental adaptation. Copyright © 2017 by The American Association of Immunologists, Inc.

  12. The Vertebrate Brain, Evidence of Its Modular Organization and Operating System: Insights into the Brain's Basic Units of Structure, Function, and Operation and How They Influence Neuronal Signaling and Behavior

    PubMed Central

    Baslow, Morris H.

    2011-01-01

    The human brain is a complex organ made up of neurons and several other cell types, and whose role is processing information for use in eliciting behaviors. However, the composition of its repeating cellular units for both structure and function are unresolved. Based on recent descriptions of the brain's physiological “operating system”, a function of the tri-cellular metabolism of N-acetylaspartate (NAA) and N-acetylaspartylglutamate (NAAG) for supply of energy, and on the nature of “neuronal words and languages” for intercellular communication, insights into the brain's modular structural and functional units have been gained. In this article, it is proposed that the basic structural unit in brain is defined by its physiological operating system, and that it consists of a single neuron, and one or more astrocytes, oligodendrocytes, and vascular system endothelial cells. It is also proposed that the basic functional unit in the brain is defined by how neurons communicate, and consists of two neurons and their interconnecting dendritic–synaptic–dendritic field. Since a functional unit is composed of two neurons, it requires two structural units to form a functional unit. Thus, the brain can be envisioned as being made up of the three-dimensional stacking and intertwining of myriad structural units which results not only in its gross structure, but also in producing a uniform distribution of binary functional units. Since the physiological NAA–NAAG operating system for supply of energy is repeated in every structural unit, it is positioned to control global brain function. PMID:21720525

  13. Effect of Temperature on the Size Distribution, Shell Properties, and Stability of Definity®.

    PubMed

    Shekhar, Himanshu; Smith, Nathaniel J; Raymond, Jason L; Holland, Christy K

    2018-02-01

    Physical characterization of an ultrasound contrast agent (UCA) aids in its safe and effective use in diagnostic and therapeutic applications. The goal of this study was to investigate the impact of temperature on the size distribution, shell properties, and stability of Definity®, a U.S. Food and Drug Administration-approved UCA used for left ventricular opacification. A Coulter counter was modified to enable particle size measurements at physiologic temperatures. The broadband acoustic attenuation spectrum and size distribution of Definity® were measured at room temperature (25 °C) and physiologic temperature (37 °C) and were used to estimate the viscoelastic shell properties of the agent at both temperatures. Attenuation and size distribution were measured over time to assess the effect of temperature on the temporal stability of Definity®. The attenuation coefficient of Definity® at 37 °C was as much as 5 dB higher than that measured at 25 °C. However, the size distributions of Definity® at 25 °C and 37 °C were similar. The estimated shell stiffness and viscosity decreased from 1.76 ± 0.18 N/m and 0.21 × 10⁻⁶ ± 0.07 × 10⁻⁶ kg/s at 25 °C to 1.01 ± 0.07 N/m and 0.04 × 10⁻⁶ ± 0.04 × 10⁻⁶ kg/s at 37 °C, respectively. Size-dependent differences in dissolution rates were observed within the UCA population at both 25 °C and 37 °C. Additionally, cooling the diluted UCA suspension from 37 °C to 25 °C accelerated the dissolution rate. These results indicate that although temperature affects the shell properties of Definity® and can influence its stability, the size distribution of this agent is not affected by a temperature increase from 25 °C to 37 °C. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.

  14. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707

  15. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.

  16. The Angular Three-Point Correlation Function in the Quasi-linear Regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchalter, Ari; Kamionkowski, Marc; Jaffe, Andrew H.

    2000-02-10

    We calculate the normalized angular three-point correlation function (3PCF), q, as well as the normalized angular skewness, s_3, assuming the small-angle approximation, for a biased mass distribution in flat and open cold dark matter (CDM) models with Gaussian initial conditions. The leading-order perturbative results incorporate the explicit dependence on the cosmological parameters, the shape of the CDM transfer function, the linear evolution of the power spectrum, the form of the assumed redshift distribution function, and linear and nonlinear biasing, which may be evolving. Results are presented for different redshift distributions, including that appropriate for the APM Galaxy Survey, as well as for a survey with a mean redshift of z ≈ 1 (such as the VLA FIRST Survey). Qualitatively, many of the results found for s_3 and q are similar to those obtained in a related treatment of the spatial skewness and 3PCF, such as a leading-order correction to the standard result for s_3 in the case of nonlinear bias (as defined for unsmoothed density fields), and the sensitivity of the configuration dependence of q to both cosmological and biasing models. We show that since angular correlation functions (CFs) are sensitive to clustering over a range of redshifts, the various evolutionary dependences included in our predictions imply that measurements of q in a deep survey might better discriminate between models with different histories, such as evolving versus nonevolving bias, that can have similar spatial CFs at low redshift. Our calculations employ a derived equation, valid for open, closed, and flat models, to obtain the angular bispectrum from the spatial bispectrum in the small-angle approximation. © 2000 The American Astronomical Society.

  17. The average magnetic field draping and consistent plasma properties of the Venus magnetotail

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Spence, H. E.; Russell, C. T.; Saunders, M. A.

    1986-01-01

    The detailed average draping pattern of the magnetic field in the deep Venus magnetotail is examined. The variability of the data ordered by spatial location is studied, and the groundwork is laid for developing a coordinate system which measures locations with respect to the tail structures. The reconstruction of the tail in the presence of flapping using a new technique is shown, and the average variations in the field components are examined, including the average field vectors, cross-tail current density distribution, and J x B forces as functions of location across the tail. The average downtail velocity is derived as a function of distance, and a simple model based on the field variations is defined from which the average plasma acceleration is obtained as a function of distance, density, and temperature.

  18. Soybean canopy reflectance as a function of view and illumination geometry

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Vanderbilt, V. C.; Biehl, L. L.; Robinson, B. F.; Bauer, M. E.

    1981-01-01

    Reflectances were calculated from measurements at four wavelength bands through eight view azimuth and seven view zenith directions, for various solar zenith and azimuth angles over portions of three days, in an experimental characterization of a soybean field by means of its reflectances and physical and agronomic attributes. Results indicate that the distribution of reflectance from a soybean field is a function of the solar illumination and viewing geometry, wavelength, and row direction, as well as the state of canopy development. Shadows between rows were found to affect visible wavelength band reflectance to a greater extent than near-IR reflectance. A model describing reflectance variation as a function of projected solar and viewing angles is proposed, which approximates the visible wavelength band reflectance variations of a canopy with a well-defined row structure.

  19. Impact of Hybrid and Complex N-Glycans on Cell Surface Targeting of the Endogenous Chloride Cotransporter Slc12a2

    PubMed Central

    Singh, Richa; Pacheco-Andrade, Romario; Almiahuob, Mohamed Y. Mahmoud

    2015-01-01

    The Na+K+2Cl− cotransporter-1 (Slc12a2, NKCC1) is widely distributed and involved in cell volume/ion regulation. Functional NKCC1 locates in the plasma membrane of all cells studied, particularly in the basolateral membrane of most polarized cells. Although the mechanisms involved in plasma membrane sorting of NKCC1 are poorly understood, it is assumed that N-glycosylation is necessary. Here, we characterize expression, N-glycosylation, and distribution of NKCC1 in COS7 cells. We show that ~25% of NKCC1 is complex N-glycosylated whereas the rest of it corresponds to core/high-mannose and hybrid-type N-glycosylated forms. Further, ~10% of NKCC1 reaches the plasma membrane, mostly as core/high-mannose type, whereas ~90% of NKCC1 is distributed in defined intracellular compartments. In addition, inhibition of the first step of N-glycan biosynthesis with tunicamycin decreases total and plasma membrane located NKCC1 resulting in almost undetectable cotransport function. Moreover, inhibition of N-glycan maturation with swainsonine or kifunensine increased core/hybrid-type NKCC1 expression but eliminated plasma membrane complex N-glycosylated NKCC1 and transport function. Together, these results suggest that (i) NKCC1 is delivered to the plasma membrane of COS7 cells independently of its N-glycan nature, (ii) most of NKCC1 in the plasma membrane is core/hybrid-type N-glycosylated, and (iii) the minimal proportion of complex N-glycosylated NKCC1 is functionally active. PMID:26351455

  20. Interpretation of environmental tracers in groundwater systems with stagnant water zones.

    PubMed

    Maloszewski, Piotr; Stichler, Willibald; Zuber, Andrzej

    2004-03-01

    Lumped-parameter models are commonly applied for determining the age of water from time records of transient environmental tracers. The simplest models (e.g. piston flow or exponential) are also applicable for dating based on the decay or accumulation of tracers in groundwater systems. The models are based on the assumption that the transit time distribution function (exit age distribution function) of the tracer particles in the investigated system adequately represents the distribution of flow lines and is described by a simple function. A chosen or fitted function (called the response function) describes the transit time distribution of a tracer which would be observed at the output (discharge area, spring, stream, or pumping wells) in the case of an instantaneous injection at the entrance (recharge area). Due to large space and time scales, response functions are not measurable in groundwater systems, therefore, functions known from other fields of science, mainly from chemical engineering, are usually used. The type of response function and the values of its parameters define the lumped-parameter model of a system. The main parameter is the mean transit time of tracer through the system, which under favourable conditions may represent the mean age of mobile water. The parameters of the model are found by fitting calculated concentrations to the experimental records of concentrations measured at the outlet. The mean transit time of tracer (often called the tracer age), whether equal to the mean age of water or not, serves in adequate combinations with other data for determining other useful parameters, e.g. the recharge rate or the content of water in the system. The transit time distribution and its mean value serve for confirmation or determination of the conceptual model of the system and/or estimation of its potential vulnerability to anthropogenic pollution. In the interpretation of environmental tracer data with the aid of the lumped-parameter models, the influence of diffusion exchange between mobile water and stagnant or quasi-stagnant water is seldom considered, though it leads to large differences between tracer and water ages. Therefore, the article is focused on the transit time distribution functions of the most common lumped-parameter models, particularly those applicable for the interpretation of environmental tracer data in double-porosity aquifers, or aquifers in which aquitard diffusion may play an important role. A case study is recalled for a confined aquifer in which the diffusion exchange with aquitard most probably strongly influenced the transport of environmental tracers. Another case study presented is related to the interpretation of environmental tracer data obtained from lysimeters installed in the unsaturated zone with a fraction of stagnant water.
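
    A minimal sketch of the convolution at the heart of these lumped-parameter models: the output concentration is the input record convolved with the chosen transit-time distribution (here the exponential model with mean transit time T), including a radioactive decay factor where applicable. The input record and parameters are hypothetical.

      import numpy as np

      def exponential_response(t, T):
          # Exponential-model transit-time distribution g(t) = exp(-t/T) / T.
          return np.exp(-t / T) / T

      def output_concentration(c_in, dt, T, lam=0.0):
          # c_out(t) = sum over tau of c_in(t - tau) * g(tau) * exp(-lam * tau) * dtau
          tau = np.arange(c_in.size) * dt
          g = exponential_response(tau, T) * np.exp(-lam * tau)
          return np.convolve(c_in, g)[: c_in.size] * dt

      # Hypothetical tritium-like input record, yearly steps, 3H decay constant.
      years = np.arange(1955, 2005)
      c_in = 10.0 + 90.0 * np.exp(-0.5 * ((years - 1963) / 2.0) ** 2)  # bomb peak
      lam = np.log(2) / 12.32  # 1/yr
      c_out = output_concentration(c_in, dt=1.0, T=15.0, lam=lam)
      print(np.round(c_out[-5:], 2))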

  1. Symbolic Regression for the Estimation of Transfer Functions of Hydrological Models

    NASA Astrophysics Data System (ADS)

    Klotz, D.; Herrnegger, M.; Schulz, K.

    2017-11-01

    Current concepts for parameter regionalization of spatially distributed rainfall-runoff models rely on the a priori definition of transfer functions that globally map land surface characteristics (such as soil texture, land use, and digital elevation) into the model parameter space. However, these transfer functions are often chosen ad hoc or derived from small-scale experiments. This study proposes and tests an approach for inferring the structure and parametrization of possible transfer functions from runoff data, to potentially circumvent these difficulties. The concept uses context-free grammars to generate candidate transfer functions. The resulting structures can then be parametrized with classical optimization techniques. Several virtual experiments are performed to examine the potential for appropriate estimation of transfer functions, all of them using a very simple conceptual rainfall-runoff model with data from the Austrian Mur catchment. The results suggest that a priori defined transfer functions are in general well identifiable by the method. However, the deduction process might be inhibited, e.g., by noise in the runoff observation data, often leading to transfer function estimates of lower structural complexity.
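
    A minimal sketch of grammar-based proposal generation: a context-free grammar over arithmetic operators and catchment attributes is expanded at random into candidate transfer-function structures, whose constants (c0, c1) would then be calibrated against runoff. The grammar and attribute names are hypothetical.

      import random

      # Hypothetical grammar: expressions map catchment attributes to a parameter.
      GRAMMAR = {
          "<expr>": [["<expr>", "<op>", "<expr>"], ["<func>", "(", "<expr>", ")"],
                     ["<var>"], ["<const>"]],
          "<op>": [["+"], ["-"], ["*"]],
          "<func>": [["log"], ["exp"]],
          "<var>": [["soil_depth"], ["clay_frac"], ["slope"]],
          "<const>": [["c0"], ["c1"]],
      }

      def expand(symbol="<expr>", depth=0, max_depth=4):
          # Randomly expand a nonterminal; terminals are returned as-is.
          if symbol not in GRAMMAR:
              return symbol
          rules = GRAMMAR[symbol]
          if symbol == "<expr>" and depth >= max_depth:
              rules = [["<var>"], ["<const>"]]  # force termination when deep
          rule = random.choice(rules)
          return "".join(expand(s, depth + 1, max_depth) for s in rule)

      random.seed(7)
      for _ in range(3):
          print(expand())  # three candidate transfer-function structures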

  2. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

    Zernike polynomials provide a well-known orthogonal set of scalar functions over a circular domain and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represents vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
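
    A minimal numerical sketch of the construction described above: gradients of a few low-order Zernike terms are sampled over the unit disk and orthonormalized by Gram-Schmidt under the vector inner product <F, G> = disk average of F·G. The chosen terms and the Monte Carlo sampling are illustrative.

      import numpy as np

      # Sample points on the unit disk.
      rng = np.random.default_rng(0)
      pts = rng.uniform(-1, 1, (200_000, 2))
      pts = pts[(pts**2).sum(axis=1) <= 1.0]
      x, y = pts[:, 0], pts[:, 1]

      # Gradients of low-order Zernike terms in Cartesian form:
      # Z = x, Z = y, Z = 2xy, Z = x^2 - y^2, Z = 2(x^2 + y^2) - 1.
      grads = [
          np.stack([np.ones_like(x), np.zeros_like(x)], axis=1),  # grad(x)
          np.stack([np.zeros_like(x), np.ones_like(x)], axis=1),  # grad(y)
          np.stack([2 * y, 2 * x], axis=1),                        # grad(2xy)
          np.stack([2 * x, -2 * y], axis=1),                       # grad(x^2 - y^2)
          np.stack([4 * x, 4 * y], axis=1),                        # grad(2r^2 - 1)
      ]

      def inner(F, G):
          # Vector inner product: disk average of the pointwise dot product.
          return np.mean(np.sum(F * G, axis=1))

      basis = []
      for g in grads:  # Gram-Schmidt orthonormalization
          for b in basis:
              g = g - inner(g, b) * b
          basis.append(g / np.sqrt(inner(g, g)))

      # Gram matrix should be (approximately) the identity.
      print(np.round([[inner(a, b) for b in basis] for a in basis], 3))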

  3. Cell lineage distribution atlas of the human stomach reveals heterogeneous gland populations in the gastric antrum.

    PubMed

    Choi, Eunyoung; Roland, Joseph T; Barlow, Brittney J; O'Neal, Ryan; Rich, Amy E; Nam, Ki Taek; Shi, Chanjuan; Goldenring, James R

    2014-11-01

    The glands of the stomach body and antral mucosa contain a complex compendium of cell lineages. In lower mammals, the distribution of oxyntic glands and antral glands defines the anatomical regions within the stomach. We examined in detail the distribution of the full range of cell lineages within the human stomach. We determined the distribution of gastric gland cell lineages with specific immunocytochemical markers in entire stomach specimens from three non-obese organ donors. The anatomical body and antrum of the human stomach were defined by the presence of ghrelin and gastrin cells, respectively. Concentrations of somatostatin cells were observed in the proximal stomach. Parietal cells were seen in all glands of the body of the stomach as well as in over 50% of antral glands. MIST1-expressing chief cells were predominantly observed in the body, although individual glands of the antrum also showed MIST1-expressing chief cells. While classically described antral glands were observed with gastrin cells and deep antral mucous cells without any parietal cells, we also observed a substantial population of mixed-type glands containing both parietal cells and G cells throughout the antrum. Enteroendocrine cells show distinct patterns of localisation in the human stomach. The existence of antral glands with mixed cell lineages indicates that human antral glands may be functionally chimeric, with glands assembled from multiple distinct stem cell populations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. Method for spatially distributing a population

    DOEpatents

    Bright, Edward A [Knoxville, TN; Bhaduri, Budhendra L [Knoxville, TN; Coleman, Phillip R [Knoxville, TN; Dobson, Jerome E [Lawrence, KS

    2007-07-24

    A process for spatially distributing a population count within a geographically defined area can include the steps of logically correlating land usages apparent from a geographically defined area to geospatial features in the geographically defined area and allocating portions of the population count to regions of the geographically defined area having the land usages, according to the logical correlation. The process can also include weighting the logical correlation for determining the allocation of portions of the population count and storing the allocated portions within a searchable data store. The logically correlating step can include the step of logically correlating time-based land usages to geospatial features of the geographically defined area. The process can also include obtaining a population count for the geographically defined area, organizing the geographically defined area into a plurality of sectors, and verifying the allocated portions according to direct observation.
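
    A minimal sketch of the allocation step described in the patent abstract, with assumed land-use weights: a population count is distributed over the regions of a geographically defined area in proportion to region area times a weight expressing the land-use/population correlation. All numbers are hypothetical.

      # Hypothetical regions of one geographically defined area: (land_use, area_km2).
      regions = [("residential", 4.0), ("commercial", 2.0),
                 ("industrial", 3.0), ("water", 1.0)]

      # Assumed correlation weights between land use and population presence.
      weights = {"residential": 10.0, "commercial": 3.0, "industrial": 1.0, "water": 0.0}

      def allocate(population, regions, weights):
          # Distribute a population count proportionally to area * land-use weight.
          scores = [area * weights[use] for use, area in regions]
          total = sum(scores)
          return {use: population * s / total for (use, _), s in zip(regions, scores)}

      print(allocate(10_000, regions, weights))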

  5. Bidirectional Reflectance Modeling of Non-homogeneous Plant Canopies

    NASA Technical Reports Server (NTRS)

    Norman, J. M.

    1984-01-01

    Efforts to develop a three-dimensional model to predict canopy bidirectional reflectance for heterogeneous plant stands, using incident radiation and canopy structural descriptions as inputs, are described. Utility programs were developed to cope with the complex output of the three-dimensional model. In addition, an attempt was made to define leaf and soil properties appropriate to the model by measuring leaf and soil bidirectional reflectance distribution functions, since almost no data exist on these distributions. In the process it was realized that most models are probably using the wrong leaf spectral properties, and that off-nadir reflectance measurements are difficult to make because of the non-Lambertian properties of reference surfaces. Also, in the visible wavebands, rough soil may not be distinguishable from canopies when viewed from above.

  6. Advances in modeling trait-based plant community assembly.

    PubMed

    Laughlin, Daniel C; Laughlin, David E

    2013-10-01

    In this review, we examine two new trait-based models of community assembly that predict the relative abundance of species from a regional species pool. The models use fundamentally different mathematical approaches and the predictions can differ considerably. Maxent obtains the most even probability distribution subject to community-weighted mean trait constraints. Traitspace predicts low probabilities for any species whose trait distribution does not pass through the environmental filter. Neither model maximizes functional diversity because of the emphasis on environmental filtering over limiting similarity. Traitspace can test for the effects of limiting similarity by explicitly incorporating intraspecific trait variation. The range of solutions in both models could be used to define the range of natural variability of community composition in restoration projects. Copyright © 2013 Elsevier Ltd. All rights reserved.
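
    A minimal sketch of the Maxent step described above: among all relative-abundance vectors whose community-weighted mean (CWM) trait matches an observed value, pick the maximum-entropy one. Species traits and the CWM target are hypothetical, and scipy's SLSQP solver stands in for the model's own machinery.

      import numpy as np
      from scipy.optimize import minimize

      traits = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical species trait values
      cwm_target = 3.6                              # observed community-weighted mean

      def neg_entropy(p):
          p = np.clip(p, 1e-12, 1.0)
          return np.sum(p * np.log(p))  # minimizing this maximizes entropy

      cons = [
          {"type": "eq", "fun": lambda p: p.sum() - 1.0},        # probabilities
          {"type": "eq", "fun": lambda p: p @ traits - cwm_target},  # CWM constraint
      ]
      res = minimize(neg_entropy, np.full(5, 0.2), constraints=cons,
                     bounds=[(0, 1)] * 5, method="SLSQP")
      print(np.round(res.x, 3))  # predicted relative abundances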

  7. Hopping in the Crowd to Unveil Network Topology.

    PubMed

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-13

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  8. Quadratic RK shooting solution for an environmental parameter prediction boundary value problem

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis Th.; Tsitouras, Ch.

    2014-10-01

    Using tools of information geometry, the minimum distance between two elements of a statistical manifold is given by the corresponding geodesic, i.e., the minimum-length curve that connects them. Such a curve, where the probability distribution functions in the case of our meteorological data are two-parameter Weibull distributions, satisfies a second-order boundary value (BV) system. We study the numerical treatment of the resulting special quadratic-form system using the shooting method. We compare the solutions of the problem when we employ a classical singly diagonally implicit Runge-Kutta (SDIRK) 4(3) pair of methods and a quadratic SDIRK 5(3) pair. Both pairs have the same computational cost, whereas the second attains higher order, as it is specially constructed for quadratic problems.
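
    A minimal sketch of the shooting idea used above, on a placeholder second-order BVP (not the Weibull-geodesic system): integrate the initial value problem with a guessed initial slope and root-find on the boundary mismatch.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def rhs(t, y):
          # Placeholder 2nd-order ODE y'' = -y * y', written as a first-order system.
          return [y[1], -y[0] * y[1]]

      a, b = 0.0, 1.0  # boundary values y(0) = a, y(1) = b

      def miss(slope):
          # Integrate with initial slope and return the error at t = 1.
          sol = solve_ivp(rhs, (0.0, 1.0), [a, slope], rtol=1e-9, atol=1e-9)
          return sol.y[0, -1] - b

      slope = brentq(miss, 0.0, 5.0)  # find the slope that hits y(1) = b
      print(f"shooting slope y'(0) = {slope:.6f}")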

  9. Evaluation of substitution monopole models for tire noise sound synthesis

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
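
    A minimal sketch of the airborne source quantification step described above: with measured transfer functions H (monopole positions to indicator microphones) and operating indicator pressures p, the monopole volume velocities q are recovered from p = H q by a regularized least-squares inversion, here plain Tikhonov. Dimensions and data are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)

      n_mics, n_monopoles = 12, 6
      # Hypothetical measured transfer functions at one frequency (complex FRFs).
      H = rng.standard_normal((n_mics, n_monopoles)) + 1j * rng.standard_normal((n_mics, n_monopoles))
      q_true = rng.standard_normal(n_monopoles) + 1j * rng.standard_normal(n_monopoles)
      p = H @ q_true + 0.01 * (rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics))

      def tikhonov(H, p, beta):
          # q = (H^H H + beta I)^-1 H^H p  (regularized pseudo-inverse)
          A = H.conj().T @ H + beta * np.eye(H.shape[1])
          return np.linalg.solve(A, H.conj().T @ p)

      q_est = tikhonov(H, p, beta=1e-3)
      print(np.round(np.abs(q_est - q_true), 3))  # recovery error per monopole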

  10. Hopping in the Crowd to Unveil Network Topology

    NASA Astrophysics Data System (ADS)

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-01

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  11. X-ray and neutron total scattering analysis of Hy·(Bi0.2Ca0.55Sr0.25)(Ag0.25Na0.75)Nb3O10·xH2O perovskite nanosheet booklets with stacking disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metz, Peter; Koch, Robert; Cladek, Bernadette

    Ion-exchanged Aurivillius materials form perovskite nanosheet booklets wherein well-defined bi-periodic sheets, with ~11.5 Å thickness, exhibit extensive stacking disorder. The perovskite layer contents were defined initially using combined synchrotron X-ray and neutron Rietveld refinement of the parent Aurivillius structure. The structure of the subsequently ion-exchanged material, which is disordered in its stacking sequence, is analyzed using both pair distribution function (PDF) analysis and recursive method simulations of the scattered intensity. Combined X-ray and neutron PDF refinement of supercell stacking models demonstrates sensitivity of the PDF to both perpendicular and transverse stacking vector components. Further, hierarchical ensembles of stacking models weighted by a standard normal distribution are demonstrated to improve the PDF fit over 1-25 Å. Recursive method simulations of the X-ray scattering profile demonstrate agreement between the real-space stacking analysis and more conventional reciprocal-space methods. The local structure of the perovskite sheet is demonstrated to relax only slightly from the Aurivillius structure after ion exchange.

  12. Advanced optical system for scanning-spot photorefractive keratectomy (PRK)

    NASA Astrophysics Data System (ADS)

    Mrochen, Michael; Wullner, Christian; Semchishen, Vladimir A.; Seiler, Theo

    1999-06-01

    Purpose: The goal of this presentation is to discuss the use of the Light Shaping Beam Homogenizer (LSBH) in an optical system for scanning-spot PRK. Methods: The basic principle of the LSBH is the transformation of any incident intensity distribution by light scattering on an irregular microlens structure z = f(x,y). The relief of this microlens structure is determined by a defined statistical function, i.e., it is characterized by the root-mean-square tilt σ of the surface relief. The beam evolution after the LSBH and in the focal plane of an imaging lens was therefore measured for various root-mean-square tilts. In addition, an optical setup for scanning-spot PRK was assembled according to the theoretical and experimental results. Results: The divergence, homogeneity, and Gaussian radius of the intensity distribution in the treatment plane of the scanning-spot PRK laser system depend mainly on the root-mean-square tilt σ of the LSBH, as explained by the theoretical description of the LSBH. Conclusions: The LSBH represents a simple, low-cost beam homogenizer with low energy losses for scanning-spot excimer laser systems.

  13. The variable flavor number scheme at next-to-leading order

    NASA Astrophysics Data System (ADS)

    Blümlein, J.; De Freitas, A.; Schneider, C.; Schönwald, K.

    2018-07-01

    We present the matching relations of the variable flavor number scheme at next-to-leading order, which are of importance for defining heavy quark parton distributions for use at high energy colliders such as the Tevatron and the LHC. The consideration of two-mass effects due to both charm and bottom quarks, which have rather similar masses, is important. These effects have not been considered in previous investigations. Numerical results are presented for a wide range of scales. We also present the corresponding contributions to the structure function F_2(x, Q²).

  14. Telescience testbedding for life science missions on the Space Station

    NASA Technical Reports Server (NTRS)

    Rasmussen, D.; Mian, A.; Bosley, J.

    1988-01-01

    'Telescience', defined as the ability of distributed system users to perform remote operations associated with NASA Space Station life science operations, has been explored by a developmental testbed project allowing rapid prototyping to evaluate the functional requirements of telescience implementation in three areas: (1) research planning and design, (2) remote operation of facilities, and (3) remote access to data bases for analysis. Attention is given to the role of expert systems in telescience, its use in realistic simulation of Space Shuttle payload remote monitoring, and remote interaction with life science data bases.

  15. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation. PMID:22629352
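
    A minimal simulation sketch of the composed-error structure behind the stochastic frontier model above: log output equals a Cobb-Douglas frontier plus symmetric noise v minus a half-normal inefficiency u, and technical efficiency is TE = exp(-u). Parameters are hypothetical, and the maximum-likelihood estimation step is not shown.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 1000

      # Cobb-Douglas frontier in logs: ln y = b0 + b1 ln x1 + b2 ln x2 + v - u
      lx1 = rng.normal(1.0, 0.3, n)
      lx2 = rng.normal(0.5, 0.3, n)
      v = rng.normal(0.0, 0.10, n)          # symmetric noise
      u = np.abs(rng.normal(0.0, 0.25, n))  # half-normal inefficiency, u >= 0
      ly = 0.2 + 0.6 * lx1 + 0.3 * lx2 + v - u  # observed log output (MLE target)

      te = np.exp(-u)  # technical efficiency, 0 < TE <= 1
      e = v - u        # composed error; negative skew signals inefficiency
      skew = np.mean((e - e.mean())**3) / e.std()**3
      print(f"mean TE: {te.mean():.3f}, composed-error skewness: {skew:.3f}")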

  16. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation.

  17. Synthesis of Densely Packaged, Ultrasmall Pt⁰₂ Clusters within a Thioether-Functionalized MOF: Catalytic Activity in Industrial Reactions at Low Temperature.

    PubMed

    Mon, Marta; Rivero-Crespo, Miguel A; Ferrando-Soria, Jesús; Vidal-Moya, Alejandro; Boronat, Mercedes; Leyva-Pérez, Antonio; Corma, Avelino; Hernández-Garrido, Juan C; López-Haro, Miguel; Calvino, José J; Ragazzon, Giulio; Credi, Alberto; Armentano, Donatella; Pardo, Emilio

    2018-05-22

    The gram-scale synthesis, stabilization, and characterization of well-defined, ultrasmall subnanometric catalytic clusters on solids is a challenge. The chemical synthesis and X-ray snapshots of Pt⁰₂ clusters, homogeneously distributed and densely packaged within the channels of a metal-organic framework, are presented. This hybrid material efficiently catalyzes, at low temperature (25 to 140 °C), energetically costly gas-phase industrial reactions such as HCN production, CO₂ methanation, and alkene hydrogenation, which is important from both an economic and an environmental viewpoint. These results open the way for the design of precisely defined, catalytically active ultrasmall metal clusters in solids for technically easier, cheaper, and dramatically less dangerous industrial reactions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Stable polyurethane coatings for electronic circuits. NASA tech briefs, fall 1982, volume 7, no. 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    One of the most severe deficiencies of polyurethanes as engineering materials for electrical applications has been their sensitivity to combined humidity and temperature environments. Gross failure by reversion of urethane connector potting materials has occurred under these conditions. This has resulted both in the scrapping of expensive hardware and, in other instances, in reduced reliability. A basic objective of this study has been to gain a more complete understanding of the mechanisms and interactions of moisture in urethane systems, to guide the development of reversion-resistant materials for connector potting and conformal coating applications in high-humidity environments. Basic polymer studies of molecular weight and distribution, polymer structure, and functionality were carried out to define those areas responsible for hydrolytic instability and to define polymer structural features conducive to optimum hydrolytic stability.

  19. Modulation of Respiratory Frequency by Peptidergic Input to Rhythmogenic Neurons in the PreBötzinger Complex

    PubMed Central

    Gray, Paul A.; Rekling, Jens C.; Bocchiaro, Christopher M.; Feldman, Jack L.

    2010-01-01

    Neurokinin-1 receptor (NK1R) and μ-opioid receptor (μOR) agonists affected respiratory rhythm when injected directly into the preBötzinger Complex (preBötC), the hypothesized site for respiratory rhythmogenesis in mammals. These effects were mediated by actions on preBötC rhythmogenic neurons. The distribution of NK1R+ neurons anatomically defined the preBötC. Type 1 neurons in the preBötC, which have rhythmogenic properties, expressed both NK1Rs and μORs, whereas type 2 neurons expressed only NK1Rs. These findings suggest that the preBötC is a definable anatomic structure with unique physiological function and that a subpopulation of neurons expressing both NK1Rs and μORs generate respiratory rhythm and modulate respiratory frequency. PMID:10567264

  20. Numerical model of tapered fiber Bragg gratings for comprehensive analysis and optimization of their sensing and strain-induced tunable dispersion properties.

    PubMed

    Osuch, Tomasz; Markowski, Konrad; Jędrzejewski, Kazimierz

    2015-06-10

    A versatile numerical model for the analysis of spectral transmission/reflection and group delay characteristics, and for the design, of tapered fiber Bragg gratings (TFBGs) is presented. This approach ensures flexibility in defining both the distribution of the refractive-index change of the grating (including apodization) and the shape of the taper profile. Additionally, the sensing and tunable dispersion properties of the TFBGs were fully examined, considering strain-induced effects. The presented numerical approach, together with Pareto optimization, was also used to design the best tanh apodization profiles of the TFBG in terms of maximizing its spectral width while simultaneously minimizing the group delay oscillations. Experimental verification of the model confirms its correctness. The combination of model versatility and the possibility of defining other objective functions for the Pareto optimization creates a universal tool for TFBG analysis and design.
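
    The paper's own model is not reproduced here, but such TFBG models are commonly built on a piecewise-uniform coupled-mode transfer-matrix treatment; the sketch below assumes a lossless grating with user-supplied local parameters per section (how the taper shape maps onto the local effective index, period, and coupling coefficient is left open):

        import numpy as np

        def fbg_reflectivity(wavelengths, n_eff, period, kappa, length,
                             n_sections=100):
            """Piecewise-uniform transfer-matrix model of a fiber Bragg grating.

            n_eff, period and kappa may be arrays of length n_sections,
            describing a tapered/apodized grating section by section;
            scalars describe a uniform grating. Assumes gamma != 0.
            """
            n_eff = np.broadcast_to(n_eff, (n_sections,))
            period = np.broadcast_to(period, (n_sections,))
            kappa = np.broadcast_to(kappa, (n_sections,))
            dz = length / n_sections
            r = np.empty(len(wavelengths))
            for i, lam in enumerate(wavelengths):
                M = np.eye(2, dtype=complex)
                for j in range(n_sections):
                    # detuning from the local Bragg condition lambda_B = 2 n_eff Lambda
                    delta = 2 * np.pi * n_eff[j] / lam - np.pi / period[j]
                    gamma = np.sqrt(kappa[j]**2 - delta**2 + 0j)
                    T = np.array([
                        [np.cosh(gamma*dz) - 1j*(delta/gamma)*np.sinh(gamma*dz),
                         -1j*(kappa[j]/gamma)*np.sinh(gamma*dz)],
                        [1j*(kappa[j]/gamma)*np.sinh(gamma*dz),
                         np.cosh(gamma*dz) + 1j*(delta/gamma)*np.sinh(gamma*dz)]])
                    M = T @ M
                r[i] = abs(M[1, 0] / M[0, 0])**2   # power reflectivity
            return r

    Apodization enters through the kappa profile, and the taper through the n_eff and period profiles.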

  1. Predicting protein complex geometries with a neural network.

    PubMed

    Chae, Myong-Ho; Krull, Florian; Lorenzen, Stephan; Knapp, Ernst-Walter

    2010-03-01

    A major challenge of the protein docking problem is to define scoring functions that can distinguish near-native protein complex geometries from a large number of non-native geometries (decoys) generated with noncomplexed protein structures (unbound docking). In this study, we have constructed a neural network that employs the information from atom-pair distance distributions of a large number of decoys to predict protein complex geometries. We found that docking prediction can be significantly improved using two different types of polar hydrogen atoms. To train the neural network, 2000 near-native decoys of even distance distribution were used for each of the 185 considered protein complexes. The neural network normalizes the information from different protein complexes using an additional protein complex identity input neuron for each complex. The parameters of the neural network were determined such that they mimic a scoring funnel in the neighborhood of the native complex structure. The neural network approach avoids the reference state problem, which occurs in deriving knowledge-based energy functions for scoring. We show that a distance-dependent atom pair potential performs much better than a simple atom-pair contact potential. We have compared the performance of our scoring function with other empirical and knowledge-based scoring functions such as ZDOCK 3.0, ZRANK, ITScore-PP, EMPIRE, and RosettaDock. In spite of the simplicity of the method and its functional form, our neural network-based scoring function achieves a reasonable performance in rigid-body unbound docking of proteins. Proteins 2010. (c) 2009 Wiley-Liss, Inc.
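
    The network architecture and atom typing are detailed in the paper; the sketch below only illustrates the kind of atom-pair distance-distribution input described (the typing scheme and bin edges are placeholder assumptions):

        import numpy as np

        def pair_distance_features(coords_a, coords_b, types_a, types_b,
                                   n_types, bins):
            """Histogram inter-protein atom-pair distances per atom-type pair.

            Returns a flat feature vector (one distance histogram per type
            pair), the kind of input representation fed to a scoring network.
            """
            d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :],
                               axis=-1)
            feats = np.zeros((n_types, n_types, len(bins) - 1))
            for ta in range(n_types):
                for tb in range(n_types):
                    mask = (types_a[:, None] == ta) & (types_b[None, :] == tb)
                    feats[ta, tb], _ = np.histogram(d[mask], bins=bins)
            return feats.ravel()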

  2. Sensitivity analysis of environmental changes associated with riverscape evolutions following sediment reintroduction: Application to the Drôme River network, France

    NASA Astrophysics Data System (ADS)

    Piégay, H.; Bertrand, M.; Liébault, F.; Pont, D.; Sauquet, E.

    2011-12-01

    The present contribution aims to apply the conceptual framework defined in Pont et al. (2009) to the Drôme River Basin (France), in order to test the capacity of the functional reach concept to be used for assessing risks of environmental change. The methodology is illustrated by examples focusing on potential changes in functional reach diversity as a proxy of habitat diversity, and on the potential impact on trout distribution at the network scale of actions of sediment reintroduction. We used remote sensing and GIS methods to provide original data and to analyze them. A cluster analysis performed on the components of a PCA was used to establish a functional reach typology based on planform parameters, used as a proxy of habitat typology following a review of the literature. For the entire channel network, we calculated an index of the present and 1948 states of functional reach type diversity to highlight past evolution. Various options for changes in functional reach type diversity were compared in relation to various increases in bedload delivery following planned deforestation. A similar risk assessment procedure is proposed in relation to changes in canopy cover and associated changes in summer temperature, to evaluate impacts on brown trout distribution. Two practical examples are used as pilots for evaluating the risk assessment approach based on functional reach typology and its potential applicability for testing management actions for improving aquatic ecology. Limitations and improvements are then discussed.

  3. Topology optimized design of functionally graded piezoelectric ultrasonic transducers

    NASA Astrophysics Data System (ADS)

    Rubio, Wilfredo Montealegre; Buiochi, Flávio; Adamowski, Julio Cezar; Silva, Emílio C. N.

    2010-01-01

    This work presents a new approach to systematically design piezoelectric ultrasonic transducers based on the Topology Optimization Method (TOM) and Functionally Graded Material (FGM) concepts. The main goal is to find the optimal material distribution of Functionally Graded Piezoelectric Ultrasonic Transducers that achieves the following requirements: (i) the transducer must be designed to have a multi-modal or uni-modal frequency response, which defines the kind of generated acoustic wave, either short pulse or continuous wave, respectively; (ii) the transducer is required to oscillate in a thickness-extensional or piston-like mode, aiming at acoustic wave generation applications. Two kinds of piezoelectric materials are mixed to produce the FGM transducer: material type 1 represents a PZT-5A piezoelectric ceramic and material type 2 represents a PZT-5H piezoelectric ceramic. To illustrate the proposed method, two Functionally Graded Piezoelectric Ultrasonic Transducers are designed. The TOM has been shown to be a useful tool for designing Functionally Graded Piezoelectric Ultrasonic Transducers with uni-modal or multi-modal dynamic behavior.

  4. Cardiovascular Disease Biomarkers Predict Susceptibility or Resistance to Lung Injury in World Trade Center Dust Exposed Firefighters

    PubMed Central

    Weiden, Michael D.; Naveed, Bushra; Kwon, Sophia; Cho, Soo Jung; Comfort, Ashley L.; Prezant, David J.; Rom, William N.; Nolan, Anna

    2013-01-01

    Pulmonary vascular loss is an early feature of chronic obstructive pulmonary disease. Biomarkers of inflammation and of metabolic syndrome predict loss of lung function in World Trade Center Lung Injury (WTC-LI). We investigated whether other cardiovascular disease (CVD) biomarkers also predicted WTC-LI. This nested case-cohort study used 801 never-smoker, WTC-exposed firefighters with normal pre-9/11 lung function presenting for subspecialty pulmonary evaluation (SPE) before March 2008. A representative sub-cohort of 124/801 with serum drawn within six months of 9/11 defined the CVD biomarker distribution. Post-9/11 FEV1 at the subspecialty exam defined cases: susceptible WTC-LI cases with FEV1≤77% predicted (66/801) and resistant WTC-LI cases with FEV1≥107% (68/801). All models were adjusted for WTC exposure intensity, BMI at SPE, age at 9/11, and pre-9/11 FEV1. Susceptible WTC-LI cases had higher levels of Apo-AII, CRP, and MIP-4, with significant RRs of 3.85, 3.93, and 0.26, respectively, and an area under the curve (AUC) of 0.858. Resistant WTC-LI cases had significantly higher sVCAM and lower MPO, with RRs of 2.24 and 2.89, respectively; AUC 0.830. Biomarkers of CVD in serum six months post-9/11 predicted either susceptibility or resistance to WTC-LI. These biomarkers may define pathways producing or protecting subjects from pulmonary vascular disease and associated loss of lung function after an irritant exposure. PMID:22903969

  5. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid, and it has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
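
    Schematically, a weighted-sum objective of the kind described is (symbols are illustrative; the paper's exact normalizations are not reproduced):

        F(\mathbf{x}) = w_1 \, \Delta h_{\mathrm{loss}}(\mathbf{x}) + w_2 \, \sigma_{\mathrm{cav}}(\mathbf{x}),
        \qquad w_1 + w_2 = 1, \quad w_i \ge 0,

    so that shifting w_1 and w_2 trades hydraulic efficiency against cavitation performance in the optimized design.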

  6. Comparing capacity coefficient and dual task assessment of visual multitasking workload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie M.

    Capacity coefficient analysis could offer a theoretically grounded alternative to subjective measures and dual-task assessment of cognitive workload. Workload capacity, or workload efficiency, is a human information-processing modeling construct defined as the amount of information that can be processed by the visual cognitive system in a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual-task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision-making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-attribute Task Battery.
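
    For orientation, the standard OR-task capacity coefficient built from integrated hazards, C(t) = H_dual(t) / (H_A(t) + H_B(t)) with H(t) = -log S(t), can be estimated as in the sketch below; this is a generic estimator, not the paper's implementation, and t_grid is assumed to lie within the observed response-time range:

        import numpy as np

        def integrated_hazard(rts, t_grid):
            """Empirical integrated hazard H(t) = -log S(t) from response times."""
            rts = np.sort(np.asarray(rts))
            surv = 1.0 - np.searchsorted(rts, t_grid, side="right") / len(rts)
            surv = np.clip(surv, 1e-12, 1.0)   # avoid log(0) in the right tail
            return -np.log(surv)

        def capacity_coefficient(rts_dual, rts_single_a, rts_single_b, t_grid):
            """Workload capacity C(t): dual-task hazard relative to the sum of
            the single-task hazards (C(t) = 1 indicates unlimited capacity)."""
            return integrated_hazard(rts_dual, t_grid) / (
                integrated_hazard(rts_single_a, t_grid)
                + integrated_hazard(rts_single_b, t_grid))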

  7. Optimal cost for strengthening or destroying a given network

    NASA Astrophysics Data System (ADS)

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

    Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_c, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.

  8. Optimal cost for strengthening or destroying a given network.

    PubMed

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

    Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_{c}, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.
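
    The optimal priority list itself is derived analytically in the paper; the sketch below only illustrates the cost bookkeeping for a candidate set of degree classes, with a placeholder cost function:

        from collections import Counter

        def total_cost(degrees, cost_fn, chosen_degrees):
            """Total price of strengthening/destroying every node whose degree
            falls in the chosen set, for a given cost function c(k)."""
            counts = Counter(degrees)
            return sum(counts[k] * cost_fn(k) for k in chosen_degrees)

        # Example: linear cost c(k) = k, treat every hub with degree >= 10
        # cost = total_cost(degrees, lambda k: k,
        #                   [k for k in set(degrees) if k >= 10])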

  9. Spatial characteristics of observed precipitation fields: A catalog of summer storms in Arizona, Volume 1

    NASA Technical Reports Server (NTRS)

    Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodrigues-Iturbe, I.

    1986-01-01

    Eight years of summer raingage observations are analyzed for a dense, 93-gage network operated by the U.S. Department of Agriculture, Agricultural Research Service, in their 150 sq km Walnut Gulch catchment near Tucson, Arizona. Storms are defined by the total depths collected at each raingage during the noon-to-noon period for which there was depth recorded at any of the gages. For each of the resulting 428 storms, the 93 gage depths are interpolated onto a dense grid and the resulting random field is analyzed. Presented are: storm depth isohyets at 2 mm contour intervals, the first three moments of point storm depth, the spatial correlation function, the spatial variance function, and the spatial distribution of total rainstorm depth.

  10. Optimal consensus algorithm integrated with obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Wang, Jianan; Xin, Ming

    2013-01-01

    This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.

  11. Statistical analysis of the time and space characteristic scales for large precipitating systems in the equatorial, tropical, sahelian and mid-latitude regions.

    NASA Astrophysics Data System (ADS)

    Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya

    2017-04-01

    Precipitation plays a key role in the management of sustainable water resources and in flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We propose to analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and the morphological spectra (i.e., the probability distribution functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This allows two characteristic scales (duration and space extension) to be defined for the precipitating regions embedded in large mesoscale convective systems (MCS). For tropical and equatorial regions (south of 12°N), the morphological spectra are close to a Lévy-stable distribution (power-law decrease), which does not allow a characteristic scale to be defined (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCS can be defined and compared to the observed zonal advection. Maps of the characteristic scales and the Lévy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Lévy-stable behaviors of precipitating MCS is compared with the results of the ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test different parameterizations of deep convection in forecast models.
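
    A crude numerical diagnostic of the distinction drawn above, exponential versus power-law (Lévy-like) decay of the empirical survival function, is sketched below; the paper's spectral analysis is more involved:

        import numpy as np

        def tail_compare(durations):
            """Log-linear vs log-log fit of the empirical survival function:
            a straight line on semi-log axes indicates exponential decay,
            on log-log axes a power-law (Levy-like) tail."""
            x = np.sort(np.asarray(durations, dtype=float))
            surv = 1.0 - np.arange(1, len(x) + 1) / (len(x) + 1.0)
            r2_exp = np.corrcoef(x, np.log(surv))[0, 1] ** 2          # exponential
            r2_pow = np.corrcoef(np.log(x), np.log(surv))[0, 1] ** 2  # power law
            return {"exponential_r2": r2_exp, "power_law_r2": r2_pow}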

  12. Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure

    PubMed Central

    Skvortsova, Elena B.; Mallants, Dirk

    2015-01-01

    Structural features of a porous material such as soil define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and the atmosphere), and solute transport. To characterize soil microstructure, conventional soil science uses metrics such as pore sizes and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores, and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into the reconstruction sets. The correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, for example for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of the spatial dynamics of soil microbial activity. PMID:26010779

  13. Universal spatial correlation functions for describing and reconstructing soil microstructure.

    PubMed

    Karsanina, Marina V; Gerke, Kirill M; Skvortsova, Elena B; Mallants, Dirk

    2015-01-01

    Structural features of a porous material such as soil define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and the atmosphere), and solute transport. To characterize soil microstructure, conventional soil science uses metrics such as pore sizes and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores, and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into the reconstruction sets. The correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, for example for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of the spatial dynamics of soil microbial activity.
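
    A minimal sketch of the first of these statistics, the two-point probability function of a binary pore image, computed via FFT autocorrelation under an assumed periodic boundary:

        import numpy as np

        def two_point_probability(img):
            """Two-point probability function S2 of a binary image (pore = 1),
            computed via FFT autocorrelation with periodic boundaries."""
            f = np.fft.fftn(img.astype(float))
            corr = np.fft.ifftn(f * np.conj(f)).real / img.size
            return np.fft.fftshift(corr)   # value at zero lag equals the porosity

    For isotropic media, S2(r) is obtained by radially averaging the shifted correlation map around its center.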

  14. Phylogenetic continuum indicates "galaxies" in the protein universe: preliminary results on the natural group structures of proteins.

    PubMed

    Ladunga, I

    1992-04-01

    The markedly nonuniform, even systematic distribution of sequences in the protein "universe" has been analyzed by methods of protein taxonomy. Mapping of the natural hierarchical system of proteins has revealed some dense cores, i.e., well-defined clusterings of proteins that seem to be natural structural groupings, possibly seeds for a future protein taxonomy. The aim was not to force proteins into more or less man-made categories by discriminant analysis, but to find structurally similar groups, possibly of common evolutionary origin. Single-valued distance measures between pairs of superfamilies from the Protein Identification Resource were defined by two χ²-like methods on tripeptide frequencies and the variable-length subsequence identity method derived from dot-matrix comparisons. Distance matrices were processed by several methods of cluster analysis to detect phylogenetic continuum between highly divergent proteins. Only well-defined clusters characterized by relatively unique structural, intracellular environmental, organismal, and functional attribute states were selected as major protein groups, including subsets of viral and Escherichia coli proteins, hormones, inhibitors, plant, ribosomal, serum and structural proteins, amino acid synthases, and clusters dominated by certain oxidoreductases and apolar and DNA-associated enzymes. The limited repertoire of functional patterns due to small genome size, the high rate of recombination, specific features of the bacterial membranes, or of the virus cycle canalize certain proteins of viruses and Gram-negative bacteria, respectively, to organismal groups.

  15. A new theoretical framework for modeling respiratory protection based on the beta distribution.

    PubMed

    Klausner, Ziv; Fattal, Eyal

    2014-08-01

    The problem of modeling respiratory protection is well known and has been dealt with extensively in the literature. Often the efficiency of respiratory protection is quantified in terms of penetration, defined as the proportion of an ambient contaminant concentration that penetrates the respiratory protection equipment. Typically, the penetration modeling framework in the literature is based on the assumption that penetration measurements follow the lognormal distribution. However, the analysis in this study leads to the conclusion that the lognormal assumption is not always valid, making it less adequate for analyzing respiratory protection measurements. This work presents a formulation of the problem from first principles, leading to a stochastic differential equation whose solution is the probability density function of the beta distribution. The data of respiratory protection experiments were reexamined, and indeed the beta distribution was found to provide a better fit to the data than the lognormal. We conclude with a suggestion for a new theoretical framework for modeling respiratory protection. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
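
    A minimal sketch of the model comparison implied above, fitting beta and lognormal laws to penetration data on (0, 1) and comparing log-likelihoods; the data values are illustrative, not taken from the study:

        import numpy as np
        from scipy import stats

        # penetration values lie in (0, 1), so a beta law is a natural candidate
        pen = np.array([0.02, 0.05, 0.01, 0.08, 0.03, 0.04])   # illustrative

        a, b, loc, scale = stats.beta.fit(pen, floc=0, fscale=1)
        ll_beta = np.sum(stats.beta.logpdf(pen, a, b))

        s, loc_ln, scale_ln = stats.lognorm.fit(pen, floc=0)
        ll_lognorm = np.sum(stats.lognorm.logpdf(pen, s, loc_ln, scale_ln))
        # the higher log-likelihood (or an AIC comparison) indicates the better fit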

  16. Singular unlocking transition in the Winfree model of coupled oscillators.

    PubMed

    Quinn, D Dane; Rand, Richard H; Strogatz, Steven H

    2007-03-01

    The Winfree model consists of a population of globally coupled phase oscillators with randomly distributed natural frequencies. As the coupling strength and the spread of natural frequencies are varied, the various stable states of the model can undergo bifurcations, nearly all of which have been characterized previously. The one exception is the unlocking transition, in which the frequency-locked state disappears abruptly as the spread of natural frequencies exceeds a critical width. Viewed as a function of the coupling strength, this critical width defines a bifurcation curve in parameter space. For the special case where the frequency distribution is uniform, earlier work had uncovered a puzzling singularity in this bifurcation curve. Here we seek to understand what causes the singularity. Using the Poincaré-Lindstedt method of perturbation theory, we analyze the locked state and its associated unlocking transition, first for an arbitrary distribution of natural frequencies, and then for discrete systems of N oscillators. We confirm that the bifurcation curve becomes singular for a continuum uniform distribution, yet find that it remains well behaved for any finite N, suggesting that the continuum limit is responsible for the singularity.

  17. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
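
    The compact-support case (q < 1) of this construction can be made concrete in a short sketch; this reproduces one known special case under stated assumptions (unit support), whereas the paper's general construction covers the full family:

        import numpy as np

        def q_gaussian_compact(q, size, rng=None):
            """Sample a symmetric q-Gaussian with q < 1 (compact support)
            from two i.i.d. gamma variates with the same scale parameter:
            G1/(G1+G2) is Beta(a, a), and the affine map x = 2u - 1 of a
            symmetric beta variable has density ~ (1 - x^2)^(a - 1)."""
            assert q < 1
            rng = rng or np.random.default_rng()
            a = 1.0 + 1.0 / (1.0 - q)      # so that a - 1 = 1/(1 - q)
            g1 = rng.gamma(a, 1.0, size)   # same scale for both variates
            g2 = rng.gamma(a, 1.0, size)
            return 2.0 * g1 / (g1 + g2) - 1.0   # supported on (-1, 1)

    Rescaling the support by 1/sqrt(1-q) recovers the standard normalization of the density proportional to [1 - (1-q)x²]^{1/(1-q)}.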

  18. Distributed intelligent monitoring and reporting facilities

    NASA Astrophysics Data System (ADS)

    Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.

    1996-06-01

    Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management, as they provide the capability to monitor quality of service and utilization parameters and to notify degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, that facilitates the observation and summarization of information according to a number of criteria, and that, in its most advanced form, permits these criteria to be specified dynamically to suit the particular policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.

  19. Testing methods of pressure distribution of bra cups on breasts soft tissue

    NASA Astrophysics Data System (ADS)

    Musilova, B.; Nemcokova, R.; Svoboda, M.

    2017-10-01

    The objective of this study is to evaluate testing methods for the pressure distribution of bra cups on breast soft tissue, using systems that do not affect the space between the wearer's body surface and the bra cups and thus do not influence the geometry of the measured body surface, in order to investigate the functional performance of brassieres. Two measuring systems were used for evaluating pressure comfort: 1) the pressure distribution of a worn bra on the women's breasts was measured directly over 20 minutes using a pressure sensor whose dielectric is the elastic polyurethane foam of the bra cups; twelve points were measured in the bra cups. 2) Simultaneously, the change of temperature at the same points of the bra was tested with the help of a noncontact system, a thermal imager. The results indicate that both systems can identify different pressure distributions at different points. Bras of the same size, with cups made from the same material and defined by the same standardised body dimensions (bust and underbust), can produce different compression values on differently shaped breast soft tissue.

  20. Stochastic Sampling in the IMF of Galactic Open Clusters

    NASA Astrophysics Data System (ADS)

    Kay, Christina; Hancock, M.; Canalizo, G.; Smith, B. J.; Giroux, M. L.

    2010-01-01

    We sought observational evidence of the effects of stochastic sampling of the initial mass function by investigating the integrated colors of a sample of Galactic open clusters. In particular, we looked for scatter in the integrated (V-K) color, as previous research had found little scatter in the (U-B) and (B-V) colors. Combining data from WEBDA and 2MASS, we determined three different colors for 287 open clusters. Of these clusters, 39 have minimum uncertainties in age and formed a standard set. A plot of the (V-K) color versus age showed much more scatter than the (U-B) versus age. We also divided the sample into two groups based on a lowest-luminosity limit, which is a function of age and V magnitude. We expected the group of clusters fainter than this limit to show more scatter than the brighter group. Assuming the published ages, we compared the reddening-corrected observed colors to those predicted by Starburst99. The presence of stochastic sampling should increase scatter in the distribution of the differences between observed and model colors of the fainter group relative to the brighter group. However, we found that K-S tests cannot rule out that the distributions of color differences for the brighter and fainter sets come from the same parent distribution. This indistinguishability may result from uncertainties in the parameters used to define the groups. This result constrains the size of the effects of stochastic sampling of the initial mass function.

  1. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application

    PubMed Central

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most of the existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, and different ranges use different aggregation strategies. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: the first is a Random-based SEDAR (REDAR), and the second is a Compression-based SEDAR (CEDAR). Both of them can significantly reduce communication cost, with the trade-offs of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them have excellent performance on cost and accuracy. PMID:27551747

  2. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application.

    PubMed

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most of the existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, and different ranges use different aggregation strategies. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: the first is a Random-based SEDAR (REDAR), and the second is a Compression-based SEDAR (CEDAR). Both of them can significantly reduce communication cost, with the trade-offs of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them have excellent performance on cost and accuracy.
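
    SEDAR's range segmentation and homomorphic encryption are not reproduced here; as a minimal stand-in for the privacy-preserving sum idea, additive secret sharing lets an aggregator recover only the total, never an individual value:

        import random

        MOD = 2**61 - 1   # large prime modulus (illustrative choice)

        def masked_shares(value, n_parties, modulus=MOD):
            """Split a value into n additive shares mod a large prime;
            any single share reveals nothing about the value."""
            shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % modulus)
            return shares

        def aggregate(all_shares, modulus=MOD):
            """The aggregator sums every share; only the total is recovered."""
            return sum(sum(s) for s in all_shares) % modulus

        # three parties with private values 5, 7, 11 -> aggregate equals 23
        data = [masked_shares(v, 3) for v in (5, 7, 11)]
        assert aggregate(data) == 23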

  3. Genome-wide analysis of promoter architecture in Drosophila melanogaster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoskins, Roger A.; Landolin, Jane M.; Brown, James B.

    2010-10-20

    Core promoters are critical regions for gene regulation in higher eukaryotes. However, the boundaries of promoter regions, the relative rates of initiation at the transcription start sites (TSSs) distributed within them, and the functional significance of promoter architecture remain poorly understood. We produced a high-resolution map of promoters active in the Drosophila melanogaster embryo by integrating data from three independent and complementary methods: 21 million cap analysis of gene expression (CAGE) tags, 1.2 million RNA ligase mediated rapid amplification of cDNA ends (RLM-RACE) reads, and 50,000 cap-trapped expressed sequence tags (ESTs). We defined 12,454 promoters of 8037 genes. Our analysis indicates that, due to non-promoter-associated RNA background signal, previous studies have likely overestimated the number of promoter-associated CAGE clusters by fivefold. We show that TSS distributions form a complex continuum of shapes, and that promoters active in the embryo and adult have highly similar shapes in 95% of cases. This suggests that these distributions are generally determined by static elements such as local DNA sequence and are not modulated by dynamic signals such as histone modifications. Transcription factor binding motifs are differentially enriched as a function of promoter shape, and peaked promoter shape is correlated with both temporal and spatial regulation of gene expression. Our results contribute to the emerging view that core promoters are functionally diverse and control patterning of gene expression in Drosophila and mammals.

  4. Fundamental equations of a mixture of gas and small spherical solid particles from simple kinetic theory.

    NASA Technical Reports Server (NTRS)

    Pai, S. I.

    1973-01-01

    The fundamental equations of a mixture of a gas and pseudofluid of small spherical solid particles are derived from the Boltzmann equation of two-fluid theory. The distribution function of the gas molecules is defined in the same manner as in the ordinary kinetic theory of gases, but the distribution function for the solid particles is different from that of the gas molecules, because it is necessary to take into account the different size and physical properties of solid particles. In the proposed simple kinetic theory, two additional parameters are introduced: one is the radius of the spheres and the other is the instantaneous temperature of the solid particles in the distribution of the solid particles. The Boltzmann equation for each species of the mixture is formally written, and the transfer equations of these Boltzmann equations are derived and compared to the well-known fundamental equations of the mixture of a gas and small solid particles from continuum theory. The equations obtained reveal some insight into various terms in the fundamental equations. For instance, the partial pressure of the pseudofluid of solid particles is not negligible if the volume fraction of solid particles is not negligible as in the case of lunar ash flow.
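
    The per-species starting point referred to above is the standard Boltzmann transport equation; in a generic form (with f_s the distribution function of species s, F_s the external force, m_s the mass, and the right-hand side the collision term):

        \frac{\partial f_s}{\partial t}
        + \mathbf{v} \cdot \nabla_{\mathbf{x}} f_s
        + \frac{\mathbf{F}_s}{m_s} \cdot \nabla_{\mathbf{v}} f_s
        = \left( \frac{\partial f_s}{\partial t} \right)_{\mathrm{coll}}

    Multiplying by 1, m_s v, and m_s v²/2 and integrating over velocity yields the transfer (continuity, momentum, and energy) equations that the paper compares with the continuum equations of the mixture.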

  5. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  6. A Core Regulatory Pathway Controlling Rice Tiller Angle Mediated by the LAZY1-dependent Asymmetric Distribution of Auxin.

    PubMed

    Zhang, Ning; Yu, Hong; Yu, Hao; Cai, Yueyue; Huang, Linzhou; Xu, Cao; Xiong, Guosheng; Meng, Xiangbing; Wang, Jiyao; Chen, Haofeng; Liu, Guifu; Jing, Yanhui; Yuan, Yundong; Liang, Yan; Li, Shujia; Smith, Steven M; Li, Jiayang; Wang, Yonghong

    2018-06-18

    Tiller angle in cereals is a key shoot architecture trait that strongly influences grain yield. Studies in rice (Oryza sativa L.) have implicated shoot gravitropism in the regulation of tiller angle. However, the functional link between shoot gravitropism and tiller angle is unknown. Here, we conducted a large-scale transcriptome analysis of rice shoots in response to gravistimulation and identified two new nodes of a shoot gravitropism regulatory gene network that also controls rice tiller angle. We demonstrate that HEAT STRESS TRANSCRIPTION FACTOR 2D (HSFA2D) is an upstream positive regulator of the LAZY1-mediated asymmetric auxin distribution pathway. We also show that two functionally redundant transcription factor genes, WUSCHEL RELATED HOMEOBOX6 (WOX6) and WOX11, are expressed asymmetrically in response to auxin to connect gravitropism responses with the control of rice tiller angle. These findings define upstream and downstream genetic components that link shoot gravitropism, asymmetric auxin distribution, and rice tiller angle. The results highlight the power of the high-temporal-resolution RNA-seq dataset, and its use to explore further genetic components controlling tiller angle. Collectively these approaches will identify genes to improve grain yields by facilitating the optimization of plant architecture. © 2018 American Society of Plant Biologists. All rights reserved.

  7. Density functional theory study of hydrogen atom abstraction from a series of para-substituted phenols: why is the Hammett σp⁺ constant able to represent radical reaction rates?

    PubMed

    Yoshida, Tatsusada; Hirozumi, Koji; Harada, Masataka; Hitaoka, Seiji; Chuman, Hiroshi

    2011-06-03

    The rate of hydrogen atom abstraction from phenolic compounds by a radical is known to be often linear with the Hammett substitution constant σ⁺, defined using the SN1 solvolysis rates of substituted cumyl chlorides. Nevertheless, a physicochemical reason for the above "empirical fact" has not been fully revealed. The transition states of complexes between the 2,2-diphenyl-1-picrylhydrazyl radical (dpph·) and a series of para-substituted phenols were determined by DFT (density functional theory) calculations, and then the activation energy as well as the homolytic bond dissociation energy of the O-H bond and the charge distribution in the transition state were calculated. The heterolytic bond dissociation energy of the C-Cl bond and the charge distribution in the corresponding para-substituted cumyl chlorides were calculated in parallel. Excellent correlations among σ⁺, charge distribution, and activation and bond dissociation energies revealed quantitatively that there is a strong similarity between the two reactions, showing that the electron deficiency of the π-electron system conjugated with a substituent plays a crucial role in determining the rates of the two reactions. The results provide a new insight into and physicochemical understanding of σ⁺ in the hydrogen abstraction from substituted phenols by a radical.

  8. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed that uses multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) within the "one pixel" region verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem, with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
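
    Because the measured radiance is linear in the area fractions once a temperature grid is fixed, a minimal version of the inversion can be posed as non-negative least squares; the sketch below assumes blackbody emission, a known temperature grid, and noise-free data, whereas the paper uses an improved Levenberg-Marquardt algorithm with a more complete radiation model:

        import numpy as np
        from scipy.optimize import nnls

        C1, C2 = 1.19104e8, 1.43877e4   # W um^4 m^-2 sr^-1 and um K

        def planck(lam_um, T):
            """Spectral radiance of a blackbody (wavelength in micrometres)."""
            return C1 / lam_um**5 / (np.exp(C2 / (lam_um * T)) - 1.0)

        def area_fractions(lams, measured, T_grid):
            """Recover temperature area fractions from multispectral radiances:
            measured(lam) = sum_i f_i * B(lam, T_i), with f_i >= 0."""
            A = np.array([[planck(l, T) for T in T_grid] for l in lams])
            f, _ = nnls(A, measured)
            return f / f.sum()   # normalize to unit total area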

  9. Moment Analysis Characterizing Water Flow in Repellent Soils from On- and Sub-Surface Point Sources

    NASA Astrophysics Data System (ADS)

    Xiong, Yunwu; Furman, Alex; Wallach, Rony

    2010-05-01

    Water repellency has a significant impact on water flow patterns in the soil profile. Flow tends to become unstable in such soils, which affects the water availability to plants and subsurface hydrology. In this paper, water flow in repellent soils was experimentally studied using the light reflection method. The transient 2D moisture profiles were monitored by a CCD camera for tested soils packed in a transparent flow chamber. Water infiltration experiments, and the subsequent redistribution, from on-surface and subsurface point sources with different flow rates were conducted for two soils of different repellency degrees as well as for a wettable soil. We used spatio-statistical analysis (moments) to characterize the flow patterns. The zeroth moment is related to the total volume of water inside the moisture plume, and the first and second moments are related to the center of mass and the spatial variances of the moisture plume, respectively. The experimental results demonstrate that both the general shape and size of the wetting plume and the moisture distribution within the plume for the repellent soils are significantly different from those for the wettable soil. The wetting plume of the repellent soils is smaller, narrower, and longer (finger-like) than that of the wettable soil, which tended to roundness. Compared to the wettable soil, where the soil water content decreases radially from the source, the moisture content for the water-repellent soils is higher, relatively uniform horizontally, and gradually increases with depth (saturation overshoot), indicating that flow tends to become unstable. Ellipses, defined around the mass center and whose semi-axes represent a particular number of spatial variances, were successfully used to simulate the spatial and temporal variation of the moisture distribution in the soil profiles. Cumulative probability functions were defined for the water enclosed in these ellipses. Practically identical cumulative probability functions (beta distribution) were obtained for all soils, all source types, and all flow rates. Further, the same distributions were obtained for the infiltration and redistribution processes. This attractive result demonstrates the competence and advantage of the moment analysis method.
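
    The moment description used here is easy to make concrete; a minimal sketch for a gridded 2D moisture-content field (pixel units; the actual grid spacing and calibration from the light reflection method must still be applied):

        import numpy as np

        def plume_moments(theta):
            """Spatial moments of a 2D moisture-content field theta(x, z):
            zeroth moment ~ total water volume, first ~ centre of mass,
            second (central) ~ spatial variances of the plume."""
            ys, xs = np.indices(theta.shape)
            m0 = theta.sum()
            xc, yc = (theta * xs).sum() / m0, (theta * ys).sum() / m0
            var_x = (theta * (xs - xc) ** 2).sum() / m0
            var_y = (theta * (ys - yc) ** 2).sum() / m0
            return m0, (xc, yc), (var_x, var_y)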

  10. 26 CFR 53.4942(a)-2 - Computation of undistributed income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... any taxable year as of any time, the amount by which: (1) The distributable amount (as defined in paragraph (b) of this section) for such taxable year, exceeds (2) The qualifying distributions (as defined...: (i) For taxable years beginning before January 1, 1982, an amount equal to the greater of the minimum...

  11. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
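
    A minimal sketch of the described error propagation, assuming Gaussian measurements and an illustrative nonlinear transform g; no linearization or derivatives are needed:

        import numpy as np

        rng = np.random.default_rng(1)

        # measurements y ~ N(mu, Sigma); propagate through a nonlinear g(y)
        mu = np.array([1.0, 2.0])
        Sigma = np.array([[0.01, 0.002], [0.002, 0.04]])
        g = lambda y: np.array([y[0] * y[1], np.sin(y[0])])

        samples = rng.multivariate_normal(mu, Sigma, size=100_000)
        gy = np.apply_along_axis(g, 1, samples)
        expectation = gy.mean(axis=0)            # E[g(y)] without linearization
        covariance = np.cov(gy, rowvar=False)    # full covariance of g(y)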

  12. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    NASA Astrophysics Data System (ADS)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; contributors, JET

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI, and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model, and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source for the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected as the main fusion product generator in the complete analysis calculation chain ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures, as well as cooling and balance-of-plant in DEMO applications and other reactor-relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from the ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics, and additionally AFSI will be used as part of the neutron transport calculation chain to model real diagnostics, instead of ideal synthetic diagnostics, for quantitative benchmarking.
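
    AFSI's 4D machinery is not reproduced here; the underlying idea, Monte Carlo integration of a rate coefficient from arbitrary reactant velocity distributions, can be sketched as follows (the velocity samplers and cross-section are user-supplied placeholders; in practice the Bosch-Hale parametrization would supply sigma_of_E):

        import numpy as np

        def rate_coefficient(sample_v1, sample_v2, sigma_of_E,
                             m_reduced, n=100_000):
            """Monte Carlo estimate of the reaction rate coefficient <sigma*v>
            from two arbitrary reactant velocity samplers.

            sample_v1(n) and sample_v2(n) must return (n, 3) velocity samples;
            sigma_of_E is the cross-section vs centre-of-mass energy."""
            v1 = sample_v1(n)
            v2 = sample_v2(n)
            vrel = np.linalg.norm(v1 - v2, axis=1)
            E_cm = 0.5 * m_reduced * vrel**2
            return np.mean(sigma_of_E(E_cm) * vrel)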

  13. A Bayesian kriging approach for blending satellite and ground precipitation observations

    USGS Publications Warehouse

    Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.

    2015-01-01

    Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality. Consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependency on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of satellite-derived estimates and any number of other variables—for this research we include elevation. Prior distributions are defined for all model parameters and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling. The posterior distributions of these parameters are required for spatial estimation, and thus are obtained prior to implementing the spatial kriging model. This functional framework is applied to model parameters obtained by sampling from the posterior distributions, and the residuals of the linear model are subject to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events. This procedure also provides a means to estimate a full conditional distribution of the “true” observed precipitation value at each grid cell.
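
    A minimal sketch of the first stage of this blending idea, assuming a plain least-squares fit where the paper instead samples coefficient posteriors by Markov chain Monte Carlo:

        import numpy as np

        def fit_linear(gauge, satellite, elevation):
            """Stage 1: gauge totals as a linear function of satellite
            estimates and elevation; the residuals then feed the spatial
            kriging model that yields the blended field."""
            X = np.column_stack([np.ones_like(satellite), satellite, elevation])
            beta, *_ = np.linalg.lstsq(X, gauge, rcond=None)
            residuals = gauge - X @ beta
            return beta, residuals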

  14. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity from fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting a power law fit resembled the empirical time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts the interpretation of transition dynamics (self-organizing versus probabilistic) and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
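
    The following sketch illustrates the mimicry problem on synthetic data: bout durations drawn from a two-exponential mixture (hypothetical time constants) are fitted with a continuous power-law maximum-likelihood estimator, and the Kolmogorov-Smirnov distance to the fitted model is computed:

      import numpy as np

      rng = np.random.default_rng(2)

      # Bout durations from a two-exponential mixture (illustrative values).
      tau1, tau2, p = 0.5, 8.0, 0.6
      nsamp = 5000
      comp = rng.random(nsamp) < p
      x = np.where(comp, rng.exponential(tau1, nsamp),
                   rng.exponential(tau2, nsamp))
      xmin = 0.1
      x = x[x >= xmin]

      # MLE for a continuous power law p(x) ~ x^-alpha, x >= xmin.
      alpha = 1.0 + x.size / np.sum(np.log(x / xmin))

      # KS distance between the empirical CDF and the fitted power-law CDF.
      xs = np.sort(x)
      ecdf = np.arange(1, xs.size + 1) / xs.size
      pl_cdf = 1.0 - (xs / xmin) ** (1.0 - alpha)
      ks = np.max(np.abs(ecdf - pl_cdf))
      print(f"alpha = {alpha:.2f}, KS distance = {ks:.3f}")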

  15. An experimental distributed microprocessor implementation with a shared memory communications and control medium

    NASA Technical Reports Server (NTRS)

    Mejzak, R. S.

    1980-01-01

    The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors in the configuration to be varied without any modification to the control structure. Decompositional elements of the DFT application function in terms of tasks and subtasks are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together to form a multiple-processing system by means of a shared memory facility. This facility consists of hardware which provides a bus structure to enable up to six microcomputers to be interconnected, with polling and arbitration logic so that only one processor has access to shared memory at any one time.
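
    A rough modern analogue of this control structure, sketched in Python: worker processes claim DFT subtasks (here, single frequency bins) through a shared task pointer guarded by a lock, so the number of processors can vary without changing the control logic. The task granularity and names are illustrative, not from the original system:

      import math
      from multiprocessing import Array, Lock, Process, Value

      N = 64  # DFT length

      def worker(samples, task_ptr, lock, out_re, out_im):
          n_pts = len(samples)
          while True:
              with lock:                     # hardware lock in the original
                  k = task_ptr.value         # next unclaimed frequency bin
                  if k >= n_pts:
                      return
                  task_ptr.value = k + 1
              re = im = 0.0
              for n in range(n_pts):         # X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}
                  ang = -2.0 * math.pi * k * n / n_pts
                  re += samples[n] * math.cos(ang)
                  im += samples[n] * math.sin(ang)
              out_re[k], out_im[k] = re, im

      if __name__ == "__main__":
          samples = Array("d", [math.sin(2 * math.pi * 5 * n / N)
                                for n in range(N)])
          out_re, out_im = Array("d", N), Array("d", N)
          task_ptr, lock = Value("i", 0), Lock()
          procs = [Process(target=worker,
                           args=(samples, task_ptr, lock, out_re, out_im))
                   for _ in range(4)]
          for p in procs:
              p.start()
          for p in procs:
              p.join()
          peak = max(range(N), key=lambda k: out_re[k]**2 + out_im[k]**2)
          print("dominant bin:", peak)       # expect 5 or N - 5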

  16. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
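
    For context, here is a sketch of the classical variance-based (Sobol) first-order index that DSA generalizes, estimated by Monte Carlo on a toy model; the model and sample sizes are illustrative only:

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          """Toy output: nonlinear in x1, linear in x2, inert x3."""
          return np.sin(x[:, 0]) + 0.5 * x[:, 1] + 0.0 * x[:, 2]

      n, d = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, size=(n, d))
      B = rng.uniform(-np.pi, np.pi, size=(n, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]        # resample only input i
          fABi = model(ABi)
          # Jansen-type estimator of the first-order index S_i.
          Si = (var - 0.5 * np.mean((fB - fABi) ** 2)) / var
          print(f"S_{i+1} ~ {Si:.3f}")   # expect ~0.38, ~0.62, ~0.0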

  17. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  18. Optimum design of structures subject to general periodic loads

    NASA Technical Reports Server (NTRS)

    Reiss, Robert; Qian, B.

    1989-01-01

    A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions of the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is expressed explicitly in terms of the Green's function and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.

  19. Regulation of adhesion behavior of murine macrophage using supported lipid membranes displaying tunable mannose domains

    NASA Astrophysics Data System (ADS)

    Kaindl, T.; Oelke, J.; Pasc, A.; Kaufmann, S.; Konovalov, O. V.; Funari, S. S.; Engel, U.; Wixforth, A.; Tanaka, M.

    2010-07-01

    Highly uniform, strongly correlated domains of synthetically designed lipids can be incorporated into supported lipid membranes. The systematic characterization of membranes displaying a variety of domains revealed that the equilibrium size of the domains depends significantly on the length of the fluorocarbon chains, which can be quantitatively interpreted within the framework of an equivalent dipole model. A monodisperse, narrow size distribution of the domains enables us to treat the inter-domain correlations as two-dimensional colloidal crystallization and to calculate the potentials of mean force. The obtained results demonstrated that both the size and the inter-domain correlation can be precisely controlled by the molecular structures. By coupling α-D-mannose to lipid head groups, we studied the adhesion behavior of the murine macrophage (J774A.1) on supported membranes. Specific adhesion and spreading of macrophages showed a clear dependence on the density of functional lipids. The obtained results suggest that such synthetic lipid domains can be used as a defined platform to study how cells sense the size and distribution of functional molecules during adhesion and spreading.

  20. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and of almost 30 dB when using two barriers, is achieved compared to conventional sound barriers.
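
    A one-dimensional sketch of the underlying material model, assuming a linear interpolation of inverse density and inverse bulk modulus between air and a reflecting solid. The material values, boundary conditions, and frequency are illustrative, and the gradient-based design updates used in the paper are omitted; only the forward solve and objective evaluation are shown:

      import numpy as np

      def helmholtz_objective(design, omega=2.0 * np.pi * 200.0, L=1.0):
          """|p|^2 at the far end of a 1-D duct driven by unit pressure at x=0."""
          n = design.size
          h = L / n
          inv_rho = (1.0 - design) / 1.2 + design / 2000.0  # air vs. solid
          inv_K = (1.0 - design) / 1.4e5 + design / 1.0e9   # bulk moduli
          A = np.zeros((n, n))
          for i in range(n):
              c_m = 0.5 * (inv_rho[max(i - 1, 0)] + inv_rho[i])
              c_p = 0.5 * (inv_rho[i] + inv_rho[min(i + 1, n - 1)])
              A[i, i] = -(c_m + c_p) / h**2 + omega**2 * inv_K[i]
              if i > 0:
                  A[i, i - 1] = c_m / h**2
              if i < n - 1:
                  A[i, i + 1] = c_p / h**2     # last row acts as a p = 0 wall
          b = np.zeros(n)
          A[0, :] = 0.0
          A[0, 0] = 1.0
          b[0] = 1.0                            # unit pressure source at x = 0
          p = np.linalg.solve(A, b)
          return p[-1] ** 2

      design = np.zeros(100)      # design variables in [0, 1] along the duct
      design[40:60] = 1.0         # candidate reflecting slab in the middle
      print(helmholtz_objective(design))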

  1. Regional cyst concentration as a prognostic biomarker for polycystic kidney disease

    NASA Astrophysics Data System (ADS)

    Warner, Joshua D.; Irazabal, Maria V.; Torres, Vicente E.; King, Bernard F.; Erickson, Bradley J.

    2014-03-01

    Polycystic kidney disease (PKD) is a major cause of renal failure. Despite recent advances in understanding the biochemistry and genetics of PKD, the functional mechanisms underpinning the declines in renal function observed in the disorder are not well established. No studies investigating the distribution of cysts within polycystic kidneys exist. This work introduces regional cyst concentration as a new biomarker for evaluation of patients suffering from PKD. We derive a method to define central and peripheral regions of the kidney, approximating the anatomical division between cortex and medulla, and apply it to two cohorts of ten patients with early/mild or late/severe disease. Our results from the late/severe cohort show peripheral cyst concentration correlates with the current standard PKD biomarker, total kidney volume (TKV), significantly better than central cyst concentration (p < 0.05). We also find that cyst concentration was globally increased in the late/severe cohort (p << 0.01) compared to the early/mild cohort, for both central and peripheral regions. These findings show cysts in PKD are not distributed homogeneously throughout the renal tissues.
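
    One plausible way to implement such a central/peripheral division is via a Euclidean distance transform of the kidney mask, sketched below on toy data; the split fraction is an assumption for illustration, not the paper's calibrated boundary:

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(4)

      # Toy 2-D kidney mask (disk) and random "cyst" voxels inside it.
      yy, xx = np.mgrid[:128, :128]
      kidney = (yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2
      cysts = kidney & (rng.random((128, 128)) < 0.05)

      depth = ndimage.distance_transform_edt(kidney)
      split = 0.4 * depth.max()            # assumed core/shell boundary
      central = kidney & (depth >= split)
      peripheral = kidney & (depth < split)

      for name, region in [("central", central), ("peripheral", peripheral)]:
          conc = cysts[region].mean()      # cyst voxels per region voxel
          print(f"{name}: concentration = {conc:.3f}")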

  2. Data set for the proteomic inventory and quantitative analysis of chicken eggshell matrix proteins during the primary events of eggshell mineralization and the active growth phase of calcification.

    PubMed

    Marie, Pauline; Labas, Valérie; Brionne, Aurélien; Harichaux, Grégoire; Hennequet-Antier, Christelle; Rodriguez-Navarro, Alejandro B; Nys, Yves; Gautron, Joël

    2015-09-01

    Chicken eggshell is a biomineral composed of 95% calcite calcium carbonate mineral and of 3.5% organic matrix proteins. The assembly of the mineral and its structural organization are controlled by its organic matrix. In a recent study [1], we used quantitative proteomic, bioinformatic and functional analyses to explore the distribution of 216 eggshell matrix proteins at four key stages of shell mineralization defined as: (1) widespread deposition of amorphous calcium carbonate (ACC), (2) ACC transformation into crystalline calcite aggregates, (3) formation of larger calcite crystal units and (4) rapid growth of calcite as a columnar structure with preferential crystal orientation. The current article details the quantitative analysis performed at the four stages of shell mineralization to determine which proteins are the most abundant. Additionally, we report the enriched GO terms and describe the presence of 35 antimicrobial proteins, equally distributed at all stages to keep the egg free of bacteria, and of 81 proteins whose function could not be ascribed.

  3. Data set for the proteomic inventory and quantitative analysis of chicken eggshell matrix proteins during the primary events of eggshell mineralization and the active growth phase of calcification

    PubMed Central

    Marie, Pauline; Labas, Valérie; Brionne, Aurélien; Harichaux, Grégoire; Hennequet-Antier, Christelle; Rodriguez-Navarro, Alejandro B.; Nys, Yves; Gautron, Joël

    2015-01-01

    Chicken eggshell is a biomineral composed of 95% calcite calcium carbonate mineral and of 3.5% organic matrix proteins. The assembly of the mineral and its structural organization are controlled by its organic matrix. In a recent study [1], we used quantitative proteomic, bioinformatic and functional analyses to explore the distribution of 216 eggshell matrix proteins at four key stages of shell mineralization defined as: (1) widespread deposition of amorphous calcium carbonate (ACC), (2) ACC transformation into crystalline calcite aggregates, (3) formation of larger calcite crystal units and (4) rapid growth of calcite as a columnar structure with preferential crystal orientation. The current article details the quantitative analysis performed at the four stages of shell mineralization to determine which proteins are the most abundant. Additionally, we report the enriched GO terms and describe the presence of 35 antimicrobial proteins, equally distributed at all stages to keep the egg free of bacteria, and of 81 proteins whose function could not be ascribed. PMID:26306314

  4. Grating-based X-ray Dark-field Computed Tomography of Living Mice.

    PubMed

    Velroyen, A; Yaroshenko, A; Hahn, D; Fehringer, A; Tapfer, A; Müller, M; Noël, P B; Pauwels, B; Sasov, A; Yildirim, A Ö; Eickelberg, O; Hellbach, K; Auweter, S D; Meinel, F G; Reiser, M F; Bech, M; Pfeiffer, F

    2015-10-01

    Changes in x-ray attenuating tissue caused by lung disorders like emphysema or fibrosis are subtle and thus only resolved by high-resolution computed tomography (CT). The structural reorganization, however, strongly influences lung function. Dark-field CT (DFCT), based on small-angle scattering of x-rays, reveals such structural changes even at resolutions coarser than the pulmonary network and thus provides access to their anatomical distribution. In this proof-of-concept study we present x-ray in vivo DFCTs of the lungs of a healthy, an emphysematous and a fibrotic mouse. The tomographies show excellent depiction of the distribution of structural - and thus indirectly functional - changes in lung parenchyma, on single-modality slices in dark field as well as on multimodal fusion images. We therefore anticipate numerous applications of DFCT in diagnostic lung imaging. We introduce a scatter-based Hounsfield unit (sHU) scale to facilitate comparability of scans. On this newly defined sHU scale, the pathophysiological changes caused by emphysema and fibrosis shift the values toward lower numbers compared to healthy lung tissue.

  5. Spectral bidirectional reflectance distribution function measurements on well-defined textured surfaces: direct observation of shadowing, masking, inter-reflection, and transparency effects.

    PubMed

    Wilen, Larry; Dasgupta, Bivash R

    2011-11-01

    We present results for the bidirectional reflectance distribution function (BRDF) for samples of uniform rectangular and triangular grooves constructed from polydimethylsiloxane replicas. The measurements are performed with the detector in the plane of incidence, but with varying groove orientations with respect to that plane. The samples are opaque in some cases and semitransparent in others. By measuring the BRDF for colored samples over a wide spectral range, we explicitly probe the effect of sample albedo, which is important for inter-reflections. For the opaque samples, we compare the results with exact theoretical results either taken from the literature (for the triangular geometry) or worked out here (for the rectangular geometry). For both geometries, we also extend the theoretical results to finite-length grooves. There is generally very good agreement between theory and experiment. Shadowing, masking, and inter-reflection are clearly observed, as well as effects that may be due to polarization and asperity scattering. For semitransparent samples, we observe the effect of increasing transparency on the BRDF.

  6. Dynamics and density distributions in a capillary-discharge waveguide with an embedded supersonic jet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matlis, N. H., E-mail: nmatlis@gmail.com; Gonsalves, A. J.; Steinke, S.

    We present an analysis of the gas dynamics and density distributions within a capillary-discharge waveguide with an embedded supersonic jet. This device provides a target for a laser plasma accelerator which uses longitudinal structuring of the gas-density profile to enable control of electron trapping and acceleration. The functionality of the device depends sensitively on the details of the density profile, which are determined by the interaction between the pulsed gas in the jet and the continuously-flowing gas in the capillary. These dynamics are captured by spatially resolving recombination light from several emission lines of the plasma as a function of the delay between the jet and the discharge. We provide a phenomenological description of the gas dynamics as well as a quantitative evaluation of the density evolution. In particular, we show that the pressure difference between the jet and the capillary defines three regimes of operation with qualitatively different longitudinal density profiles and show that jet timing provides a sensitive method for tuning between these regimes.

  7. Grating-based X-ray Dark-field Computed Tomography of Living Mice

    PubMed Central

    Velroyen, A.; Yaroshenko, A.; Hahn, D.; Fehringer, A.; Tapfer, A.; Müller, M.; Noël, P.B.; Pauwels, B.; Sasov, A.; Yildirim, A.Ö.; Eickelberg, O.; Hellbach, K.; Auweter, S.D.; Meinel, F.G.; Reiser, M.F.; Bech, M.; Pfeiffer, F.

    2015-01-01

    Changes in x-ray attenuating tissue caused by lung disorders like emphysema or fibrosis are subtle and thus only resolved by high-resolution computed tomography (CT). The structural reorganization, however, strongly influences lung function. Dark-field CT (DFCT), based on small-angle scattering of x-rays, reveals such structural changes even at resolutions coarser than the pulmonary network and thus provides access to their anatomical distribution. In this proof-of-concept study we present x-ray in vivo DFCTs of the lungs of a healthy, an emphysematous and a fibrotic mouse. The tomographies show excellent depiction of the distribution of structural – and thus indirectly functional – changes in lung parenchyma, on single-modality slices in dark field as well as on multimodal fusion images. We therefore anticipate numerous applications of DFCT in diagnostic lung imaging. We introduce a scatter-based Hounsfield unit (sHU) scale to facilitate comparability of scans. On this newly defined sHU scale, the pathophysiological changes caused by emphysema and fibrosis shift the values toward lower numbers compared to healthy lung tissue. PMID:26629545

  8. Relation between self-organized criticality and grain aspect ratio in granular piles

    NASA Astrophysics Data System (ADS)

    Denisov, D. V.; Villanueva, Y. Y.; Lőrincz, K. A.; May, S.; Wijngaarden, R. J.

    2012-05-01

    We investigate experimentally whether self-organized criticality (SOC) occurs in granular piles composed of different grains, namely rice, lentils, quinoa, and mung beans. These four grains were selected to have different aspect ratios, from oblong to oblate. As a function of aspect ratio, we determined the growth (β) and roughness (α) exponents, the avalanche fractal dimension (D), the avalanche size distribution exponent (τ), the critical angle (γ), and its fluctuation. On superficial inspection, three types of grains seem to have power-law-distributed avalanches with a well-defined τ. However, only rice is truly SOC if we take three criteria into account: a power-law-shaped avalanche size distribution, finite-size scaling, and a universal scaling relation connecting the characteristic exponents. We study SOC as a spatiotemporal fractal; in particular, we study the spatial structure of criticality from local observation of the slope angle. From the fluctuation of the slope angle we conclude that greater fluctuations (and thus bigger avalanches) occur in piles consisting of grains with larger aspect ratio.

  9. On the theory and simulation of multiple Coulomb scattering of heavy-charged particles.

    PubMed

    Striganov, S I

    2005-01-01

    The Molière theory of multiple Coulomb scattering is modified to take into account the difference between scattering off atomic nuclei and off electrons. A simple analytical expression for the angular distribution of charged particles passing through a thick absorber is found. It does not assume any special form for the differential scattering cross section and has a wider range of applicability than a Gaussian approximation. A well-known method to simulate multiple Coulomb scattering is based on treating 'soft' and 'hard' collisions differently. The angular deflection accumulated in a large number of 'soft' collisions is sampled using the proposed distribution function, while a small number of 'hard' collisions are simulated directly. A boundary between 'hard' and 'soft' collisions is defined that provides precise sampling of the scattering angle (at the 1% level) with a small number of 'hard' collisions. The corresponding simulation module takes into account the projectile and nucleus charge distributions and the exact kinematics of projectile-electron interactions.
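
    A toy version of the soft/hard splitting, assuming a hypothetical screened small-angle single-scattering density f(t) ∝ t/(t² + a²)²: soft deflections below the cutoff are aggregated into a Gaussian via the central limit theorem, while the few hard collisions are sampled directly from the analytic inverse CDF. All parameter values are illustrative, not fitted to any material:

      import numpy as np

      rng = np.random.default_rng(5)

      # Screening angle, hard/soft cutoff, max angle [rad], expected collisions.
      a, t_cut, t_max, N_tot = 2e-4, 2e-2, 0.3, 2e4

      def inv_sq(t):
          return 1.0 / (t * t + a * a)

      norm = inv_sq(0.0) - inv_sq(t_max)                  # total, up to a 1/2
      frac_hard = (inv_sq(t_cut) - inv_sq(t_max)) / norm  # hard fraction

      # E[t^2 over soft collisions], analytic (the 1/2 cancels against norm).
      mean_sq_soft = (np.log((t_cut**2 + a**2) / a**2)
                      - t_cut**2 / (t_cut**2 + a**2)) / norm
      sigma_proj = np.sqrt(N_tot * mean_sq_soft / 2.0)    # per projected plane

      def transport_one():
          tx, ty = rng.normal(0.0, sigma_proj, 2)         # aggregated soft part
          n_hard = rng.poisson(N_tot * frac_hard)         # few hard scatters
          u = rng.random(n_hard)
          s = inv_sq(t_cut) - u * (inv_sq(t_cut) - inv_sq(t_max))
          t = np.sqrt(1.0 / s - a * a)                    # inverse-CDF sample
          phi = rng.uniform(0.0, 2.0 * np.pi, n_hard)
          tx += np.sum(t * np.cos(phi))
          ty += np.sum(t * np.sin(phi))
          return np.hypot(tx, ty)

      angles = np.array([transport_one() for _ in range(2000)])
      print(f"mean space deflection: {angles.mean():.4f} rad")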

  10. Polarized Optical Scattering Measurements of Metallic Nanoparticles on a Thin Film Silicon Wafer

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Yang; Liu, Tze-An; Fu, Wei-En

    2009-09-01

    Light scattering has shown powerful diagnostic capability for characterizing optical-quality surfaces. In this study, the theory of the bidirectional reflectance distribution function (BRDF) was used to analyze the sizes of metallic nanoparticles on wafer surfaces. The BRDF of a surface is defined as the angular distribution of radiance scattered by the surface normalized by the irradiance incident on the surface. A goniometric optical scatter instrument has been developed to perform BRDF measurements of polarized light scattered from wafer surfaces, for measuring the diameter and distribution of metallic nanoparticles. The instrument is capable of distinguishing the various types of optical scattering characteristics corresponding to the diameters of metallic nanoparticles near surfaces by using the Mueller matrix calculation. The measured metallic nanoparticles have a diameter of 60 nm on 2-inch thin-film wafers. These measurement results demonstrate that the polarization of light scattered by metallic particles can be used to determine the size of metallic nanoparticles on silicon wafers.

  11. Theoretical Current-Voltage Curve in Low-Pressure Cesium Diode for Electron-Rich Emission

    NASA Technical Reports Server (NTRS)

    Goldstein, C. M.

    1964-01-01

    Although considerable interest has been shown in the space-charge analysis of low-pressure (collisionless) thermionic diodes, there is a conspicuous lack of results presented in a way that allows direct comparison with experiment. The current-voltage curve of this report was therefore computed for a typical case within the realm of experimental interest. The model employed in this computation is shown in Fig. 1 and is defined by the limiting potential distributions [curves (a) and (b)]. Curve (a) represents the potential V as a monotonic function of position with a slope of zero at the anode; curve (b) is similarly monotonic with a slope of zero at the cathode. It is assumed that, by a continuous variation of the anode voltage, the potential distributions vary continuously from one limiting form to the other. Although solutions for infinitely spaced electrodes show that spatially oscillatory potential distributions may exist, they have been neglected in this computation.

  12. Influence of the spectral power distribution of a LED on the illuminance responsivity of a photometer

    NASA Astrophysics Data System (ADS)

    Sametoglu, Ferhat

    2008-09-01

    The accuracy of photometric quantities measured with a photometer head is determined by the value of the spectral mismatch correction factor c(St, Ss), which is defined as a function of the spectral power distributions of the light sources as well as the illuminance responsivity of the photometer head used. This factor is all the more important when measuring photometric quantities of light-emitting diode (LED) sources, which radiate within relatively narrow spectral bands compared with other optical sources. Variations of the illuminance responsivities of various V(λ)-adapted photometer heads are discussed. High-power colored LEDs manufactured by Lumileds Lighting Co. were used as light sources, and their relative spectral power distributions (RSPDs) were measured using a spectrometer-based optical setup. The dependence of the c(St, Ss) factors of three types of photometer heads (f1'=1.4%, f1'=0.8% and f1'=0.5%) on wavelength, and the influence of these factors on the illuminance responsivities of the photometer heads, are presented.
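
    The spectral mismatch correction factor can be sketched as a ratio of V(λ)- and responsivity-weighted integrals of the test and reference spectra. In the sketch below, V(λ), the photometer responsivity, and the LED spectrum are crude Gaussian stand-ins for tabulated data, and the reference is a 2856 K Planckian (an illuminant-A stand-in), so the numbers are illustrative only:

      import numpy as np

      lam = np.arange(380.0, 781.0, 1.0)                  # wavelength grid [nm]
      V = np.exp(-0.5 * ((lam - 555.0) / 45.0) ** 2)      # crude V(lambda)
      s_rel = np.exp(-0.5 * ((lam - 560.0) / 50.0) ** 2)  # mismatched head

      def planck(lam_nm, T=2856.0):
          """Relative Planckian spectrum, an illuminant-A stand-in."""
          l = lam_nm * 1e-9
          return 1.0 / (l ** 5 * (np.exp(1.4388e-2 / (l * T)) - 1.0))

      S_ref = planck(lam)
      S_led = np.exp(-0.5 * ((lam - 470.0) / 12.0) ** 2)  # blue LED, ~470 nm

      def c_factor(S_test, S_src):
          # Uniform 1 nm grid, so plain sums stand in for the integrals.
          num = np.sum(S_test * V) * np.sum(S_src * s_rel)
          den = np.sum(S_test * s_rel) * np.sum(S_src * V)
          return num / den

      print(f"c(St, Ss) = {c_factor(S_led, S_ref):.4f}")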

  13. The kinematics of dense clusters of galaxies. II - The distribution of velocity dispersions

    NASA Technical Reports Server (NTRS)

    Zabludoff, Ann I.; Geller, Margaret J.; Huchra, John P.; Ramella, Massimo

    1993-01-01

    From a survey of 31 Abell R ≥ 1 cluster fields with redshifts z = 0.02-0.05, we extract 25 dense clusters with velocity dispersions σ above 300 km/s and with number densities exceeding the mean for the Great Wall of galaxies by one standard deviation. From the CfA Redshift Survey (in preparation), we obtain an approximately volume-limited catalog of 31 groups with velocity dispersions above 100 km/s and with the same number density limit. We combine these well-defined samples to obtain the distribution of cluster velocity dispersions. The group sample enables us to correct for incompleteness in the Abell catalog at low velocity dispersions. The clusters from the Abell cluster fields populate the high-dispersion tail. For systems with velocity dispersions above 700 km/s, approximately the median for R = 1 clusters, the group and cluster abundances are consistent. The combined distribution is consistent with cluster X-ray temperature functions.

  14. Screening Method Based on Walking Plantar Impulse for Detecting Musculoskeletal Senescence and Injury

    PubMed Central

    Fan, Yifang; Fan, Yubo; Li, Zhiyu; Newman, Tony; Lv, Changsheng; Zhou, Yi

    2013-01-01

    No consensus has been reached on how musculoskeletal system injuries or aging can be explained by the walking plantar impulse. We standardize the plantar impulse by defining a principal axis of plantar impulse. Based upon this standardized plantar impulse, two indexes are presented: the plantar pressure record time series and the plantar-impulse distribution along the principal axis. These indexes are applied to plantar impulse data collected by plantar pressure plates from three groups: Achilles tendon ruptures; elderly people (ages 62–71); and young people (ages 19–23). Our findings reveal that the plantar impulse distribution curves for Achilles tendon ruptures change irregularly as the subjects' walking speed changes. Compared with the distribution curves of the young subjects, the elderly subjects' phalangeal plantar pressure record time series differ significantly. This verifies our hypothesis that the plantar impulse can function as a means to assess and evaluate musculoskeletal system injuries and aging. PMID:24386288
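
    One way to define the principal axis of plantar impulse is as the leading eigenvector of the impulse-weighted covariance of contact coordinates, sketched below; the toy impulse map and the binning of the along-axis distribution are assumptions for illustration:

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy plantar impulse map: pressure integrated over time on a grid.
      impulse = rng.random((40, 20)) * (rng.random((40, 20)) > 0.6)

      ys, xs = np.nonzero(impulse)
      w = impulse[ys, xs]
      pts = np.column_stack([xs, ys]).astype(float)

      # Impulse-weighted centroid and covariance; the principal axis is the
      # leading eigenvector.
      mu = np.average(pts, axis=0, weights=w)
      d = pts - mu
      cov = (w[:, None] * d).T @ d / w.sum()
      evals, evecs = np.linalg.eigh(cov)
      axis = evecs[:, -1]                      # principal axis direction

      # Distribution of impulse along the principal axis.
      proj = d @ axis
      hist, edges = np.histogram(proj, bins=10, weights=w)
      print("axis:", axis)
      print("impulse along axis:", hist)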

  15. Bayes Factor Covariance Testing in Item Response Models.

    PubMed

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
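
    A small numerical check of the Helmert device on an assumed compound-symmetry covariance aI + bJ: the orthonormal Helmert matrix rotates it to diag(a + pb, a, ..., a), so one component carries the common covariance while the rest are i.i.d., which is what makes closed-form posteriors possible:

      import numpy as np

      def helmert(p):
          """Orthonormal Helmert matrix; first row is the scaled mean contrast."""
          H = np.zeros((p, p))
          H[0] = 1.0 / np.sqrt(p)
          for i in range(1, p):
              H[i, :i] = 1.0 / np.sqrt(i * (i + 1))
              H[i, i] = -i / np.sqrt(i * (i + 1))
          return H

      p, a, b = 5, 1.0, 0.3                    # CS covariance: a*I + b*J
      Sigma = a * np.eye(p) + b * np.ones((p, p))
      H = helmert(p)
      D = H @ Sigma @ H.T                      # should be diagonal
      print(np.round(D, 10))                   # diag = [a + p*b, a, ..., a]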

  16. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to those of the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
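
    A sketch of the three-point beta fit in its common PERT form, which fixes the fourth degree of freedom with a shape factor lam (lam = 4 reproduces the classical PERT mean (lo + 4*mode + hi)/6); this illustrates the general idea, not necessarily the in-house method of ref. 1, and the temperature values are hypothetical:

      import numpy as np
      from scipy import stats

      def beta_from_three_points(lo, mode, hi, lam=4.0):
          """Beta(alpha, beta) on [lo, hi] with the given mode (PERT-style)."""
          alpha = 1.0 + lam * (mode - lo) / (hi - lo)
          beta = 1.0 + lam * (hi - mode) / (hi - lo)
          return alpha, beta

      lo, mode, hi = 1400.0, 1600.0, 1700.0    # e.g. turbine inlet temp [K]
      a, b = beta_from_three_points(lo, mode, hi)
      dist = stats.beta(a, b, loc=lo, scale=hi - lo)
      print(f"alpha={a:.2f}, beta={b:.2f}")
      print(f"mean={dist.mean():.1f} K, std={dist.std():.1f} K")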

  17. THE MILKY WAY HAS NO DISTINCT THICK DISK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovy, Jo; Rix, Hans-Walter; Hogg, David W., E-mail: bovy@ias.edu

    2012-06-01

    Different stellar sub-populations of the Milky Way's stellar disk are known to have different vertical scale heights, their thickness increasing with age. Using SEGUE spectroscopic survey data, we have recently shown that mono-abundance sub-populations, defined in the [α/Fe]-[Fe/H] space, are well described by single-exponential spatial-density profiles in both the radial and the vertical direction; therefore, any star of a given abundance is clearly associated with a sub-population of scale height h_z. Here, we work out how to determine the stellar surface-mass density contributions at the solar radius R_0 of each such sub-population, accounting for the survey selection function, and for the fraction of the stellar population mass that is reflected in the spectroscopic target stars given populations of different abundances and their presumed age distributions. Taken together, this enables us to derive Σ_R0(h_z), the surface-mass contributions of stellar populations with scale height h_z. Surprisingly, we find no hint of a thin-thick disk bi-modality in this mass-weighted scale-height distribution, but a smoothly decreasing function, approximately Σ_R0(h_z) ∝ exp(-h_z), from h_z ≈ 200 pc to h_z ≈ 1 kpc. As h_z is ultimately the structurally defining property of a thin or thick disk, this shows clearly that the Milky Way has a continuous and monotonic distribution of disk thicknesses: there is no 'thick disk' sensibly characterized as a distinct component. We discuss how our result is consistent with evidence for seeming bi-modality in purely geometric disk decompositions or chemical abundance analyses. We constrain the total visible stellar surface-mass density at the solar radius to be Σ*_R0 = 30 ± 1 M_⊙ pc^-2.

  18. Higher-order phase transitions on financial markets

    NASA Astrophysics Data System (ADS)

    Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.

    2010-08-01

    Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market, where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (by additionally using the saddle-point approximation) the scaling, or power-dependent, form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name anomalous), while for positive ones it scales with the general exponent τ(q'). This exponent is a nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central quantity; it takes the form of a convolution (or superstatistics, used e.g. for describing turbulence as well as financial markets). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for the description of the transient photocurrent measured in amorphous glassy material) and the Gaussian one sometimes used in this context (e.g. for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between a phase where high-frequency trading is most visible and a phase defined by low-frequency trading. The specific order of the phase transition depends directly upon the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors is formulated.

  19. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.
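
    A flat-sky numerical sketch of one such translation-invariant tensor: the outer product of unit tangents integrated along level-set contours of a Gaussian random field, whose eigenvalue ratio indicates statistical (an)isotropy. The field, smoothing, and threshold below are illustrative, and marching-squares contours stand in for the paper's treatment of curves on the sphere:

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage import measure

      rng = np.random.default_rng(7)
      field = gaussian_filter(rng.normal(size=(256, 256)), sigma=8.0)
      nu = field.std()                         # threshold at +1 sigma

      W = np.zeros((2, 2))
      for contour in measure.find_contours(field, nu):
          seg = np.diff(contour, axis=0)       # tangent segments along curve
          lengths = np.linalg.norm(seg, axis=1)
          keep = lengths > 0
          that = seg[keep] / lengths[keep, None]
          # Accumulate the integral of (t ⊗ t) dl over the contour.
          W += np.einsum("ki,kj,k->ij", that, that, lengths[keep])

      evals = np.linalg.eigvalsh(W)
      print("eigenvalue ratio (isotropy ~ 1):", evals[0] / evals[1])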

  20. Generic functional requirements for a NASA general-purpose data base management system

    NASA Technical Reports Server (NTRS)

    Lohman, G. M.

    1981-01-01

    Generic functional requirements for a general-purpose, multi-mission data base management system (DBMS) for application to remotely sensed scientific data bases are detailed. The motivation for utilizing DBMS technology in this environment is explained. The major requirements include: (1) a DBMS for scientific observational data; (2) a multi-mission capability; (3) user-friendliness; (4) extensive and integrated information about data; (5) robust languages for defining data structures and formats; (6) scientific data types and structures; (7) flexible physical access mechanisms; (8) ways of representing spatial relationships; (9) a high-level nonprocedural interactive query and data manipulation language; (10) data base maintenance utilities; (11) high-rate input/output and large data volume storage; and (12) adaptability to a distributed data base and/or data base machine configuration. Detailed functions are specified in a top-down hierarchic fashion. Implementation, performance, and support requirements are also given.
