Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
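As a rough illustration of the waiting-time density described above, the sketch below builds ψ(t) as a sum of power-law × stretched-exponential terms and normalizes it numerically. The weights, exponents and time scales are placeholder values, not the BUND/LIFFE fit reported in the paper.

```python
import numpy as np

def waiting_time_pdf(t, weights, alphas, taus, betas):
    """Illustrative waiting-time density: a sum of terms, each a power law
    multiplied by a stretched exponential,
    psi(t) ~ sum_i w_i * t**(-alpha_i) * exp(-(t/tau_i)**beta_i).
    Parameter values are placeholders, not the paper's fitted values."""
    t = np.asarray(t, dtype=float)
    psi = np.zeros_like(t)
    for w, a, tau, b in zip(weights, alphas, taus, betas):
        psi += w * t ** (-a) * np.exp(-(t / tau) ** b)
    return psi

# Example: two components; normalize numerically on a finite grid.
t = np.linspace(0.01, 500.0, 20000)
psi = waiting_time_pdf(t, weights=[1.0, 0.2], alphas=[0.3, 0.8],
                       taus=[10.0, 100.0], betas=[0.7, 0.5])
psi /= np.trapz(psi, t)           # numerical normalization
mean_wait = np.trapz(t * psi, t)  # mean waiting time under these toy parameters
print(f"mean waiting time (toy parameters): {mean_wait:.2f}")
```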
On the continuity of the stationary state distribution of DPCM
NASA Astrophysics Data System (ADS)
Naraghi-Pour, Morteza; Neuhoff, David L.
1990-03-01
Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.
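A minimal simulation, assuming a Gaussian-driven AR(1) source and a unit step size, hints at the singular-continuity result: for prediction coefficients below one-half the reconstructed states concentrate on a Cantor-like set, so a fine histogram keeps many empty bins no matter how long the run.

```python
import numpy as np

rng = np.random.default_rng(0)

def dpcm_states(a, n_steps=100_000, delta=1.0, source_rho=0.9):
    """Simulate two-level DPCM (delta modulation) driven by a first-order
    autoregressive source and return the reconstructed states.
    'a' is the DPCM prediction coefficient; other values are illustrative."""
    s = 0.0   # AR(1) source sample
    x = 0.0   # DPCM reconstructed state
    states = np.empty(n_steps)
    for n in range(n_steps):
        s = source_rho * s + rng.standard_normal()
        pred = a * x
        q = delta if s >= pred else -delta   # two-level quantizer
        x = pred + q                         # state update
        states[n] = x
    return states

# With |a| < 1/2 the states live on a Cantor-like set, so a fine histogram
# shows many empty bins even with a long run; with |a| > 1/2 it does not.
for a in (0.3, 0.7):
    st = dpcm_states(a)
    hist, _ = np.histogram(st[1000:], bins=400)
    print(f"a={a}: fraction of empty histogram bins = {np.mean(hist == 0):.2f}")
```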
NASA Astrophysics Data System (ADS)
Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi
2015-12-01
This paper applies a continuous-time Markov chain to genome segments shared identical by descent (IBD) by two individuals under a full-sib model. The full-sib model is a continuous-time Markov chain with three states. Within this model, we derive the cumulative distribution function of the number of sub-segments with 2 IBD haplotypes in a chromosome segment of length t Morgans, and the cumulative distribution function of the number of sub-segments with at least 1 IBD haplotype in a segment of length t Morgans. These cumulative distribution functions are developed from the moment generating function.
Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...
Continuous distributions of specific ventilation recovered from inert gas washout
NASA Technical Reports Server (NTRS)
Lewis, S. M.; Evans, J. W.; Jalowayski, A. A.
1978-01-01
A new technique is described for recovering continuous distributions of ventilation as a function of tidal ventilation/volume ratio from the nitrogen washout. The analysis yields a continuous distribution of ventilation as a function of tidal ventilation/volume ratio represented as fractional ventilations of 50 compartments plus dead space. The procedure was verified by recovering known distributions from data to which noise had been added. Using an apparatus to control the subject's tidal volume and FRC, mixed expired N2 data gave the following results: (a) the distributions of young, normal subjects were narrow and unimodal; (b) those of subjects over age 40 were broader with more poorly ventilated units; (c) patients with pulmonary disease of all descriptions showed enlarged dead space; (d) patients with cystic fibrosis showed multimodal distributions with the bulk of the ventilation going to overventilated units; and (e) patients with obstructive lung disease fell into several classes, three of which are illustrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
The present comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation when fitting TL glow curves of materials with a continuous trap distribution using this equation.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
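A small sketch of the central object here: samples drawn from the reciprocal distribution on [a, b] (density 1/(x ln(b/a))) reproduce Benford's leading-digit law when b/a is a power of ten. The interval endpoints below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample from the reciprocal distribution on [a, b]: pdf f(x) = 1/(x ln(b/a)).
a, b, n = 1.0, 1e6, 200_000
u = rng.random(n)
x = a * (b / a) ** u          # inverse-CDF sampling

# Compare empirical leading-digit frequencies with Benford's law.
leading = (x / 10.0 ** np.floor(np.log10(x))).astype(int)   # first digit, 1..9
for d in range(1, 10):
    emp = np.mean(leading == d)
    benford = np.log10(1 + 1 / d)
    print(f"digit {d}: empirical {emp:.4f}  Benford {benford:.4f}")
```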
26 CFR 1.527-3 - Exempt function income.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 7 2011-04-01 2009-04-01 true Exempt function income. 1.527-3 Section 1.527-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED... be identified as relating to distributing political literature or organizing voters to vote for a...
26 CFR 1.527-3 - Exempt function income.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Exempt function income. 1.527-3 Section 1.527-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED... be identified as relating to distributing political literature or organizing voters to vote for a...
26 CFR 1.527-3 - Exempt function income.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 7 2014-04-01 2013-04-01 true Exempt function income. 1.527-3 Section 1.527-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED... be identified as relating to distributing political literature or organizing voters to vote for a...
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
Comment on the asymptotics of a distribution-free goodness of fit test statistic.
Browne, Michael W; Shapiro, Alexander
2015-03-01
In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
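The two decision functions contrasted in the study can be sketched with a toy discrete posterior; the bimodal prior and the Gaussian likelihood below are illustrative stand-ins for the experimental stimulus distributions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete bimodal prior over counts 1..20 (illustrative, not the paper's stimuli).
support = np.arange(1, 21)
prior = (np.exp(-0.5 * ((support - 6) / 1.5) ** 2)
         + np.exp(-0.5 * ((support - 14) / 1.5) ** 2))
prior /= prior.sum()

def posterior(noisy_obs, noise_sd=2.0):
    """Posterior over the true count given a noisy continuous observation."""
    like = np.exp(-0.5 * ((noisy_obs - support) / noise_sd) ** 2)
    post = prior * like
    return post / post.sum()

post = posterior(noisy_obs=10.0)

# Two decision functions considered in the paper:
sampled_estimate = rng.choice(support, p=post)   # draw a sample from the posterior
map_estimate = support[np.argmax(post)]          # take the posterior maximum
print(sampled_estimate, map_estimate)
```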
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
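A hedged sketch of this kind of model selection using scipy.stats: each candidate family is fitted by maximum likelihood and ranked by AIC. scipy's burr12 (Burr XII) stands in for the Burr family used in the paper, and the diameter data are simulated placeholders rather than field measurements.

```python
import numpy as np
from scipy import stats

# Placeholder diameters (cm); in the study these would be measured values.
diameters = stats.weibull_min.rvs(c=1.2, scale=8.0, size=500, random_state=42) + 3.0

candidates = {
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "Weibull 2P": stats.weibull_min,
    "Burr (XII)": stats.burr12,   # scipy's Burr XII as a stand-in for the Burr family
}

for name, dist in candidates.items():
    params = dist.fit(diameters, floc=0)              # location fixed at zero
    loglik = np.sum(dist.logpdf(diameters, *params))
    k = len(params) - 1                                # free parameters (loc was fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:>11s}: AIC = {aic:.1f}")
```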
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Hua-Sheng
2013-09-15
A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from the Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat top, triangular, κ or Lorentzian, slowing down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f_0/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
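For the baseline Maxwellian case, the plasma dispersion function and its analytic continuation can be evaluated through the Faddeeva function, as in the sketch below; the paper's contribution is the extension to the other distributions listed, which is not reproduced here.

```python
import numpy as np
from scipy.special import wofz

def plasma_dispersion_Z(zeta):
    """Standard plasma dispersion function for a Maxwellian background,
    Z(zeta) = i*sqrt(pi)*w(zeta), with w the Faddeeva function.  The Faddeeva
    function is entire, so this expression already provides the analytic
    continuation required by the Landau prescription."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def plasma_dispersion_Zprime(zeta):
    """Derivative Z'(zeta) = -2*(1 + zeta*Z(zeta))."""
    return -2.0 * (1.0 + zeta * plasma_dispersion_Z(zeta))

# Sanity checks: Z(0) = i*sqrt(pi), and Z ~ -1/zeta for |zeta| >> 1.
print(plasma_dispersion_Z(0.0))                  # ~ 1.7725j
print(plasma_dispersion_Z(50.0), -1.0 / 50.0)    # asymptotic comparison
```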
NASA Astrophysics Data System (ADS)
Min, Huang; Na, Cai
2017-06-01
In recent years, the ant colony algorithm has been widely used for optimization in discrete spaces, while research on continuous-space optimization has been comparatively limited. Building on the original algorithm for continuous spaces, this article proposes an improved ant colony algorithm for continuous-space optimization, so as to overcome the long search times of the ant colony algorithm in continuous spaces. The article improves the way the total amount of pheromone information in each interval and the appropriate number of ants are computed, and introduces a function that changes with the number of iterations in order to improve the convergence rate of the improved algorithm. The simulation results show that, compared with the result in literature [5], the proposed improved ant colony algorithm based on the information distribution function has better convergence performance. Thus, the article provides a new, feasible and effective method for the ant colony algorithm to solve this kind of problem.
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
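The arcsine-law behaviour of occupation times mentioned above is easy to see in a minimal simulation of a symmetric random walk (a stand-in for the CTRW in a potential considered in the paper): the fraction of time spent on one side of the origin follows the U-shaped Beta(1/2, 1/2) density.

```python
import numpy as np

rng = np.random.default_rng(4)

def fraction_positive(n_steps=2000):
    """Fraction of time a symmetric random walk spends above the origin."""
    steps = rng.choice([-1, 1], size=n_steps)
    walk = np.cumsum(steps)
    return np.mean(walk > 0)

fractions = np.array([fraction_positive() for _ in range(5000)])

# Lévy's arcsine law: the fraction follows a Beta(1/2, 1/2) (U-shaped) density.
hist, edges = np.histogram(fractions, bins=10, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
arcsine = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
for c, h, a in zip(centers, hist, arcsine):
    print(f"x={c:.2f}  simulated {h:.2f}  arcsine {a:.2f}")
```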
Foster, Tobias
2011-09-01
A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed by the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
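A sketch of the construction, with placeholder radii and shape parameter: the radial profile is assembled from Fermi-Dirac terms and the scattering amplitude is obtained by numerical quadrature; the exact parameterization used in the paper may differ.

```python
import numpy as np

def fermi_dirac_profile(r, R_out, R_in=0.0, d=1.0):
    """Radial density built from Fermi-Dirac terms: a smoothed sphere of radius
    R_out minus (optionally) a smoothed core of radius R_in, giving a shell.
    The combination used in the paper may differ; this is a sketch."""
    outer = 1.0 / (1.0 + np.exp((r - R_out) / d))
    inner = 1.0 / (1.0 + np.exp((r - R_in) / d)) if R_in > 0 else 0.0
    return outer - inner

def form_factor(q, R_out=50.0, R_in=0.0, d=2.0, n_r=4000):
    """Numerical scattering amplitude F(q) = 4*pi * int rho(r) * sinc(q r) * r^2 dr."""
    r = np.linspace(1e-6, R_out + 20 * d, n_r)
    rho = fermi_dirac_profile(r, R_out, R_in, d)
    q = np.atleast_1d(q)
    amp = np.array([np.trapz(rho * np.sinc(qi * r / np.pi) * r ** 2, r) for qi in q])
    return 4.0 * np.pi * amp

q = np.logspace(-2, 0, 5)
print(form_factor(q) / form_factor(1e-6))   # normalized to forward scattering
```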
Functionalized multi-walled carbon nanotube based sensors for distributed methane leak detection
This paper presents a highly sensitive, energy efficient and low-cost distributed methane (CH4) sensor system (DMSS) for continuous monitoring, detection and localization of CH4 leaks in natural gas infrastructure such as transmission and distribution pipelines, wells, and produc...
High level continuity for coordinate generation with precise controls
NASA Technical Reports Server (NTRS)
Eiseman, P. R.
1982-01-01
Coordinate generation techniques with precise local controls have been derived and analyzed for continuity requirements up to both the first and second derivatives, and have been projected to higher level continuity requirements from the established pattern. The desired local control precision was obtained when a family of coordinate surfaces could be uniformly distributed without a consequent creation of flat spots on the coordinate curves transverse to the family. Relative to the uniform distribution, the family could be redistributed from an a priori distribution function or from a solution adaptive approach, both without distortion from the underlying transformation which may be independently chosen to fit a nontrivial geometry and topology.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Xin; Garikapati, Venu M.; You, Daehyun
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
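One simple way to hard-wire the CDF requirements into a feedforward network is to force the relevant weights to be positive, as in the toy sketch below; the constraints and architecture used in the paper are more elaborate, so this is only an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MonotoneCDFNet:
    """Tiny feedforward net whose output is non-decreasing in its input.
    Positivity of the input-to-hidden and hidden-to-output weights is enforced
    by exponentiating unconstrained parameters; the sigmoid output keeps values
    in (0, 1).  This mimics the idea of weight constraints for valid CDFs; the
    paper's actual architecture and training procedure are more elaborate."""
    def __init__(self, n_hidden=8):
        self.w_raw = rng.standard_normal(n_hidden)   # exp(.) -> positive weights
        self.b = rng.standard_normal(n_hidden)
        self.v_raw = rng.standard_normal(n_hidden)
        self.c = 0.0

    def __call__(self, x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        h = sigmoid(np.exp(self.w_raw)[None, :] * x[:, None] + self.b[None, :])
        return sigmoid(h @ np.exp(self.v_raw) + self.c)

net = MonotoneCDFNet()
grain_size = np.linspace(-3, 3, 7)       # e.g. log grain size (placeholder scale)
cdf = net(grain_size)
print(np.all(np.diff(cdf) >= 0), cdf.round(3))   # monotone, values in (0, 1)
```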
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shifting keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, and in the meantime the maximum bit error rate for classical information is about 10^{-7}. Because continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade the digital communication systems to extend system function in the future.
NASA Technical Reports Server (NTRS)
Bathke, C. G.
1976-01-01
Electron energy distribution functions were calculated in a U235 plasma at 1 atmosphere for various plasma temperatures and neutron fluxes. The distributions are assumed to be a summation of a high energy tail and a Maxwellian distribution. The sources of energetic electrons considered are the fission-fragment induced ionization of uranium and the electron induced ionization of uranium. The calculation of the high energy tail is reduced to an electron slowing down calculation, from the most energetic source to the energy where the electron is assumed to be incorporated into the Maxwellian distribution. The pertinent collisional processes are electron-electron scattering and electron induced ionization and excitation of uranium. Two distinct methods were employed in the calculation of the distributions. One method is based upon the assumption of continuous slowing and yields a distribution inversely proportional to the stopping power. An iteration scheme is utilized to include the secondary electron avalanche. In the other method, a governing equation is derived without assuming continuous electron slowing. This equation is solved by a Monte Carlo technique.
Varga, Imre; Pipek, János
2003-08-01
We discuss some properties of the generalized entropies, called Rényi entropies, and their application to the case of continuous distributions. In particular, it is shown that these measures of complexity can be divergent; however, their differences are free from these divergences, thus enabling them to be good candidates for the description of the extension and the shape of continuous distributions. We apply this formalism to the projection of wave functions onto the coherent state basis, i.e., to the Husimi representation. We also show how the localization properties of the Husimi distribution on average can be reconstructed from its marginal distributions that are calculated in position and momentum space in the case when the phase space has no structure, i.e., no classical limit can be defined. Numerical simulations on a one-dimensional disordered system corroborate our expectations.
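The divergence-cancellation point can be demonstrated numerically: discretized Rényi entropies of a continuous density grow without bound as the bins shrink, while their differences settle to a finite value. The sketch below uses a standard normal density and orders 1 (approximated) and 2.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Discrete Rényi entropy of order alpha (alpha != 1)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Discretize a standard normal density with ever finer bins.
for n_bins in (100, 1000, 10000):
    x = np.linspace(-10, 10, n_bins + 1)
    centers = 0.5 * (x[:-1] + x[1:])
    width = x[1] - x[0]
    p = np.exp(-0.5 * centers ** 2) / np.sqrt(2 * np.pi) * width
    p /= p.sum()
    h1 = renyi_entropy(p, 0.999)      # close to the Shannon entropy
    h2 = renyi_entropy(p, 2.0)
    # Each entropy grows like -ln(width) as bins shrink, but the difference
    # converges to a finite, discretization-free measure of the shape.
    print(f"bins={n_bins:6d}  H_1~{h1:.3f}  H_2={h2:.3f}  H_1-H_2={h1 - h2:.4f}")
```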
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
DISTRIBUTED RC NETWORKS WITH RATIONAL TRANSFER FUNCTIONS,
A distributed RC circuit analogous to a continuously tapped transmission line can be made to have a rational short-circuit transfer admittance and one rational short-circuit driving-point admittance. A subcircuit of the same structure has a rational open-circuit transfer impedance and one rational open-circuit driving-point impedance. Hence, rational transfer functions may be obtained while considering either generator impedance or load.
Vercruysse, Jurgen; Toiviainen, Maunu; Fonteyne, Margot; Helkimo, Niko; Ketolainen, Jarkko; Juuti, Mikko; Delaet, Urbain; Van Assche, Ivo; Remon, Jean Paul; Vervaet, Chris; De Beer, Thomas
2014-04-01
Over the last decade, there has been increased interest in the application of twin screw granulation as a continuous wet granulation technique for pharmaceutical drug formulations. However, the mixing of granulation liquid and powder material during the short residence time inside the screw chamber and the atypical particle size distribution (PSD) of granules produced by twin screw granulation is not yet fully understood. Therefore, this study aims at visualizing the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging. First, the residence time of material inside the barrel was investigated as a function of screw speed and moisture content, followed by the visualization of the granulation liquid distribution as a function of different formulation and process parameters (liquid feed rate, liquid addition method, screw configuration, moisture content and barrel filling degree). The link between moisture uniformity and granule size distributions was also studied. For residence time analysis, increased screw speed and lower moisture content resulted in a shorter mean residence time and narrower residence time distribution. Besides, the distribution of granulation liquid was more homogeneous at higher moisture content and with more kneading zones on the granulator screws. After optimization of the screw configuration, a two-level full factorial experimental design was performed to evaluate the influence of moisture content, screw speed and powder feed rate on the mixing efficiency of the powder and liquid phase. From these results, it was concluded that only increasing the moisture content significantly improved the granulation liquid distribution. This study demonstrates that NIR chemical imaging is a fast and adequate measurement tool for allowing process visualization and hence for providing better process understanding of a continuous twin screw granulation system. Copyright © 2013 Elsevier B.V. All rights reserved.
Distribution and Supply Chain Management: Educating the Army Officer
2005-05-26
knowledge a logistics officer must have to function effectively in a supply chain and distribution management environment. It analyzes how officers...Educational Objectives. It discusses how the Army/DoD currently teaches supply chain and distribution management concepts in various programs, such as the...its educational curriculum, and that logisticians continue to gain operational experience in distribution management operations. The paper recommends
Grid-based Continual Analysis of Molecular Interior for Drug Discovery, QSAR and QSPR.
Potemkin, Andrey V; Grishina, Maria A; Potemkin, Vladimir A
2017-01-01
In 1979, R.D. Cramer and M. Milne made a first realization of 3D comparison of molecules by aligning them in space and by mapping their molecular fields to a 3D grid. Further, this approach was developed as the DYLOMMS (Dynamic Lattice-Oriented Molecular Modelling System) approach. In 1984, H. Wold and S. Wold proposed the use of partial least squares (PLS) analysis, instead of principal component analysis, to correlate the field values with biological activities. Then, in 1988, the method which was called CoMFA (Comparative Molecular Field Analysis) was introduced and the appropriate software became commercially available. Since 1988, many 3D QSAR methods, algorithms and their modifications have been introduced for solving virtual drug discovery problems (e.g., CoMSIA, CoMMA, HINT, HASL, GOLPE, GRID, PARM, Raptor, BiS, CiS, ConGO). All the methods can be divided into two groups (classes): (1) methods studying the exterior of molecules; (2) methods studying the interior of molecules. A series of grid-based computational technologies for Continual Molecular Interior analysis (CoMIn) is presented in the current paper. The grid-based analysis is fulfilled by means of a lattice construction analogously to many other grid-based methods. The further continual elucidation of molecular structure is performed in various ways. (i) In terms of intermolecular interaction potentials. These can be represented as a superposition of Coulomb and Van der Waals interactions and hydrogen bonds. All the potentials are well-known continual functions and their values can be determined at all lattice points for a molecule. (ii) In terms of quantum functions such as the electron density distribution, the Laplacian and Hamiltonian of the electron density distribution, the potential energy distribution, the distributions of the highest occupied and lowest unoccupied molecular orbitals, and their superposition. To reduce the time of calculations using quantum methods based on first principles, an original quantum free-orbital approach, AlteQ, is proposed. All the functions can be calculated using a quantum approach at a sufficient level of theory and their values can be determined at all lattice points for a molecule. Then, the molecules of a dataset can be superimposed in the lattice for the maximal coincidence (or minimal deviations) of the potentials (i) or the quantum functions (ii). The methods and criteria of the superimposition are discussed. After that, a functional relationship between biological activity or property and characteristics of the potentials (i) or functions (ii) is created. The methods of the quantitative relationship construction are discussed. New approaches for rational virtual drug design based on the intermolecular potentials and quantum functions are introduced. All the methods are implemented at the www.chemosophia.com web page. Therefore, a set of 3D QSAR approaches for continual molecular interior study, offering many opportunities for virtual drug discovery, virtual screening and ligand-based drug design, is presented. The continual elucidation of molecular structure is performed in terms of intermolecular interaction potentials and in terms of quantum functions such as the electron density distribution, the Laplacian and Hamiltonian of the electron density distribution, the potential energy distribution, the distributions of the highest occupied and lowest unoccupied molecular orbitals, and their superposition.
Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
NASA Astrophysics Data System (ADS)
Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.
The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both in the case of a frequency factor, s, independent of temperature, and in the case where s is a function of temperature.
10 CFR 51.66 - Environmental report-number of copies; distribution.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Environmental report-number of copies; distribution. 51.66 Section 51.66 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) ENVIRONMENTAL PROTECTION REGULATIONS FOR DOMESTIC LICENSING AND RELATED REGULATORY FUNCTIONS National Environmental Policy Act-Regulations...
NASA Astrophysics Data System (ADS)
Zhou, T.; Chen, A.; Zhou, Y.
2005-08-01
By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.
Aerodynamic influence coefficient method using singularity splines.
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Weber, J. A.; Lesferd, E. P.
1973-01-01
A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of the loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (termed 'spline'). Boundary conditions are satisfied in a least-square-error sense over the surface using a finite summing technique to approximate the integral.
Cascade impactors are particularly useful in determining the mass size distributions of particulate and individual chemical species. The impactor raw data must be inverted to reconstruct a continuous particle size distribution. An inversion method using a lognormal function for p...
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
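Under an L_p-norm constraint on an unconstrained continuous variable, the maximizing density has the generalized-Gaussian form f(x) ∝ exp(−λ|x|^p), and the straight-line relation between entropy and log-norm (slope one within the scale family) can be checked numerically, as in this sketch with arbitrary λ values.

```python
import numpy as np

def maxent_density(x, lam, p):
    """Maximum-entropy density under an L_p-norm (pth absolute moment)
    constraint: f(x) = C * exp(-lam * |x|**p), normalized numerically."""
    f = np.exp(-lam * np.abs(x) ** p)
    return f / np.trapz(f, x)

x = np.linspace(-60, 60, 400_001)
p = 3
for lam in (0.05, 0.2, 1.0, 5.0):
    f = maxent_density(x, lam, p)
    h = -np.trapz(f * np.log(f + 1e-300), x)                 # differential entropy
    lp_norm = np.trapz(f * np.abs(x) ** p, x) ** (1.0 / p)   # L_p norm of X
    # Within this family, h = ln(lp_norm) + constant (slope-one straight line).
    print(f"lambda={lam:4.2f}  ln||X||_p={np.log(lp_norm):+.4f}  h={h:+.4f}  "
          f"h - ln||X||_p = {h - np.log(lp_norm):.4f}")
```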
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model where misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
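For reference, the Lindley density, survival and hazard functions are simple closed forms; the sketch below (with an arbitrary θ) shows the increasing hazard that distinguishes the Lindley model from the constant-hazard exponential.

```python
import numpy as np

def lindley_pdf(x, theta):
    """Lindley density f(x) = theta^2/(theta+1) * (1+x) * exp(-theta*x), x > 0."""
    return theta ** 2 / (theta + 1.0) * (1.0 + x) * np.exp(-theta * x)

def lindley_survival(x, theta):
    """Survival function S(x) = (1 + theta*x/(theta+1)) * exp(-theta*x)."""
    return (1.0 + theta * x / (theta + 1.0)) * np.exp(-theta * x)

def lindley_hazard(x, theta):
    """Hazard h(x) = f(x)/S(x); increasing in x, unlike the constant hazard
    of the exponential distribution."""
    return lindley_pdf(x, theta) / lindley_survival(x, theta)

x = np.linspace(0.0, 5.0, 6)
print(lindley_hazard(x, theta=1.0))   # rises from theta^2/(theta+1) toward theta
```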
Distribution functions of probabilistic automata
NASA Technical Reports Server (NTRS)
Vatan, F.
2001-01-01
Each probabilistic automaton M over an alphabet A defines a probability measure Prob sub(M) on the set of all finite and infinite words over A. We can identify a k letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable and the distribution function of M is defined as usual: F(x) := Prob sub(M) { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new, and much easier, method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random composition of these two systems.
10 CFR 51.119 - Publication of finding of no significant impact; distribution.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Publication of finding of no significant impact; distribution. 51.119 Section 51.119 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) ENVIRONMENTAL PROTECTION REGULATIONS FOR DOMESTIC LICENSING AND RELATED REGULATORY FUNCTIONS National Environmental Policy Act...
10 CFR 51.119 - Publication of finding of no significant impact; distribution.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Publication of finding of no significant impact; distribution. 51.119 Section 51.119 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) ENVIRONMENTAL PROTECTION REGULATIONS FOR DOMESTIC LICENSING AND RELATED REGULATORY FUNCTIONS National Environmental Policy Act...
Variable Weight Fractional Collisions for Multiple Species Mixtures
2017-08-28
DISTRIBUTION A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED; PA #17517. [Briefing excerpt] Variable weights for dynamic range — continuum to discrete representation: many particles are modelled as a continuous distribution; a discretized velocity distribution function (VDF) yields the Vlasov equation, but the collision integral remains a problem; particle methods reduce the VDF to a set of delta functions with collisions between discrete velocities, but the tail is poorly resolved (and the tail is critical to inelastic collisions); variable weights permit extra DOF in …
The steady aerodynamics of aerofoils with porosity gradients.
Hajian, Rozhin; Jaworski, Justin W
2017-09-01
This theoretical study determines the aerodynamic loads on an aerofoil with a prescribed porosity distribution in a steady incompressible flow. A Darcy porosity condition on the aerofoil surface furnishes a Fredholm integral equation for the pressure distribution, which is solved exactly and generally as a Riemann-Hilbert problem provided that the porosity distribution is Hölder-continuous. The Hölder condition includes as a subset any continuously differentiable porosity distributions that may be of practical interest. This formal restriction on the analysis is examined by a class of differentiable porosity distributions that approach a piecewise, discontinuous function in a certain parametric limit. The Hölder-continuous solution is verified in this limit against analytical results for partially porous aerofoils in the literature. Finally, a comparison is made between the new theoretical predictions and experimental measurements of SD7003 aerofoils presented in the literature. Results from this analysis may be integrated into a theoretical framework to optimize turbulence noise suppression with minimal impact to aerodynamic performance.
The steady aerodynamics of aerofoils with porosity gradients
NASA Astrophysics Data System (ADS)
Hajian, Rozhin; Jaworski, Justin W.
2017-09-01
This theoretical study determines the aerodynamic loads on an aerofoil with a prescribed porosity distribution in a steady incompressible flow. A Darcy porosity condition on the aerofoil surface furnishes a Fredholm integral equation for the pressure distribution, which is solved exactly and generally as a Riemann-Hilbert problem provided that the porosity distribution is Hölder-continuous. The Hölder condition includes as a subset any continuously differentiable porosity distributions that may be of practical interest. This formal restriction on the analysis is examined by a class of differentiable porosity distributions that approach a piecewise, discontinuous function in a certain parametric limit. The Hölder-continuous solution is verified in this limit against analytical results for partially porous aerofoils in the literature. Finally, a comparison is made between the new theoretical predictions and experimental measurements of SD7003 aerofoils presented in the literature. Results from this analysis may be integrated into a theoretical framework to optimize turbulence noise suppression with minimal impact to aerodynamic performance.
NASA Technical Reports Server (NTRS)
Crane, R. K.
1975-01-01
An experiment was conducted to study the relations between the empirical distribution functions of reflectivity at specified locations above the surface and the corresponding functions at the surface. A bistatic radar system was used to measure continuously the scattering cross section per unit volume at heights of 3 and 6 km. A frequency of 3.7 GHz was used in the tests. It was found that the distribution functions for reflectivity may significantly change with height at heights below the level of the melting layer.
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count are observed over a continuous domain and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
Evolution of Linux operating system network
NASA Astrophysics Data System (ADS)
Xiao, Guanping; Zheng, Zheng; Wang, Haoqin
2017-01-01
Linux operating system (LOS) is a sophisticated man-made system and one of the most ubiquitous operating systems. However, there is little research on the structure and functionality evolution of LOS from the perspective of networks. In this paper, we investigate the evolution of the LOS network. 62 major releases of LOS ranging from versions 1.0 to 4.1 are modeled as directed networks in which functions are denoted by nodes and function calls are denoted by edges. It is found that the size of the LOS network grows almost linearly, while the clustering coefficient monotonically decays. The degree distributions are almost the same: the out-degree follows an exponential distribution while both in-degree and undirected degree follow power-law distributions. We further explore the functionality evolution of the LOS network. It is observed that the evolution of functional modules is shown as a sequence of seven events (changes) succeeding each other, including continuing, growth, contraction, birth, splitting, death and merging events. By means of a statistical analysis of these events in the top 4 largest components (i.e., arch, drivers, fs and net), it is shown that continuing, growth and contraction events occupy more than 95% of events. Our work exemplifies a better understanding and description of the dynamics of LOS evolution.
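The network construction used here, functions as nodes and calls as directed edges, can be sketched with networkx on a hand-made toy call graph; the function names below are placeholders rather than an actual parse of the Linux sources.

```python
import collections
import networkx as nx

# Toy call graph: nodes are functions, edges are "caller -> callee" relations.
# The paper builds these graphs by parsing the Linux source; this edge list is
# only a hand-made placeholder.
calls = [
    ("sys_open", "do_filp_open"), ("sys_open", "getname"),
    ("do_filp_open", "path_openat"), ("path_openat", "link_path_walk"),
    ("sys_read", "vfs_read"), ("vfs_read", "rw_verify_area"),
    ("vfs_read", "file_pos_read"), ("sys_write", "vfs_write"),
    ("vfs_write", "rw_verify_area"),
]
G = nx.DiGraph(calls)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("clustering coefficient:", nx.average_clustering(G.to_undirected()))

# Empirical in-/out-degree distributions (the paper fits exponential and
# power-law forms to these).
in_hist = collections.Counter(d for _, d in G.in_degree())
out_hist = collections.Counter(d for _, d in G.out_degree())
print("in-degree histogram:", dict(in_hist))
print("out-degree histogram:", dict(out_hist))
```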
ON CONTINUOUS-REVIEW (S-1,S) INVENTORY POLICIES WITH STATE-DEPENDENT LEADTIMES,
(INVENTORY CONTROL, *REPLACEMENT THEORY), MATHEMATICAL MODELS, LEAD TIME, MANAGEMENT ENGINEERING, DISTRIBUTION FUNCTIONS, PROBABILITY, QUEUEING THEORY, COSTS, OPTIMIZATION, STATISTICAL PROCESSES, DIFFERENCE EQUATIONS
A computer model of molecular arrangement in a n-paraffinic liquid
NASA Astrophysics Data System (ADS)
Vacatello, Michele; Avitabile, Gustavo; Corradini, Paolo; Tuzi, Angela
1980-07-01
A computer model of a bulk liquid polymer was built to investigate the problem of local order. The model is made of C30 n-alkane molecules; it is not a lattice model, but it allows for a continuous variability of torsion angles and interchain distances, subject to realistic intra- and intermolecular potentials. Experimental x-ray scattering curves and radial distribution functions are well reproduced. Calculated properties like end-to-end distances, distribution of torsion angles, radial distribution functions, and chain direction correlation parameters, all indicate a random coil conformation and no tendency to form bundles of parallel chains.
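A radial distribution function of the kind compared against experiment here can be computed directly from particle coordinates with the minimum-image convention, as in this sketch; the configuration used below is an ideal-gas placeholder rather than the C30 n-alkane model of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def radial_distribution(positions, box, r_max, n_bins=100):
    """Pair radial distribution function g(r) for particles in a cubic box
    with periodic boundary conditions (minimum-image convention)."""
    n = len(positions)
    rho = n / box ** 3
    dr = r_max / n_bins
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)               # minimum image
        dist = np.linalg.norm(d, axis=1)
        hist, _ = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
        counts += hist
    r = (np.arange(n_bins) + 0.5) * dr
    shell_vol = 4.0 * np.pi * r ** 2 * dr
    # each pair is counted once -> factor 2/n for per-particle normalization
    return r, 2.0 * counts / (n * rho * shell_vol)

# Placeholder configuration: an ideal gas, for which g(r) ~ 1 at all r.
pos = rng.random((500, 3)) * 20.0
r, g = radial_distribution(pos, box=20.0, r_max=8.0)
print(g[:5].round(2), g[-5:].round(2))
```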
The beta Burr type X distribution properties with application.
Merovci, Faton; Khaleel, Mundher Abdullah; Ibrahim, Noor Akma; Shitan, Mahendran
2016-01-01
We develop a new continuous distribution called the beta-Burr type X distribution that extends the Burr type X distribution. The properties provide a comprehensive mathematical treatment of this distribution. Furthermore, various structural properties of the new distribution are derived, including the moment generating function and the rth moment, thus generalizing some results in the literature. We also obtain expressions for the density, moment generating function and rth moment of the order statistics. We consider maximum likelihood estimation to estimate the parameters. Additionally, the asymptotic confidence intervals for the parameters are derived from the Fisher information matrix. Finally, a simulation study is carried out under varying sample sizes to assess the performance of this model. Illustration with a real dataset indicates that this new distribution can serve as a good alternative model for modelling positive real data in many areas.
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
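The core likelihood ingredient, an inhomogeneous Poisson rate represented by a cubic B-spline with non-negative control vertices, can be sketched as follows; the knots, coefficients and event list are placeholders, and the spatial and smoothness penalties of the full method are omitted.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline rate function lambda(t) on [0, T] with non-negative control
# vertices (coefficients).  Values are placeholders, not a fitted PET study.
T = 60.0
knots = np.concatenate(([0.0] * 4, np.linspace(10, 50, 5), [T] * 4))
coeffs = np.array([2.0, 5.0, 9.0, 7.0, 4.0, 3.0, 2.5, 2.0, 1.5])
rate = BSpline(knots, coeffs, k=3)

def listmode_loglik(arrival_times, rate, T, n_quad=2000):
    """Inhomogeneous-Poisson log-likelihood of event arrival times:
    sum_i log(lambda(t_i)) - integral_0^T lambda(t) dt."""
    t_grid = np.linspace(0.0, T, n_quad)
    integral = np.trapz(rate(t_grid), t_grid)
    return np.sum(np.log(rate(arrival_times))) - integral

rng = np.random.default_rng(7)
events = rng.uniform(0.0, T, size=500)       # placeholder event list
print(listmode_loglik(events, rate, T))
```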
NASA Astrophysics Data System (ADS)
Matsui, Hiroyuki; Mishchenko, Andrei S.; Hasegawa, Tatsuo
2010-02-01
We developed a novel method for obtaining the distribution of trapped carriers over their degree of localization in organic transistors, based on the fine analysis of electron spin resonance spectra at low enough temperatures where all carriers are localized. To apply the method to pentacene thin-film transistors, we proved through continuous wave saturation experiments that all carriers are localized at below 50 K. We analyzed the spectra at 20 K and found that the major groups of traps comprise localized states having wave functions spanning around 1.5 and 5 molecules and a continuous distribution of states with spatial extent in the range between 6 and 20 molecules.
Modelling Root Systems Using Oriented Density Distributions
NASA Astrophysics Data System (ADS)
Dupuy, Lionel X.
2011-09-01
Root architectural models are essential tools to understand how plants access and utilize soil resources during their development. However, root architectural models use complex geometrical descriptions of the root system, and this limits their ability to model interactions with the soil. This paper presents the development of continuous models based on the concept of the oriented density distribution function. The growth of the root system is described as a hierarchical system of partial differential equations (PDEs) that incorporate single root growth parameters such as elongation rate, gravitropism and branching rate, which appear explicitly as coefficients of the PDEs. Acquisition and transport of nutrients are then modelled by extending Darcy's law to oriented density distribution functions. This framework was applied to build a model of the growth and water uptake of a barley root system. This study shows that simplified and computationally efficient continuous models of root system development can be constructed. Such models will allow application of root growth models at field scale.
Equilibrium structure of δ-Bi(2)O(3) from first principles.
Music, Denis; Konstantinidis, Stephanos; Schneider, Jochen M
2009-04-29
Using ab initio calculations, we have systematically studied the structure of δ-Bi(2)O(3) (fluorite prototype, 25% oxygen vacancies) probing [Formula: see text] and combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering, random distribution of oxygen vacancies with two different statistical descriptions as well as local relaxations. We observe that the combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering is the most stable configuration. Radial distribution functions for these configurations can be classified as discrete (ordered configurations) and continuous (random configurations). This classification can be understood on the basis of local structural relaxations. Up to 28.6% local relaxation of the oxygen sublattice is present in the random configurations, giving rise to continuous distribution functions. The phase stability obtained may be explained with the bonding analysis. Electron lone-pair charges in the predominantly ionic Bi-O matrix may stabilize the combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering.
A Poisson process approximation for generalized K-5 confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
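For orientation, a baseline sketch of a one-sided empirical-CDF confidence band using the classical (unweighted) one-sided Kolmogorov-Smirnov critical value is given below; the paper's generalized distance and Poisson-process approximation are not reproduced.

```python
# Baseline sketch: lower (1 - alpha) confidence band for a continuous CDF from the
# empirical CDF and the asymptotic one-sided K-S critical value
# d_alpha ~ sqrt(-ln(alpha) / (2n)), since P(sup(Fn - F) > d) ~ exp(-2 n d^2).
import numpy as np

def one_sided_lower_band(sample, alpha=0.05):
    """Return sorted sample, empirical CDF, and a lower (1 - alpha) confidence band."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    ecdf = np.arange(1, n + 1) / n
    d_alpha = np.sqrt(-np.log(alpha) / (2.0 * n))   # asymptotic one-sided critical value
    lower = np.clip(ecdf - d_alpha, 0.0, 1.0)
    return x, ecdf, lower

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, ecdf, lower = one_sided_lower_band(rng.exponential(size=200))
    true_cdf = 1.0 - np.exp(-x)                     # the true exponential CDF
    # With ~95% confidence the true CDF lies above the lower band everywhere
    print("band violated anywhere:", np.any(true_cdf < lower))
```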
Morphological study on the prediction of the site of surface slides
Hiromasa Hiura
1991-01-01
The annual continual occurrence of surface slides in the basin was estimated by modifying the estimation formula of Yoshimatsu. The Weibull Distribution Function proved to be useful for presenting the state and the transition of surface slides in the basin. Three parameters of the Weibull Function are found to be linear functions of the area ratio a/A. The...
Rajeswaran, Jeevanantham; Blackstone, Eugene H; Barnard, John
2018-07-01
In many longitudinal follow-up studies, we observe more than one longitudinal outcome. Impaired renal and liver functions are indicators of poor clinical outcomes for patients who are on mechanical circulatory support and awaiting heart transplant. Hence, monitoring organ functions while waiting for heart transplant is an integral part of patient management. Longitudinal measurements of bilirubin can be used as a marker for liver function and glomerular filtration rate for renal function. We derive an approximation to the evolution of the association between these two organ functions using a bivariate nonlinear mixed effects model for continuous longitudinal measurements, where the two submodels are linked by a common distribution of time-dependent latent variables and a common distribution of measurement errors.
Continuous wave cavity ring-down spectroscopy for velocity distribution measurements in plasma.
McCarren, D; Scime, E
2015-10-01
We report the development of a continuous wave cavity ring-down spectroscopic (CW-CRDS) diagnostic for real-time, in situ measurement of velocity distribution functions of ions and neutral atoms in plasma. This apparatus is less complex than conventional CW-CRDS systems. We provide a detailed description of the CW-CRDS apparatus as well as measurements of argon ions and neutrals in a high-density (10^9 cm^-3 < plasma density < 10^13 cm^-3) plasma. The CW-CRDS measurements are validated through comparison with laser induced fluorescence measurements of the same absorbing states of the ions and neutrals.
Functional Literacy and Continuing Education by Television
ERIC Educational Resources Information Center
Paiva e Souza, Alfredina de
1970-01-01
As a result of a pilot project (in Rio de Janeiro) of functional literacy for adolescents and adults by television, 90 percent of the students in experimental "tele-classes" became literate with 36 broadcasts of 20 minutes each, distributed over three months three times each week, supported by 50 minutes of discussion and other activities…
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
Wigner Function Reconstruction in Levitated Optomechanics
NASA Astrophysics Data System (ADS)
Rashid, Muddassar; Toroš, Marko; Ulbricht, Hendrik
2017-10-01
We demonstrate the reconstruction of the Wigner function from marginal distributions of the motion of a single trapped particle using homodyne detection. We show that it is possible to generate quantum states of levitated optomechanical systems even under the effect of continuous measurement by the trapping laser light. We describe the optomechanical coupling for the case of the particle trapped by a free-space focused laser beam, explicitly for the case without an optical cavity. We use the scheme to reconstruct the Wigner function of experimental data in perfect agreement with the expected Gaussian distribution of a thermal state of motion. This opens a route for quantum state preparation in levitated optomechanics.
Automated generation of influence functions for planar crack problems
NASA Technical Reports Server (NTRS)
Sire, Robert A.; Harris, David O.; Eason, Ernest D.
1989-01-01
A numerical procedure for the generation of influence functions for Mode I planar problems is described. The resulting influence functions are in a form for convenient evaluation of stress-intensity factors for complex stress distributions. Crack surface displacements are obtained by a least-squares solution of the Williams eigenfunction expansion for displacements in a cracked body. Discrete values of the influence function, evaluated using the crack surface displacements, are curve fit using an assumed functional form. The assumed functional form includes appropriate limit-behavior terms for very deep and very shallow cracks. Continuous representation of the influence function provides a convenient means for evaluating stress-intensity factors for arbitrary stress distributions by numerical integration. The procedure is demonstrated for an edge-cracked strip and a radially cracked disk. Comparisons with available published results demonstrate the accuracy of the procedure.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.
Rare-event statistics and modular invariance
NASA Astrophysics Data System (ADS)
Nechaev, S. K.; Polovnikov, K.
2018-01-01
Simple geometric arguments based on constructing the Euclid orchard are presented, which explain the equivalence of various types of distributions that result from rare-event statistics. In particular, the spectral density of the exponentially weighted ensemble of linear polymer chains is examined for its number-theoretic properties. It can be shown that the eigenvalue statistics of the corresponding adjacency matrices in the sparse regime show a peculiar hierarchical structure and are described by the popcorn (Thomae) function discontinuous in the dense set of rational numbers. Moreover, the spectral edge density distribution exhibits Lifshitz tails, reminiscent of 1D Anderson localization. Finally, a continuous approximation for the popcorn function is suggested based on the Dedekind η-function, and the hierarchical ultrametric structure of the popcorn-like distributions is demonstrated to be related to hidden SL(2,Z) modular symmetry.
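For readers unfamiliar with the popcorn (Thomae) function referenced above, a minimal evaluation on rationals is sketched below; it only illustrates the definition f(p/q) = 1/q, not the spectral analysis of the paper.

```python
# The popcorn (Thomae) function: f(p/q) = 1/q for a rational p/q in lowest terms,
# and f(x) = 0 for irrational x. Evaluated on rationals with bounded denominator,
# which is enough to see the hierarchical structure mentioned in the abstract.
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    """Popcorn function on rationals: 1/q for x = p/q in lowest terms."""
    return Fraction(1, x.denominator)

if __name__ == "__main__":
    # Values on the grid of rationals in (0, 1) with denominator <= 6
    pts = sorted({Fraction(p, q) for q in range(2, 7) for p in range(1, q)})
    for x in pts:
        print(f"f({x}) = {thomae(x)}")
```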
The Center for Astrophysics Redshift Survey - Recent results
NASA Technical Reports Server (NTRS)
Geller, Margaret J.; Huchra, John P.
1989-01-01
Six strips of the CfA redshift survey extension are now complete. The data continue to support a picture in which galaxies are on thin sheets which nearly surround vast low-density voids. The largest structures are comparable with the extent of the survey. Voids like the one in Bootes are a common feature of the large-scale distribution of galaxies. The issue of fair samples of the galaxy distribution is discussed, examining statistical measures of the galaxy distribution including the two-point correlation functions.
Internal services simulation control in 220/110kV power transformer station Mintia
NASA Astrophysics Data System (ADS)
Ciulica, D.; Rob, R.
2018-01-01
The main objectives in developing the electric transport and distribution network infrastructure are satisfying the electric energy demand, ensuring the continuity of supply to customers, and minimizing electricity losses in the transmission and distribution networks of public interest. This paper presents simulations of the operation of the 400/230 V ac internal services system in the 220/110 kV power transformer station Mintia. The simulations, implemented in Visual Basic, rest on the following premises: all ac consumers of the 220/110 kV power transformer station Mintia are supplied by three 400/230 V internal services transformers that can serve as mutual reserves; in case of damage to one transformer, the others are able to take over the entire consumption through automatic reserve switching. The simulation program studies three variants in which the continuity of supply to customers is ensured, and all operating situations are analyzed in detail through simulation.
NASA Astrophysics Data System (ADS)
McCarren, Dustin; Vandervort, Robert; Soderholm, Mark; Carr, Jerry, Jr.; Galante, Matthew; Magee, Richard; Scime, Earl
2013-10-01
Cavity Ring-Down Spectroscopy (CRDS) is a proven, ultra-sensitive, cavity enhanced absorption spectroscopy technique. When combined with a continuous wave (CW) diode laser that has a sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the ion velocity distribution function (IVDF), can be measured. Measurements of IVDFs can be made using established techniques, such as laser induced fluorescence (LIF). However, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density. This usually limits LIF to ions and atoms with large metastable state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower density populations of ion and atom states. In this work we present ongoing measurements with the CW-CRDS diagnostic and discuss the technical challenges of using CW-CRDS to make measurements in a helicon plasma.
Continuously distributed magnetization profile for millimeter-scale elastomeric undulatory swimming
NASA Astrophysics Data System (ADS)
Diller, Eric; Zhuang, Jiang; Zhan Lum, Guo; Edwards, Matthew R.; Sitti, Metin
2014-04-01
We have developed a millimeter-scale magnetically driven swimming robot for untethered motion at mid to low Reynolds numbers. The robot is propelled by continuous undulatory deformation, which is enabled by the distributed magnetization profile of a flexible sheet. We demonstrate control of a prototype device and measure deformation and speed as a function of magnetic field strength and frequency. Experimental results are compared with simple magnetoelastic and fluid propulsion models. The presented mechanism provides an efficient remote actuation method at the millimeter scale that may be suitable for further scaling down in size for micro-robotics applications in biotechnology and healthcare.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
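A simplified numerical caricature of this non-ergodic occupation-time behavior is sketched below using a two-state renewal process with infinite-mean Pareto sojourn times; the exponent, horizon, and sample sizes are arbitrary choices, and the full lattice CTRW of the paper is not reproduced.

```python
# Two-state renewal process with Pareto (exponent alpha < 1, infinite-mean) sojourn
# times: across realizations, the fraction of time spent in one state concentrates
# near 0 and 1 (a U-shaped, arcsine-like histogram) instead of at the ensemble
# value 1/2. This only caricatures the non-ergodic behavior discussed above.
import numpy as np

def occupation_fraction(T=1e6, alpha=0.5, rng=None):
    """Fraction of time spent in state 0 up to time T for one realization."""
    rng = rng or np.random.default_rng()
    t, state, time_in_0 = 0.0, 0, 0.0
    while t < T:
        tau = (1.0 - rng.uniform()) ** (-1.0 / alpha)   # Pareto(alpha) sojourn, tau >= 1
        tau = min(tau, T - t)                           # truncate at the horizon
        if state == 0:
            time_in_0 += tau
        t += tau
        state = 1 - state
    return time_in_0 / T

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fractions = np.array([occupation_fraction(rng=rng) for _ in range(2000)])
    hist, edges = np.histogram(fractions, bins=10, range=(0, 1), density=True)
    for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
        print(f"[{lo:.1f}, {hi:.1f}): {h:.2f}")
```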
Global Land Carbon Uptake from Trait Distributions
NASA Astrophysics Data System (ADS)
Butler, E. E.; Datta, A.; Flores-Moreno, H.; Fazayeli, F.; Chen, M.; Wythers, K. R.; Banerjee, A.; Atkin, O. K.; Kattge, J.; Reich, P. B.
2016-12-01
Historically, functional diversity in land surface models has been represented through a range of plant functional types (PFTs), each of which has a single value for all of its functional traits. Here we expand the diversity of the land surface by using a distribution of trait values for each PFT. The data for these trait distributions are from a subset of the global database of plant traits, TRY, and this analysis uses three leaf traits: mass-based nitrogen and phosphorus content and specific leaf area, which influence both photosynthesis and respiration. The data are extrapolated into continuous surfaces through two methodologies. The first, a categorical method, classifies the species observed in TRY into satellite estimates of their plant functional type abundances - analogous to how traits are currently assigned to PFTs in land surface models. The second, a Bayesian spatial method, additionally estimates how the distribution of a trait changes in accord with both climate and soil covariates. These two methods produce distinct patterns of diversity which are incorporated into a land surface model to estimate how the range of trait values affects the global land carbon budget.
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
Methodological Article: A Brief Taxometrics Primer
ERIC Educational Resources Information Center
Beauchaine, Theodore P.
2007-01-01
Taxometric procedures provide an empirical means of determining which psychiatric disorders are typologically distinct from normal behavioral functioning. Although most disorders reflect extremes along continuously distributed behavioral traits, identifying those that are discrete has important implications for accurate diagnosis, effective…
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Program summary: Program title: LambertW. Catalogue identifier: AENC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 1335. No. of bytes in distributed program, including test data, etc.: 25 283. Distribution format: tar.gz. Programming language: C++ (with suitable wrappers it can be called from C, Fortran etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl etc. Computer: All systems with a C++ compiler. Operating system: All Unix flavors, Windows; it might work with others. RAM: Small memory footprint, less than 1 MB. Classification: 1.1, 4.7, 11.3, 11.9. Nature of problem: Find a fast and accurate numerical implementation for the Lambert W function. Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued logarithm recursion. Additional comments: The distribution file contains the command-line utility lambert-w, Doxygen comments included in the source files, and a Makefile. Running time: The tests provided take only a few seconds to run.
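In the same spirit as the implementation described above, a minimal Python sketch of Halley's iteration for the principal branch W_0 is given below; the crude log-based starting guess stands in for the paper's branch-point expansions, asymptotic series, and rational fits.

```python
# Halley's iteration for the principal branch W_0(x), x >= -1/e, applied to
# f(w) = w*exp(w) - x. The starting guess is deliberately simple and is an
# assumption of this sketch, not the paper's initialization scheme.
import math

def lambert_w0(x, tol=1e-12, max_iter=50):
    if x < -1.0 / math.e:
        raise ValueError("W_0 is real only for x >= -1/e")
    # Crude starting point: w ~ ln(1+x) for small x, w ~ ln(x) - ln(ln(x)) for large x
    w = math.log1p(x) if x < math.e else math.log(x) - math.log(math.log(x))
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x
        # Halley step for f(w) = w e^w - x
        w_next = w - f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        if abs(w_next - w) < tol * (1.0 + abs(w_next)):
            return w_next
        w = w_next
    return w

if __name__ == "__main__":
    for x in (0.5, 1.0, 10.0, 100.0):
        w = lambert_w0(x)
        print(x, w, w * math.exp(w))   # w*exp(w) should reproduce x
```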
Cubic Zig-Zag Enrichment of the Classical Kirchhoff Kinematics for Laminated and Sandwich Plates
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2012-01-01
A detailed analysis and examples are presented that show how to enrich the kinematics of classical Kirchhoff plate theory by appending them with a set of continuous piecewise-cubic functions. This analysis is used to obtain functions that contain the effects of laminate heterogeneity and asymmetry on the variations of the inplane displacements and transverse shearing stresses, for use with a {3, 0} plate theory in which these distributions are specified a priori. The functions used for the enrichment are based on the improved zig-zag plate theory presented recently by Tessler, Di Sciuva, and Gherlone. With the approach presented herein, the inplane displacements are represented by a set of continuous piecewise-cubic functions, and the transverse shearing stresses and strains are represented by a set of piecewise-quadratic functions that are discontinuous at the ply interfaces.
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from the complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is directly obtained from the master curves which are readily available from the standard characterization tests of linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can be easily used for modeling viscoelastic behavior. In this research, asphalt concrete specimens have been tested for linearly viscoelastic characterization. The linearly viscoelastic test data have been used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra is dependent on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analysis such as finite element analysis.
New optical probes for the continuous monitoring of renal function
NASA Astrophysics Data System (ADS)
Dorshow, Richard B.; Asmelash, Bethel; Chinen, Lori K.; Debreczeny, Martin P.; Fitch, Richard M.; Freskos, John N.; Galen, Karen P.; Gaston, Kimberly R.; Marzan, Timothy A.; Poreddy, Amruta R.; Rajagopalan, Raghavan; Shieh, Jeng-Jong; Neumann, William L.
2008-02-01
The ability to continuously monitor renal function via the glomerular filtration rate (GFR) in the clinic is currently an unmet medical need. To address this need we have developed a new series of hydrophilic fluorescent probes designed to clear via glomerular filtration for use as real time optical monitoring agents at the bedside. The ideal molecule should be freely filtered via the glomerular filtration barrier and be neither reabsorbed nor secreted by the renal tubule. In addition, we have hypothesized that a low volume of distribution into the interstitial space could also be advantageous. Our primary molecular design strategy employs a very small pyrazine-based fluorophore as the core unit. Modular chemistry for functionalizing these systems for optimal pharmacokinetics (PK) and photophysical properties have been developed. Structure-activity relationship (SAR) and pharmacokinetic (PK) studies involving hydrophilic pyrazine analogues incorporating polyethylene glycol (PEG), carbohydrate, amino acid and peptide functionality have been a focus of this work. Secondary design strategies for minimizing distribution into the interstitium while maintaining glomerular filtration include enhancing molecular volume through PEG substitution. In vivo optical monitoring experiments with advanced candidates have been correlated with plasma PK for measurement of clearance and hence GFR.
10 CFR 51.66 - Environmental report-number of copies; distribution.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Section 51.66 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) ENVIRONMENTAL PROTECTION REGULATIONS FOR DOMESTIC LICENSING AND RELATED REGULATORY FUNCTIONS National Environmental Policy Act-Regulations... submit to the Director of Nuclear Material Safety and Safeguards an environmental report or any...
The structure of water around the compressibility minimum
L. B. Skinner; Benmore, C. J.; Parise, J.; ...
2014-12-03
Here we present diffraction data that yield the oxygen-oxygen pair distribution function, gOO(r), over the range 254.2–365.9 K. The running O-O coordination number, which represents the integral of the pair distribution function as a function of radial distance, is found to exhibit an isosbestic point at 3.30(5) Å. The probability of finding an oxygen atom surrounding another oxygen at this distance is therefore shown to be independent of temperature and corresponds to an O-O coordination number of 4.3(2). Moreover, the experimental data also show a continuous transition associated with the second peak position in gOO(r), concomitant with the compressibility minimum at 319 K.
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
Impelluso, Thomas J
2003-06-01
An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective strain based analysis to redistribute density. General re-distribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
7 CFR 251.4 - Availability of commodities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... existing food bank networks and other organizations whose ongoing primary function is to facilitate the... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION THE EMERGENCY FOOD ASSISTANCE PROGRAM § 251.4...
Analysis of Functionally Graded Shells Subjected to Blast Loads
2008-07-21
and antisymmetric about the midsurface, continuous distributions of the two constituent phases, ceramic and metal, are considered, in the sense that... through the wall-thickness temperature field is investigated. Two scenarios, symmetric and antisymmetric about the midsurface, continuous... and for x_3 = -h/2, P(-h/2) ⇒ P_c. At the midsurface, x_3 = 0, and for k = 1, P(0) = (P_c + P_m)/2.
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
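A toy simulation of the reset process described above is sketched below for 2x2 matrices with Gaussian entries; the entry distribution, reset value, and norm choice are illustrative assumptions rather than the paper's model.

```python
# Products of i.i.d. random 2x2 matrices are accumulated while the norm of the
# product stays below one; once the norm exceeds one the product is reset to a
# multiple of the identity. The histogram of recorded norms sketches the
# stationary distribution of epsilon. All choices here are illustrative.
import numpy as np

def simulate_norms(n_steps=200_000, reset=0.5, seed=3):
    rng = np.random.default_rng(seed)
    M = reset * np.eye(2)
    norms = np.empty(n_steps)
    for i in range(n_steps):
        M = rng.normal(size=(2, 2)) @ M        # multiply by a fresh random matrix
        eps = np.linalg.norm(M, 2)             # spectral norm of the running product
        if eps >= 1.0:
            M = reset * np.eye(2)              # reset rule from the abstract
            eps = reset
        norms[i] = eps
    return norms

if __name__ == "__main__":
    norms = simulate_norms()
    hist, edges = np.histogram(norms, bins=10, range=(0, 1), density=True)
    for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
        print(f"[{lo:.1f}, {hi:.1f}): {h:.3f}")
```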
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
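A minimal sketch of the first approach, folding prior probabilities from collateral data into Gaussian maximum likelihood classification, is given below; the class statistics, priors, and band values are invented for illustration.

```python
# Maximum likelihood classification with priors: assign each pixel to the class k
# maximizing log p(x | k) + log P(k), where the priors P(k) can come from discrete
# collateral data. All numbers here are invented for illustration.
import numpy as np
from scipy.stats import multivariate_normal

def classify(pixels, class_means, class_covs, priors):
    """Return the index of the class maximizing log-likelihood plus log-prior."""
    scores = np.stack([
        multivariate_normal.logpdf(pixels, mean=m, cov=c) + np.log(p)
        for m, c, p in zip(class_means, class_covs, priors)
    ], axis=-1)
    return np.argmax(scores, axis=-1)

if __name__ == "__main__":
    means = [np.array([30.0, 40.0]), np.array([60.0, 55.0])]   # two-band class means
    covs = [np.eye(2) * 25.0, np.eye(2) * 30.0]
    pixels = np.array([[32.0, 41.0], [58.0, 54.0], [45.0, 48.0]])
    print(classify(pixels, means, covs, priors=[0.5, 0.5]))    # uninformative priors
    print(classify(pixels, means, covs, priors=[0.9, 0.1]))    # collateral-data priors
```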
Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.
Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella
2014-11-03
Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm^2 in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
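A minimal sketch of fitting the stretched-exponential signal model S(b) = S0 exp(-(b·DDC)^alpha) to synthetic decay data is shown below; the parameter values and noise level are illustrative, not those of the study.

```python
# Fit the stretched-exponential decay S(b) = S0 * exp(-(b * DDC)**alpha), where DDC
# is the distributed diffusion coefficient and alpha the heterogeneity index
# (alpha = 1 recovers monoexponential decay). Synthetic data, illustrative values.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    b = np.array([500, 1500, 2500, 3500, 4500, 5500, 6500], dtype=float)  # s/mm^2
    true = dict(s0=1.0, ddc=0.7e-3, alpha=0.75)                           # DDC in mm^2/s
    signal = stretched_exp(b, **true) * (1 + 0.02 * rng.normal(size=b.size))
    popt, _ = curve_fit(stretched_exp, b, signal,
                        p0=[1.0, 1e-3, 0.9],
                        bounds=([0, 1e-5, 0.1], [2.0, 5e-3, 1.0]))
    print(dict(zip(["S0", "DDC", "alpha"], popt)))
```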
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
Continuous high speed coherent one-way quantum key distribution.
Stucki, Damien; Barreiro, Claudio; Fasel, Sylvain; Gautier, Jean-Daniel; Gay, Olivier; Gisin, Nicolas; Thew, Rob; Thoma, Yann; Trinkler, Patrick; Vannel, Fabien; Zbinden, Hugo
2009-08-03
Quantum key distribution (QKD) is the first commercial quantum technology operating at the level of single quanta and is a leading light for quantum-enabled photonic technologies. However, controlling these quantum optical systems in real world environments presents significant challenges. For the first time, we have brought together three key concepts for future QKD systems: a simple high-speed protocol; high performance detection; and integration, both at the component level and for standard fibre network connectivity. The QKD system is capable of continuous and autonomous operation, generating secret keys in real time. Laboratory and field tests were performed and comparisons made with robust InGaAs avalanche photodiodes and superconducting detectors. We report the first real world implementation of a fully functional QKD system over a 43 dB-loss (150 km) transmission line in the Swisscom fibre optic network where we obtained average real-time distribution rates over 3 hours of 2.5 bps.
KINK AND SAUSAGE MODES IN NONUNIFORM MAGNETIC SLABS WITH CONTINUOUS TRANSVERSE DENSITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hui; Li, Bo; Chen, Shao-Xia
2015-11-20
We examine the influence of a continuous density structuring transverse to coronal slabs on the dispersive properties of fundamental standing kink and sausage modes supported therein. We derive generic dispersion relations (DRs) governing linear fast waves in pressureless straight slabs with general transverse density distributions, and focus on cases where the density inhomogeneity takes place in a layer of arbitrary width and in arbitrary form. The physical relevance of the solutions to the DRs is demonstrated by the corresponding time-dependent computations. For all profiles examined, the lowest order kink modes are trapped regardless of longitudinal wavenumber k. A continuous density distribution introduces a difference to their periods of ≲13% when k is in the observed range, relative to the case where the density profile takes a step function form. Sausage modes and other branches of kink modes are leaky at small k, and their periods and damping times are heavily influenced by how the transverse density profile is prescribed, in particular the length scale. These modes have sufficiently high quality to be observable only for physical parameters representative of flare loops. We conclude that while the simpler DR pertinent to a step function profile can be used for the lowest order kink modes, the detailed information on the transverse density structuring needs to be incorporated into studies of sausage modes and higher order kink modes.
Huang, Bo; Kuan, Pei Fen
2014-11-01
Delayed dose-limiting toxicities (i.e., beyond the first cycle of treatment) are a challenge for phase I trials. The time-to-event continual reassessment method (TITE-CRM) is a Bayesian dose-finding design to address the issue of long observation time and early patient drop-out. It uses a weighted binomial likelihood with weights assigned to observations by the unknown time-to-toxicity distribution, and is open to accrual continually. To avoid dosing at overly toxic levels while retaining accuracy and efficiency for DLT evaluation that involves multiple cycles, we propose an adaptive weight function that incorporates cyclical data of the experimental treatment with parameters updated continually. This provides a reasonable estimate for the time-to-toxicity distribution by accounting for inter-cycle variability and maintains the statistical properties of consistency and coherence. A case study of a First-in-Human trial in cancer for an experimental biologic is presented using the proposed design. Design calibrations for the clinical and statistical parameters are conducted to ensure good operating characteristics. Simulation results show that the proposed TITE-CRM design with adaptive weight function yields significantly shorter trial duration, does not expose patients to additional risk, is competitive against the existing weighting methods, and possesses some desirable properties. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Cho, Jeonghyun; Han, Cheolheui; Cho, Leesang; Cho, Jinsoo
2003-08-01
This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for the whole speed range in the Laplace transform domain. The sonic kernel function has been reduced to a form which can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides when the Mach number tends to one. Several examples are solved, including rectangular wings, swept wings, a supersonic transport wing and a harmonically oscillating wing. Present results are given along with other numerical data, showing continuous behavior through the unit Mach number. Computed results are in good agreement with other numerical results.
Un, M Kerem; Kaghazchi, Hamed
2018-01-01
When a signal is initiated in the nerve, it is transmitted along each nerve fiber via an action potential (called a single fiber action potential (SFAP)) which travels with a velocity that is related to the diameter of the fiber. The additive superposition of SFAPs constitutes the compound action potential (CAP) of the nerve. The fiber diameter distribution (FDD) in the nerve can be computed from the CAP data by solving an inverse problem. This is usually achieved by dividing the fibers into a finite number of diameter groups and solving a corresponding linear system to optimize the FDD. However, the number of fibers in a nerve can sometimes be measured in the thousands, and it is possible to assume a continuous distribution for the fiber diameters, which leads to a gradient optimization problem. In this paper, we have evaluated this continuous approach to the solution of the inverse problem. We have utilized an analytical function for the SFAP and assumed a polynomial form for the FDD. The inverse problem involves the optimization of polynomial coefficients to obtain the best estimate for the FDD. We have observed that an eighth order polynomial for the FDD can capture both unimodal and bimodal fiber distributions present in vivo, even in the case of noisy CAP data. The assumed FDD distribution regularizes the ill-conditioned inverse problem and produces good results.
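For contrast with the continuous polynomial formulation, the conventional discrete-bin baseline can be sketched as a nonnegative least-squares problem, as below; the single-fiber waveform, the velocity-diameter relation, and all parameter values are assumptions made only for illustration.

```python
# Discrete-bin baseline: the CAP is a nonnegative combination of single-fiber
# templates delayed according to diameter-dependent conduction velocity, and the
# fiber counts per diameter bin are recovered with nonnegative least squares.
# The SFAP template, the relation v = k*d, and all values are assumptions.
import numpy as np
from scipy.optimize import nnls

def sfap_template(t, arrival, width=0.3e-3):
    """Simple biphasic single-fiber waveform (Gaussian derivative)."""
    u = (t - arrival) / width
    return -u * np.exp(-0.5 * u ** 2)

def build_design(t, diameters, distance=0.05, k=6.0):
    """Columns are SFAPs per diameter bin; v = k*d (m/s per micrometre)."""
    arrivals = distance / (k * diameters)            # conduction delay in seconds
    return np.column_stack([sfap_template(t, a) for a in arrivals])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.linspace(0, 10e-3, 2000)                  # 10 ms recording window
    diameters = np.linspace(2.0, 20.0, 30)           # micrometres
    A = build_design(t, diameters)
    true_fdd = np.exp(-0.5 * ((diameters - 12.0) / 2.5) ** 2)  # unimodal FDD
    cap = A @ true_fdd + 0.01 * rng.normal(size=t.size)
    est_fdd, _ = nnls(A, cap)
    print(np.round(est_fdd, 2))
```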
Management Training in Retailing.
ERIC Educational Resources Information Center
Veness, C. Rosina
Intended for prospective members of the new Distributive Industrial Training Board in Great Britain, this training guide concentrates on managerial functions in retailing; the selection of trainees; the planning of in-company and external training programs; scheduling and continuity of training; roles of training personnel; and the use of various…
Fronts in extended systems of bistable maps coupled via convolutions
NASA Astrophysics Data System (ADS)
Coutinho, Ricardo; Fernandez, Bastien
2004-01-01
An analysis of front dynamics in discrete time and spatially extended systems with general bistable nonlinearity is presented. The spatial coupling is given by the convolution with distribution functions. It allows us to treat in a unified way discrete, continuous or partly discrete and partly continuous diffusive interactions. We prove the existence of fronts and the uniqueness of their velocity. We also prove that the front velocity depends continuously on the parameters of the system. Finally, we show that every initial configuration that is an interface between the stable phases propagates asymptotically with the front velocity.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions measured with lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and, the probability density functions (pdf) are well-approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidian distance function comprising of a sum of the squared difference between the left and the right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
Statistics of particle time-temperature histories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewson, John C.; Lignell, David O.; Sun, Guangyuan
2014-10-01
Particles in non-isothermal turbulent flow are subject to a stochastic environment that produces a distribution of particle time-temperature histories. This distribution is a function of the dispersion of the non-isothermal (continuous) gas phase and the distribution of particles relative to that gas phase. In this work we extend the one-dimensional turbulence (ODT) model to predict the joint dispersion of a dispersed particle phase and a continuous phase. The ODT model predicts the turbulent evolution of continuous scalar fields with a model for the cascade of fluctuations to smaller scales (the 'triplet map') at a rate that is a function of the fully resolved one-dimensional velocity field. Stochastic triplet maps also drive Lagrangian particle dispersion with finite Stokes numbers, including inertial and eddy trajectory-crossing effects. Two distinct approaches to this coupling between triplet maps and particle dispersion are developed and implemented, along with a hybrid approach. An 'instantaneous' particle displacement model matches the tracer particle limit and provides an accurate description of particle dispersion. A 'continuous' particle displacement model translates triplet maps into a continuous velocity field to which particles respond. Particles can alter the turbulence, and modifications to the stochastic rate expression are developed for two-way coupling between particles and the continuous phase. Each aspect of model development is evaluated in canonical flows (homogeneous turbulence, free-shear flows and wall-bounded flows) for which quality measurements are available. ODT simulations of non-isothermal flows provide statistics for particle heating. These simulations show the significance of accurately predicting the joint statistics of particle and fluid dispersion. Inhomogeneous turbulence coupled with the influence of the mean flow fields on particles of varying properties alters particle dispersion. The joint particle-temperature dispersion leads to a distribution of temperature histories predicted by the ODT. Predictions are shown for the lower moments and the full distributions of the particle positions, particle-observed gas temperatures and particle temperatures. An analysis of the time scales affecting particle-temperature interactions covers Lagrangian integral time scales based on temperature autocorrelations, rates of temperature change associated with particle motion relative to the temperature field, and rates of diffusional change of temperatures. These latter two time scales have not been investigated previously; they are shown to be strongly intermittent, having peaked distributions with long tails. The logarithm of the absolute value of these time scales exhibits a distribution closer to normal. Acknowledgements: This work is supported by the Defense Threat Reduction Agency (DTRA) under their Counter-Weapons of Mass Destruction Basic Research Program in the area of Chemical and Biological Agent Defeat under award number HDTRA1-11-4503I to Sandia National Laboratories. The authors would like to express their appreciation for the guidance provided by Dr. Suhithi Peiris to this project and to the Science to Defeat Weapons of Mass Destruction program.
NASA Technical Reports Server (NTRS)
Eggleston, John M; Diederich, Franklin W
1957-01-01
The correlation functions and power spectra of the rolling and yawing moments on an airplane wing due to the three components of continuous random turbulence are calculated. The rolling moments due to the longitudinal (horizontal) and normal (vertical) components depend on the spanwise distributions of instantaneous gust intensity, which are taken into account by using the inherent properties of symmetry of isotropic turbulence. The results consist of expressions for correlation functions or spectra of the rolling moment in terms of the point correlation functions of the two components of turbulence. Specific numerical calculations are made for a pair of correlation functions given by simple analytic expressions which fit available experimental data quite well. Calculations are made for four lift distributions. Comparison is made with the results of previous analyses which assumed random turbulence along the flight path and linear variations of gust velocity across the span.
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as by-product also the distribution function as function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.
2016-06-07
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user-configurable to adjust the number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
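The detector described above scores events by how improbable they are under fitted probability density functions. As a minimal, hypothetical sketch of that idea (not the patented system's algorithm), one can fit a density to baseline data and score new events by their negative log-density or tail p-value; the normal density and all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Historical (baseline) observations of some continuous feature, e.g. bytes per flow.
baseline = rng.normal(loc=500.0, scale=50.0, size=10_000)

# Fit a simple parametric density to the baseline (a normal here; the system described
# above also supports irregular and discrete distributions).
mu, sigma = stats.norm.fit(baseline)

def anomaly_score(x):
    """Anomalousness as negative log-density under the fitted model."""
    return -stats.norm.logpdf(x, mu, sigma)

def tail_p_value(x):
    """Two-sided tail probability: small values indicate low-probability events."""
    cdf = stats.norm.cdf(x, mu, sigma)
    return 2.0 * np.minimum(cdf, 1.0 - cdf)

events = np.array([510.0, 650.0, 900.0])
for e in events:
    print(f"event={e:6.1f}  score={anomaly_score(e):7.2f}  p={tail_p_value(e):.2e}")
```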
Mixtures of amino-acid based ionic liquids and water.
Chaban, Vitaly V; Fileti, Eudes Eterno
2015-09-01
New ionic liquids (ILs) involving increasing numbers of organic and inorganic ions are continuously being reported. We recently developed a new force field; in the present work, we applied that force field to investigate the structural properties of a few novel imidazolium-based ILs in aqueous mixtures via molecular dynamics (MD) simulations. Using cluster analysis, radial distribution functions, and spatial distribution functions, we argue that organic ions (imidazolium, deprotonated alanine, deprotonated methionine, deprotonated tryptophan) are well dispersed in aqueous media, irrespective of the IL content. Aqueous dispersions exhibit desirable properties for chemical engineering. The ILs exist as ion pairs in relatively dilute aqueous mixtures (10 mol%), while more concentrated mixtures feature a certain amount of larger ionic aggregates.
Modeling evaporation of Jet A, JP-7 and RP-1 drops at 1 to 15 bars
NASA Technical Reports Server (NTRS)
Harstad, K.; Bellan, J.
2003-01-01
A model describing the evaporation of an isolated drop of a multicomponent fuel containing hundreds of species has been developed. The model is based on Continuous Thermodynamics concepts wherein the composition of a fuel is statistically described using a Probability Distribution Function (PDF).
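Continuous Thermodynamics commonly represents the composition of a heavy multicomponent fuel with a shifted gamma distribution over molar weight. The sketch below illustrates that representation only; the shape, scale, and origin values are assumptions, not the parameters used for Jet A, JP-7 or RP-1 in the paper.

```python
import numpy as np
from scipy import stats

# Illustrative gamma-PDF representation of a multicomponent fuel: the composition is
# described by a distribution over species molar weight W (g/mol).
alpha = 5.0          # shape parameter (assumed)
beta = 15.0          # scale parameter, g/mol (assumed)
gamma_origin = 80.0  # origin shift, g/mol, lightest species present (assumed)

W = np.linspace(gamma_origin, 400.0, 500)
pdf = stats.gamma.pdf(W, a=alpha, scale=beta, loc=gamma_origin)

# Low-order moments summarize the composition as it evolves during evaporation.
mean_W = stats.gamma.mean(a=alpha, scale=beta, loc=gamma_origin)
var_W = stats.gamma.var(a=alpha, scale=beta, loc=gamma_origin)
print(f"mean molar weight = {mean_W:.1f} g/mol, variance = {var_W:.1f}")
```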
Distributed Learning Environment: Major Functions, Implementation, and Continuous Improvement.
ERIC Educational Resources Information Center
Converso, Judith A.; Schaffer, Scott P.; Guerra, Ingrid J.
The content of this paper is based on a development plan currently in design for the U.S. Navy in conjunction with the Learning Systems Institute at Florida State University. Leading research (literature review) references and case study ("best practice") references are presented as supporting evidence for the results-oriented…
W. J. Bond; Robert Keane
2017-01-01
Fire is both a natural and anthropogenic disturbance influencing the distribution, structure, and functioning of terrestrial ecosystems around the world. Many plants and animals depend on fire for their continued existence. Other species, such as rainforest plant species, are extremely intolerant of burning and need protection from fire. The properties of a fire...
USDA-ARS?s Scientific Manuscript database
Bidirectional Reflectance Distribution Function (BRDF) model parameters, Albedo quantities, and Nadir BRDF Adjusted Reflectance (NBAR) products derived from the Visible Infrared Imaging Radiometer Suite (VIIRS), on the Suomi-NPP (National Polar-orbiting Partnership) satellite are evaluated through c...
Periodic bidirectional associative memory neural networks with distributed delays
NASA Astrophysics Data System (ADS)
Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde
2006-05-01
Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality. These results are helpful for designing a globally exponentially stable and periodic oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous (possibility) distributions, rather than probability distributions. We propose a multi-objective portfolio model and add an entropy objective function to generate a well-diversified asset portfolio within an optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, trapping in false minima, and long-term optimization.
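As a rough sketch of the flavor of PC-style updating (not the authors' exact algorithm), each agent can maintain a marginal distribution over its own discrete moves and repeatedly re-weight it toward moves with low estimated expected cost, so that the product distribution becomes peaked about good solutions. The objective, grid of moves, damping factor and annealing schedule below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective over d discrete coordinates, each taking values in `grid`.
d = 4
grid = np.linspace(-2.0, 2.0, 21)           # allowed moves per agent (assumed)
def G(x):                                    # separable quadratic, just for illustration
    return np.sum((x - 0.7) ** 2)

# Each agent i holds a marginal distribution q[i] over its own moves.
q = np.full((d, grid.size), 1.0 / grid.size)

T, n_samples = 1.0, 400
for it in range(60):
    # Monte Carlo samples drawn from the product distribution prod_i q_i.
    idx = np.stack([rng.choice(grid.size, size=n_samples, p=q[i]) for i in range(d)], axis=1)
    costs = np.array([G(grid[row]) for row in idx])

    for i in range(d):
        # Estimate E[G | x_i = v] for each candidate value v of agent i.
        cond = np.full(grid.size, costs.mean())
        for v in range(grid.size):
            mask = idx[:, i] == v
            if mask.any():
                cond[v] = costs[mask].mean()
        # Boltzmann-type re-weighting peaks q_i about low-cost moves.
        w = np.exp(-(cond - cond.min()) / T)
        q[i] = 0.5 * q[i] + 0.5 * w / w.sum()    # damped update for stability

    T *= 0.95                                     # simple annealing schedule

best = grid[np.argmax(q, axis=1)]
print("most probable joint move:", best, " cost:", G(best))
```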
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
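The analytical formula and MATLAB code mentioned in the abstract are not reproduced here; the following is a minimal Monte Carlo sketch of the same quantity for one paradigm, the two-interval 2AFC task with a maximum-likelihood observer, under assumed Gaussian distributions. The equal-variance result Pc = Φ(d′/√2) serves as a sanity check; the distributions can be replaced by arbitrary ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

# Sensory-activity distributions under "noise" and "signal" (assumed for illustration).
noise = stats.norm(loc=0.0, scale=1.0)
signal = stats.norm(loc=1.0, scale=1.0)

def log_lr(x):
    """Log likelihood ratio of signal vs noise for observation x."""
    return signal.logpdf(x) - noise.logpdf(x)

# Two-interval 2AFC: one interval contains signal, the other noise; the optimal
# observer chooses the interval with the larger likelihood ratio.
xs = signal.rvs(n, random_state=rng)
xn = noise.rvs(n, random_state=rng)
lrs, lrn = log_lr(xs), log_lr(xn)

pc_max = np.mean(lrs > lrn) + 0.5 * np.mean(lrs == lrn)
print(f"Monte Carlo max Pc  : {pc_max:.4f}")

# Sanity check against the classical equal-variance Gaussian result Phi(d'/sqrt(2)).
print(f"Analytical (d'=1)   : {stats.norm.cdf(1.0 / np.sqrt(2)):.4f}")
```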
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable and discrete-variable systems in odd prime dimensions, two cases which are treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
Discrete-to-continuous transition in quantum phase estimation
NASA Astrophysics Data System (ADS)
Rządkowski, Wojciech; Demkowicz-Dobrzański, Rafał
2017-09-01
We analyze the problem of quantum phase estimation in which the set of allowed phases forms a discrete N-element subset of the whole [0, 2π] interval, φ_n = 2πn/N, n = 0, ..., N−1, and study the discrete-to-continuous transition N → ∞ for various cost functions as well as the mutual information. We also analyze the relation between the problems of phase discrimination and estimation by considering a step cost function of a given width σ around the true estimated value. We show that in general a direct application of the theory of covariant measurements for a discrete subgroup of the U(1) group leads to suboptimal strategies due to an implicit requirement of estimating only the phases that appear in the prior distribution. We develop the theory of subcovariant measurements to remedy this situation and demonstrate truly optimal estimation strategies when performing a transition from discrete to continuous phase estimation.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
On the existence of a scaling relation in the evolution of cellular systems
NASA Astrophysics Data System (ADS)
Fortes, M. A.
1994-05-01
A mean field approximation is used to analyze the evolution of the distribution of sizes in systems formed by individual 'cells,' each of which grows or shrinks, in such a way that the total number of cells decreases (e.g. polycrystals, soap froths, precipitate particles in a matrix). The rate of change of the size of a cell is defined by a growth function that depends on the size (x) of the cell and on moments of the size distribution, such as the average size (bar-x). Evolutionary equations for the distribution of sizes and of reduced sizes (i.e. x/bar-x) are established. The stationary (or steady state) solutions of the equations are obtained for various particular forms of the growth function. A steady state of the reduced size distribution is equivalent to a scaling behavior. It is found that there are an infinity of steady state solutions which form a (continuous) one-parameter family of functions, but they are not, in general, reached from an arbitrary initial state. These properties are at variance with those that can be derived from models based on the von Neumann-Mullins equation.
Theory of Ostwald ripening in a two-component system
NASA Technical Reports Server (NTRS)
Baird, J. K.; Lee, L. K.; Frazier, D. O.; Naumann, R. J.
1986-01-01
When a two-component system is cooled below the minimum temperature for its stability, it separates into two or more immiscible phases. The initial nucleation produces grains (if solid) or droplets (if liquid) of one of the phases dispersed in the other. The dynamics by which these nuclei proceed toward equilibrium is called Ostwald ripening. The dynamics of growth of the droplets depends upon the following factors: (1) The solubility of the droplet depends upon its radius and the interfacial energy between it and the surrounding (continuous) phase. There is a critical radius determined by the supersaturation in the continuous phase. Droplets with radii smaller than critical dissolve, while droplets with radii larger grow. (2) The droplets concentrate one component and reject the other. The rate at which this occurs is assumed to be determined by the interdiffusion of the two components in the continuous phase. (3) The Ostwald ripening is constrained by conservation of mass; e.g., the amount of materials in the droplet phase plus the remaining supersaturation in the continuous phase must equal the supersaturation available at the start. (4) There is a distribution of droplet sizes associated with a mean droplet radius, which grows continuously with time. This distribution function satisfies a continuity equation, which is solved asymptotically by a similarity transformation method.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Activity Systems and Conflict Resolution in an Online Professional Communication Course
ERIC Educational Resources Information Center
Walker, Kristin
2004-01-01
Conflicts often arise in online professional communication class discussions as students discuss sensitive ethical issues relating to the workplace. When conflicts arise in an online class, the activity system of the class has to be kept in balance for the course to continue functioning effectively. Activity theory and distributed learning theory…
Red cell distribution width does not predict stroke severity or functional outcome.
Ntaios, George; Gurer, Ozgur; Faouzi, Mohamed; Aubert, Carole; Michel, Patrik
2012-01-01
Red cell distribution width was recently identified as a predictor of cardiovascular and all-cause mortality in patients with previous stroke. Red cell distribution width is also higher in patients with stroke compared with those without. However, there are no data on the association of red cell distribution width, assessed during the acute phase of ischemic stroke, with stroke severity and functional outcome. In the present study, we sought to investigate this relationship and ascertain the main determinants of red cell distribution width in this population. We used data from the Acute Stroke Registry and Analysis of Lausanne for patients between January 2003 and December 2008. Red cell distribution width was generated at admission by the Sysmex XE-2100 automated cell counter from ethylene diamine tetraacetic acid blood samples stored at room temperature until measurement. A χ² test was performed to compare frequencies of categorical variables between different red cell distribution width quartiles, and one-way analysis of variance for continuous variables. The effect of red cell distribution width on severity and functional outcome was investigated in univariate and multivariate robust regression analysis. The level of significance was set at 95%. There were 1504 patients (72±15·76 years, 43·9% females) included in the analysis. Red cell distribution width was significantly associated with NIHSS (β-value=0·24, P=0·01) and functional outcome (odds ratio=10·73 for poor outcome, P<0·001) in univariate but not multivariate analysis. Prehospital Rankin score (β=0·19, P<0·001), serum creatinine (β=0·008, P<0·001), hemoglobin (β=-0·009, P<0·001), mean platelet volume (β=0·09, P<0·05), age (β=0·02, P<0·001), low ejection fraction (β=0·66, P<0·001) and antihypertensive treatment (β=0·32, P<0·001) were independent determinants of red cell distribution width. Red cell distribution width, assessed during the early phase of acute ischemic stroke, does not predict severity or functional outcome.
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
NASA Astrophysics Data System (ADS)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy
2018-03-01
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.
Wang, Xinghu; Hong, Yiguang; Ji, Haibo
2016-07-01
The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve optimal multiagent consensus based on local cost function information and neighboring information, while rejecting local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design can solve the exact optimization problem while rejecting disturbances.
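The controller in the paper handles nonlinear agents and disturbance rejection via an internal model; that design is not reproduced here. As a simple point of reference, the sketch below integrates a classical continuous-time distributed optimization dynamics (gradient flow plus proportional-integral consensus) for quadratic local costs on a ring graph; all gains, costs and the graph are assumptions.

```python
import numpy as np

# Classical continuous-time distributed optimization dynamics (PI-consensus form),
# simulated with forward Euler. This is a generic illustration, not the paper's
# internal-model controller, and no disturbance rejection is included.
n = 5
# Local quadratic costs f_i(x) = 0.5*a_i*(x - c_i)^2; the optimum of their sum is
# x* = sum(a_i*c_i)/sum(a_i).
a = np.array([1.0, 2.0, 0.5, 1.5, 3.0])
c = np.array([-1.0, 2.0, 4.0, 0.0, 1.0])
x_star = np.sum(a * c) / np.sum(a)

# Laplacian of a connected ring graph over the 5 agents.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

x = np.zeros(n)      # agent states
v = np.zeros(n)      # integral (consensus-error) states
dt = 0.01
for _ in range(20_000):
    grad = a * (x - c)                 # local gradients, evaluated locally
    dx = -grad - L @ x - L @ v         # gradient flow + proportional + integral consensus
    dv = L @ x
    x, v = x + dt * dx, v + dt * dv

print("agent states:", np.round(x, 4))
print("true optimum:", round(x_star, 4))
```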
Geodesics in nonexpanding impulsive gravitational waves with Λ. II
NASA Astrophysics Data System (ADS)
Sämann, Clemens; Steinbauer, Roland
2017-11-01
We investigate all geodesics in the entire class of nonexpanding impulsive gravitational waves propagating in an (anti-)de Sitter universe using the distributional metric. We extend the regularization approach of part I [Sämann, C. et al., Classical Quantum Gravity 33(11), 115002 (2016)] to a full nonlinear distributional analysis within the geometric theory of generalized functions. We prove global existence and uniqueness of geodesics that cross the impulsive wave and hence geodesic completeness in full generality for this class of low regularity spacetimes. This, in particular, prepares the ground for a mathematically rigorous account on the "physical equivalence" of the continuous form with the distributional "form" of the metric.
2dFLenS and KiDS: determining source redshift distributions with cross-correlations
NASA Astrophysics Data System (ADS)
Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian
2017-03-01
We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
Theoretical Current-Voltage Curve in Low-Pressure Cesium Diode for Electron-Rich Emission
NASA Technical Reports Server (NTRS)
Coldstein, C. M.
1964-01-01
Although considerable interest has been shown in the space-charge analysis of low-pressure (collisionless case) thermionic diodes, there is a conspicuous lack in the presentation of results in a way that allows direct comparison with experiment. The current-voltage curve of this report was, therefore, computed for a typical case within the realm of experimental interest. The model employed in this computation is shown in Fig. 1 and is defined by the limiting potential distributions [curves (a) and (b)]. Curve (a) represents the potential V as a monotonic function of position with a slope of zero at the anode; curve (b) is similarly monotonic with a slope of zero at the cathode. It is assumed that by a continuous variation of the anode voltage, the potential distributions vary continuously from one limiting form to the other. Although solutions for infinitely spaced electrodes show that spatially oscillatory potential distributions may exist, they have been neglected in this computation.
Aerodynamic influence coefficient method using singularity splines
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Weber, J. A.; Lesferd, E. P.
1974-01-01
A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-squares error sense over the surface using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. Also, a second-order sideslip analysis based on an asymptotic expansion was investigated using the singularity spline formulation.
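A minimal sketch of the complex least-squares step described above: given an influence matrix whose columns hold the normalwash induced by each unit-strength basis distribution, the coefficients follow from an ordinary least-squares solve. The matrix and right-hand side here are random placeholders, not an aerodynamic kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares determination of basis-function (singularity-strength) coefficients:
# rows of A hold the normalwash induced at control points by each unit-strength basis
# distribution, w is the prescribed normalwash. Complex entries stand in for the
# harmonically oscillating case. All numbers here are synthetic placeholders.
n_points, n_basis = 40, 12
A = rng.standard_normal((n_points, n_basis)) + 1j * rng.standard_normal((n_points, n_basis))
w = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)

# Coefficients minimizing ||A c - w||^2 in the complex least-squares sense.
c, residuals, rank, _ = np.linalg.lstsq(A, w, rcond=None)

print("basis coefficients:", np.round(c[:4], 3), "...")
print("relative residual :", np.linalg.norm(A @ c - w) / np.linalg.norm(w))
```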
Wang, Dongshu; Huang, Lihong
2014-03-01
In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides, time-varying and distributed delays. By means of retarded differential inclusions theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with a generalized Lyapunov approach. Our results remain valid without assuming the boundedness (or growth condition) and monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results.
Smooth conditional distribution function and quantiles under random censorship.
Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine
2002-09-01
We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
Bonetti, Marco; Pagano, Marcello
2005-03-15
The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogeneous setting.
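A minimal sketch of the sample version of the interpoint distance distribution: the empirical CDF of all pairwise distances in a point set. The coordinates below are synthetic; detecting clustering, as in the paper, would involve comparing this ECDF against the distribution expected under a null (e.g., homogeneous) model.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Sample version of the interpoint distance distribution: the empirical CDF of all
# pairwise distances among points in a study region (coordinates here are synthetic).
points = rng.uniform(0.0, 1.0, size=(300, 2))
d = np.sort(pdist(points))                  # all n*(n-1)/2 pairwise distances

def ecdf(r):
    """Empirical probability that the distance between two random points is <= r."""
    return np.searchsorted(d, r, side="right") / d.size

for r in (0.1, 0.25, 0.5, 1.0):
    print(f"F({r:4.2f}) = {ecdf(r):.3f}")
```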
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter C.
1990-01-01
The ability of feed-forward neural network architectures to learn continuous-valued mappings in the presence of noise was demonstrated in relation to parameter identification and real-time adaptive control applications. An error function was introduced to help optimize parameter values such as the number of training iterations, observation time, sampling rate, and scaling of the control signal. The learning performance depended essentially on the degree of embodiment of the control law in the training data set and on the degree of uniformity of the probability distribution function of the data presented to the net during the training sequence. When a control law was corrupted by noise, the fluctuations of the training data biased the probability distribution function of the training data sequence. Only if the noise contamination is minimized and the degree of embodiment of the control law is maximized can a neural net develop a good representation of the mapping and be used as a neurocontroller. A multilayer net was trained with back-error-propagation to control a cart-pole system for linear and nonlinear control laws in the presence of data-processing noise and measurement noise. The neurocontroller exhibited noise-filtering properties and was found to operate more smoothly than the teacher in the presence of measurement noise.
Solutions to Kuessner's integral equation in unsteady flow using local basis functions
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Halstead, D. W.
1975-01-01
The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
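A full multidimensional sparse-grid construction is beyond a short sketch, but in one dimension a sparse grid reduces to a Gaussian quadrature rule, which already shows why quadrature-based scenario generation can need far fewer scenarios than Monte Carlo for smooth integrands. The integrand and scenario counts below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scenario generation for E[f(Z)], Z ~ N(0, 1), with a smooth utility-like integrand.
f = lambda z: np.log(1.0 + np.exp(z))     # assumed integrand

# Reference value from a very large Monte Carlo sample.
exact = f(rng.standard_normal(10_000_000)).mean()

# Gauss-Hermite (probabilists') rule: nodes/weights for the weight exp(-z^2/2).
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
quad = np.sum(weights * f(nodes)) / np.sqrt(2.0 * np.pi)

mc = f(rng.standard_normal(7)).mean()     # Monte Carlo with the same 7 scenarios

print(f"reference  : {exact:.6f}")
print(f"7-node quad: {quad:.6f}   error {abs(quad - exact):.2e}")
print(f"7-sample MC: {mc:.6f}   error {abs(mc - exact):.2e}")
```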
Evaluation of statistical distributions to analyze the pollution of Cd and Pb in urban runoff.
Toranjian, Amin; Marofi, Safar
2017-05-01
Heavy metal pollution in urban runoff causes severe environmental damage. Identification of these pollutants and their statistical analysis is necessary to provide management guidelines. In this study, 45 continuous probability distribution functions were selected to fit the Cd and Pb data in the runoff events of an urban area during October 2014-May 2015. The sampling was conducted from the outlet of the city basin during seven precipitation events. For evaluation and ranking of the functions, we used the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests. The results of the Cd analysis showed that the Hyperbolic Secant, Wakeby and Log-Pearson 3 distributions are suitable for frequency analysis of the event mean concentration (EMC), the instantaneous concentration series (ICS) and the instantaneous concentration of each event (ICEE), respectively. In addition, the LP3, Wakeby and Generalized Extreme Value functions were chosen for the EMC, ICS and ICEE related to Pb contamination.
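A hedged sketch of the screening procedure described above, using a handful of scipy.stats distributions in place of the 45 candidates and synthetic data in place of the measured Cd/Pb concentrations; ranking is by the Kolmogorov-Smirnov statistic (scipy's general Anderson-Darling support is limited, so it is omitted here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for event mean concentrations (the study used measured Cd/Pb data).
data = stats.lognorm.rvs(s=0.8, scale=2.0, size=60, random_state=rng)

# A few candidate continuous distributions (the study screened 45 of them).
candidates = {
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "weibull_min": stats.weibull_min,
    "norm": stats.norm,
    "pearson3": stats.pearson3,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum-likelihood fit
    ks_stat, ks_p = stats.kstest(data, dist.cdf, args=params)
    results.append((ks_stat, ks_p, name))

# Rank by Kolmogorov-Smirnov statistic (smaller = better fit).
for ks_stat, ks_p, name in sorted(results):
    print(f"{name:12s}  KS={ks_stat:.3f}  p={ks_p:.3f}")
```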
2011-09-01
movement of the groundwater that sustains Headwater Slope wetlands are not regulated and continue to affect their distribution, character, and functions...permeability and soil porosity, thereby affecting the subsurface movement and storage of water in the soil. Soil permeability will affect the rate at...discharge time to the adjacent stream occurs over a longer period. Soil porosity will affect the volume of space available below the ground surface
A limiting analysis for edge effects in angle-ply laminates
NASA Technical Reports Server (NTRS)
Hsu, P. W.; Herakovich, C. T.
1976-01-01
A zeroth order solution for edge effects in angle ply composite laminates using perturbation techniques and a limiting free body approach was developed. The general method of solution for laminates is developed and then applied to the special case of a graphite/epoxy laminate. Interlaminar stress distributions are obtained as a function of the laminate thickness to width ratio h/b and compared to existing numerical results. The solution predicts stable, continuous stress distributions, determines finite maximum tensile interlaminar normal stress for two laminates, and provides mathematical evidence for singular interlaminar shear stresses.
NASA Astrophysics Data System (ADS)
Bessho, N.; Chen, L. J.; Hesse, M.; Wang, S.
2017-12-01
In asymmetric reconnection with a guide field in the Earth's magnetopause, electron motion in the electron diffusion region (EDR) is largely affected by the guide field, the Hall electric field, and the reconnection electric field. The electron motion in the EDR is neither simple gyration around the guide field nor simple meandering motion across the current sheet. The combined meandering motion and gyration has essential effects on particle acceleration by the in-plane Hall electric field (existing only in the magnetospheric side) and the out-of-plane reconnection electric field. We analyze electron motion and crescent-shaped electron distribution functions in the EDR in asymmetric guide field reconnection, and perform 2-D particle-in-cell (PIC) simulations to elucidate the effect of reconnection electric field on electron distribution functions. Recently, we have analytically expressed the acceleration effect due to the reconnection electric field on electron crescent distribution functions in asymmetric reconnection without a guide field (Bessho et al., Phys. Plasmas, 24, 072903, 2017). We extend the theory to asymmetric guide field reconnection, and predict the crescent bulge in distribution functions. Assuming 1D approximation of field variations in the EDR, we derive the time period of oscillatory electron motion (meandering + gyration) in the EDR. The time period is expressed as a hybrid of the meandering period and the gyro period. Due to the guide field, electrons not only oscillate along crescent-shaped trajectories in the velocity plane perpendicular to the antiparallel magnetic fields, but also move along parabolic trajectories in the velocity plane coplanar with magnetic field. The trajectory in the velocity space gradually shifts to the acceleration direction by the reconnection electric field as multiple bounces continue. Due to the guide field, electron distributions for meandering particles are bounded by two paraboloids (or hyperboloids) in the velocity space. We compare theory and PIC simulation results of the velocity shift of crescent distribution functions based on the derived time period of bounce motion in a guide field. Theoretical predictions are applied to electron distributions observed by MMS in magnetopause reconnection to estimate the reconnection electric field.
Non-Fickian dispersion of groundwater age
Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.
2014-01-01
We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651
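A minimal continuous-time random walk sketch illustrating the non-Fickian behavior discussed above: when the waiting-time distribution is heavy-tailed (diverging mean), plume spreading departs from the Fickian picture. The jump statistics, tail exponent and time horizon are illustrative assumptions, not calibrated to any aquifer or age model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous-time random walk with heavy-tailed (Pareto-type) waiting times.
n_walkers, t_final, alpha = 20_000, 100.0, 0.7   # alpha < 1: diverging mean waiting time

positions = np.zeros(n_walkers)
times = np.zeros(n_walkers)
active = np.ones(n_walkers, dtype=bool)

while active.any():
    n_act = active.sum()
    wait = (1.0 - rng.random(n_act)) ** (-1.0 / alpha) - 1.0   # Pareto-type waiting times
    jump = rng.normal(0.1, 1.0, n_act)                          # biased Gaussian jumps
    new_t = times[active] + wait
    still = new_t <= t_final                                    # count only jumps completed by t_final
    idx = np.flatnonzero(active)
    positions[idx[still]] += jump[still]
    times[idx] = new_t
    active[idx[~still]] = False

print("mean displacement  :", positions.mean())
print("plume variance     :", positions.var())
print("5th/95th percentile:", np.percentile(positions, [5, 95]))
```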
Modeling continuous covariates with a "spike" at zero: Bivariate approaches.
Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi
2016-07-01
In epidemiology and clinical research, predictors often take the value zero for a large proportion of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part is assessed to obtain a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation with the bivariate approaches is proposed.
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
Dorazio, Robert; Karanth, K. Ullas
2017-01-01
Motivation: Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. Model and data analysis: We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Benefits: Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in cases where spatial covariates of abundance are unknown or unavailable. We illustrated these benefits in the analysis of our data, which allowed us to quantify differences between nocturnal and diurnal activities of tigers and to estimate their spatial distribution and abundance across the study area. Our continuous-time SCR model allows an analyst to specify many of the ecological processes thought to be involved in the distribution, movement, and behavior of animals detected in a spatial trapping array of continuous-time recorders. We plan to extend this model to estimate the population dynamics of animals detected during multiple years of SCR surveys.
Generalized Cross Entropy Method for estimating joint distribution from incomplete information
NASA Astrophysics Data System (ADS)
Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.
2016-07-01
Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology, referred to as the "Generalized Cross Entropy Method" (GCEM), that is aimed at addressing the issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of the household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weights on the estimation of the joint distribution is explored.
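GCEM itself minimizes a weighted sum of divergences to several references; that formulation is not reproduced here. As a simple classical baseline for the same task of recovering a joint distribution consistent with given marginals, the sketch below runs iterative proportional fitting from an assumed seed joint; all marginal and seed values are illustrative.

```python
import numpy as np

# Classical iterative proportional fitting (IPF), shown only as a baseline for recovering
# a joint distribution consistent with given marginals; GCEM generalizes this idea by
# minimizing a weighted sum of divergences to several references.
size_marg = np.array([0.30, 0.40, 0.20, 0.10])   # household size 1..4 (assumed numbers)
dwell_marg = np.array([0.55, 0.30, 0.15])        # dwelling type A/B/C (assumed numbers)

# Seed joint carrying an assumed interaction structure (e.g., from a reference survey).
seed = np.array([[0.20, 0.05, 0.05],
                 [0.10, 0.20, 0.05],
                 [0.05, 0.10, 0.05],
                 [0.02, 0.05, 0.08]])
joint = seed / seed.sum()

for _ in range(100):
    joint *= (size_marg / joint.sum(axis=1))[:, None]    # match row (size) marginal
    joint *= (dwell_marg / joint.sum(axis=0))[None, :]   # match column (dwelling) marginal

print(np.round(joint, 4))
print("row sums :", np.round(joint.sum(axis=1), 4))
print("col sums :", np.round(joint.sum(axis=0), 4))
```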
Connecting source aggregating areas with distributive regions via Optimal Transportation theory.
NASA Astrophysics Data System (ADS)
Lanzoni, S.; Putti, M.
2016-12-01
We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations, which closely resemble the classical conservation laws for water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.
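As a small illustration of the discrete Monge-Kantorovich problem underlying the OT formulation (not the paper's coupled PDE-ODE system), the sketch below solves a tiny source-to-receiver transport problem as a linear program; site locations, masses and the Euclidean cost are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete Monge-Kantorovich problem as a linear program: move mass from source sites
# to receiving sites at minimum total transport cost. Sites and masses are synthetic.
src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.5]])     # aggregating (source) sites
dst = np.array([[3.0, 0.0], [3.0, 1.0]])                 # distributive (receiving) sites
a = np.array([0.4, 0.4, 0.2])                             # source masses (sum to 1)
b = np.array([0.5, 0.5])                                  # receiving masses (sum to 1)

# Cost = Euclidean distance between each source/receiver pair.
C = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
m, n = C.shape

# Equality constraints: row sums of the transport plan equal a, column sums equal b.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)
print("optimal transport plan:\n", np.round(plan, 3))
print("minimum transport cost:", round(res.fun, 4))
```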
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2016-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method, which allows us to compute first and second derivatives of misfit functionals with respect to the source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: the capability of different misfit functionals to image wave-speed anomalies and the source distribution, and possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
Exploring the Alfven-Wave Acceleration of Auroral Electrons in the Laboratory
NASA Astrophysics Data System (ADS)
Schroeder, James William Ryan
Inertial Alfven waves occur in plasmas where the Alfven speed is greater than the electron thermal speed and the scale of wave field structure across the background magnetic field is comparable to the electron skin depth. Such waves have an electric field aligned with the background magnetic field that can accelerate electrons. It is likely that electrons are accelerated by inertial Alfven waves in the auroral magnetosphere and contribute to the generation of auroras. While rocket and satellite measurements show a high level of coincidence between inertial Alfven waves and auroral activity, definitive measurements of electrons being accelerated by inertial Alfven waves are lacking. Continued uncertainty stems from the difficulty of making a conclusive interpretation of measurements from spacecraft flying through a complex and transient process. A laboratory experiment can avoid some of the ambiguity contained in spacecraft measurements. Experiments have been performed in the Large Plasma Device (LAPD) at UCLA. Inertial Alfven waves were produced while simultaneously measuring the suprathermal tails of the electron distribution function. Measurements of the distribution function use resonant absorption of whistler mode waves. During a burst of inertial Alfven waves, the measured portion of the distribution function oscillates at the Alfven wave frequency. The phase space response of the electrons is well-described by a linear solution to the Boltzmann equation. Experiments have been repeated using electrostatic and inductive Alfven wave antennas. The oscillation of the distribution function is described by a purely Alfvenic model when the Alfven wave is produced by the inductive antenna. However, when the electrostatic antenna is used, measured oscillations of the distribution function are described by a model combining Alfvenic and non-Alfvenic effects. Indications of a nonlinear interaction between electrons and inertial Alfven waves are present in recent data.
Variability of daily UV index in Jokioinen, Finland, in 1995-2015
NASA Astrophysics Data System (ADS)
Heikkilä, A.; Uusitalo, K.; Kärhä, P.; Vaskuri, A.; Lakkala, K.; Koskela, T.
2017-02-01
The UV Index is a measure of UV radiation harmful to human skin, developed and used to promote sun awareness and sun protection among the public. Monitoring programs conducted around the world have produced a number of long-term time series of UV irradiance. One of the longest time series of solar spectral UV irradiance in Europe has been obtained from the continuous measurements of the Brewer #107 spectrophotometer in Jokioinen (lat. 60°44'N, lon. 23°30'E), Finland, over the years 1995-2015. We have used descriptive statistics and estimates of cumulative distribution functions, quantiles and probability density functions in the analysis of the time series of daily UV Index maxima. Seasonal differences in the estimated distributions and in the trends of the estimated quantiles are found.
Critical thresholds for eventual extinction in randomly disturbed population growth models.
Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick
2018-02-16
This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
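The sampled-at-disturbance construction described above can be illustrated with a minimal simulation, assuming logistic growth between disturbance times that arrive as a Poisson process and disturbances that multiply the population by a random survival fraction; all parameter values are illustrative and the model is a simplified stand-in for the class of growth models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_post_disturbance(x0, r, K, rate, surv_low, n_events):
    """Sample a logistic population immediately after random disturbances.

    Growth: logistic with rate r and carrying capacity K between events.
    Disturbances: Poisson(rate) in time; each multiplies the population by
    a Uniform(surv_low, 1) survival fraction.  All values are illustrative."""
    x = x0
    samples = []
    for _ in range(n_events):
        dt = rng.exponential(1.0 / rate)              # waiting time to next disturbance
        # closed-form logistic solution over the inter-disturbance interval dt
        x = K * x * np.exp(r * dt) / (K + x * (np.exp(r * dt) - 1.0))
        x *= rng.uniform(surv_low, 1.0)               # instantaneous population reduction
        samples.append(x)
    return np.array(samples)

post = simulate_post_disturbance(x0=0.5, r=1.0, K=1.0, rate=2.0,
                                 surv_low=0.3, n_events=20000)
print("long-run mean of post-disturbance population:", post[5000:].mean())
```

Increasing the disturbance rate or lowering the survival fraction pushes the sampled process toward extinction, which is the kind of threshold behavior the paper characterizes explicitly.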
Simulations of Evaporating Multicomponent Fuel Drops
NASA Technical Reports Server (NTRS)
Bellan, Josette; Le Clercq, Patrick
2005-01-01
A paper presents additional information on the subject matter of Model of Mixing Layer With Multicomponent Evaporating Drops (NPO-30505), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 55. To recapitulate: A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The model is used to perform direct numerical simulations in continuing studies directed toward understanding the behaviors of sprays of liquid petroleum fuels in furnaces, industrial combustors, and engines. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and drops, respectively, and incorporates a concept of continuous thermodynamics, according to which the chemical composition of a fuel is described by use of a distribution function. In this investigation, the distribution function depends solely on the species molar weight. The present paper reiterates the description of the model and discusses further in-depth analysis of the previous results as well as results of additional numerical simulations assessing the effect of the mass loading. The paper reiterates the conclusions reported in the cited previous article, and states some new conclusions. Some new conclusions are: 1. The slower evaporation and the evaporation/condensation process for multicomponent-fuel drops resulted in a reduced drop-size polydispersity compared to their single-component counterpart. 2. The inhomogeneity in the spatial distribution of the species in the layer increases with the initial mass loading. 3. As evaporation becomes faster, the assumed invariant form of the molecular-weight distribution during evaporation becomes inaccurate.
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
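A simplified sketch of the continuization step, contrasting a Gaussian kernel with the compact-support Epanechnikov kernel considered in Study II, is shown below; it omits the mean- and variance-preserving rescaling used in operational kernel equating, and the toy score distribution with a spike at the top score is invented.

```python
import numpy as np

def continuize(scores, probs, bandwidth, kernel="gaussian"):
    """Return a density function f(x) smoothing a discrete score distribution.

    Simplified illustration of kernel continuization; the operational KE
    method additionally rescales to preserve the discrete mean and variance."""
    scores = np.asarray(scores, float)
    probs = np.asarray(probs, float)

    def f(x):
        u = (np.asarray(x, float)[..., None] - scores) / bandwidth
        if kernel == "gaussian":
            k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
        elif kernel == "epanechnikov":          # compact support reduces boundary bias
            k = 0.75 * np.clip(1 - u**2, 0, None)
        else:
            raise ValueError(kernel)
        return (probs * k).sum(axis=-1) / bandwidth

    return f

# Toy skewed score distribution on 0..20 with a spike at the top score.
scores = np.arange(21)
probs = np.r_[np.linspace(0.08, 0.01, 20), 0.15]
probs /= probs.sum()
f_gauss = continuize(scores, probs, bandwidth=0.8, kernel="gaussian")
f_epan = continuize(scores, probs, bandwidth=0.8, kernel="epanechnikov")
print("density at the top score:", f_gauss(20.0), f_epan(20.0))
```

Because the Epanechnikov kernel has compact support, mass from the spike at the boundary is not smeared as far outside the score range, which is the boundary-bias effect the article investigates.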
Statistical detection of patterns in unidimensional distributions by continuous wavelet transforms
NASA Astrophysics Data System (ADS)
Baluev, R. V.
2018-04-01
Objective detection of specific patterns in statistical distributions, like groupings or gaps or abrupt transitions between different subsets, is a task with a rich range of applications in astronomy: Milky Way stellar population analysis, investigations of exoplanet diversity, Solar System minor bodies statistics, extragalactic studies, etc. We adapt the powerful technique of wavelet transforms to this generalized task, placing strong emphasis on assessing the significance of detected patterns. Among other things, our method also involves optimal minimum-noise wavelets and minimum-noise reconstruction of the distribution density function. Based on this development, we construct a self-contained algorithmic pipeline aimed at processing statistical samples. It is currently applicable to single-dimensional distributions only, but it is flexible enough to undergo further generalizations and development.
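The following NumPy-only sketch illustrates the generic idea of scanning a binned one-dimensional sample distribution with wavelets of different widths to highlight groupings and gaps; it uses the standard Ricker (Mexican-hat) wavelet rather than the optimal minimum-noise wavelets of the paper, and performs no significance assessment.

```python
import numpy as np

def ricker(n, a):
    """Ricker (Mexican hat) wavelet sampled on n points with width parameter a."""
    t = np.arange(n) - (n - 1) / 2.0
    amp = 2 / (np.sqrt(3 * a) * np.pi**0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_of_sample(sample, bins, widths):
    """Continuous wavelet transform of a binned 1-D sample distribution."""
    counts, edges = np.histogram(sample, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    out = np.array([np.convolve(counts, ricker(min(10 * int(w), len(counts)), w),
                                mode="same") for w in widths])
    return centers, out      # out[i, j]: response at width widths[i], position centers[j]

# Bimodal sample: a grouping/gap pattern the transform should pick up.
rng = np.random.default_rng(2)
sample = np.r_[rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)]
centers, coeffs = cwt_of_sample(sample, bins=200, widths=[2, 5, 10, 20])
print("strongest response near x =", centers[np.argmax(coeffs[2])])
```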
Reframing the Dissemination Challenge: A Marketing and Distribution Perspective
Bernhardt, Jay M.
2009-01-01
A fundamental obstacle to successful dissemination and implementation of evidence-based public health programs is the near-total absence of systems and infrastructure for marketing and distribution. We describe the functions of a marketing and distribution system, and we explain how it would help move effective public health programs from research to practice. Then we critically evaluate the 4 dominant strategies now used to promote dissemination and implementation, and we explain how each would be enhanced by marketing and distribution systems. Finally, we make 6 recommendations for building the needed system infrastructure and discuss the responsibility within the public health community for implementation of these recommendations. Without serious investment in such infrastructure, application of proven solutions in public health practice will continue to occur slowly and rarely. PMID:19833993
Gardner, B.; Sullivan, P.J.; Morreale, S.J.; Epperly, S.P.
2008-01-01
Loggerhead (Caretta caretta) and leatherback (Dermochelys coriacea) sea turtle distributions and movements in offshore waters of the western North Atlantic are not well understood despite continued efforts to monitor, survey, and observe them. Loggerhead and leatherback sea turtles are listed as endangered by the World Conservation Union, and thus anthropogenic mortality of these species, including fishing, is of elevated interest. This study quantifies spatial and temporal patterns of sea turtle bycatch distributions to identify potential processes influencing their locations. A Ripley's K function analysis was employed on the NOAA Fisheries Atlantic Pelagic Longline Observer Program data to determine spatial, temporal, and spatio-temporal patterns of sea turtle bycatch distributions within the pattern of the pelagic fishery distribution. Results indicate that loggerhead and leatherback sea turtle catch distributions change seasonally, with patterns of spatial clustering appearing from July through October. The results from the space-time analysis indicate that sea turtle catch distributions are related on a relatively fine scale (30-200 km and 1-5 days). The use of spatial and temporal point pattern analysis, particularly K function analysis, is a novel way to examine bycatch data and can be used to inform fishing practices such that fishing could still occur while minimizing sea turtle bycatch. © 2008 NRC.
Modeled ground water age distributions
Woolfenden, Linda R.; Ginn, Timothy R.
2009-01-01
The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.
Investigation of the Photon Strength Function in 130Te
NASA Astrophysics Data System (ADS)
Isaak, J.; Beller, J.; Fiori, E.; Glorius, J.; Krtička, M.; Löher, B.; Pietralla, N.; Romig, C.; Rusev, G.; Savran, D.; Scheck, M.; Silva, J.; Sonnabend, K.; Tonchev, A. P.; Tornow, W.; Weller, H. R.; Zweidinger, M.
2016-01-01
The dipole strength distribution of 130Te was investigated with the method of Nuclear Resonance Fluorescence using continuous-energy bremsstrahlung at the Darmstadt High Intensity Photon Setup and quasi-monoenergetic photons at the High Intensity γ-Ray Source. The average decay properties were determined between 5.50 and 8.15 MeV and compared to simulations within the statistical model.
Distributed Decision Making in a Dynamic Network Environment
1990-01-01
protocols, particularly when traffic arrival statistics are varying or unknown, and loads are high. Both nonpreemptive and preemptive repeat disciplines are... The simulation model allows general value functions, continuous time operation, and preemptive or nonpreemptive service. For reasons of tractability... nonpreemptive LIFO, (4) nonpreemptive LIFO with discarding, (5) nonpreemptive HOL, (6) nonpreemptive HOL with discarding, (7) preemptive repeat HOL, (8
Loss of Miro1-directed mitochondrial movement results in a novel murine model for neuron disease
Nguyen, Tammy T.; Oh, Sang S.; Weaver, David; Lewandowska, Agnieszka; Maxfield, Dane; Schuler, Max-Hinderk; Smith, Nathan K.; Macfarlane, Jane; Saunders, Gerald; Palmer, Cheryl A.; Debattisti, Valentina; Koshiba, Takumi; Pulst, Stefan; Feldman, Eva L.; Hajnóczky, György; Shaw, Janet M.
2014-01-01
Defective mitochondrial distribution in neurons is proposed to cause ATP depletion and calcium-buffering deficiencies that compromise cell function. However, it is unclear whether aberrant mitochondrial motility and distribution alone are sufficient to cause neurological disease. Calcium-binding mitochondrial Rho (Miro) GTPases attach mitochondria to motor proteins for anterograde and retrograde transport in neurons. Using two new KO mouse models, we demonstrate that Miro1 is essential for development of cranial motor nuclei required for respiratory control and maintenance of upper motor neurons required for ambulation. Neuron-specific loss of Miro1 causes depletion of mitochondria from corticospinal tract axons and progressive neurological deficits mirroring human upper motor neuron disease. Although Miro1-deficient neurons exhibit defects in retrograde axonal mitochondrial transport, mitochondrial respiratory function continues. Moreover, Miro1 is not essential for calcium-mediated inhibition of mitochondrial movement or mitochondrial calcium buffering. Our findings indicate that defects in mitochondrial motility and distribution are sufficient to cause neurological disease. PMID:25136135
Theory of Random Copolymer Fractionation in Columns
NASA Astrophysics Data System (ADS)
Enders, Sabine
Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, where the physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to the molecular weight and chemical composition in every obtained fraction, depending on the operative properties, and is able to optimize the fractionation effectively.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
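In the normal-distribution setting, the decision-cost part of the optimization can be sketched as follows; the prevalence, distribution parameters and misclassification costs are illustrative, and the sketch ignores the sampling-uncertainty term and the inference procedures of the paper.

```python
import numpy as np
from scipy import stats, optimize

def expected_cost(c, mu0, sd0, mu1, sd1, prev, cost_fp, cost_fn):
    """Expected decision cost when subjects with marker values above c are called diseased."""
    fp = (1 - prev) * (1 - stats.norm.cdf(c, mu0, sd0)) * cost_fp   # non-diseased above c
    fn = prev * stats.norm.cdf(c, mu1, sd1) * cost_fn               # diseased below c
    return fp + fn

# Illustrative parameters: non-diseased N(0, 1), diseased N(1.5, 1.2), 20% prevalence.
args = dict(mu0=0.0, sd0=1.0, mu1=1.5, sd1=1.2, prev=0.2, cost_fp=1.0, cost_fn=5.0)
res = optimize.minimize_scalar(lambda c: expected_cost(c, **args),
                               bounds=(-3, 5), method="bounded")
print("optimal threshold:", res.x)
```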
Adaptive Detector Arrays for Optical Communications Receivers
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Srinivasan, M.
2000-01-01
The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.
Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.
Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia
2018-06-01
This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
NASA Astrophysics Data System (ADS)
Chakraborty Thakur, Saikat; McCarren, Dustin; Carr, Jerry; Scime, Earl E.
2012-02-01
We report continuous wave cavity ring down spectroscopy (CW-CRDS) measurements of ion velocity distribution functions (VDFs) in low pressure argon helicon plasma (magnetic field strength of 600 G, Te ≈ 4 eV and n ≈ 5 × 1011 cm-3). Laser induced fluorescence (LIF) is routinely used to measure VDFs of argon ions, argon neutrals, helium neutrals, and xenon ions in helicon sources. Here, we describe a CW-CRDS diagnostic based on a narrow line width, tunable diode laser as an alternative technique to measure VDFs in similar regimes but where LIF is inapplicable. Being an ultra-sensitive, cavity enhanced absorption spectroscopic technique; CW-CRDS can also provide a direct quantitative measurement of the absolute metastable state density. The proof of principle CW-CRDS measurements presented here are of the Doppler broadened absorption spectrum of Ar II at 668.6138 nm. Extrapolating from these initial measurements, it is expected that this diagnostic is suitable for neutrals and ions in plasmas ranging in density from 1 × 109 cm-3 to 1 × 1013 cm-3 and target species temperatures less than 20 eV.
Frequency distributions from birth, death, and creation processes.
Bartley, David L; Ogden, Trevor; Song, Ruiguang
2002-01-01
The time-dependent frequency distribution of groups of individuals versus group size was investigated within a continuum approximation, assuming a simplified individual growth, death and creation model. The analogy of the system to a physical fluid exhibiting both convection and diffusion was exploited in obtaining various solutions to the distribution equation. A general solution was approximated through the application of a Green's function. More specific exact solutions were also found to be useful. The solutions were continually checked against the continuum approximation through extensive simulation of the discrete system. Over limited ranges of group size, the frequency distributions were shown to closely exhibit a power-law dependence on group size, as found in many realizations of this type of system, ranging from colonies of mutated bacteria to the distribution of surnames in a given population. As an example, the modeled distributions were successfully fit to the distribution of surnames in several countries by adjusting the parameters specifying growth, death and creation rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta
Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2005-08-01
Decays of tyrosine fluorescence in protein-ligand complexes are described by a model of continuous distribution of fluorescence lifetimes. The resulting analytical power-like decay function provides good fits to highly complex fluorescence kinetics. Moreover, this is a manifestation of the so-called Tsallis q-exponential function, which is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic lifetime of fluorescence. The proposed decay functions were applied to the analysis of fluorescence decays of tyrosine in a protein, i.e. the enzyme purine nucleoside phosphorylase from E. coli (the product of the deoD gene), free in aqueous solution and in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate). The power-like function provides new information about enzyme-ligand complex formation based on the physically justified heterogeneity parameter directly related to the lifetime distribution. A measure of the heterogeneity parameter in the enzyme systems is provided by the variance of the fluorescence lifetime distribution. The possible number of deactivation channels and the excited-state mean lifetime can be easily derived without a priori knowledge of the complexity of the studied system. Moreover, the proposed model is simpler than the traditional multi-exponential one, and better describes the heterogeneous nature of the studied systems.
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
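For comparison, the classical way to obtain an autocorrelated binary sequence is to clip a correlated Gaussian process; the sketch below uses that simpler construction (an AR(1) parent clipped at a quantile), not the spectrum-based algorithm with beta-distributed transition probabilities proposed in the paper.

```python
import numpy as np

def binary_from_ar1(n, phi, p_one, seed=0):
    """Autocorrelated 0/1 sequence obtained by clipping a Gaussian AR(1) process.

    This is the classical 'clipped Gaussian' construction, shown only for
    contrast with the spectrum-based beta-transition algorithm of the abstract."""
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.standard_normal()
    eps = rng.standard_normal(n) * np.sqrt(1 - phi**2)   # keeps unit marginal variance
    for t in range(1, n):
        g[t] = phi * g[t - 1] + eps[t]
    threshold = np.quantile(g, 1 - p_one)                # marginal P(X = 1) = p_one
    return (g > threshold).astype(int)

x = binary_from_ar1(100_000, phi=0.9, p_one=0.3)
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print("marginal mean:", x.mean(), "lag-1 autocorrelation:", round(lag1, 3))
```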
Optical arc sensor using energy harvesting power source
NASA Astrophysics Data System (ADS)
Choi, Kyoo Nam; Rho, Hee Hyuk
2016-06-01
Wireless sensors without an external power supply have gained considerable attention due to convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the mA scale. A Peltier module with a heat-sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability for different arc energy levels, with a resolution below a 17 J energy difference, unaffected by the bursting arc waveform. The sensor showed potential for application as an arc discharge detecting sensor in a power distribution panel. An experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.
Fractional System Identification: An Approach Using Continuous Order-Distributions
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Lorenzo, Carl F.
1999-01-01
This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.
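A minimal sketch of frequency-domain least-squares identification over a discretized order-distribution: the response is modeled as a weighted sum of fractional powers (jw)^alpha_k on a fixed grid of orders, which is linear in the weights. The "true" system, frequency grid and order grid below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Model 1/H(jw) ~ sum_k c_k (jw)^{alpha_k} on a fixed grid of orders alpha_k,
# so the identification reduces to linear least squares in the weights c_k.
w = np.logspace(-2, 2, 200)                      # frequency grid (rad/s)
jw = 1j * w
H_true = 1.0 / (jw**0.5 + 2.0)                   # a simple fractional-order test system
alphas = np.linspace(-1.0, 1.0, 21)              # discretized order-distribution
A = np.stack([jw**a for a in alphas], axis=1)    # basis matrix, shape (n_freq, n_orders)

target = 1.0 / H_true
A_ri = np.vstack([A.real, A.imag])               # stack real/imag parts for real lstsq
t_ri = np.concatenate([target.real, target.imag])
c, *_ = np.linalg.lstsq(A_ri, t_ri, rcond=None)

recovered = A @ c
print("max relative error:", np.max(np.abs(recovered - target) / np.abs(target)))
print("largest weights at orders:", alphas[np.argsort(np.abs(c))[-2:]])
```

For this test system the fit concentrates the weight on orders 0 and 0.5, i.e. it recovers the underlying order-distribution from frequency-domain data.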
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
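The maximal point-biserial correlation for a given continuous parent and a given p is attained by dichotomizing the parent at its (1 - p) quantile; the short Monte Carlo sketch below estimates this maximum for several of the distributions discussed (sample sizes and the df choice are illustrative).

```python
import numpy as np
from scipy import stats

def max_point_biserial(dist, p, n=200_000, seed=0):
    """Monte Carlo estimate of the maximal point-biserial correlation.

    The maximum over all dichotomizations of the continuous parent X with
    P(Y = 1) = p is attained by thresholding X at its (1 - p) quantile."""
    rng = np.random.default_rng(seed)
    x = dist.rvs(size=n, random_state=rng)
    y = (x > np.quantile(x, 1 - p)).astype(float)
    return np.corrcoef(x, y)[0, 1]

for name, dist in [("normal", stats.norm()), ("uniform", stats.uniform()),
                   ("exponential", stats.expon()), ("t, df=5", stats.t(df=5))]:
    print(name, [round(max_point_biserial(dist, p), 3) for p in (0.1, 0.3, 0.5)])
```

Comparing the rows shows how the maximum, and its dependence on p, changes with the shape of the parent distribution, which is the point the article makes analytically.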
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
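A minimal sketch of the idea, using the skew-normal as one convenient member of the skew-symmetric family to shape an asymmetric interfacial transition and then discretizing the continuous profile into thin constant-density slices (which could subsequently feed a Parratt-type recursion, not implemented here); all densities and length scales below are illustrative.

```python
import numpy as np
from scipy import stats

def graded_interface(z, rho_top, rho_bottom, z0, width, skew):
    """Density vs depth across one interface, using a skew-normal CDF as the
    asymmetric transition shape (one member of the skew-symmetric family)."""
    s = stats.skewnorm.cdf(z, a=skew, loc=z0, scale=width)
    return rho_top + (rho_bottom - rho_top) * s

# Continuous profile, then discretization into thin constant-density slices.
z = np.linspace(-50.0, 50.0, 2001)                      # depth grid (illustrative units)
rho = graded_interface(z, rho_top=0.0, rho_bottom=9.4, z0=0.0, width=8.0, skew=4.0)

slab_thickness = 1.0
edges = np.arange(z[0], z[-1] + slab_thickness, slab_thickness)
slab_rho = [rho[(z >= a) & (z < b)].mean() for a, b in zip(edges[:-1], edges[1:])]
print("number of slices:", len(slab_rho), " first/last densities:",
      round(slab_rho[0], 3), round(slab_rho[-1], 3))
```

The skewness parameter controls how far the transition penetrates into one layer relative to the other, which is the selective control over interpenetration mentioned in the abstract.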
Edge effects in angle-ply composite laminates
NASA Technical Reports Server (NTRS)
Hsu, P. W.; Herakovich, C. T.
1977-01-01
This paper presents the results of a zeroth-order solution for edge effects in angle-ply composite laminates obtained using perturbation techniques and a limiting free body approach. The general solution for edge effects in laminates of arbitrary angle ply is applied to the special case of a (±45)s graphite/epoxy laminate. Interlaminar stress distributions are obtained as a function of the laminate thickness-to-width ratio and compared to finite difference results. The solution predicts stable, continuous stress distributions, determines finite maximum tensile interlaminar normal stress and provides mathematical evidence for singular interlaminar shear stresses in (±45) graphite/epoxy laminates.
NASA Astrophysics Data System (ADS)
Gearhart, Joshua; Niffte Collaboration
2017-09-01
Fission fragment mass distributions are important observables for developing next generation dynamical models of fission. Many previous measurements have utilized ionization chambers to measure fission fragment energies and emission angles which are then used for mass calculations. The Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) collaboration has built a time projection chamber (fissionTPC) that is capable of measuring additional quantities such as the ionization profiles of detected particles, allowing for the association of an individual fragment's ionization profile with its mass. The fragment masses are measured using the previously established 2E method. The fissionTPC takes its data using a continuous incident neutron energy spectrum provided by the Los Alamos Neutron Science Center (LANSCE). Mass distribution measurements across a continuous range of neutron energies put stronger constraints on fission models than similar measurements conducted at a handful of discrete neutron energies. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Numbers DE-NA0003180 and DE-NA0002921.
Jayasinghe, B Sumith; Volz, David C
2012-01-01
G protein-coupled estrogen receptor 1 (GPER) is a G protein-coupled receptor (GPCR) unrelated to nuclear estrogen receptors but strongly activated by 17β-estradiol in both mammals and fish. To date, the distribution and functional characterization of GPER within reproductive and nonreproductive vertebrate organs have been restricted to juvenile and adult animals. In contrast, virtually nothing is known about the spatiotemporal distribution and function of GPER during vertebrate embryogenesis. Using zebrafish as an animal model, we investigated the potential functional role and expression of GPER during embryogenesis. Based on real-time PCR and whole-mount in situ hybridization, gper was expressed as early as 1 h postfertilization (hpf) and exhibited strong stage-dependent expression patterns during embryogenesis. At 26 and 38 hpf, gper mRNA was broadly distributed throughout the body, whereas from 50 to 98 hpf, gper expression was increasingly localized to the heart, brain, neuromasts, craniofacial region, and somite boundaries of developing zebrafish. Continuous exposure to a selective GPER agonist (G-1)-but not continuous exposure to a selective GPER antagonist (G-15)-from 5 to 96 hpf, or within three developmental windows ranging from 10 to 72 hpf, resulted in adverse concentration-dependent effects on survival, gross morphology, and somite formation within the trunk of developing zebrafish embryos. Importantly, based on co-exposure studies, G-15 blocked severe G-1-induced developmental toxicity, suggesting that G-1 toxicity is mediated via aberrant activation of GPER. Overall, our findings suggest that xenobiotic-induced GPER activation represents a potentially novel and understudied mechanism of toxicity for environmentally relevant chemicals that affect vertebrate embryogenesis.
SU-F-18C-11: Diameter Dependency of the Radial Dose Distribution in a Long Polyethylene Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakalyar, D; McKenney, S; Feng, W
Purpose: The radial dose distribution in the central plane of a long cylinder following a long CT scan depends upon the diameter and composition of the cylinder. An understanding of this behavior is required for determining the spatial average of the dose in the central plane. Polyethylene, the material for construction of the TG200/ICRU phantom (30 cm in diameter), was used for this study. Size effects are germane to the principles incorporated in size specific dose estimates (SSDE); thus diameter dependency was explored as well. Method: Assuming a uniform cylinder and cylindrically symmetric conditions of irradiation, the dose distribution can be described using a radial function. This function must be an even function of the radial distance due to the conditions of symmetry. Two effects are accounted for: The direct beam makes its weakest contribution at the center while the contribution due to scatter is strongest at the center and drops off abruptly at the outer radius. An analytic function incorporating these features was fit to Monte Carlo results determined for infinite polyethylene cylinders of various diameters. A further feature of this function is that it is integrable. Results: Symmetry and continuity dictate a local extremum at the center which is a minimum for the larger sizes. The competing effects described above can result in an absolute maximum occurring between the center and outer edge of the cylinders. For the smallest cylinders, the maximum dose may occur at the center. Conclusion: An integrable, analytic function can be used to characterize the radial dependency of dose for cylindrical CT phantoms of various sizes. One use for this is to help determine average dose distribution over the central cylinder plane when equilibrium dose has been reached.
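A sketch of the fitting idea: represent the central-plane dose with a low-order even polynomial in r (one possible even, integrable analytic form, not necessarily the function used in the abstract), fit it to sampled values, and integrate to obtain the area-weighted average dose over the central plane. The "Monte Carlo" values below are synthetic.

```python
import numpy as np

R = 15.0                                              # cylinder radius (cm), 30 cm phantom
r_data = np.linspace(0.0, R, 16)                      # illustrative sample radii
d_data = 10.0 + 0.02 * r_data**2 + 0.004 * r_data**4  # synthetic dose values (mGy)
d_data += np.random.default_rng(3).normal(0, 0.05, r_data.size)

# Even, integrable analytic form: D(r) = a0 + a2 r^2 + a4 r^4 (symmetry forbids odd powers).
A = np.column_stack([np.ones_like(r_data), r_data**2, r_data**4])
(a0, a2, a4), *_ = np.linalg.lstsq(A, d_data, rcond=None)

# Area-weighted mean dose over the central plane: (2 / R^2) * integral_0^R D(r) r dr.
mean_dose = (2 / R**2) * (a0 * R**2 / 2 + a2 * R**4 / 4 + a4 * R**6 / 6)
print("fit coefficients:", round(a0, 3), round(a2, 5), round(a4, 6))
print("area-weighted mean dose:", round(mean_dose, 3))
```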
Discrete linear canonical transforms based on dilated Hermite functions.
Pei, Soo-Chang; Lai, Yun-Chiu
2011-08-01
Linear canonical transform (LCT) is very useful and powerful in signal processing and optics. In this paper, discrete LCT (DLCT) is proposed to approximate LCT by utilizing the discrete dilated Hermite functions. The Wigner distribution function is also used to investigate DLCT performances in the time-frequency domain. Compared with the existing digital computation of LCT, our proposed DLCT possesses additivity and reversibility properties with no oversampling involved. In addition, the length of input/output signals will not be changed before and after the DLCT transformations, which is consistent with the time-frequency area-preserving nature of LCT; meanwhile, the proposed DLCT provides a very good approximation of the continuous LCT.
Duarte Queirós, Sílvio M; Crokidakis, Nuno; Soares-Pinto, Diogo O
2009-07-01
The influence of the tail features of the local magnetic field probability density function (PDF) on the ferromagnetic Ising model is studied in the limit of infinite-range interactions. Specifically, we assign to each site a quenched random field whose value is drawn from a generic distribution that encompasses platykurtic and leptokurtic distributions depending on a single parameter tau < 3. For tau < 5/3, such distributions, which are basically the Student-t and r-distributions extended to all plausible real degrees of freedom, have a finite standard deviation; otherwise the distribution has the same asymptotic power-law behavior as an alpha-stable Lévy distribution with alpha = (3-tau)/(tau-1). For every value of tau, at a specific temperature and width of the distribution, the system undergoes a continuous phase transition. Strikingly, we find the emergence of an inflection point in the temperature versus PDF-width phase diagrams for distributions broader than the Cauchy-Lorentz (tau = 2), which is accompanied by a divergent free energy per spin (at zero temperature).
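In the infinite-range limit the magnetization obeys the mean-field self-consistency condition m = E_h[tanh(beta (J m + h))]; the sketch below solves this fixed point for a Student-t random-field distribution as one member of the family considered, with illustrative parameters (it locates the transition only qualitatively and is not the authors' calculation).

```python
import numpy as np
from scipy import stats, integrate

def magnetization(beta, J, field_dist, m0=0.9, tol=1e-10, max_iter=2000):
    """Fixed-point solution of the mean-field random-field Ising self-consistency
    equation m = E_h[ tanh(beta * (J*m + h)) ] for a given field distribution."""
    m = m0
    for _ in range(max_iter):
        integrand = lambda h: field_dist.pdf(h) * np.tanh(beta * (J * m + h))
        m_new, _ = integrate.quad(integrand, -np.inf, np.inf, limit=200)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Student-t random fields: one member of the platykurtic/leptokurtic family above.
field = stats.t(df=3, scale=0.5)
for T in (0.5, 1.0, 1.5, 2.0):
    print("T =", T, " m =", round(magnetization(beta=1.0 / T, J=1.0, field_dist=field), 4))
```

Scanning the field width at fixed temperature in the same way traces out the temperature versus PDF-width phase boundary discussed in the abstract.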
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provide a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows us to extract the infinite-time and infinite-size limit of these estimators.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravitational and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data optimization inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of Gaussian mapping between the objective function value and the quantity of pheromone. It can analyze the search results in real time and promote the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability
2011-11-10
Customers and product manufacturers demand continued functionality of complex equipment and processes. Degradation of material...
Mathematical modeling of inhalation exposure
NASA Technical Reports Server (NTRS)
Fiserova-Bergerova, V.
1976-01-01
The paper presents a mathematical model of inhalation exposure in which uptake, distribution and excretion are described by exponential functions, while rate constants are determined by tissue volumes, blood perfusion and by the solubility of vapors (partition coefficients). In the model, tissues are grouped into four pharmacokinetic compartments. The model is used to study continuous and interrupted chronic exposures and is applied to the inhalation of Forane and methylene chloride.
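A minimal sketch in the spirit of the model: each tissue group approaches its equilibrium concentration exponentially, with a rate constant set by perfusion, tissue volume and the tissue/blood partition coefficient, and an interrupted exposure schedule is imposed. The compartment names and all numerical values are illustrative, not those of the cited model.

```python
import numpy as np

compartments = {                      # volume (L), perfusion (L/min), partition coefficient
    "vessel-rich": (6.0, 4.4, 1.5),
    "muscle":      (33.0, 1.0, 2.0),
    "fat":         (15.0, 0.3, 60.0),
    "vessel-poor": (12.0, 0.2, 1.5),
}

def tissue_conc(c0, c_art, V, Q, lam, t):
    """Exponential approach of a tissue concentration to its equilibrium lam * c_art."""
    k = Q / (V * lam)                          # rate constant (1/min)
    return lam * c_art + (c0 - lam * c_art) * np.exp(-k * t)

# Interrupted chronic exposure: 8 h at arterial concentration 1.0, then 16 h at 0.
conc = {name: 0.0 for name in compartments}
for c_art, minutes in [(1.0, 480), (0.0, 960)] * 5:          # five exposure days
    for name, (V, Q, lam) in compartments.items():
        conc[name] = tissue_conc(conc[name], c_art, V, Q, lam, minutes)

for name, c in conc.items():
    print(f"{name:12s} concentration after 5 days: {c:.3f}")
```

The slow, high-solubility compartment (here labelled "fat") keeps accumulating across successive interrupted exposures, which is the kind of behavior such compartmental models are used to examine.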
Continued Development of Expert System Tools for NPSS Engine Diagnostics
NASA Technical Reports Server (NTRS)
Lewandowski, Henry
1996-01-01
The objectives of this grant were to work with previously developed NPSS (Numerical Propulsion System Simulation) tools and enhance their functionality; explore similar AI systems; and work with the High Performance Computing Communication (HPCC) K-12 program. Activities for this reporting period are briefly summarized and a paper addressing the implementation, monitoring and zooming in a distributed jet engine simulation is included as an attachment.
1983-09-30
Pathways; GABAergic Pathway; Atropine; Reserpine; Alphamethylparatyrosine; Oxotremorine; Feedback. (...see Preface). The purpose was to compare the regional distribution of the effect of anticholinesterases with oxotremorine, a selective centrally... hippocampus, differently from oxotremorine, which was ineffective. In the other two regions, physostigmine and oxotremorine were equally active. At the
NASA Astrophysics Data System (ADS)
Gallant, Frederick M.
A novel method of fabricating functionally graded extruded composite materials is proposed for propellant applications using the technology of continuous processing with a Twin-Screw Extruder. The method is applied to the manufacturing of grains for solid rocket motors in an end-burning configuration with an axial gradient in ammonium perchlorate volume fraction and relative coarse/fine particle size distributions. The fabrication of functionally graded extruded polymer composites with either inert or energetic ingredients has yet to be investigated. The lack of knowledge concerning the processing of these novel materials has necessitated that a number of research issues be addressed. Of primary concern is characterizing and modeling the relationship between the extruder screw geometry, transient processing conditions, and the gradient architecture that evolves in the extruder. Recent interpretations of the Residence Time Distributions (RTDs) and Residence Volume Distributions (RVDs) for polymer composites in the TSE are used to develop new process models for predicting gradient architectures in the direction of extrusion. An approach is developed for characterizing the sections of the extrudate using optical, mechanical, and compositional analysis to determine the gradient architectures. The effects of processing on the burning rate properties of extruded energetic polymer composites are characterized for homogeneous formulations over a range of compositions to determine realistic gradient architectures for solid rocket motor applications. The new process models and burning rate properties that have been characterized in this research effort will be the basis for an inverse design procedure that is capable of determining gradient architectures for grains in solid rocket motors that possess tailored burning rate distributions that conform to user-defined performance specifications.
Mapping local and global variability in plant trait distributions
Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...
2017-12-01
Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints of key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA), and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolating to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.
Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik
2012-02-01
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
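For readers unfamiliar with the canonical scheme analyzed here, the following is a minimal sketch of DE/rand/1/bin (mutation, binomial crossover, greedy selection) on a box-constrained objective. The population size, F, CR, and the sphere test function are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, max_gen=200, seed=0):
    """Canonical DE/rand/1/bin for minimization of f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # mutation: three distinct donors, all different from the target vector i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # binomial crossover with one component guaranteed to come from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            u = np.clip(np.where(cross, v, pop[i]), lo, hi)
            # greedy selection
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

# e.g. the sphere function, which has the unique global optimum the analysis assumes
x_best, f_best = de_rand_1_bin(lambda x: np.sum(x**2), [(-5, 5)] * 5)
print(x_best, f_best)
```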
Off-the-shelf Control of Data Analysis Software
NASA Astrophysics Data System (ADS)
Wampler, S.
The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.
Comparative genomics approaches to understanding and manipulating plant metabolism.
Bradbury, Louis M T; Niehaus, Tom D; Hanson, Andrew D
2013-04-01
Over 3000 genomes, including numerous plant genomes, are now sequenced. However, their annotation remains problematic as illustrated by the many conserved genes with no assigned function, vague annotations such as 'kinase', or even wrong ones. Around 40% of genes of unknown function that are conserved between plants and microbes are probably metabolic enzymes or transporters; finding functions for these genes is a major challenge. Comparative genomics has correctly predicted functions for many such genes by analyzing genomic context, and gene fusions, distributions and co-expression. Comparative genomics complements genetic and biochemical approaches to dissect metabolism, continues to increase in power and decrease in cost, and has a pivotal role in modeling and engineering by helping identify functions for all metabolic genes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Vikingstad, E M; George, K P; Johnson, A F; Cao, Y
2000-04-01
In 95% of right-handed individuals the left hemisphere is dominant for speech and language function. The evidence for this is accumulated primarily from clinical populations. We investigated the cortical topography of language function and lateralization in a sample of the right-handed population using functional magnetic resonance imaging and two lexical-semantic paradigms. Activated cortical language networks were assessed topographically and quantitatively by using a lateralization index. As a group, we observed left hemispheric language dominance. Individually, the lateralization index varied continuously from left hemisphere dominant to bilateral representation. In males, language lateralized primarily to the left; in females, approximately half had left lateralization and the other half had bilateral representation. Our data indicate that a previous view of female bilateral hemispheric dominance for language (McGlone, 1980. Sex differences in human brain asymmetry: a critical survey. Behav Brain Sci 3:215-263; Shaywitz et al., 1995. Sex differences in the functional organization of the brain for language. Nature 373:607-609) simplifies the complexity of cortical language distribution in this population. Analysis of the distribution of the lateralization index in our study allowed us to make this difference in females apparent.
Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Wu, Xiaoyang; Liu, Tianyou
2009-07-01
Seismic signals are nonstationary, mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map, but the wavelets utilized should be orthogonal in order to obtain a satisfactory resolution. The less commonly applied Wigner-Ville distribution (WVD), although superior in concentrating the energy distribution, suffers from cross-term interference (CTI) when signals are multi-component. To reduce the impact of CTI, the Cohen class uses a kernel function as a low-pass filter; however, this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (the reassigned SPWVD, RSPWVD) toward the center of gravity of the considered energy region so that the concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare the result with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of abnormal geological bodies more precisely.
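A naive discrete pseudo Wigner-Ville distribution can be computed directly from the lag-windowed instantaneous autocorrelation, as sketched below with plain NumPy under my own conventions; the smoothing window and the chirp test signal are illustrative, and under this lag convention frequency bin k corresponds to k/(2N) cycles per sample. This is a sketch of the underlying transform, not the authors' smoothed and reassigned implementation.

```python
import numpy as np

def pseudo_wvd(x, win_len=63):
    """Naive discrete pseudo-WVD of an analytic signal x (lag-windowed, no time smoothing)."""
    N = len(x)
    L = win_len // 2
    h = np.hamming(win_len)                           # lag-direction smoothing window
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n, L)                   # lags keeping n+m and n-m inside the record
        m = np.arange(-mmax, mmax + 1)
        r = h[L + m] * x[n + m] * np.conj(x[n - m])   # windowed instantaneous autocorrelation
        R = np.zeros(N, dtype=complex)
        R[m % N] = r
        W[n] = np.fft.fft(R).real                     # real-valued: r is conjugate-symmetric in m
    return W                                          # bin k ~ k/(2N) cycles per sample

t = np.arange(256)
x = np.exp(2j * np.pi * (0.05 * t + 0.0005 * t**2))  # analytic linear chirp
W = pseudo_wvd(x)
print(W.shape, W[128].argmax() / (2 * 256))           # ridge near 0.05 + 0.001*128 cycles/sample
```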
Diffusion theory of decision making in continuous report.
Smith, Philip L
2016-07-01
I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
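The process described here can also be simulated directly with an Euler scheme: take steps of vector drift plus isotropic noise until the trajectory leaves the disk, then record the exit angle (the report) and the exit time (the decision time). The sketch below is an illustration of that generative process, not the paper's analytic Girsanov/Bessel solution; the drift, criterion, and step size are arbitrary.

```python
import numpy as np

def simulate_trial(drift_angle, drift_mag, criterion=1.0, dt=1e-3, sigma=1.0, rng=None):
    """Euler simulation of a 2-D diffusion with vector drift; stops when it exits the disk."""
    if rng is None:
        rng = np.random.default_rng()
    mu = drift_mag * np.array([np.cos(drift_angle), np.sin(drift_angle)])
    x = np.zeros(2)
    t = 0.0
    while np.linalg.norm(x) < criterion:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    return np.arctan2(x[1], x[0]), t   # reported attribute value and decision time

rng = np.random.default_rng(1)
angles, times = zip(*(simulate_trial(0.0, 2.0, rng=rng) for _ in range(2000)))
# report precision ~ concentration of `angles` about 0; RT distribution comes from `times`
print(np.std(angles), np.mean(times))
```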
Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization
NASA Astrophysics Data System (ADS)
Liu, Chuanming; Yao, Huajian
2017-03-01
Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but with uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful to achieve spatially varying resolution when used in regions with heterogeneous ray path distribution.
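One simple way to realize a Gaussian a priori model covariance with spatially varying correlation lengths is sketched below: each cell gets a correlation length that shrinks where ray density is high, and the covariance between two cells combines their lengths geometrically. The specific form C_ij = sigma^2 exp(-d_ij^2 / (L_i L_j)) and the length limits are assumptions for illustration, not necessarily the authors' exact parameterization.

```python
import numpy as np

def prior_covariance(coords, ray_density, sigma=0.05, L_min=30.0, L_max=150.0):
    """Gaussian a priori model covariance with correlation lengths tied to ray density.

    coords: (n_cells, 2) cell coordinates (e.g. km); ray_density: (n_cells,) path counts.
    """
    # denser sampling -> shorter correlation length -> weaker smoothing
    d = (ray_density - ray_density.min()) / (np.ptp(ray_density) + 1e-12)
    L = L_max - (L_max - L_min) * d
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma**2 * np.exp(-dist**2 / (L[:, None] * L[None, :]))

rng = np.random.default_rng(0)
coords = rng.uniform(0, 300, size=(100, 2))
density = rng.poisson(20, size=100)
C = prior_covariance(coords, density)
print(C.shape, C.diagonal()[:3])
```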
Ryu, Jihye; Torres, Elizabeth B.
2018-01-01
The field of enacted/embodied cognition has emerged as a contemporary attempt to connect the mind and body in the study of cognition. However, there has been a paucity of methods that enable a multi-layered approach tapping into different levels of functionality within the nervous systems (e.g., continuously capturing in tandem multi-modal biophysical signals in naturalistic settings). The present study introduces a new theoretical and statistical framework to characterize the influences of cognitive demands on biophysical rhythmic signals harnessed from deliberate, spontaneous and autonomic activities. In this study, nine participants performed a basic pointing task to communicate a decision while they were exposed to different levels of cognitive load. Within these decision-making contexts, we examined the moment-by-moment fluctuations in the peak amplitude and timing of the biophysical time series data (e.g., continuous waveforms extracted from hand kinematics and heart signals). These spike-trains data offered high statistical power for personalized empirical statistical estimation and were well-characterized by a Gamma process. Our approach enabled the identification of different empirically estimated families of probability distributions to facilitate inference regarding the continuous physiological phenomena underlying cognitively driven decision-making. We found that the same pointing task revealed shifts in the probability distribution functions (PDFs) of the hand kinematic signals under study and were accompanied by shifts in the signatures of the heart inter-beat-interval timings. Within the time scale of an experimental session, marked changes in skewness and dispersion of the distributions were tracked on the Gamma parameter plane with 95% confidence. The results suggest that traditional theoretical assumptions of stationarity and normality in biophysical data from the nervous systems are incongruent with the true statistical nature of empirical data. This work offers a unifying platform for personalized statistical inference that goes far beyond those used in conventional studies, often assuming a “one size fits all model” on data drawn from discrete events such as mouse clicks, and observations that leave out continuously co-occurring spontaneous activity taking place largely beneath awareness. PMID:29681805
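The Gamma-process characterization mentioned above amounts to fitting a Gamma distribution to the inter-event (or peak-amplitude) data of each person and condition and tracking the fitted (shape, scale) point on the Gamma parameter plane. A minimal SciPy sketch, using synthetic inter-beat intervals as a stand-in for real recordings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for one person's inter-beat intervals (or normalized kinematic peak amplitudes)
ibis = rng.gamma(shape=2.5, scale=0.3, size=500)

shape, loc, scale = stats.gamma.fit(ibis, floc=0)   # fix the location parameter at zero
# position on the Gamma parameter plane: shape tracks skewness, scale tracks dispersion
print(f"shape a = {shape:.2f}, scale b = {scale:.2f}")
# bootstrapped confidence regions for (a, b) would show how the point shifts between
# cognitive-load conditions within a session
```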
Real-time segmentation in 4D ultrasound with continuous max-flow
NASA Astrophysics Data System (ADS)
Rajchl, M.; Yuan, J.; Peters, T. M.
2012-02-01
We present a novel continuous Max-Flow based method to segment the inner left ventricular wall from 3D trans-esophageal echocardiography image sequences, which minimizes an energy functional encoding two Fisher-Tippett distributions and a geometrical constraint in the form of a Euclidean distance map in a numerically efficient and accurate way. After initialization the method is fully automatic and is able to perform at up to 10 Hz, making it available for image-guided interventions. Results are shown on 4D TEE data sets from 18 patients with pathological cardiac conditions and the speed of the algorithm is assessed under a variety of conditions.
Diaconis, Persi; Holmes, Susan; Janson, Svante
2015-01-01
We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
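Random interval graphs of the kind referenced here are easy to experiment with numerically: sample n intervals on (0, 1) and note that for interval graphs the clique number equals the maximum number of intervals covering a single point (and, since interval graphs are perfect, also equals the chromatic number). A small sketch under those standard facts:

```python
import numpy as np

def random_interval_graph(n, rng):
    """Sample n random intervals on (0, 1); return them plus the clique number."""
    ivals = np.sort(rng.random((n, 2)), axis=1)          # each row is (left, right)
    # sweep line: clique number = max number of intervals covering a point
    events = sorted([(l, +1) for l, r in ivals] + [(r, -1) for l, r in ivals])
    depth = best = 0
    for _, step in events:
        depth += step
        best = max(best, depth)
    return ivals, best

rng = np.random.default_rng(3)
_, omega = random_interval_graph(200, rng)
print("clique number (= chromatic number for interval graphs):", omega)
```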
Continued development of a detailed model of arc discharge dynamics
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.; Ives, S. T.
1982-01-01
Using a previously developed set of codes (SEMC, CASCAD, ACORN), a parametric study was performed to quantify the parameters which describe the development of a single-electron-initiated avalanche into a negative tip streamer. The electron distribution function in Teflon is presented for values of the electric field in the range of four hundred million volts/meter to four billion volts/meter. A formulation of the scattering parameters is developed which shows that the transport can be represented by three independent variables. The distribution of ionization sites is used to initiate an avalanche. The self-consistent evolution of the avalanche is computed over the parameter range of the scattering set.
An analogy of the charge distribution on Julia sets with the Brownian motion
NASA Astrophysics Data System (ADS)
Lopes, Artur O.
1989-09-01
A way to compute the entropy of an invariant measure of a hyperbolic rational map from the information given by a Ruelle-Perron-Frobenius operator of a generic Hölder-continuous function will be shown. This result was motivated by an analogy of the Brownian motion with the dynamical system given by a rational map and the maximal measure. When the rational map is a polynomial, the maximal measure is the charge distribution on the Julia set. The main theorem of this paper can be seen as a large deviation result. It is a kind of Donsker-Varadhan formula for dynamical systems.
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless sensor networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The detection of objects of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing the energy consumption. Next, a Polynomial Regression Function is applied to combine the target objects of similar events observed by different sensors; the combined values, based on the minimum and maximum values of the object events, are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing the communication overhead. PMID:26426701
NASA Technical Reports Server (NTRS)
Schmeckpeper, K. R.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.
Transition in the equilibrium distribution function of relativistic particles.
Mendoza, M; Araújo, N A M; Succi, S; Herrmann, H J
2012-01-01
We analyze a transition from a single-peaked to a bimodal velocity distribution in a relativistic fluid under increasing temperature, in contrast with a non-relativistic gas, where only a monotonic broadening of the bell-shaped distribution is observed. Such a transition results from the interplay between the rise in thermal energy and the constraint of maximum velocity imposed by the speed of light. We study the Bose-Einstein, the Fermi-Dirac, and the Maxwell-Jüttner distributions, and show that they all exhibit the same qualitative behavior. We characterize the nature of the transition in the framework of critical phenomena and show that it is either continuous or discontinuous, depending on the group velocity. We analyze the transition in one, two, and three dimensions, with special emphasis on two dimensions, for which a possible experiment in graphene, based on the measurement of the Johnson-Nyquist noise, is proposed.
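The single-peak-to-bimodal transition can be reproduced in a few lines for the one-dimensional Maxwell-Jüttner case, where f(v) is proportional to gamma(v)^3 exp(-gamma/theta) with theta = kT/(m c^2) and c = 1; in this form the distribution stops peaking at v = 0 once theta exceeds 1/3. This is an illustrative special case, not the paper's general multi-dimensional or quantum-statistics analysis.

```python
import numpy as np

def juttner_1d(v, theta):
    """Unnormalized 1-D Maxwell-Jüttner velocity distribution, c = m = 1.
    f(v) follows from f(p) ~ exp(-gamma/theta) with p = gamma*v, dp = gamma^3 dv."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma**3 * np.exp(-gamma / theta)

v = np.linspace(-0.999, 0.999, 4001)
for theta in (0.05, 0.5, 2.0):            # theta = kT / (m c^2)
    f = juttner_1d(v, theta)
    bimodal = f[len(v) // 2] < f.max()    # True once v = 0 is no longer the global maximum
    print(f"theta = {theta:4.2f}: bimodal = {bimodal}")
```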
Wei, Yanling; Park, Ju H; Karimi, Hamid Reza; Tian, Yu-Chu; Jung, Hoyoul
2018-06-01
Continuous-time semi-Markovian jump neural networks (semi-MJNNs) are those MJNNs whose transition rates are not constant but depend on the random sojourn time. Addressing stochastic synchronization of semi-MJNNs with time-varying delay, an improved stochastic stability criterion is derived in this paper to guarantee stochastic synchronization of the response systems with the drive systems. This is achieved by constructing a semi-Markovian Lyapunov-Krasovskii functional and making use of a novel integral inequality together with the characteristics of cumulative distribution functions. Then, with a linearization procedure, controller synthesis is carried out for stochastic synchronization of the drive-response systems. The desired state-feedback controller gains can be determined by solving a linear matrix inequality-based optimization problem. Simulation studies are carried out to demonstrate the effectiveness and reduced conservatism of the presented approach.
Fluorescence correlation spectroscopy: the case of subdiffusion.
Lubelski, Ariel; Klafter, Joseph
2009-03-18
The theory of fluorescence correlation spectroscopy is revisited here for the case of subdiffusing molecules. Subdiffusion is assumed to stem from a continuous-time random walk process with a fat-tailed distribution of waiting times and can therefore be formulated in terms of a fractional diffusion equation (FDE). The FDE plays the central role in developing the fluorescence correlation spectroscopy expressions, analogous to the role played by the simple diffusion equation for regular systems. Due to the nonstationary nature of the continuous-time random walk/FDE, some interesting properties emerge that are amenable to experimental verification and may help in discriminating among subdiffusion mechanisms. In particular, the current approach predicts (1) a strong dependence of correlation functions on the initial time (aging); (2) sensitivity of correlation functions to the averaging procedure, ensemble versus time averaging (ergodicity breaking); and (3) that the basic mean-squared displacement observable depends on how the mean is taken.
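The aging and ergodicity-breaking effects discussed here originate in the heavy-tailed waiting times of the CTRW; a quick way to see the anomalous scaling is to simulate walkers with Pareto-distributed waits and unit jumps and check that the ensemble mean-squared displacement grows roughly like t^alpha rather than linearly. The exponent, time horizon, and walker count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_walkers = 0.6, 2000          # waiting-time tail exponent (0 < alpha < 1 gives subdiffusion)

def ctrw_positions(sample_times):
    """One CTRW trajectory: Pareto(alpha) waiting times (>= 1), +/-1 jumps, sampled at sample_times."""
    t, x, out = 0.0, 0.0, []
    it = iter(sample_times)
    target = next(it)
    while True:
        t += (1.0 - rng.random()) ** (-1.0 / alpha)   # heavy-tailed wait before the next jump
        while t > target:                              # record the position held during the wait
            out.append(x)
            try:
                target = next(it)
            except StopIteration:
                return np.array(out)
        x += rng.choice((-1.0, 1.0))

times = np.logspace(1, 4, 20)
X = np.array([ctrw_positions(times) for _ in range(n_walkers)])
ens_msd = (X**2).mean(axis=0)
slope = np.polyfit(np.log(times), np.log(ens_msd), 1)[0]
print(f"ensemble MSD scaling exponent ~ {slope:.2f} (expected ~ alpha = {alpha})")
```

Time-averaged MSDs computed per trajectory would remain roughly linear in the lag and scatter from walker to walker, which is the ergodicity-breaking signature the abstract refers to.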
Transformations Based on Continuous Piecewise-Affine Velocity Fields
Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan; Fisher, Jonn W.
2018-01-01
We propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available. PMID:28092517
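In one dimension the core idea reduces to integrating dx/dt = v(x) for a continuous piecewise-affine v, which already yields a smooth, order-preserving warp; the toy field and the generic SciPy integrator below are illustrative stand-ins for the paper's fast specialized integration scheme for R^n fields.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A continuous piecewise-affine velocity field on [0, 1] with 4 cells (1-D toy version).
knots = np.linspace(0.0, 1.0, 5)
values = np.array([0.0, 0.8, -0.3, 0.6, 0.0])      # velocity at the knots; affine in between

def v(t, x):
    return np.interp(x, knots, values)              # piecewise-linear (= piecewise-affine) field

def warp(x0, T=1.0):
    """Transform a point by integrating dx/dt = v(x) for unit time."""
    sol = solve_ivp(v, (0.0, T), np.atleast_1d(x0), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

xs = np.linspace(0, 1, 11)
print(np.array([warp(x)[0] for x in xs]))           # a smooth, order-preserving warp of [0, 1]
```

Because the field is Lipschitz, trajectories cannot cross, which is what makes such warps useful for monotonic regression and CDF/histogram modeling as listed in the abstract.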
Methods for the behavioral, educational, and social sciences: an R package.
Kelley, Ken
2007-11-01
Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
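MBESS itself is an R package; as a language-neutral illustration of the noncentral-t-based confidence intervals it provides for standardized effect sizes, the sketch below inverts the noncentral t CDF with SciPy. The one-sample Cohen's d setting and the root-bracketing interval are assumptions for the example, not MBESS code.

```python
import numpy as np
from scipy.stats import nct
from scipy.optimize import brentq

def ci_cohens_d(d, n, conf=0.95):
    """CI for a one-sample standardized effect size by inverting the noncentral t CDF."""
    df, t_obs = n - 1, d * np.sqrt(n)
    alpha = 1.0 - conf
    # noncentrality parameters whose CDF at the observed t hits the two tail probabilities
    lo = brentq(lambda ncp: nct.cdf(t_obs, df, ncp) - (1 - alpha / 2), -50, 50)
    hi = brentq(lambda ncp: nct.cdf(t_obs, df, ncp) - alpha / 2, -50, 50)
    return lo / np.sqrt(n), hi / np.sqrt(n)

print(ci_cohens_d(0.5, 40))   # e.g. d = 0.5 observed in a sample of 40
```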
Role of information theoretic uncertainty relations in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz; ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin; Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk
2015-04-15
Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
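For a concrete discrete example of an information-theoretic uncertainty relation, the snippet below checks the Maassen-Uffink Shannon-entropy bound H(Z) + H(X) >= 1 bit for a qubit measured in two mutually unbiased bases; the specific states are arbitrary. The Rényi and entropy-power extensions discussed in the paper follow the same pattern with generalized entropies.

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# qubit state |psi> = cos(t)|0> + sin(t)|1>, measured in the Z and X bases
for t in np.linspace(0, np.pi / 4, 5):
    psi = np.array([np.cos(t), np.sin(t)])
    pz = np.abs(psi) ** 2                          # Z-basis outcome probabilities
    x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    px = np.abs(x_basis @ psi) ** 2                # X-basis outcome probabilities
    # Maassen-Uffink bound: H(Z) + H(X) >= -log2 max_ij |<z_i|x_j>|^2 = 1 for these bases
    print(f"t={t:.2f}  H(Z)+H(X) = {shannon(pz) + shannon(px):.3f}  (bound: 1)")
```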
Iterated function systems for DNA replication
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2017-10-01
The kinetic equations of DNA replication are shown to be exactly solved in terms of iterated function systems, running along the template sequence and giving the statistical properties of the copy sequences, as well as the kinetic and thermodynamic properties of the replication process. With this method, different effects due to sequence heterogeneity can be studied, in particular, a transition between linear and sublinear growths in time of the copies, and a transition between continuous and fractal distributions of the local velocities of the DNA polymerase along the template. The method is applied to the human mitochondrial DNA polymerase γ without and with exonuclease proofreading.
Analysis of Compton continuum measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gold, R.; Olson, I. K.
1970-01-01
Five computer programs (COMPSCAT, FEND, GABCO, DOSE, and COMPLOT) have been developed and used for the analysis and subsequent reduction of measured energy distributions of Compton recoil electrons to continuous gamma spectra. In addition to detailed descriptions of these computer programs, the relationship among these codes is stressed. The manner in which these programs function is illustrated by tracing a sample measurement through a complete cycle of the data-reduction process.
Metrinome: Continuous Monitoring and Security Validation of Distributed Systems
2014-03-01
Integration into the SDLC (Software Development Life Cycle), retrieved Nov 06 2013, https://www.owasp.org/images/f/f6/Integration_into_the_SDLC.ppt [2...assessment as part of the software development life cycle, current approaches suffer from a number of shortcomings that limit their application in...with assessing security and correct functionality. Second, integrated and end-to-end testing and experimentation is often postponed until software
Damage and Loss Estimation for Natural Gas Networks: The Case of Istanbul
NASA Astrophysics Data System (ADS)
Çaktı, Eser; Hancılar, Ufuk; Şeşetyan, Karin; Bıyıkoǧlu, Hikmet; Şafak, Erdal
2017-04-01
Natural gas networks are one of the major lifeline systems that support human, urban and industrial activities. The continuity of gas supply is critical for almost all functions of modern life. Under natural phenomena such as earthquakes and landslides, damage to the system elements may lead to explosions and fires, compromising human life and damaging the physical environment. Furthermore, disruption of the gas supply puts human activities at risk and also results in economic losses. This study is concerned with the performance of one of the largest natural gas distribution systems in the world. Physical damage to Istanbul's natural gas network is estimated under the most recent probabilistic earthquake hazard models available, as well as under simulated ground motions from physics-based models. Several vulnerability functions are used in modelling damage to system elements. A first-order assessment of monetary losses to Istanbul's natural gas distribution network is also attempted.
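Vulnerability (fragility) functions of the kind used in such loss assessments are commonly parameterized as lognormal curves in a ground-motion intensity measure; the sketch below evaluates hypothetical leak/break fragilities against peak ground velocity. The medians and dispersions are invented for illustration and are not the study's values.

```python
import numpy as np
from scipy.stats import norm

def p_exceed(pgv, median_pgv, beta):
    """Lognormal fragility: probability of reaching the damage state given PGV (cm/s)."""
    return norm.cdf(np.log(pgv / median_pgv) / beta)

# hypothetical fragility parameters for two damage states of a pipeline segment
states = {"leak": (30.0, 0.5), "break": (70.0, 0.6)}
pgv = np.array([10.0, 30.0, 60.0, 100.0])
for name, (median, beta) in states.items():
    print(name, np.round(p_exceed(pgv, median, beta), 3))
# pipeline repair rates per km are often taken as a power law of PGV, e.g. RR = a * PGV**b
```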
Prokhorov, Alexander; Prokhorova, Nina I
2012-11-20
We applied the bidirectional reflectance distribution function (BRDF) model consisting of diffuse, quasi-specular, and glossy components to the Monte Carlo modeling of spectral effective emissivities for nonisothermal cavities. A method for extension of a monochromatic three-component (3C) BRDF model to a continuous spectral range is proposed. The initial data for this method are the BRDFs measured in the plane of incidence at a single wavelength and several incidence angles and directional-hemispherical reflectance measured at one incidence angle within a finite spectral range. We proposed the Monte Carlo algorithm for calculation of spectral effective emissivities for nonisothermal cavities whose internal surface is described by the wavelength-dependent 3C BRDF model. The results obtained for a cylindroconical nonisothermal cavity are discussed and compared with results obtained using the conventional specular-diffuse model.
Mitigation of Alfvén activity in a tokamak by externally applied static 3D fields.
Bortolon, A; Heidbrink, W W; Kramer, G J; Park, J-K; Fredrickson, E D; Lore, J D; Podestà, M
2013-06-28
The application of static magnetic field perturbations to a tokamak plasma is observed to alter the dynamics of high-frequency bursting Alfvén modes that are driven unstable by energetic ions. In response to perturbations with an amplitude of δB/B∼0.01 at the plasma boundary, the mode amplitude is reduced, the bursting frequency is increased, and the frequency chirp is smaller. For modes of weaker bursting character, the magnetic perturbation induces a temporary transition to a saturated continuous mode. Calculations of the perturbed distribution function indicate that the 3D perturbation affects the orbits of fast ions that resonate with the bursting modes. The experimental evidence represents an important demonstration of the possibility of controlling fast-ion instabilities through "phase-space engineering" of the fast-ion distribution function, by means of externally applied perturbation fields.
NASA Astrophysics Data System (ADS)
Sinsuebphon, Nattawut; Rudkouskaya, Alena; Barroso, Margarida; Intes, Xavier
2016-02-01
Targeted drug delivery is a critical aspect of successful cancer therapy. Assessment of the dynamic distribution of the drug provides relative concentration and bioavailability at the target tissue. The most common assessment approach is intensity-based imaging, which only provides information about anatomical distribution. Observation of biomolecular interactions can be performed using Förster resonance energy transfer (FRET). Thus, FRET-based imaging can assess functional distribution and provide potential therapeutic outcomes. In this study, we used wide-field lifetime-based FRET imaging to study the early functional distribution of transferrin delivery in breast cancer tumor models in small animals. Transferrin is a carrier for cancer drug delivery. Its interaction with its receptor occurs within a few nanometers, which is suitable for FRET. Alexa Fluor® 700 and Alexa Fluor® 750 were conjugated to holo-transferrin, which was then administered via tail vein injection to mice implanted with T47D breast cancer xenografts. Images were acquired continuously for 60 minutes post-injection. The results showed that transferrin was primarily distributed to the liver, the urinary bladder, and the tumor. The cellular uptake of transferrin, indicated by the level of FRET, was high in the liver but very low in the urinary bladder. The results also suggested that the fluorescence intensity and FRET signals were independent. The liver showed increasing intensity and increasing FRET during the observation period, while the urinary bladder showed increasing intensity but minimal FRET. Tumors gave varied results corresponding to their FRET progression. These results were relevant to the biomolecular events that occurred in the animals.
Humayun, Md Tanim; Divan, Ralu; Stan, Liliana; ...
2016-06-16
This paper presents a highly sensitive, energy efficient and low-cost distributed methane (CH4) sensor system (DMSS) for continuous monitoring, detection, and localization of CH4 leaks in natural gas infrastructure, such as transmission and distribution pipelines, wells, and production pads. The CH4 sensing element, a key component of the DMSS, consists of a metal oxide nanocrystal (MONC) functionalized multi-walled carbon nanotube (MWCNT) mesh which, in comparison to existing literature, shows stronger relative resistance change while interacting with lower parts per million (ppm) concentrations of CH4. A Gaussian plume triangulation algorithm has been developed for the DMSS. Given a geometric model of the surrounding environment, the algorithm can precisely detect and localize a CH4 leak as well as estimate its mass emission rate. A UV-based surface recovery technique making the sensor recover 10 times faster than those previously reported is presented for the DMSS. In conclusion, a control algorithm based on the UV-accelerated recovery is developed which facilitates faster leak detection.
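A Gaussian plume forward model is the core ingredient of the triangulation algorithm mentioned above: given a candidate source location and emission rate, it predicts the concentration at each sensor, and leak localization then becomes a least-squares fit of the source parameters to the distributed readings. The dispersion-coefficient power laws below are generic open-country approximations, not the authors' calibrated values.

```python
import numpy as np

def plume_conc(Q, u, x, y, z, H):
    """Gaussian plume concentration (kg/m^3) at downwind distance x, crosswind offset y, height z,
    for a point source of strength Q (kg/s), wind speed u (m/s), effective release height H (m)."""
    sigma_y = 0.22 * x / np.sqrt(1 + 0.0001 * x)   # rough open-country lateral dispersion
    sigma_z = 0.20 * x                              # rough vertical dispersion (stability-dependent)
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * np.exp(-y**2 / (2 * sigma_y**2)) * (
        np.exp(-(z - H)**2 / (2 * sigma_z**2)) + np.exp(-(z + H)**2 / (2 * sigma_z**2)))

# predicted concentration at a sensor 50 m downwind, 5 m off-axis, 1.5 m above ground
print(plume_conc(Q=1e-4, u=2.0, x=50.0, y=5.0, z=1.5, H=1.0))
```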
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green's functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
Ergon, T.; Yoccoz, N.G.; Nichols, J.D.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In many species, age or time of maturation and survival costs of reproduction may vary substantially within and among populations. We present a capture-mark-recapture model to estimate the latent individual trait distribution of time of maturation (or other irreversible transitions) as well as survival differences associated with the two states (representing costs of reproduction). Maturation can take place at any point in continuous time, and mortality hazard rates for each reproductive state may vary according to continuous functions over time. Although we explicitly model individual heterogeneity in age/time of maturation, we make the simplifying assumption that death hazard rates do not vary among individuals within groups of animals. However, the estimates of the maturation distribution are fairly robust against individual heterogeneity in survival as long as there is no individual-level correlation between mortality hazards and latent time of maturation. We apply the model to biweekly capture-recapture data of overwintering field voles (Microtus agrestis) in cyclically fluctuating populations to estimate time of maturation and survival costs of reproduction. Results show that onset of seasonal reproduction is particularly late and survival costs of reproduction are particularly large in declining populations.
Focusers of obliquely incident laser radiation
NASA Astrophysics Data System (ADS)
Goncharskiy, A. V.; Danilov, V. A.; Popov, V. V.; Prokhorov, A. M.; Sisakyan, I. N.; Sayfer, V. A.; Stepanov, V. V.
1984-08-01
Focusing obliquely incident laser radiation along a given line in space with a given intensity distribution is treated as a problem of synthesizing a mirror surface. The intricate shape of such a surface, characterized by a function z = z(u,v) in the approximation of geometrical optics, is determined from the equation phi(u,v,z) - phi_0(u,v,z) = 0, which expresses that the incident field and the reflected field have identical eikonals. Further calculations are facilitated by replacing the continuous mirror with a more easily manufactured piecewise-continuous one. The problem is solved for the simple case of a plane incident wave with a typical eikonal phi_0(u,v,z) = -z cos(theta), incident at a large angle on a focusing mirror in the z-plane region. Mirrors constructed on the basis of the theoretical solution were tested in an experiment with a CO2 laser. A light beam with a Gaussian intensity distribution was, upon incidence at a 45 deg angle, focused into a circle or into an ellipse with uniform intensity distribution. Improvements in amplitude masking and selective tanning technology should reduce energy losses at the surface, resulting in efficient laser-focusing mirrors.
Kohut, Sviataslau V; Staroverov, Viktor N
2013-10-28
The exchange-correlation potential of Kohn-Sham density-functional theory, vXC(r), can be thought of as an electrostatic potential produced by the static charge distribution qXC(r) = -(1/4π)∇²vXC(r). The total exchange-correlation charge, QXC = ∫qXC(r) dr, determines the rate of the asymptotic decay of vXC(r). If QXC ≠ 0, the potential falls off as QXC/r; if QXC = 0, the decay is faster than coulombic. According to this rule, exchange-correlation potentials derived from standard generalized gradient approximations (GGAs) should have QXC = 0, but accurate numerical calculations give QXC ≠ 0. We resolve this paradox by showing that the charge density qXC(r) associated with every GGA consists of two types of contributions: a continuous distribution and point charges arising from the singularities of vXC(r) at each nucleus. Numerical integration of qXC(r) accounts for the continuous charge but misses the point charges. When the point-charge contributions are included, one obtains the correct QXC value. These findings provide an important caveat for attempts to devise asymptotically correct Kohn-Sham potentials by modeling the distribution qXC(r).
Pickup Ion Distributions from Three Dimensional Neutral Exospheres
NASA Technical Reports Server (NTRS)
Hartle, R. E.; Sarantos, M.; Sittler, E. C., Jr.
2011-01-01
Pickup ions formed from ionized neutral exospheres in flowing plasmas have phase space distributions that reflect their source's spatial distributions. Phase space distributions of the ions are derived from the Vlasov equation with a delta function source using three-dimensional neutral exospheres. The E×B drift produced by plasma motion picks up the ions while the effects of magnetic field draping, mass loading, wave-particle scattering, and Coulomb collisions near a planetary body are ignored. Previously, one-dimensional exospheres were treated, resulting in closed-form pickup ion distributions that explicitly depend on the ratio rg/H, where rg is the ion gyroradius and H is the neutral scale height at the exobase. In general, the pickup ion distributions based on three-dimensional neutral exospheres cannot be written in closed form, but can be computed numerically. They continue to reflect their source's spatial distributions in an implicit way. These ion distributions and their moments are applied to several bodies, including He(+) and Na(+) at the Moon, H2(+) and CH4(+) at Titan, and H(+) at Venus. The best places to use these distributions are upstream of the Moon's surface, the ionopause of Titan, and the bow shock of Venus.
Gibbons, Richard A.; Dixon, Stephen N.; Pocock, David H.
1973-01-01
A specimen of intestinal glycoprotein isolated from the pig and two samples of dextran, all of which are polydisperse (that is, the preparations may be regarded as consisting of a continuous distribution of molecular weights), have been examined in the ultracentrifuge under meniscus-depletion conditions at equilibrium. They are compared with each other and with a glycoprotein from Cysticercus tenuicollis cyst fluid which is almost monodisperse. The quantity c^(-1/3) (c = concentration) is plotted against ξ (the reduced radius); this plot is linear when the molecular-weight distribution approximates to the `most probable', i.e. when Mn:Mw:Mz:M(z+1)... is as 1:2:3:4, etc. The use of this plot, and related procedures, to evaluate qualitatively and semi-quantitatively molecular-weight distribution functions where they can be realistically approximated to Schulz distributions is discussed. The theoretical basis is given in an Appendix. PMID:4778265
26 CFR 1.704-1 - Partner's distributive share.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 8 2014-04-01 2014-04-01 false Partner's distributive share. 1.704-1 Section 1.704-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Partners and Partnerships § 1.704-1 Partner's distributive share. (a) Effect of partnership agreement. A partner'...
26 CFR 1.704-1 - Partner's distributive share.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Partner's distributive share. 1.704-1 Section 1.704-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Partners and Partnerships § 1.704-1 Partner's distributive share. (a) Effect of partnership agreement. A partner'...
26 CFR 1.704-1 - Partner's distributive share.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Partner's distributive share. 1.704-1 Section 1.704-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Partners and Partnerships § 1.704-1 Partner's distributive share. (a) Effect of partnership agreement. A partner'...
26 CFR 1.704-1 - Partner's distributive share.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Partner's distributive share. 1.704-1 Section 1.704-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Partners and Partnerships § 1.704-1 Partner's distributive share. (a) Effect of partnership agreement. A partner'...
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection with standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which introduces a degradation of the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states in which a local laser is employed that is not locked to the received signal, and a postdetection phase correction is applied instead. Here the reduction of the secure key rate caused by laser phase noise is evaluated analytically, for both individual and collective attacks, and a pilot-assisted phase estimation scheme is proposed, outlining the trade-off in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.
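The pilot-assisted idea can be prototyped with a simple baseband simulation: interleave known pilot symbols with Gaussian-modulated data, let the carrier phase follow a Wiener (laser-linewidth) process, estimate the phase at the pilots, interpolate, and de-rotate the data. The residual phase error then quantifies the excess noise that erodes the key rate. All parameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pilot_every, linewidth_dt = 20000, 10, 1e-4     # symbols, pilot spacing, (linewidth x symbol period)

# Wiener phase noise: each symbol's increment has variance 2*pi*(delta_nu * T_symbol)
phase = np.cumsum(rng.normal(0.0, np.sqrt(2 * np.pi * linewidth_dt), n))
sym = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)   # Gaussian-modulated data
pilot_idx = np.arange(0, n, pilot_every)
sym[pilot_idx] = 1.0                                                # known pilots interleaved with data
rx = sym * np.exp(1j * phase) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# estimate the phase at each pilot, unwrap, and interpolate over the data symbols
est_pilots = np.unwrap(np.angle(rx[pilot_idx]))
est = np.interp(np.arange(n), pilot_idx, est_pilots)
corrected = rx * np.exp(-1j * est)                 # de-rotated symbols for excess-noise estimation

print(f"residual phase error: {np.std(phase - est):.3f} rad")
```

Tightening the pilot spacing reduces the residual phase error but consumes symbols that could carry key material, which is the phase-noise versus spectral-efficiency trade-off the abstract mentions.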
NASA Astrophysics Data System (ADS)
Coral, W.; Rossi, C.; Curet, O. M.
2015-12-01
This paper presents a Differential Quadrature Element Method for the free transverse vibration of a robotic fish based on a continuous and non-uniform flexible backbone with distributed masses (fish ribs). The proposed method is based on the theory of a Timoshenko cantilever beam. The effects of the masses (number, magnitude and position) on the values of the natural frequencies are investigated. Governing equations, compatibility and boundary conditions are formulated according to the Differential Quadrature rules. The convergence, efficiency and accuracy are compared to other analytical solutions proposed in the literature. Moreover, the proposed method has been validated against a physical prototype of a flexible fish backbone. The main advantages of this method, compared to the exact solutions available in the literature, are twofold: first, a smaller computational cost; and second, it allows the analysis of free vibration in beams whose section is an arbitrary function, which is normally difficult or even impossible with other analytical methods.
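The core of any differential quadrature formulation is the weighting matrix that maps nodal values to nodal derivatives. The sketch below builds first- and second-order matrices on a Chebyshev-Gauss-Lobatto grid using Shu's explicit formula and verifies them on a smooth test function; assembling the beam's eigenvalue problem from these matrices plus the Timoshenko and mass terms is left out, as those details depend on the paper's formulation.

```python
import numpy as np

def dq_matrices(N):
    """First- and second-order DQ weighting matrices on Chebyshev-Gauss-Lobatto points in [0, 1]."""
    x = 0.5 * (1 - np.cos(np.pi * np.arange(N) / (N - 1)))      # CGL grid
    M = np.array([np.prod(x[i] - np.delete(x, i)) for i in range(N)])
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])           # Shu's off-diagonal formula
        A[i, i] = -A[i].sum()                                     # rows of A sum to zero
    return x, A, A @ A                                            # D1 and D2 = D1 @ D1

x, D1, D2 = dq_matrices(15)
f = np.sin(np.pi * x)
print(np.max(np.abs(D2 @ f + np.pi**2 * f)))   # small residual: d2/dx2 sin(pi x) = -pi^2 sin(pi x)
```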
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
NASA Astrophysics Data System (ADS)
Altmeier, M.; Bauer, F.; Bisplinghoff, J.; Büßer, K.; Busch, M.; Colberg, T.; Diehl, O.; Dohrmann, F.; Engelhardt, H. P.; Eversheim, P. D.; Felden, O.; Gebel, R.; Glende, M.; Greiff, J.; Groß-Hardt, R.; Hinterberger, F.; Jahn, R.; Jonas, E.; Krause, H.; Langkau, R.; Lindemann, T.; Lindlein, J.; Maier, R.; Maschuw, R.; Mayer-Kuckuk, T.; Meinerzhagen, A.; Nähle, O.; Prasuhn, D.; Rohdjeß, H.; Rosendaal, D.; von Rossen, P.; Schirm, N.; Schulz-Rojahn, M.; Schwarz, V.; Scobel, W.; Trelle, H. J.; Weise, E.; Wellinghausen, A.; Woller, K.; Ziegler, R.
2000-01-01
The EDDA experiment at the cooler synchrotron COSY measures proton-proton elastic scattering excitation functions in the momentum range 0.8 - 3.4 GeV/c. In phase 1 of the experiment, spin-averaged differential cross sections were measured continuously during acceleration with an internal polypropylene (CH₂) fiber target, taking particular care to monitor luminosity as a function of beam momentum. In phase 2, excitation functions of the analyzing power A_N and the polarization correlation parameters A_NN, A_SS and A_SL are measured using a polarized proton beam and a polarized atomic hydrogen beam target. The paper presents recent dσ/dΩ and A_N data. The results provide excitation functions and angular distributions of high precision and internal consistency. No evidence for narrow structures was found. The data are compared to recent phase shift solutions.
NASA Technical Reports Server (NTRS)
Min, Q.-L.; Lummerzheim, D.; Rees, M. H.; Stamnes, K.
1993-01-01
The consequences of electric field acceleration and an inhomogeneous magnetic field on auroral electron energy distributions in the topside ionosphere are investigated. The one-dimensional, steady state electron transport equation includes elastic and inelastic collisions, an inhomogeneous magnetic field, and a field-aligned electric field. The case of a self-consistent polarization electric field is considered first. The self-consistent field is derived by solving the continuity equation for all ions of importance, including diffusion of O(+) and H(+), and the electron and ion energy equations to derive the electron and ion temperatures. The system of coupled electron transport, continuity, and energy equations is solved numerically. Recognizing observations of parallel electric fields of larger magnitude than the baseline case of the polarization field, the effect of two model fields on the electron distribution function is investigated. In one case the field is increased from the polarization field magnitude at 300 km to a maximum at the upper boundary of 800 km, and in another case a uniform field is added to the polarization field. Substantial perturbations of the low energy portion of the electron flux are produced: an upward directed electric field accelerates the downward directed flux of low-energy secondary electrons and decelerates the upward directed component. Above about 400 km the inhomogeneous magnetic field produces anisotropies in the angular distribution of the electron flux. The effects of the perturbed energy distributions on auroral spectral emission features are noted.
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
Amended FY 1988/1989 Biennial Budget Justification of Estimates Submitted to Congress
1988-02-01
will be installed in order to provide privacy for network data traffic. Distributed systems technology will continue to be explored, including...techniques and geophysical and satellite data bases to make relevant information rapidly available... semantic model of available functions and data... AMENDED FY 1988/1989 BIENNIAL BUDGET RDT&E DESCRIPTIVE SUMMARY, Program Element...
KAYA, MEHMET; GREGORY, THOMAS S.; DAYTON, PAUL A.
2009-01-01
Stabilized microbubbles are utilized as ultrasound contrast agents. These micron-sized gas capsules are injected into the bloodstream to provide contrast enhancement during ultrasound imaging. Some contrast imaging strategies, such as destruction-reperfusion, require a continuous injection of microbubbles over several minutes. Most quantitative imaging strategies rely on the ability to administer a consistent dose of contrast agent. Because of the buoyancy of these gas-filled agents, their spatial distribution within a syringe changes over time. The population of microbubbles that is pumped from a horizontal syringe outlet differs from the initial population as the microbubbles float to the syringe top. In this manuscript, we study the changes in the population of a contrast agent that is pumped from a syringe due to microbubble floatation. Results are presented in terms of change in concentration and change in mean diameter, as a function of time, suspension medium, and syringe diameter. Data illustrate that the distribution of contrast agents injected from a syringe changes in both concentration and mean diameter over several minutes without mixing. We discuss the application of a mixing system and viscosity agents to keep the contrast solution more evenly distributed in a syringe. These results are significant for researchers utilizing microbubble contrast agents in continuous-infusion applications where it is important to maintain consistent contrast agent delivery rate, or in situations where the injection syringe cannot be mixed immediately prior to administration. PMID:19632760
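The buoyancy-driven drift described above can be made concrete with Stokes' law for the terminal rise velocity of a small bubble. The sketch below is only illustrative; the fluid properties and bubble diameters are generic values, not those of the study.

```python
# Terminal rise velocity of a small gas bubble in a liquid (Stokes regime):
#   v = (rho_liquid - rho_gas) * g * d**2 / (18 * mu)
# All values below are illustrative, not taken from the paper.
g = 9.81            # m/s^2
rho_l = 1000.0      # kg/m^3, water-like suspension medium
rho_g = 1.2         # kg/m^3, gas core
mu = 1.0e-3         # Pa*s, dynamic viscosity of the medium

for d_um in (1, 2, 5, 10):                 # bubble diameters in micrometres
    d = d_um * 1e-6
    v = (rho_l - rho_g) * g * d**2 / (18 * mu)
    print(f"d = {d_um:2d} um  ->  rise ~ {v*1e6*60:7.1f} um per minute")
```

Larger bubbles rise orders of magnitude faster, which is why the size distribution near a horizontal syringe outlet drifts over a few minutes without mixing.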
Description of the SSF PMAD DC testbed control system data acquisition function
NASA Technical Reports Server (NTRS)
Baez, Anastacio N.; Mackin, Michael; Wright, Theodore
1992-01-01
The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced scale representation of the end to end, sources to loads, Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system level issues in support of critical SSF program design milestones. Electrical power system control and operation issues like source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The SSF EPS control functional allocation between on-board computers and ground based systems is evolving. Initially, ground based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed Control System consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF Ground based Control Center Operation. The lower level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.
Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z
2018-05-15
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
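The deconvolution step described above can be sketched compactly. The paper estimates response functions with boosting and cross-validation at each source element; the code below substitutes plain ridge regression on a lagged stimulus matrix as a simpler stand-in, with synthetic data, just to show the shape of the kernel-estimation problem.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 100, 2000                      # sample rate (Hz) and number of samples
stim = rng.standard_normal(n)          # continuous stimulus feature (e.g. envelope)

# Ground-truth kernel (response function) spanning 0-300 ms.
lags = np.arange(0, int(0.3 * fs))
true_trf = np.exp(-lags / 10.0) * np.sin(2 * np.pi * lags / 20.0)

# Response = convolution of stimulus with the kernel, plus noise.
resp = np.convolve(stim, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix X[t, k] = stim[t - k].
X = np.zeros((n, lags.size))
for k in lags:
    X[k:, k] = stim[:n - k]

# Ridge estimate of the kernel (a fixed-penalty stand-in for the paper's
# boosting-with-cross-validation estimator).
lam = 10.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ resp)

print("correlation with the true kernel:",
      np.round(np.corrcoef(true_trf, trf_hat)[0, 1], 3))
```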
ERIC Educational Resources Information Center
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo
2012-01-01
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Tfayli, Ali; Bonnier, Franck; Farhane, Zeineb; Libong, Danielle; Byrne, Hugh J; Baillet-Guffroy, Arlette
2014-06-01
The use of animals for scientific research is increasingly restricted by legislation, increasing the demand for human skin models. These constructs present comparable bulk lipid content to human skin. However, their permeability is significantly higher, limiting their applicability as models of barrier function, although the molecular origins of this reduced barrier function remain unclear. This study analyses the stratum corneum (SC) of one such commercially available reconstructed skin model (RSM) compared with human SC by spectroscopic imaging and chromatographic profiling. Total lipid composition was compared by chromatographic analysis (HPLC). Raman spectroscopy was used to evaluate the conformational order, lateral packing and distribution of lipids in the surface and skin/RSM sections. Although HPLC indicates that all SC lipid classes are present, significant differences are observed in ceramide profiles. Raman imaging demonstrated that the RSM lipids are distributed in a non-continuous matrix, providing a better understanding of the limited barrier function. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Bartés-Serrallonga, M; Adan, A; Solé-Casals, J; Caldú, X; Falcón, C; Pérez-Pàmies, M; Bargalló, N; Serra-Grabulosa, J M
2014-04-01
One of the most used paradigms in the study of attention is the Continuous Performance Test (CPT). The identical pairs version (CPT-IP) has been widely used to evaluate attention deficits in developmental, neurological and psychiatric disorders. However, the specific locations and the relative distribution of brain activation in networks identified with functional imaging vary significantly with differences in task design. The aim was to design a task to evaluate sustained attention using functional magnetic resonance imaging (fMRI), and thus to provide data for research concerned with the role of these functions. Forty right-handed, healthy students (50% women; age range: 18-25 years) were recruited. A CPT-IP implemented as a block design was used to assess sustained attention during the fMRI session. The behavioural results from the CPT-IP task showed good performance in all subjects, with more than 80% hits. fMRI results showed that the CPT-IP task used activates a network of frontal, parietal and occipital areas, and that these are related to executive and attentional functions. In relation to the use of the CPT in the study of attention and working memory, this task provides normative data in healthy adults, and it could be useful to evaluate disorders which have attentional and working memory deficits.
NASA Astrophysics Data System (ADS)
Cao, Xinwu
2010-12-01
A power-law time-dependent light curve for active galactic nuclei (AGNs) is expected by the self-regulated black hole growth scenario, in which the feedback of AGNs expels gas and shuts down accretion. This is also supported by the observed power-law Eddington ratio distribution of AGNs. At high redshifts, the AGN life timescale is comparable with (or even shorter than) the age of the universe, which sets a constraint on the minimal Eddington ratio for AGNs on the assumption of a power-law AGN light curve. The black hole mass function (BHMF) of AGN relics is calculated by integrating the continuity equation of massive black hole number density on the assumption of the growth of massive black holes being dominated by mass accretion with a power-law Eddington ratio distribution for AGNs. The derived BHMF of AGN relics at z = 0 can fit the measured local mass function of the massive black holes in galaxies quite well, provided a radiative efficiency of ~0.1 and a suitable power-law index for the Eddington ratio distribution are adopted. In our calculations of the black hole evolution, the duty cycle of AGN should be less than unity, which requires the quasar life timescale τ_Q ≳ 5 × 10⁸ years.
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
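A reduced, one-dimensional sketch of the idea described above is given below: a probability density is propagated under a deterministic flow with a conservative upwind finite-volume step (so probability is conserved and stays non-negative under a CFL-type restriction), and a Bayes measurement update is then applied. The flow, grid, observation and noise level are all illustrative; the paper's actual example is a two-dimensional pendulum.

```python
import numpy as np

# 1D finite-volume filtering sketch: advect a density under x' = f(x), then
# multiply by a Gaussian likelihood and renormalize.
f = lambda x: -0.5 * x                      # simple contracting flow (illustrative)
x_edges = np.linspace(-4, 4, 201)           # cell edges
x = 0.5 * (x_edges[:-1] + x_edges[1:])      # cell centres
dx = x_edges[1] - x_edges[0]

rho = np.exp(-(x - 2.0) ** 2)               # initial density
rho /= rho.sum() * dx

dt = 0.5 * dx / np.abs(f(x_edges)).max()    # CFL-type step restriction
for _ in range(200):
    v = f(x_edges[1:-1])                    # velocity at interior interfaces
    # upwind flux: take density from the cell the flow comes from
    flux = np.where(v > 0, v * rho[:-1], v * rho[1:])
    flux = np.concatenate(([0.0], flux, [0.0]))   # no flux through the boundary
    rho = rho - dt / dx * (flux[1:] - flux[:-1])  # conservative update

# Measurement update: observe y = 0.3 with Gaussian noise, sigma = 0.5.
like = np.exp(-0.5 * ((0.3 - x) / 0.5) ** 2)
rho = rho * like
rho /= rho.sum() * dx
print("posterior mass:", round(rho.sum() * dx, 6),
      " posterior mean:", round(float((rho * x).sum() * dx), 3))
```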
NASA Technical Reports Server (NTRS)
Jarzembski, Maurice A.; Srivastava, Vandana
1998-01-01
Backscatter of several Earth surfaces was characterized in the laboratory as a function of incidence angle with a focused continuous-wave 9.1 μm CO2 Doppler lidar for use as possible calibration targets. Some targets showed negligible angular dependence, while others showed a slight increase with decreasing angle. The Earth-surface signal measured over the complex Californian terrain during a 1995 NASA airborne mission compared well with laboratory data. Distributions of the Earth's surface signal show that the lidar efficiency can be estimated with a fair degree of accuracy, preferably with uniform Earth-surface targets during flight for airborne or space-based lidar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
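The subdiffusive backbone transport described above can be checked with a small Monte Carlo experiment. The sketch below is not the comb renormalization itself: it simply simulates a one-dimensional continuous-time random walk with a Pareto-tailed waiting time distribution (0 < α < 1) and reports the mean-squared displacement, which should grow roughly like t^α.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, walkers, t_max = 0.5, 5000, 1e4
obs_times = np.logspace(1, 4, 8)

positions = np.zeros((walkers, obs_times.size))
for w in range(walkers):
    t, x, k = 0.0, 0, 0
    rec = np.zeros(obs_times.size)
    while t < t_max and k < obs_times.size:
        wait = rng.random() ** (-1.0 / alpha)        # Pareto(alpha) waiting time, >= 1
        t += wait
        while k < obs_times.size and obs_times[k] <= t:
            rec[k] = x                               # walker sits at x until the jump
            k += 1
        x += rng.choice((-1, 1))                     # unbiased jump along the backbone
    positions[w] = rec

msd = (positions ** 2).mean(axis=0)
for T, m in zip(obs_times, msd):
    print(f"t = {T:8.0f}   <x^2> = {m:8.1f}")
# Expect <x^2> growing roughly like t**alpha (slope ~0.5 on a log-log plot).
```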
Algebraic grid generation with corner singularities
NASA Technical Reports Server (NTRS)
Vinokur, M.; Lombard, C. K.
1983-01-01
A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques - stretching functions, cubic blending functions, and transfinite interpolation - to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDE's are discussed in an Appendix.
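For orientation, the transfinite-interpolation layer the procedure builds on can be shown in a few lines. The sketch below is only the plain Coons-patch form of transfinite interpolation from four boundary curves; the cubic blending of boundary normal derivatives and the corner-singularity term that distinguish the paper's method are not included, and the boundary curves are made up.

```python
import numpy as np

def coons_patch(bottom, top, left, right):
    """Plain transfinite (Coons) interpolation; boundary curves must share corners.
    bottom/top: arrays (nu, 2); left/right: arrays (nv, 2)."""
    nu, nv = bottom.shape[0], left.shape[0]
    u = np.linspace(0.0, 1.0, nu)[:, None, None]
    v = np.linspace(0.0, 1.0, nv)[None, :, None]
    ruled_v = (1 - v) * bottom[:, None, :] + v * top[:, None, :]
    ruled_u = (1 - u) * left[None, :, :] + u * right[None, :, :]
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return ruled_v + ruled_u - corners     # mesh of shape (nu, nv, 2)

# Example: unit square with a bulged top boundary.
s = np.linspace(0, 1, 21)
bottom = np.stack([s, np.zeros_like(s)], axis=1)
top    = np.stack([s, 1.0 + 0.2 * np.sin(np.pi * s)], axis=1)
left   = np.stack([np.zeros_like(s), s], axis=1)
right  = np.stack([np.ones_like(s), s], axis=1)
grid = coons_patch(bottom, top, left, right)
print(grid.shape, grid[10, 10])            # an interior node of the generated mesh
```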
Lindeberg theorem for Gibbs-Markov dynamics
NASA Astrophysics Data System (ADS)
Denker, Manfred; Senti, Samuel; Zhang, Xuan
2017-12-01
A dynamical array consists of a family of functions {f_{n,i}: 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i}: 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T) we identify distributional limits for sums of the form … for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ ℝ. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.
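As background only (this is the classical statement, not the paper's exact dynamical hypothesis), the Lindeberg condition for independent triangular arrays that such theorems generalize reads:

```latex
% Classical Lindeberg condition for a triangular array X_{n,1},\dots,X_{n,k_n}
% of independent, centred random variables with s_n^2=\sum_{i=1}^{k_n}\operatorname{Var}X_{n,i}:
% for every \varepsilon>0,
\[
  \lim_{n\to\infty}\;\frac{1}{s_n^{2}}\sum_{i=1}^{k_n}
  \mathbb{E}\!\left[X_{n,i}^{2}\,\mathbf{1}\{|X_{n,i}|>\varepsilon s_n\}\right]=0,
\]
% under which s_n^{-1}\sum_{i=1}^{k_n}X_{n,i} converges in distribution to a standard normal law.
```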
26 CFR 1.996-1 - Rules for actual distributions and certain deemed distributions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Rules for actual distributions and certain deemed distributions. 1.996-1 Section 1.996-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Domestic International Sales Corporations...
Model of Mixing Layer With Multicomponent Evaporating Drops
NASA Technical Reports Server (NTRS)
Bellan, Josette; Le Clercq, Patrick
2004-01-01
A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10⁻³), although the mass loading can be substantial because of the high ratio (of the order of 10³) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas. Drop evaporation leads to a change in the gas-phase composition, which, like the composition of the drops, is described in a probabilistic manner.
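To make the continuous-thermodynamics idea concrete: composition is carried as a single distribution over molecular weight rather than as hundreds of species. Continuous-thermodynamics treatments commonly use a shifted Gamma distribution for this purpose; the sketch below adopts that form with purely illustrative parameters (not the paper's calibration) and evaluates two simple moments by quadrature.

```python
import numpy as np
from scipy.stats import gamma

shape, scale, origin = 5.0, 15.0, 85.0         # illustrative parameters, in g/mol
W = np.linspace(origin, origin + 400.0, 2001)  # molecular-weight grid
dW = W[1] - W[0]
pdf = gamma.pdf(W, shape, loc=origin, scale=scale)   # shifted Gamma distribution

mean_W = np.sum(W * pdf) * dW                  # first moment by quadrature
heavy_frac = np.sum(np.where(W > 200.0, W * pdf, 0.0)) * dW / mean_W

print(f"mean molecular weight ~ {mean_W:.1f} g/mol")
print(f"approximate mass fraction carried by species with W > 200 g/mol ~ {heavy_frac:.3f}")
```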
NASA Astrophysics Data System (ADS)
Waggoner, William Tracy
1990-01-01
Experimental capture cross sections dσ/dθ versus θ are presented for various ions incident on neutral targets. First, distributions are presented for Ar⁸⁺ ions incident on H₂, D₂, and Ar targets. Energy gain studies indicate that capture occurs to primarily a 5d,f final state of Ar⁷⁺ with some contributions from transfer ionization (T.I.) channels. Angular distribution spectra for all three targets are similar, with spectra having a main peak located at forward angles which is attributed to single capture events, and a secondary structure occurring at large angles which is attributed to T.I. contributions. A series of Ar⁸⁺ on Ar spectra were collected using a retarding grid system as a low resolution energy spectrometer to resolve single capture events from T.I. events. The resulting single capture and T.I. angular distributions are presented. Results are discussed in terms of a classical deflection function employing a simple two state curve crossing model. Angular distributions for electron capture from He by C, N, O, F, and Ne ions with charge states from 5+ to 8+ are presented for projectile energies between 1.2 and 2.0 kV. Distributions for the same charge state but different ion species are similar, but not identical: distributions for the 5+ and 7+ ions are strongly forward peaked; the 6+ distributions are much less forward peaked, with the O⁶⁺ distributions showing structure; and the Ne⁸⁺ ion distribution appears to be an intermediate case between forward peaking and large angle scattering. These results are discussed in terms of classical deflection functions which utilize two state Coulomb diabatic curve crossing models. Finally, angular distributions are presented for electron capture from He by Ar⁶⁺ ions at energies between 1287 eV and 296 eV. At large projectile energies the distribution is broad. As the energy decreases below 523 eV, distributions shift to forward angles with a second peak appearing outside the Coulomb angle, θ_c = Q/2E, which continues to grow in magnitude as the projectile energy decreases further. Results are compared with a model calculation employing a two state diabatic Coulomb curve crossing model and the classical deflection function.
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
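A tiny numeric illustration of why the two loss functions prescribe different aim points: under a skewed shift of the cursor, a squared-error (error-based) loss is minimized by aiming at minus the mean of the shift, while a hit/miss (reinforcement) loss with a narrow target window is minimized near minus the mode. The shift distribution below is a generic skewed example (lognormal), not the one used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(3)
shift = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)   # rightward cursor shift (cm)
target = 0.0
aims = np.linspace(-4.0, 1.0, 501)

# Error-based loss: expected squared cursor error, minimized at aim = -mean(shift).
sq_err = [np.square(a + shift - target).mean() for a in aims]
# Reinforcement loss: probability of missing a +/-0.25 cm target window,
# minimized near aim = -mode(shift).
miss = [(np.abs(a + shift - target) > 0.25).mean() for a in aims]

print("best aim, squared-error loss :", round(aims[int(np.argmin(sq_err))], 2))
print("-mean(shift)                 :", round(-shift.mean(), 2))
print("best aim, hit/miss loss      :", round(aims[int(np.argmin(miss))], 2))
print("-mode (lognormal, analytic)  :", round(-np.exp(0.0 - 0.6**2), 2))
```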
Laugesen, Kristina; Støvring, Henrik; Hallas, Jesper; Pottegård, Anton; Jørgensen, Jens Otto Lunde; Sørensen, Henrik Toft; Petersen, Irene
2017-01-01
Glucocorticoids are widely used medications. In many pharmacoepidemiological studies, duration of individual prescriptions and definition of treatment episodes are important issues. However, many data sources lack this information. We aimed to estimate duration of individual prescriptions for oral glucocorticoids and to describe continuous treatment episodes using the parametric waiting time distribution. We used Danish nationwide registries to identify all prescriptions for oral glucocorticoids during 1996-2014. We applied the parametric waiting time distribution to estimate duration of individual prescriptions each year by estimating the 80th, 90th, 95th and 99th percentiles for the interarrival distribution. These corresponded to the time since last prescription during which 80%, 90%, 95% and 99% of users presented a new prescription for redemption. We used the Kaplan-Meier survival function to estimate length of first continuous treatment episodes by assigning estimated prescription duration to each prescription and thereby create treatment episodes from overlapping prescriptions. We identified 5,691,985 prescriptions issued to 854,429 individuals of whom 351,202 (41%) only redeemed 1 prescription in the whole study period. The 80th percentile for prescription duration ranged from 87 to 120 days, the 90th percentile from 116 to 150 days, the 95th percentile from 147 to 181 days, and the 99th percentile from 228 to 259 days during 1996-2014. Based on the 80th, 90th, 95th and 99th percentiles of prescription duration, the median length of continuous treatment was 113, 141, 170 and 243 days, respectively. Our method and results may provide an important framework for future pharmacoepidemiological studies. The choice of which percentile of the interarrival distribution to apply as prescription duration has an impact on the level of misclassification. Use of the 80th percentile provides a measure of drug exposure that is specific, while the 99th percentile provides a sensitive measure.
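The two steps described above can be sketched with made-up redemption dates. The paper fits a parametric waiting time distribution; the sketch below uses an empirical percentile of the observed inter-arrival times as a simplified stand-in for the prescription duration, and then chains prescriptions into continuous treatment episodes by joining redemptions that fall within that duration of the previous one.

```python
import numpy as np

def interarrival_percentile(dates_by_person, q=80):
    """Percentile of waiting times between consecutive redemptions, pooled over people."""
    gaps = np.concatenate([np.diff(np.sort(d)) for d in dates_by_person if len(d) > 1])
    return float(np.percentile(gaps, q))

def episodes(dates, duration):
    """Join redemptions into continuous treatment episodes of assumed per-prescription duration."""
    dates = np.sort(np.asarray(dates, dtype=float))
    out, start, end = [], dates[0], dates[0] + duration
    for d in dates[1:]:
        if d <= end:                       # still covered -> same episode
            end = d + duration
        else:                              # gap -> close episode, open a new one
            out.append((start, end))
            start, end = d, d + duration
    out.append((start, end))
    return [(float(s), float(e)) for s, e in out]

# Hypothetical redemption days for three people (illustrative only).
people = [np.array([0, 95, 200, 600]), np.array([10, 100]), np.array([5, 120, 230])]
dur = interarrival_percentile(people, q=80)
print("80th percentile of inter-arrival times:", dur, "days")
print("episodes for person 1:", episodes(people[0], dur))
```

Choosing a higher percentile lengthens the assumed prescription duration, which merges more redemptions into single episodes; this mirrors the sensitivity/specificity trade-off discussed in the abstract.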
Privacy-Preserving Relationship Path Discovery in Social Networks
NASA Astrophysics Data System (ADS)
Mezzour, Ghita; Perrig, Adrian; Gligor, Virgil; Papadimitratos, Panos
As social networking sites continue to proliferate and are used for an increasing variety of purposes, the privacy risks raised by the full access that these sites have to user data become uncomfortable. A decentralized social network would help alleviate this problem, but offering the functionalities of social networking sites in a distributed manner is a challenging problem. In this paper, we provide techniques to instantiate one of the core functionalities of social networks: discovery of paths between individuals. Our algorithm preserves the privacy of relationship information, and can operate offline during the path discovery phase.
Formation and structure of food bodies in Cordia nodosa (Boraginaceae).
Solano, Pascal-Jean; Belin-Depoux, Monique; Dejean, Alain
2005-07-01
Cordia nodosa Lamark (Boraginaceae) is a myrmecophyte (i.e., plants housing ants in hollow structures) that provisions associated ants with food bodies (FBs) produced 24 h a day. Distributed over all the young parts of the plants, they induce ants to forage continually and so to protect the plants. Metabolites are stored in the inner cells of C. nodosa FBs as they form. In addition the peripheral cells have an extrafloral nectary-like function and secrete a substance that covers the FBs. The amalgam of these two functions, distinct in other known cases, is discussed taking into account the origin of FBs and extrafloral nectaries.
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman
2015-10-30
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memoryfree attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.
Patching, Simon G
2017-03-01
Glucose transporters (GLUTs) at the blood-brain barrier maintain the continuous high glucose and energy demands of the brain. They also act as therapeutic targets and provide routes of entry for drug delivery to the brain and central nervous system for treatment of neurological and neurovascular conditions and brain tumours. This article first describes the distribution, function and regulation of glucose transporters at the blood-brain barrier, the major ones being the sodium-independent facilitative transporters GLUT1 and GLUT3. Other GLUTs and sodium-dependent transporters (SGLTs) have also been identified at lower levels and under various physiological conditions. It then considers the effects on glucose transporter expression and distribution of hypoglycemia and hyperglycemia associated with diabetes and oxygen/glucose deprivation associated with cerebral ischemia. A reduction in glucose transporters at the blood-brain barrier that occurs before the onset of the main pathophysiological changes and symptoms of Alzheimer's disease is a potential causative effect in the vascular hypothesis of the disease. Mutations in glucose transporters, notably those identified in GLUT1 deficiency syndrome, and some recreational drug compounds also alter the expression and/or activity of glucose transporters at the blood-brain barrier. Approaches for drug delivery across the blood-brain barrier include the pro-drug strategy whereby drug molecules are conjugated to glucose transporter substrates or encapsulated in nano-enabled delivery systems (e.g. liposomes, micelles, nanoparticles) that are functionalised to target glucose transporters. Finally, the continuous development of blood-brain barrier in vitro models is important for studying glucose transporter function, effects of disease conditions and interactions with drugs and xenobiotics.
26 CFR 1.651(a)-2 - Income required to be distributed currently.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Trusts Which Distribute Current Income Only § 1.651(a)-2 Income required to be distributed currently. (a) The determination of whether trust income is required to be distributed currently depends upon the terms of the trust instrument and the applicable local law...
NASA Astrophysics Data System (ADS)
Arik, Sabri
2006-02-01
This Letter presents a sufficient condition for the existence, uniqueness and global asymptotic stability of the equilibrium point for bidirectional associative memory (BAM) neural networks with distributed time delays. The results impose constraint conditions on the network parameters of neural system independently of the delay parameter, and they are applicable to all bounded continuous non-monotonic neuron activation functions. The results are also compared with the previous results derived in the literature.
A User’s Guide to BISAM (BIvariate SAMple): The Bivariate Data Modeling Program.
1983-08-01
method for the null case specified and is then used to form the bivariate density-quantile function as described in section 4. If D(U) in stage...employed assigns average ranks for tied observations. Other methods for assigning ranks to tied observations are often employed but are not attempted... observations will weaken the results obtained since underlying continuous distributions are assumed. One should avoid such situations if possible. Two methods
Basic statistics with Microsoft Excel: a review.
Divisi, Duilio; Di Leonardo, Gabriella; Zaccagna, Gino; Crisci, Roberto
2017-06-01
The scientific world is enriched daily with new knowledge, due to new technologies and continuous discoveries. The mathematical functions explain the statistical concepts, particularly those of mean, median and mode, along with those of frequency and frequency distribution associated with histograms and graphical representations, determining elaborative processes on the basis of the spreadsheet operations. The aim of the study is to highlight the mathematical basis of statistical models that regulate the operation of spreadsheets in Microsoft Excel.
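The spreadsheet concepts the review discusses have direct one-line equivalents outside Excel; the tiny example below computes mean, median, mode and a binned frequency distribution for made-up exam scores.

```python
from statistics import mean, median, mode
from collections import Counter

scores = [72, 85, 91, 85, 67, 78, 85, 91, 60, 78]   # made-up data

print("mean  :", mean(scores))      # corresponds to Excel AVERAGE
print("median:", median(scores))    # corresponds to Excel MEDIAN
print("mode  :", mode(scores))      # corresponds to Excel MODE

# Frequency distribution over 10-point bins, the basis of a histogram.
bins = Counter((s // 10) * 10 for s in scores)
for lo in sorted(bins):
    print(f"{lo}-{lo + 9}: {'#' * bins[lo]}")
```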
Modeling Evaporation of Drops of Different Kerosenes
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2007-01-01
A mathematical model describes the evaporation of drops of a hydrocarbon liquid composed of as many as hundreds of chemical species. The model is intended especially for application to any of several types of kerosenes commonly used as fuels. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of the evaporating multicomponent liquid is described by use of a probability distribution function (PDF). However, the present model is more generally applicable than is its immediate predecessor.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution within a given time constraint. There are several simulation methods currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution for a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as Junction tree (JT) and likelihood weighting (LW) shows that LGIS-GHM is very promising.
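A generic importance-sampling sketch (not the LGIS algorithm itself) shows the weighting step the abstract refers to: draws come from a Gaussian importance function and each draw is weighted by prior times likelihood over the proposal density. The two-variable model below is made up so that the exact posterior mean is available for comparison.

```python
import numpy as np

# Toy continuous model:  X ~ N(0, 1),  Y | X ~ N(2X, 1),  evidence: Y = 3.
rng = np.random.default_rng(4)
y_obs, n = 3.0, 100_000

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, n)                           # importance function N(0, 2^2)
w = normal_pdf(x, 0.0, 1.0) * normal_pdf(y_obs, 2 * x, 1.0) / normal_pdf(x, 0.0, 2.0)
w /= w.sum()                                          # self-normalized weights

est = (w * x).sum()
exact = 2 * y_obs / 5                                 # posterior mean of this Gaussian model
print(f"importance-sampling estimate: {est:.3f}   exact: {exact:.3f}")
```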
Effects of diffusion on total biomass in heterogeneous continuous and discrete-patch systems
DeAngelis, Donald L.; Ming Ni, Wei; Zhang, Bo
2016-01-01
Theoretical models of populations on a system of two connected patches previously have shown that when the two patches differ in maximum growth rate and carrying capacity, and in the limit of high diffusion, conditions exist for which the total population size at equilibrium exceeds that of the ideal free distribution, which predicts that the total population would equal the total carrying capacity of the two patches. However, this result has only been shown for the Pearl-Verhulst growth function on two patches and for a single-parameter growth function in continuous space. Here, we provide a general criterion for total population size to exceed total carrying capacity for three commonly used population growth rates for both heterogeneous continuous and multi-patch heterogeneous landscapes with high population diffusion. We show that a sufficient condition for this situation is that there is a convex positive relationship between the maximum growth rate and the parameter that, by itself or together with the maximum growth rate, determines the carrying capacity, as both vary across a spatial region. This relationship occurs in some biological populations, though not in others, so the result has ecological implications.
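A minimal numerical check of the phenomenon described above, for the two-patch Pearl-Verhulst case with strong diffusion. The parameter values are illustrative only, chosen so that the maximum growth rate increases steeply with carrying capacity (the convex relationship the abstract identifies); with them, the equilibrium total exceeds K1 + K2.

```python
# Two-patch Pearl-Verhulst model with diffusion:
#   u1' = r1*u1*(1 - u1/K1) + D*(u2 - u1)
#   u2' = r2*u2*(1 - u2/K2) + D*(u1 - u2)
r1, K1 = 1.0, 1.0
r2, K2 = 4.0, 2.0
D = 50.0

u1, u2 = 0.5, 0.5
dt = 1e-3
for _ in range(50_000):                          # integrate to (near) equilibrium
    g1 = r1 * u1 * (1 - u1 / K1)
    g2 = r2 * u2 * (1 - u2 / K2)
    u1, u2 = u1 + dt * (g1 + D * (u2 - u1)), u2 + dt * (g2 + D * (u1 - u2))

total_D_inf = 2 * (r1 + r2) / (r1 / K1 + r2 / K2)  # analytic limit for D -> infinity
print("equilibrium total:", round(u1 + u2, 3),
      "  D->inf limit:", round(total_D_inf, 3),
      "  K1 + K2:", K1 + K2)
```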
Groenendaal, D; Freijer, J; de Mik, D; Bouw, M R; Danhof, M; de Lange, E C M
2007-01-01
Background and purpose: The aim was to investigate the influence of biophase distribution including P-glycoprotein (Pgp) function on the pharmacokinetic-pharmacodynamic correlations of morphine's actions in rat brain. Experimental approach: Male rats received a 10-min infusion of morphine as 4 mg kg−1, combined with a continuous infusion of the Pgp inhibitor GF120918 or vehicle, 10 or 40 mg kg−1. EEG signals were recorded continuously and blood samples were collected. Key results: Profound hysteresis was observed between morphine blood concentrations and effects on the EEG. Only the termination of the EEG effect was influenced by GF120918. Biophase distribution was best described with an extended catenary biophase distribution model, with a sequential transfer and effect compartment. The rate constant for transport through the transfer compartment (k1e) was 0.038 min−1, being unaffected by GF120918. In contrast, the rate constant for the loss from the effect compartment (keo) decreased 60% after GF120918. The EEG effect was directly related to concentrations in the effect compartment using the sigmoidal Emax model. The values of the pharmacodynamic parameters E0, Emax, EC50 and Hill factor were 45.0 μV, 44.5 μV, 451 ng ml−1 and 2.3, respectively. Conclusions and implications: The effects of GF120918 on the distribution kinetics of morphine in the effect compartment were consistent with the distribution in brain extracellular fluid (ECF) as estimated by intracerebral microdialysis. However, the time-course of morphine concentrations at the site of action in the brain, as deduced from the biophase model, is distinctly different from the brain ECF concentrations. PMID:17471181
Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures
NASA Technical Reports Server (NTRS)
James, Benjamin Wylie
1935-01-01
This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.
Posterior consistency in conditional distribution estimation
Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.
2014-01-01
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
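For concreteness, the simplest member of the prior class discussed above is the predictor-independent stick-breaking construction (a Dirichlet-process-type prior). The sketch below draws one random distribution from it; the predictor-dependent, monotone differentiable stick-breaking lengths of the more general class are not implemented, and the truncation level, concentration and base measure are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, K = 2.0, 200                       # concentration and truncation level

v = rng.beta(1.0, alpha, size=K)          # stick-breaking proportions
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))   # mixture weights
atoms = rng.normal(0.0, 1.0, size=K)      # atom locations from a N(0, 1) base measure

print("weights sum to ~1:", round(float(w.sum()), 4))
print("five largest atoms (weight, location):")
for i in np.argsort(w)[::-1][:5]:
    print(f"  {w[i]:.3f}  at  {atoms[i]:+.2f}")
```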
Distributed Economic Dispatch in Microgrids Based on Cooperative Reinforcement Learning.
Liu, Weirong; Zhuang, Peng; Liang, Hao; Peng, Jun; Huang, Zhiwu
2018-06-01
Microgrids incorporated with distributed generation (DG) units and energy storage (ES) devices are expected to play more and more important roles in the future power systems. Yet, achieving efficient distributed economic dispatch in microgrids is a challenging issue due to the randomness and nonlinear characteristics of DG units and loads. This paper proposes a cooperative reinforcement learning algorithm for distributed economic dispatch in microgrids. Utilizing the learning algorithm can avoid the difficulty of stochastic modeling and high computational complexity. In the cooperative reinforcement learning algorithm, the function approximation is leveraged to deal with the large and continuous state spaces. And a diffusion strategy is incorporated to coordinate the actions of DG units and ES devices. Based on the proposed algorithm, each node in microgrids only needs to communicate with its local neighbors, without relying on any centralized controllers. Algorithm convergence is analyzed, and simulations based on real-world meteorological and load data are conducted to validate the performance of the proposed algorithm.
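A minimal sketch of the "diffusion strategy" coordination idea mentioned above: each node takes a gradient step on its own local cost (adapt) and then averages its parameter with its neighbours (combine), so the network settles near the minimizer of the summed cost without a central controller. The quadratic costs, ring topology and step size below are illustrative; the reinforcement-learning and function-approximation machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
a = rng.uniform(0.5, 2.0, n)               # local cost curvatures
d = rng.uniform(1.0, 5.0, n)               # local preferred operating points
# local cost of node i:  c_i(theta) = 0.5 * a_i * (theta - d_i)^2

# Ring topology, doubly stochastic combination weights (self + two neighbours).
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        A[i, j % n] = 1.0 / 3.0

theta = np.zeros(n)
mu = 0.05
for _ in range(1000):
    psi = theta - mu * a * (theta - d)      # adapt: local gradient step
    theta = A @ psi                         # combine: average with neighbours

print("node estimates :", np.round(theta, 3))
print("global optimum :", round(float((a * d).sum() / a.sum()), 3))
```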
Mars atmosphere studies with the SPICAM IR emission phase function observations
NASA Astrophysics Data System (ADS)
Trokhimovskiy, Alexander; Fedorova, Anna; Montmessin, Franck; Korablev, Oleg; Bertaux, Jean-Loup
Emission Phase Function (EPF) observations are a powerful tool for characterization of the atmosphere and surface. An EPF sequence provides extensive coverage of scattering angles above the targeted surface location, which allows the surface and aerosol scattering to be separated and the vertical distribution of minor species and aerosol properties to be studied. The SPICAM IR instrument on the Mars Express mission has provided continuous atmospheric observations in the near IR (1-1.7 μm) in nadir and limb geometry since 2004. For the first years of SPICAM operation only a very limited number of EPFs was performed, but from mid-2013 (Ls=225, MY31) SPICAM EPF observations became rather regular. Based on the multiple-scattering radiative transfer model SHDOM, we analyze equivalent depths of the carbon dioxide (1.43 μm) and water vapour (1.38 μm) absorption bands and their dependence on airmass during the observation sequence to obtain aerosol optical depths and properties. The derived seasonal dust opacities from the near IR can be used to retrieve the size distribution from comparison with simultaneous results of other instruments in different spectral ranges. Moreover, the EPF observations of the water vapour band allow access to the poorly known H2O vertical distribution for different seasons and locations.
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution which are defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
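The core numerical trick named in the abstract, computing a discrete convolution (a Toeplitz system) by embedding it in a circulant one and using the FFT, can be shown in one dimension. The sketch below is only this building block; the Gaussian-grid weighting of the Conv-Gauss-FFT algorithm itself is not reproduced, and the density and kernel are synthetic.

```python
import numpy as np

def linear_conv_fft(a, b):
    """Linear (aperiodic) convolution via zero-padding (circulant embedding) and FFT."""
    n = a.size + b.size - 1
    m = 1 << (n - 1).bit_length()          # pad to a power of two >= full length
    A = np.fft.rfft(a, m)
    B = np.fft.rfft(b, m)
    return np.fft.irfft(A * B, m)[:n]

rng = np.random.default_rng(6)
density = rng.random(4096)                       # e.g. a 1-D density profile
kernel = 1.0 / (1.0 + np.arange(1, 513) ** 2)    # e.g. a decaying kernel

fast = linear_conv_fft(density, kernel)
slow = np.convolve(density, kernel)              # direct convolution for reference
print("max difference vs direct convolution:", np.abs(fast - slow).max())
```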
Kather, Jakob Nikolas; Marx, Alexander; Reyes-Aldasoro, Constantino Carlos; Schad, Lothar R; Zöllner, Frank Gerrit; Weis, Cleo-Aron
2015-08-07
Blood vessels in solid tumors are not randomly distributed, but are clustered in angiogenic hotspots. Tumor microvessel density (MVD) within these hotspots correlates with patient survival and is widely used both in diagnostic routine and in clinical trials. Still, these hotspots are usually subjectively defined. There is no unbiased, continuous and explicit representation of tumor vessel distribution in histological whole slide images. This shortcoming distorts angiogenesis measurements and may account for ambiguous results in the literature. In the present study, we describe and evaluate a new method that eliminates this bias and makes angiogenesis quantification more objective and more efficient. Our approach involves automatic slide scanning, automatic image analysis and spatial statistical analysis. By comparing a continuous MVD function of the actual sample to random point patterns, we introduce an objective criterion for hotspot detection: An angiogenic hotspot is defined as a clustering of blood vessels that is very unlikely to occur randomly. We evaluate the proposed method in N=11 images of human colorectal carcinoma samples and compare the results to a blinded human observer. For the first time, we demonstrate the existence of statistically significant hotspots in tumor images and provide a tool to accurately detect these hotspots.
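A hedged sketch of the statistical idea described above: a region is called a hotspot only if its vessel count exceeds what uniformly random point patterns of the same size typically produce. The vessel coordinates, grid and window size below are synthetic; the paper works with a continuous MVD function on whole-slide images rather than a coarse grid.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "vessel centres" in a unit square: uniform background + one true cluster.
background = rng.random((300, 2))
cluster = 0.05 * rng.standard_normal((60, 2)) + [0.7, 0.3]
vessels = np.vstack([background, cluster])

def max_window_count(points, bins=10):
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    return counts.max()

observed = max_window_count(vessels)

# Null distribution: the same number of points scattered uniformly at random.
null = np.array([max_window_count(rng.random(vessels.shape)) for _ in range(999)])
p_value = (1 + (null >= observed).sum()) / (1 + null.size)

print(f"densest window holds {observed:.0f} vessels; "
      f"Monte Carlo p-value for such clustering arising at random: {p_value:.3f}")
```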
NASA Astrophysics Data System (ADS)
Hanisch, R.
1999-12-01
Despite the tremendous advances in electronic publications and the increasing rapidity with which papers are now moving from acceptance into ``print,'' preprints continue to be an important mode of communication within the astronomy community. The Los Alamos e-preprint service, astro-ph, provides for rapid and cost-free (to authors and readers) dissemination of manuscripts. As the use of astro-ph has increased, the number of paper preprints in circulation to libraries has decreased, and institutional preprint series appear to be waning. It is unfortunate, however, that astro-ph does not function in collaboration with the refereed publications. For example, there is no systematic tracking of manuscripts from preprint to their final, published form, and as a centralized archive it is difficult to distribute the tracking and maintenance functions. It retains documents that have been superseded or have become obsolete. We are currently developing a distributed preprint and document management system which can support distributed collections of preprints (e.g., traditional institutional preprint series), can link to the LANL collections, can index other documents in the ``grey'' literature (observatory reports, telescope and instrument user's manuals, calls for proposals, etc.), and can function as a manuscript submission tool for the refereed journals. This system is being developed to work cooperatively with the refereed literature so that, for example, links to preprints are updated to links to the final published papers.
26 CFR 1.305-2 - Distributions in lieu of money.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 4 2012-04-01 2012-04-01 false Distributions in lieu of money. 1.305-2 Section... (CONTINUED) INCOME TAXES (Continued) Effects on Recipients § 1.305-2 Distributions in lieu of money. (a) In... to whether a distribution shall be made either in money or any other property, or in stock or rights...
26 CFR 1.305-2 - Distributions in lieu of money.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 4 2013-04-01 2013-04-01 false Distributions in lieu of money. 1.305-2 Section... (CONTINUED) INCOME TAXES (CONTINUED) Effects on Recipients § 1.305-2 Distributions in lieu of money. (a) In... to whether a distribution shall be made either in money or any other property, or in stock or rights...
26 CFR 1.305-2 - Distributions in lieu of money.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 4 2014-04-01 2014-04-01 false Distributions in lieu of money. 1.305-2 Section... (CONTINUED) INCOME TAXES (CONTINUED) Effects on Recipients § 1.305-2 Distributions in lieu of money. (a) In... to whether a distribution shall be made either in money or any other property, or in stock or rights...
Statistics of Macroturbulence from Flow Equations
NASA Astrophysics Data System (ADS)
Marston, Brad; Iadecola, Thomas; Qi, Wanming
2012-02-01
Probability distribution functions of stochastically-driven and frictionally-damped fluids are governed by a linear framework that resembles quantum many-body theory. Besides the Fokker-Planck approach, there is a closely related Hopf functional method [Ookie Ma and J. B. Marston, J. Stat. Phys. Th. Exp. P10007 (2005)]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we generalize the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)] (also known as the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)]) to find the zero mode. We test the approach using a prototypical model of geophysical and astrophysical flows on a rotating sphere that spontaneously organizes into a coherent jet. Good agreement is found with low-order equal-time statistics accumulated by direct numerical simulation, the traditional method. Different choices for the generators of the continuous transformations, and for closure approximations of the operator algebra, are discussed.
Poulos, Helen M.; Chernoff, Barry; Fuller, Pam L.; Butman, David
2012-01-01
Predicting the future spread of non-native aquatic species continues to be a high priority for natural resource managers striving to maintain biodiversity and ecosystem function. Modeling the potential distributions of alien aquatic species through spatially explicit mapping is an increasingly important tool for risk assessment and prediction. Habitat modeling also facilitates the identification of key environmental variables influencing species distributions. We modeled the potential distribution of an aggressive invasive minnow, the red shiner (Cyprinella lutrensis), in waterways of the conterminous United States using maximum entropy (Maxent). We used inventory records from the USGS Nonindigenous Aquatic Species Database, native records for C. lutrensis from museum collections, and a geographic information system of 20 raster climatic and environmental variables to produce a map of potential red shiner habitat. Summer climatic variables were the most important environmental predictors of C. lutrensis distribution, which was consistent with the high temperature tolerance of this species. Results from this study provide insights into the locations and environmental conditions in the US that are susceptible to red shiner invasion.
Entangled-coherent-state quantum key distribution with entanglement witnessing
NASA Astrophysics Data System (ADS)
Simon, David S.; Jaeger, Gregg; Sergienko, Alexander V.
2014-01-01
An entanglement-witness approach to quantum coherent-state key distribution and a system for its practical implementation are described. In this approach, eavesdropping can be detected by a change in sign of either of two witness functions: an entanglement witness S or an eavesdropping witness W. The effects of loss and eavesdropping on system operation are evaluated as a function of distance. Although the eavesdropping witness W does not directly witness entanglement for the system, its behavior remains related to that of the true entanglement witness S. Furthermore, W is easier to implement experimentally than S. W crosses the axis at a finite distance, in a manner reminiscent of entanglement sudden death. The distance at which this occurs changes measurably when an eavesdropper is present. The distance dependence of the two witnesses due to amplitude reduction and due to increased variance resulting from both ordinary propagation losses and possible eavesdropping activity is provided. Finally, the information content and secure key rate of a continuous variable protocol using this witness approach are given.
Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes
NASA Astrophysics Data System (ADS)
González Arenas, Zochil; Barci, Daniel G.
2012-12-01
Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.
Quantitative analysis of autophagic flux by confocal pH-imaging of autophagic intermediates
Maulucci, Giuseppe; Chiarpotto, Michela; Papi, Massimiliano; Samengo, Daniela; Pani, Giovambattista; De Spirito, Marco
2015-01-01
Although numerous techniques have been developed to monitor autophagy and to probe its cellular functions, these methods cannot evaluate in sufficient detail the autophagy process, and suffer limitations from complex experimental setups and/or systematic errors. Here we developed a method to image, contextually, the number and pH of autophagic intermediates by using the probe mRFP-GFP-LC3B as a ratiometric pH sensor. This information is expressed functionally by AIPD, the pH distribution of the number of autophagic intermediates per cell. AIPD analysis reveals how intermediates are characterized by a continuous pH distribution, in the range 4.5–6.5, and therefore can be described by a more complex set of states rather than the usual biphasic one (autophagosomes and autolysosomes). AIPD shape and amplitude are sensitive to alterations in the autophagy pathway induced by drugs or environmental states, and allow a quantitative estimation of autophagic flux by retrieving the concentrations of autophagic intermediates. PMID:26506895
Bratskaya, S; Golikov, A; Lutsenko, T; Nesterova, O; Dudarchik, V
2008-09-01
Charge characteristics of humic and fulvic acids of a different origin (inshore soils, peat, marine sediments, and soil (lysimetric) waters) were evaluated by means of two alternative methods - colloid titration and potentiometric titration. In order to elucidate possible limitations of the colloid titration as an express method of analysis of low content of humic substances we monitored changes in acid-base properties and charge densities of humic substances with soil depth, fractionation, and origin. We have shown that both factors - strength of acidic groups and molecular weight distribution in humic and fulvic acids - can affect the reliability of colloid titration. Due to deviations from 1:1 stoichiometry in interactions of humic substances with polymeric cationic titrant, the colloid titration can underestimate total acidity (charge density) of humic substances with domination of weak acidic functional groups (pK>6) and high content of the fractions with molecular weight below 1kDa.
Towards an initial mass function for giant planets
NASA Astrophysics Data System (ADS)
Carrera, Daniel; Davies, Melvyn B.; Johansen, Anders
2018-07-01
The distribution of exoplanet masses is not primordial. After the initial stage of planet formation, gravitational interactions between planets can lead to the physical collision of two planets, or the ejection of one or more planets from the system. When this occurs, the remaining planets are typically left in more eccentric orbits. In this report we demonstrate how the present-day eccentricities of the observed exoplanet population can be used to reconstruct the initial mass function of exoplanets before the onset of dynamical instability. We developed a Bayesian framework that combines data from N-body simulations with present-day observations to compute a probability distribution for the mass of the planets that were ejected or collided in the past. Integrating across the exoplanet population, one can estimate the initial mass function of exoplanets. We find that the ejected planets are primarily sub-Saturn-type planets. While the present-day distribution appears to be bimodal, with peaks around ˜1MJ and ˜20M⊕, this bimodality does not seem to be primordial. Instead, planets around ˜60M⊕ appear to be preferentially removed by dynamical instabilities. Attempts to reproduce exoplanet populations using population synthesis codes should be mindful of the fact that the present population may have been depleted of sub-Saturn-mass planets. Future observations may reveal that young giant planets have a more continuous size distribution with lower eccentricities and more sub-Saturn-type planets. Lastly, there is a need for additional data and for more research on how the system architecture and multiplicity might alter our results.
Toward an initial mass function for giant planets
NASA Astrophysics Data System (ADS)
Carrera, Daniel; Davies, Melvyn B.; Johansen, Anders
2018-05-01
The distribution of exoplanet masses is not primordial. After the initial stage of planet formation, gravitational interactions between planets can lead to the physical collision of two planets, or the ejection of one or more planets from the system. When this occurs, the remaining planets are typically left in more eccentric orbits. In this report we demonstrate how the present-day eccentricities of the observed exoplanet population can be used to reconstruct the initial mass function of exoplanets before the onset of dynamical instability. We developed a Bayesian framework that combines data from N-body simulations with present-day observations to compute a probability distribution for the mass of the planets that were ejected or collided in the past. Integrating across the exoplanet population, one can estimate the initial mass function of exoplanets. We find that the ejected planets are primarily sub-Saturn type planets. While the present-day distribution appears to be bimodal, with peaks around ˜1MJ and ˜20M⊕, this bimodality does not seem to be primordial. Instead, planets around ˜60M⊕ appear to be preferentially removed by dynamical instabilities. Attempts to reproduce exoplanet populations using population synthesis codes should be mindful of the fact that the present population may have been depleted of sub-Saturn-mass planets. Future observations may reveal that young giant planets have a more continuous size distribution with lower eccentricities and more sub-Saturn type planets. Lastly, there is a need for additional data and for more research on how the system architecture and multiplicity might alter our results.
Thoron, radon and air ions spatial distribution in indoor air.
Kolarž, Predrag; Vaupotič, Janja; Kobal, Ivan; Ujić, Predrag; Stojanovska, Zdenka; Žunić, Zora S
2017-07-01
The spatial distribution of the radioactive gases thoron (Tn) and radon (Rn) in the indoor air of 9 houses was studied, mostly during the winter period of 2013. Because both elements are alpha emitters, air ionization was also measured. Simultaneous continuous measurements with three Rn/Tn and three air-ion instruments deployed at three different distances from the wall surface showed different behaviours: Tn and air-ion concentrations decrease with increasing distance, while Rn remains uniformly distributed. An exponential fit of the Tn variation with distance was used to determine the diffusion length and diffusion constant as well as the exhalation rate; the values obtained are similar to experimental data reported in the literature. Air-ion concentrations were found to be related to Rn and, to a lesser extent, to Tn. Copyright © 2016 Elsevier Ltd. All rights reserved.
Joint analysis of air pollution in street canyons in St. Petersburg and Copenhagen
NASA Astrophysics Data System (ADS)
Genikhovich, E. L.; Ziv, A. D.; Iakovleva, E. A.; Palmgren, F.; Berkowicz, R.
The bi-annual data set of concentrations of several traffic-related air pollutants, measured continuously in street canyons in St. Petersburg and Copenhagen, is analysed jointly using different statistical techniques. Annual mean concentrations of NO2, NOx and, especially, benzene are found to be systematically higher in St. Petersburg than in Copenhagen, whereas for ozone the situation is the opposite. In both cities the probability distribution functions (PDFs) of concentrations and of their daily or weekly extrema are fitted with the Weibull and double exponential distributions, respectively. Sample estimates of bi-variate distributions of concentrations, concentration roses, and probabilities of the concentration of one pollutant being extreme given that another one reaches its extremum are presented in this paper, as well as auto- and co-spectra. It is demonstrated that there is a reasonably high correlation between seasonally averaged concentrations of pollutants in St. Petersburg and Copenhagen.
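As an illustration of the distribution fitting mentioned above, a hedged sketch using SciPy on synthetic stand-in data is given below: a Weibull fit for the concentration PDF and a Gumbel (double exponential) fit for weekly maxima. The numbers and variable names are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import weibull_min, gumbel_r

rng = np.random.default_rng(0)
# Stand-ins for hourly NO2 concentrations over one year and their weekly maxima.
conc = rng.weibull(1.8, size=24 * 365) * 40.0
weekly_max = conc[: 52 * 168].reshape(52, 168).max(axis=1)

# Weibull fit to the concentration PDF (location fixed at zero):
shape, loc, scale = weibull_min.fit(conc, floc=0)
print("Weibull shape/scale:", shape, scale)

# Double-exponential (Gumbel) fit to the weekly extrema:
mu, beta = gumbel_r.fit(weekly_max)
print("Gumbel location/scale:", mu, beta)
```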
The pressure distribution for biharmonic transmitting array: theoretical study
NASA Astrophysics Data System (ADS)
Baranowska, A.
2005-03-01
The aim of the paper is a theoretical analysis of the finite-amplitude wave interaction problem for a biharmonic transmitting array. We assume that the array consists of 16 circular pistons of the same dimensions, grouped in two sections. Two different arrangements of the radiating elements were considered; in this situation the radiating surface is non-continuous and without axial symmetry. The mathematical model was built on the basis of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the finite-difference method was applied to solve the problem. The on-axis pressure amplitude of waves of different frequencies as a function of distance from the source, the transverse pressure distribution of these waves at fixed distances from the source, and the pressure amplitude distribution at fixed planes were examined. In particular, changes of the normalized pressure amplitude at the difference frequency were studied. The paper presents the mathematical model and some results of theoretical investigations obtained for different values of the source parameters.
Implementation of jump-diffusion algorithms for understanding FLIR scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1995-07-01
Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those made by Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/Reality Engine.
Anomalous transport in fluid field with random waiting time depending on the preceding jump length
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Guo-Hua
2016-11-01
Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations for the Gaussian distribution and the Lévy flight, with the probability density function of the waiting time being quadratically dependent on the preceding jump length, are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
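A minimal Monte Carlo sketch of such a jump-length-coupled CTRW is given below. The specific choices (Gaussian jumps, exponential waiting times whose mean is quadratic in the preceding jump length) are illustrative assumptions that echo, but do not reproduce, the model in the abstract.

```python
import numpy as np

def coupled_ctrw_position(T, sigma=1.0, tau0=1.0, rng=None):
    """Position at time T of a CTRW whose waiting time after each jump is
    exponential with mean tau0 * (1 + dx**2), i.e. quadratic in the length of
    the jump that precedes it (illustrative coupling; distributions assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, 0.0
    t += rng.exponential(tau0)                       # initial wait has no preceding jump
    while t <= T:
        dx = rng.normal(0.0, sigma)                  # Gaussian jump
        x += dx
        t += rng.exponential(tau0 * (1.0 + dx**2))   # waiting time coupled to |dx|
    return x

rng = np.random.default_rng(1)
samples = np.array([coupled_ctrw_position(50.0, rng=rng) for _ in range(2000)])
# Compare this spread with the uncoupled case (waiting time ~ Exp(tau0)) to see
# how the coupling slows the spreading of long excursions.
print("variance of displacement at T=50:", samples.var())
```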
26 CFR 1.815-2 - Distributions to shareholders.
Code of Federal Regulations, 2010 CFR
2010-04-01
....815-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Distributions to Shareholders § 1.815-2 Distributions to shareholders. (a) In general. Section 815 provides that every stock life insurance company subject to the tax imposed by...
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions which allows systems of any kind to be treated. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It allows the number of distributed processes to be increased and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a substantial effort has been made to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on the exploration of new diagonalization algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.
Madurga, Sergio; Martín-Molina, Alberto; Vilaseca, Eudald; Mas, Francesc; Quesada-Pérez, Manuel
2007-06-21
The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are exclusively analyzed as a function of the distance to the charged surface. However, some differences are observed moving parallel to the surface. Significant discrepancies with approaches that do not account for discreteness are reported if charge sites of finite size placed on the surface are considered.
Han, Xiahui; Li, Jianlang
2014-11-01
The transient temperature evolution in the gain medium of a continuous wave (CW) end-pumped passively Q-switched microchip (PQSM) laser is analyzed. By approximating the time-dependent population inversion density as a sawtooth function of time and treating the time-dependent pump absorption of a CW end-pumped PQSM laser as the superposition of an infinite series of short pumping pulses, the analytical expressions of transient temperature evolution and distribution in the gain medium for four- and three-level laser systems, respectively, are given. These analytical solutions are applied to evaluate the transient temperature evolution and distribution in the gain medium of CW end-pumped PQSM Nd:YAG and Yb:YAG lasers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goswami, Rituparno; Joshi, Pankaj S.; Vaz, Cenalo
We construct a class of spherically symmetric collapse models in which a naked singularity may develop as the end state of collapse. The matter distribution considered has negative radial and tangential pressures, but the weak energy condition is obeyed throughout. The singularity forms at the center of the collapsing cloud and continues to be visible for a finite time. The duration of visibility depends on the nature of energy distribution. Hence the causal structure of the resulting singularity depends on the nature of the mass function chosen for the cloud. We present a general model in which the naked singularity formed is timelike, neither pointlike nor null. Our work represents a step toward clarifying the necessary conditions for the validity of the Cosmic Censorship Conjecture.
Evaluation of probabilistic forecasts with the scoringRules package
NASA Astrophysics Data System (ADS)
Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian
2017-04-01
Over the last decades probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In coherence with decision-theoretical principles, they allow alternative models to be compared, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, ensemble weather forecasts take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
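The scoringRules package itself is written in R. As a language-neutral illustration of the two forecast classes it covers, the sketch below computes the continuous ranked probability score (CRPS) for a parametric (Gaussian) forecast via the standard closed form and for a sample-based forecast via the kernel representation E|X - y| - 0.5 E|X - X'|; the function names are ours, not the package's.

```python
import numpy as np
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """CRPS of a Gaussian forecast N(mu, sigma^2) for outcome y (closed form; lower is better)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

def crps_sample(ensemble, y):
    """CRPS for a forecast given only as simulation draws (e.g. an ensemble),
    using the kernel representation E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(ensemble, dtype=float)
    return np.mean(np.abs(x - y)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))

# A sharp, well-calibrated forecast should score lower; the two estimates agree closely here.
print(crps_normal(0.0, 1.0, 0.3))
print(crps_sample(np.random.default_rng(0).normal(0.0, 1.0, 500), 0.3))
```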
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
An extended 3D discrete-continuous model and its application on single- and bi-crystal micropillars
NASA Astrophysics Data System (ADS)
Huang, Minsheng; Liang, Shuang; Li, Zhenhuan
2017-04-01
A 3D discrete-continuous model (3D DCM), which couples the 3D discrete dislocation dynamics (3D DDD) and finite element method (FEM), is extended in this study. New schemes for two key information transfers between DDD and FEM, i.e. plastic-strain distribution from DDD to FEM and stress transfer from FEM to DDD, are suggested. The plastic strain induced by moving dislocation segments is distributed to an elementary spheroid (ellipsoid or sphere) via a specific new distribution function. The influence of various interfaces (such as free surfaces and grain boundaries (GBs)) on the plastic-strain distribution is specially considered. By these treatments, the deformation fields can be solved accurately even for dislocations on slip planes severely inclined to the FE mesh, with no spurious stress concentration points produced. In addition, a stress correction by singular and non-singular theoretical solutions within a cut-off sphere is introduced to calculate the stress on the dislocations accurately. By these schemes, the present DCM becomes less sensitive to the FE mesh and more numerically efficient, which can also consider the interaction between neighboring dislocations appropriately even though they reside in the same FE mesh. Furthermore, the present DCM has been employed to model the compression of single-crystal and bi-crystal micropillars with rigid and dislocation-absorbed GBs. The influence of internal GB on the jerky stress-strain response and deformation mode is studied in detail to shed more light on these important micro-plastic problems.
Rényi continuous entropy of DNA sequences.
Vinga, Susana; Almeida, Jonas S
2004-12-07
Entropy measures of DNA sequences estimate their randomness or, inversely, their repeatability. L-block Shannon discrete entropy accounts for the empirical distribution of all length-L words and has convergence problems for finite sequences. A new entropy measure that extends Shannon's formalism is proposed. Rényi's quadratic entropy, calculated with the Parzen window density estimation method applied to CGR/USM continuous maps of DNA sequences, constitutes a novel technique to evaluate sequence global randomness without some of the former method's drawbacks. The asymptotic behaviour of this new measure was analytically deduced and the calculation of entropies for several synthetic and experimental biological sequences was performed. The results obtained were compared with the distributions of the null model of randomness obtained by simulation. The biological sequences have shown a different p-value according to the kernel resolution of Parzen's method, which might indicate an unknown level of organization of their patterns. This new technique can be very useful in the study of DNA sequence complexity and provide additional tools for DNA entropy estimation. The main MATLAB applications developed and additional material are available at the webpage. Specialized functions can be obtained from the authors.
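The quantity at the heart of this method has a convenient closed form: with a Gaussian Parzen window, the integral of the squared density estimate is a double sum of Gaussians evaluated at pairwise differences, so no numerical integration is needed. The sketch below illustrates this under stated assumptions (the kernel width and the uniform "CGR-like" test points are placeholders, not the authors' MATLAB implementation).

```python
import numpy as np

def renyi2_parzen(points, sigma):
    """Rényi quadratic entropy H2 = -log ∫ p̂(x)^2 dx of a Parzen (Gaussian-kernel)
    density estimate built from `points` (N x d array).  The integral equals
    (1/N^2) * sum_{i,j} G(x_i - x_j; 2*sigma^2 I), i.e. a Gaussian of doubled
    variance evaluated at all pairwise differences."""
    x = np.asarray(points, dtype=float)
    n, d = x.shape
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    var = 2.0 * sigma ** 2
    gauss = np.exp(-sq_dists / (2.0 * var)) / (2.0 * np.pi * var) ** (d / 2.0)
    return -np.log(gauss.mean())          # mean over N^2 pairs = (1/N^2) * sum

# Illustrative use on 2D CGR-like coordinates (a 'random sequence' null model):
rng = np.random.default_rng(0)
cgr_points = rng.uniform(0.0, 1.0, size=(1000, 2))
print(renyi2_parzen(cgr_points, sigma=0.05))
```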
Continuous Wave Ring-Down Spectroscopy for Velocity Distribution Measurements in Plasma
NASA Astrophysics Data System (ADS)
McCarren, Dustin W.
Cavity Ring-Down Spectroscopy (CRDS) is a proven, ultra-sensitive, cavity-enhanced absorption spectroscopy technique. When combined with a continuous-wave (CW) diode laser that has a sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the velocity distribution function (VDF) of the absorbing species, can be measured. Measurements of VDFs can be made using established techniques such as laser induced fluorescence (LIF). However, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density and that the excitation scheme fluoresce at an easily detectable wavelength. This usually limits LIF to ions and atoms with large metastable state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower density populations of ion and atom states. Also, as a direct absorption technique, CW-CRDS measurements only need to be concerned with the species' absorption wavelength and provide an absolute measure of the line-integrated initial state density. Presented in this work are measurements of argon ion and neutral VDFs in a helicon plasma using CW-CRDS and LIF.
Analysis of the Binary Euclidean Algorithm
1976-06-01
References cited include "Probleme de Gauss," Atti del Congresso Internationale dei Matematici 6 (Bologna, 1928), 83-89, and [29] Levy, P., "Sur les Lois de Probabilite..." The report's section on the distribution functions F_n states that the accompanying theorem gives the form of F_n(x) for finite n.
2009-07-01
Contents include comments from the Department of Defense, GAO contact and staff acknowledgments, and related GAO products. The report notes the benefit of not disrupting maintenance production schedules due to personnel transfers, since the employees are already experienced in performing these jobs.
Electromagnetic field scattering by a triangular aperture.
Harrison, R E; Hyman, E
1979-03-15
The multiple Laplace transform has been applied to analysis and computation of scattering by a double triangular aperture. Results are obtained which match far-field intensity distributions observed in experiments. Arbitrary polarization components, as well as in-phase and quadrature-phase components, may be determined, in the transform domain, as a continuous function of distance from near to far-field for any orientation, aperture, and transformable waveform. Numerical results are obtained by application of numerical multiple inversions of the fully transformed solution.
Accuracy of Time Phasing Aircraft Development using the Continuous Distribution Function
2015-03-26
Diagnostic results include Breusch-Pagan tests for constant variance, with reported p-values of 0.5264 and 0.6911, and 0.5176 for the Weibull scale parameter β, in each case failing to reject the null hypothesis of constant variance; a Shapiro-Wilk W test (Prob. < W: 0.9849); and a check of the Beta shape parameter α for influential data.
26 CFR 1.995-3 - Distributions upon disqualification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Distributions upon disqualification. 1.995-3 Section 1.995-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Domestic International Sales Corporations § 1.995-3 Distributions upon...
Imaging structural and functional brain networks in temporal lobe epilepsy.
Bernhardt, Boris C; Hong, Seokjun; Bernasconi, Andrea; Bernasconi, Neda
2013-10-01
Early imaging studies in temporal lobe epilepsy (TLE) focused on the search for mesial temporal sclerosis, as its surgical removal results in clinically meaningful improvement in about 70% of patients. Nevertheless, a considerable subgroup of patients continues to suffer from post-operative seizures. Although the reasons for surgical failure are not fully understood, electrophysiological and imaging data suggest that anomalies extending beyond the temporal lobe may have negative impact on outcome. This hypothesis has revived the concept of human epilepsy as a disorder of distributed brain networks. Recent methodological advances in non-invasive neuroimaging have led to quantify structural and functional networks in vivo. While structural networks can be inferred from diffusion MRI tractography and inter-regional covariance patterns of structural measures such as cortical thickness, functional connectivity is generally computed based on statistical dependencies of neurophysiological time-series, measured through functional MRI or electroencephalographic techniques. This review considers the application of advanced analytical methods in structural and functional connectivity analyses in TLE. We will specifically highlight findings from graph-theoretical analysis that allow assessing the topological organization of brain networks. These studies have provided compelling evidence that TLE is a system disorder with profound alterations in local and distributed networks. In addition, there is emerging evidence for the utility of network properties as clinical diagnostic markers. Nowadays, a network perspective is considered to be essential to the understanding of the development, progression, and management of epilepsy.
Poorter, Hendrik; Jagodzinski, Andrzej M; Ruiz-Peinado, Ricardo; Kuyah, Shem; Luo, Yunjian; Oleksyn, Jacek; Usoltsev, Vladimir A; Buckley, Thomas N; Reich, Peter B; Sack, Lawren
2015-11-01
We compiled a global database for leaf, stem and root biomass representing c. 11 000 records for c. 1200 herbaceous and woody species grown under either controlled or field conditions. We used this data set to analyse allometric relationships and fractional biomass distribution to leaves, stems and roots. We tested whether allometric scaling exponents are generally constant across plant sizes as predicted by metabolic scaling theory, or whether instead they change dynamically with plant size. We also quantified interspecific variation in biomass distribution among plant families and functional groups. Across all species combined, leaf vs stem and leaf vs root scaling exponents decreased from c. 1.00 for small plants to c. 0.60 for the largest trees considered. Evergreens had substantially higher leaf mass fractions (LMFs) than deciduous species, whereas graminoids maintained higher root mass fractions (RMFs) than eudicotyledonous herbs. These patterns do not support the hypothesis of fixed allometric exponents. Rather, continuous shifts in allometric exponents with plant size during ontogeny and evolution are the norm. Across seed plants, variation in biomass distribution among species is related more to function than phylogeny. We propose that the higher LMF of evergreens at least partly compensates for their relatively low leaf area : leaf mass ratio. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Dependence of the Contact Resistance on the Design of Stranded Conductors
Zeroukhi, Youcef; Napieralska-Juszczak, Ewa; Vega, Guillaume; Komeza, Krzysztof; Morganti, Fabrice; Wiak, Slawomir
2014-01-01
During the manufacturing process multi-strand conductors are subject to compressive force and rotation moments. The current distribution in the multi-strand conductors is not uniform and is controlled by the transverse resistivity. This is mainly determined by the contact resistance at the strand crossovers and inter-strand contact resistance. The surface layer properties, and in particular the crystalline structure and degree of oxidation, are key parameters in determining the transverse resistivity. The experimental set-ups made it possible to find the dependence of contact resistivity as a function of continuous working stresses and cable design. A study based on measurements and numerical simulation is made to identify the contact resistivity functions. PMID:25196112
26 CFR 1.668(b)-1A - Tax on distribution.
Code of Federal Regulations, 2011 CFR
2011-04-01
...)-1A Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Treatment of Excess Distributions of Trusts Applicable to Taxable Years... inclusion of the section 666 amounts in the beneficiary's gross income and the tax for such year computed...
26 CFR 1.852-10 - Distributions in redemption of interests in unit investment trusts.
Code of Federal Regulations, 2014 CFR
2014-04-01
... investment trusts. 1.852-10 Section 1.852-10 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.852-10 Distributions in redemption of interests in unit investment...
26 CFR 1.852-10 - Distributions in redemption of interests in unit investment trusts.
Code of Federal Regulations, 2013 CFR
2013-04-01
... investment trusts. 1.852-10 Section 1.852-10 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.852-10 Distributions in redemption of interests in unit investment...
26 CFR 1.852-10 - Distributions in redemption of interests in unit investment trusts.
Code of Federal Regulations, 2011 CFR
2011-04-01
... investment trusts. 1.852-10 Section 1.852-10 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.852-10 Distributions in redemption of interests in unit investment...
26 CFR 1.852-10 - Distributions in redemption of interests in unit investment trusts.
Code of Federal Regulations, 2012 CFR
2012-04-01
... investment trusts. 1.852-10 Section 1.852-10 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.852-10 Distributions in redemption of interests in unit investment...
Kappa Distribution in a Homogeneous Medium: Adiabatic Limit of a Super-diffusive Process?
NASA Astrophysics Data System (ADS)
Roth, I.
2015-12-01
Classical statistical theory predicts that an ergodic, weakly interacting system, such as charged particles in the presence of electromagnetic fields performing Brownian motions (characterized by small-range deviations in phase space and short-term microscopic memory), converges to Gibbs-Boltzmann statistics. The observation of distributions with kappa power-law tails in homogeneous systems contradicts this prediction and necessitates a renewed analysis of the basic axioms of the diffusion process: the characteristics of the transition probability density function (pdf) for a single interaction, with the possibility of non-Markovian processes and non-local interactions. The non-local, Lévy-walk deviation is related to the non-extensive statistical framework. Particles bouncing along the (solar) magnetic field with evolving pitch angles, phases and velocities, as they interact resonantly with waves, undergo energy changes at undetermined time intervals, satisfying these postulates. The dynamic evolution of a general continuous time random walk is determined by the pdfs of jumps and waiting times, resulting in a fractional Fokker-Planck equation with non-integer derivatives whose solution is given by a Fox H-function. The resulting procedure involves fractional calculus, well known although not frequently used in physics, while the local, Markovian process recasts the evolution into the standard Fokker-Planck equation. Solving the fractional Fokker-Planck equation with the help of the Mellin transform and evaluating its residues at the poles of its Gamma functions results in a slowly converging sum with power-law tails. It is suggested that these tails form the kappa function. Gradual vs impulsive solar electron distributions serve as prototypes of this description.
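For reference, the generic kappa form alluded to here can be written, up to normalization, in the standard parametrization below (an assumed conventional form, not quoted from the abstract); it reduces to a Maxwellian as kappa tends to infinity.

```latex
% Isotropic kappa distribution of particle speed v; \theta is an effective
% thermal speed (normalization constant omitted).
f_\kappa(v) \;\propto\; \left(1 + \frac{v^{2}}{\kappa\,\theta^{2}}\right)^{-(\kappa+1)}
```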
van Reenen, Mari; Westerhuis, Johan A; Reinecke, Carolus J; Venter, J Hendrik
2017-02-02
ERp is a variable selection and classification method for metabolomics data. ERp uses minimized classification error rates, based on data from a control and experimental group, to test the null hypothesis of no difference between the distributions of variables over the two groups. If the associated p-values are significant they indicate discriminatory variables (i.e. informative metabolites). The p-values are calculated assuming a common continuous strictly increasing cumulative distribution under the null hypothesis. This assumption is violated when zero-valued observations can occur with positive probability, a characteristic of GC-MS metabolomics data, disqualifying ERp in this context. This paper extends ERp to address two sources of zero-valued observations: (i) zeros reflecting the complete absence of a metabolite from a sample (true zeros); and (ii) zeros reflecting a measurement below the detection limit. This is achieved by allowing the null cumulative distribution function to take the form of a mixture between a jump at zero and a continuous strictly increasing function. The extended ERp approach is referred to as XERp. XERp is no longer non-parametric, but its null distributions depend only on one parameter, the true proportion of zeros. Under the null hypothesis this parameter can be estimated by the proportion of zeros in the available data. XERp is shown to perform well with regard to bias and power. To demonstrate the utility of XERp, it is applied to GC-MS data from a metabolomics study on tuberculosis meningitis in infants and children. We find that XERp is able to provide an informative shortlist of discriminatory variables, while attaining satisfactory classification accuracy for new subjects in a leave-one-out cross-validation context. XERp takes into account the distributional structure of data with a probability mass at zero without requiring any knowledge of the detection limit of the metabolomics platform. XERp is able to identify variables that discriminate between two groups by simultaneously extracting information from the difference in the proportion of zeros and shifts in the distributions of the non-zero observations. XERp uses simple rules to classify new subjects and a weight pair to adjust for unequal sample sizes or sensitivity and specificity requirements.
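The null distribution described above can be sketched as a zero-inflated empirical CDF. The construction below is an illustration only (using the empirical CDF for the continuous part, and all names, are assumptions, not the published XERp implementation).

```python
import numpy as np

def zero_inflated_cdf(x, values):
    """Empirical analogue of the null CDF used conceptually by XERp:
    F(x) = pi * 1[x >= 0] + (1 - pi) * G(x),
    a mixture of a jump at zero and a continuous, strictly increasing CDF G.
    Here pi is estimated as the observed proportion of zeros and G as the
    empirical CDF of the non-zero observations (illustrative choices)."""
    values = np.asarray(values, dtype=float)
    pi = np.mean(values == 0.0)                          # estimated mass at zero
    nonzero = np.sort(values[values != 0.0])
    g = np.searchsorted(nonzero, x, side="right") / max(len(nonzero), 1)
    return pi * (x >= 0.0) + (1.0 - pi) * g

# Example: GC-MS-like intensities where roughly 30% of observations are zero.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.5, size=200)
data[rng.random(200) < 0.3] = 0.0
print(zero_inflated_cdf(np.array([0.0, 1.0, 5.0]), data))
```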
NASA Astrophysics Data System (ADS)
Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi
2018-05-01
A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, while the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.
26 CFR 1.1368-1 - Distributions by S corporations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... without earnings and profits) to the extent that portion is a distribution of money and does not exceed... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-1... by the shareholder. (c) S corporation with no earnings and profits. A distribution made by an S...
26 CFR 1.1368-1 - Distributions by S corporations.
Code of Federal Regulations, 2012 CFR
2012-04-01
... without earnings and profits) to the extent that portion is a distribution of money and does not exceed... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-1... by the shareholder. (c) S corporation with no earnings and profits. A distribution made by an S...
26 CFR 1.1368-1 - Distributions by S corporations.
Code of Federal Regulations, 2013 CFR
2013-04-01
... without earnings and profits) to the extent that portion is a distribution of money and does not exceed... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-1... by the shareholder. (c) S corporation with no earnings and profits. A distribution made by an S...
26 CFR 1.1368-1 - Distributions by S corporations.
Code of Federal Regulations, 2011 CFR
2011-04-01
... without earnings and profits) to the extent that portion is a distribution of money and does not exceed... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-1... by the shareholder. (c) S corporation with no earnings and profits. A distribution made by an S...
NASA Astrophysics Data System (ADS)
Su, Zhu; Jin, Guoyong; Ye, Tiangui
2016-06-01
The paper presents a unified solution for free and transient vibration analyses of a functionally graded piezoelectric curved beam with general boundary conditions within the framework of Timoshenko beam theory. The formulation is derived by means of the variational principle in conjunction with a modified Fourier series which consists of standard Fourier cosine series and supplemented functions. The mechanical and electrical properties of functionally graded piezoelectric materials (FGPMs) are assumed to vary continuously in the thickness direction and are estimated by Voigt’s rule of mixture. The convergence, accuracy and reliability of the present formulation are demonstrated by comparing the present solutions with those from the literature and finite element analysis. Numerous results for FGPM beams with different boundary conditions, geometrical parameters as well as material distributions are given. Moreover, forced vibration of the FGPM beams subjected to dynamic loads and general boundary conditions are also investigated.
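A common way to realize a continuous through-thickness variation together with Voigt's rule of mixture is a power-law volume fraction; the grading law below is an assumption for illustration, since the abstract does not state the specific form used.

```latex
% Assumed power-law volume fraction of the top constituent through the
% thickness (-h/2 <= z <= h/2), combined with Voigt's rule of mixture for
% any effective property P (elastic, piezoelectric, density, ...):
V_t(z) = \left(\frac{z}{h} + \frac{1}{2}\right)^{p}, \qquad
P(z) = P_t\, V_t(z) + P_b \bigl[\,1 - V_t(z)\,\bigr]
```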
E-Standards For Mass Properties Engineering
NASA Technical Reports Server (NTRS)
Cerro, Jeffrey A.
2008-01-01
A proposal is put forth to promote the concept of a Society of Allied Weight Engineers (SAWE)-developed voluntary consensus standard for mass properties engineering. This standard would be an e-standard, and would encompass data, data manipulation, and reporting functionality. The standard would be implemented via an open-source SAWE distribution site with full SAWE member body access. Engineering societies and global standards initiatives are progressing toward modern engineering standards, which become functioning deliverable data sets. These data sets, if properly standardized, will integrate easily between supplier and customer, enabling technically precise mass properties data exchange. The concepts of object-oriented programming support all of these requirements, and the use of a Java-based open-source development initiative is proposed. Results are reported for activity sponsored by the NASA Langley Research Center Innovation Institute to scope out requirements for developing a mass properties engineering e-standard. An initial software distribution is proposed. Upon completion, an open-source application programming interface will be available to SAWE members for the development of more specific programming requirements that are tailored to company and project requirements. A fully functioning application programming interface will permit code extension via company proprietary techniques, as well as through continued open-source initiatives.
CTEQ-TEA parton distribution functions and HERA Run I and II combined data
NASA Astrophysics Data System (ADS)
Hou, Tie-Jiun; Dulat, Sayipjamal; Gao, Jun; Guzzi, Marco; Huston, Joey; Nadolsky, Pavel; Pumplin, Jon; Schmidt, Carl; Stump, Daniel; Yuan, C.-P.
2017-02-01
We analyze the impact of the recent HERA Run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of parton distribution functions (PDFs). New PDFs at next-to-leading order and next-to-next-to-leading order, called CT14HERA2, are obtained by a refit of the CT14 data ensembles, in which the HERA Run I combined measurements are replaced by the new HERA Run I+II combination. The CT14 functional parametrization of PDFs is flexible enough to allow good descriptions of different flavor combinations, so we use the same parametrization for CT14HERA2 but with an additional shape parameter for describing the strange quark PDF. We find that the HERA I+II data can be fit reasonably well, and both CT14 and CT14HERA2 PDFs can describe equally well the non-HERA data included in our global analysis. Because the CT14 and CT14HERA2 PDFs agree well within the PDF errors, we continue to recommend CT14 PDFs for the analysis of LHC Run 2 experiments.
RTD-based Material Tracking in a Fully-Continuous Dry Granulation Tableting Line.
Martinetz, M C; Karttunen, A-P; Sacher, S; Wahl, P; Ketolainen, J; Khinast, J G; Korhonen, O
2018-06-06
Continuous manufacturing (CM) offers quality and cost-effectiveness benefits over the currently dominant batch processing. One challenge that needs to be addressed when implementing CM is the traceability of materials through the process, which is needed for the batch/lot definition and control strategy. In this work the residence time distributions (RTD) of single unit operations (blender, roller compactor and tablet press) of a continuous dry granulation tableting line were captured with NIR-based methods at selected mass flow rates to create training data. RTD models for the continuously operated unit operations and for the entire line were developed based on transfer functions. For the semi-continuously operated bucket conveyor and pneumatic transport, an assumption based on the operation frequency was used. For validation of the parametrized process model, a pre-defined API step change and its propagation through the manufacturing line were computed and compared to multi-scale experimental runs conducted with the fully assembled continuously operated manufacturing line. This novel approach showed very good prediction power at the selected mass flow rates for a complete continuous dry granulation line. Furthermore, it shows and proves the capability of process simulation as a tool to support the development and control of pharmaceutical manufacturing processes. Copyright © 2018. Published by Elsevier B.V.
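The underlying idea (represent each unit by an RTD and propagate a step change by convolution, chaining units in series) can be sketched as follows. The tanks-in-series RTD form and all numerical values are assumptions chosen for illustration, whereas the paper fits transfer-function RTDs to NIR data.

```python
import math
import numpy as np

def tanks_in_series_rtd(t, tau, n):
    """RTD E(t) of n ideal tanks in series with total mean residence time tau
    (a standard RTD form, assumed here purely for illustration)."""
    ti = tau / n
    return t ** (n - 1) * np.exp(-t / ti) / (math.factorial(n - 1) * ti ** n)

def propagate(inlet, rtd, dt):
    """Outlet signal = convolution of the inlet signal with the unit's RTD
    (edge effects near t = 0 are ignored in this sketch)."""
    return np.convolve(inlet, rtd)[: len(inlet)] * dt

# API step change at t = 10 min propagated through blender -> roller compactor -> tablet press.
dt = 0.05
t = np.arange(0, 60, dt)                       # minutes
inlet = np.where(t >= 10.0, 1.0, 0.5)          # step in API mass fraction (illustrative numbers)
line_units = [tanks_in_series_rtd(t, tau, n) for tau, n in [(2.0, 3), (1.0, 2), (3.0, 4)]]
outlet = inlet
for rtd in line_units:
    outlet = propagate(outlet, rtd, dt)
# `outlet` now shows the smeared step arriving at the tablet press, analogous to
# the step-change validation runs described in the abstract.
```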
Subchondral bone density distribution of the talus in clinically normal Labrador Retrievers.
Dingemanse, W; Müller-Gerbl, M; Jonkers, I; Vander Sloten, J; van Bree, H; Gielen, I
2016-03-15
Bones continually adapt their morphology to their load bearing function. At the level of the subchondral bone, the density distribution is highly correlated with the loading distribution of the joint. Therefore, subchondral bone density distribution can be used to study joint biomechanics non-invasively. In addition physiological and pathological joint loading is an important aspect of orthopaedic disease, and research focusing on joint biomechanics will benefit veterinary orthopaedics. This study was conducted to evaluate density distribution in the subchondral bone of the canine talus, as a parameter reflecting the long-term joint loading in the tarsocrural joint. Two main density maxima were found, one proximally on the medial trochlear ridge and one distally on the lateral trochlear ridge. All joints showed very similar density distribution patterns and no significant differences were found in the localisation of the density maxima between left and right limbs and between dogs. Based on the density distribution the lateral trochlear ridge is most likely subjected to highest loads within the tarsocrural joint. The joint loading distribution is very similar between dogs of the same breed. In addition, the joint loading distribution supports previous suggestions of the important role of biomechanics in the development of OC lesions in the tarsus. Important benefits of computed tomographic osteoabsorptiometry (CTOAM), i.e. the possibility of in vivo imaging and temporal evaluation, make this technique a valuable addition to the field of veterinary orthopaedic research.
26 CFR 1.72(p)-1 - Loans treated as distributions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 2 2011-04-01 2011-04-01 false Loans treated as distributions. 1.72(p)-1 Section 1.72(p)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Items Specifically Included in Gross Income § 1.72(p)-1 Loans...
26 CFR 1.72(p)-1 - Loans treated as distributions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 2 2013-04-01 2013-04-01 false Loans treated as distributions. 1.72(p)-1 Section 1.72(p)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Items Specifically Included in Gross Income § 1.72(p)-1 Loans...
26 CFR 1.72(p)-1 - Loans treated as distributions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 2 2012-04-01 2012-04-01 false Loans treated as distributions. 1.72(p)-1 Section 1.72(p)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Items Specifically Included in Gross Income § 1.72(p)-1 Loans...
26 CFR 1.72(p)-1 - Loans treated as distributions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Loans treated as distributions. 1.72(p)-1 Section 1.72(p)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Items Specifically Included in Gross Income § 1.72(p)-1 Loans...
26 CFR 1.704-1 - Partner's distributive share.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Partner's distributive share. 1.704-1 Section 1.704-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Partners and Partnerships § 1.704-1 Partner's distributive share. (a) Effect of partnership agreement. A partner's...
Leverrier, Anthony; Grangier, Philippe
2009-05-08
We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.
On the Electromagnetic Momentum of Static Charge and Steady Current Distributions
ERIC Educational Resources Information Center
Gsponer, Andre
2007-01-01
Faraday's and Furry's formulae for the electromagnetic momentum of static charge distributions combined with steady electric current distributions are generalized in order to obtain full agreement with Poynting's formula in the case where all fields are of class C^1, i.e., continuous and continuously differentiable, and the…
Continuous Time Random Walks with memory and financial distributions
NASA Astrophysics Data System (ADS)
Montero, Miquel; Masoliver, Jaume
2017-11-01
We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing tendency effects which are inherent in most markets. We also briefly study the effect on return distributions of fractional behaviors in the distribution of pausing times between successive transactions.
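As context for the class of processes discussed above, a minimal sketch of a plain, memoryless CTRW is given below. The Pareto waiting-time exponent and Gaussian jump size are illustrative assumptions, and the memory/tendency mechanisms that are the subject of the paper are deliberately left out.

```python
# Minimal sketch of a plain (memoryless) CTRW for log-price changes: waiting times
# drawn from a Pareto (power-law) distribution, jump sizes from a Gaussian.
# This illustrates the basic process only; the memory/tendency models discussed
# in the paper require correlated increments and are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctrw(n_jumps, tail_exponent=1.5, t_min=1.0, jump_sigma=1e-3):
    # Pareto waiting times: P(tau > t) = (t_min / t)**tail_exponent
    waits = t_min * (1.0 - rng.random(n_jumps)) ** (-1.0 / tail_exponent)
    jumps = rng.normal(0.0, jump_sigma, n_jumps)
    times = np.cumsum(waits)
    log_price = np.cumsum(jumps)
    return times, log_price

times, log_price = simulate_ctrw(10_000)
print("mean waiting time:", np.diff(times, prepend=0.0).mean())
```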
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.
Han, Qiyang; Wellner, Jon A
2016-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.
Theiss-Nyland, Katherine; Lynch, Michael; Lines, Jo
2016-05-04
In addition to mass distribution campaigns, the World Health Organization (WHO) recommends the continuous distribution of long-lasting insecticidal nets (LLINs) to all pregnant women attending antenatal care (ANC) and all infants attending the Expanded Programme on Immunization (EPI) services in countries implementing mosquito nets for malaria control. Countries report LLIN distribution data to the WHO annually. For this analysis, these data were used to assess policy and practice in implementing these recommendations and to compare the numbers of LLINs available through ANC and EPI services with the numbers of women and children attending these services. For each reporting country in sub-Saharan Africa, the presence of a reported policy for LLIN distribution through ANC and EPI was reviewed. Prior to inclusion in the analysis the completeness of data was assessed in terms of the numbers of LLINs distributed through all channels (campaigns, EPI, ANC, other). For each country with adequate data, the numbers of LLINs reportedly distributed by national programmes to ANC was compared to the number of women reportedly attending ANC at least once; the ratio between these two numbers was used as an indicator of LLIN availability at ANC services. The same calculations were repeated for LLINs distributed through EPI to produce the corresponding LLIN availability through this distribution channel. Among 48 malaria-endemic countries in Africa, 33 malaria programmes reported adopting policies of ANC-based continuous distribution of LLINs, and 25 reported adopting policies of EPI-based distribution. Over a 3-year period through 2012, distribution through ANC accounted for 9 % of LLINs distributed, and LLINs distributed through EPI accounted for 4 %. The LLIN availability ratios achieved were 55 % through ANC and 34 % through EPI. For 38 country programmes reporting on LLIN distribution, data to calculate LLIN availability through ANC and EPI was available for 17 and 16, respectively. These continuous LLIN distribution channels appear to be under-utilized, especially EPI-based distribution. However, quality data from more countries are needed for consistent and reliable programme performance monitoring. A greater focus on routine data collection, monitoring and reporting on LLINs distributed through both ANC and EPI can provide insight into both strengths and weaknesses of continuous distribution, and improve the effectiveness of these delivery channels.
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they make it possible to compare alternative models, a crucial ability given the variety of theories, data sources and statistical specifications available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules, such as the continuous ranked probability score, for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov chain Monte Carlo take this form. Thereby, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
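For readers unfamiliar with the continuous ranked probability score mentioned above, the sketch below re-expresses its well-known closed form for a normal predictive distribution (Gneiting and Raftery, 2007) in Python. The scoringRules package itself provides this and many other closed forms in R, so the snippet is only a cross-language illustration.

```python
# Minimal sketch of a proper scoring rule: the closed-form CRPS of a normal
# predictive distribution N(mu, sigma^2) evaluated at an observation y.
import numpy as np
from scipy.stats import norm

def crps_normal(y, mu, sigma):
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

print(crps_normal(y=1.3, mu=0.0, sigma=1.0))
```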
TRP channel functions in the gastrointestinal tract.
Yu, Xiaoyun; Yu, Mingran; Liu, Yingzhe; Yu, Shaoyong
2016-05-01
Transient receptor potential (TRP) channels are predominantly distributed in both somatic and visceral sensory nervous systems and play a crucial role in sensory transduction. As the largest visceral organ system, the gastrointestinal (GI) tract frequently accommodates external inputs, which stimulate sensory nerves to initiate and coordinate sensory and motor functions in order to digest and absorb nutrients. Meanwhile, the sensory nerves in the GI tract are also able to detect potential tissue damage by responding to noxious irritants. This nocifensive function is mediated through specific ion channels and receptors expressed in a subpopulation of spinal and vagal afferent nerves called nociceptors. In the last 18 years, our understanding of TRP channel expression and function in the GI sensory nervous system has continuously improved. In this review, we focus on the expression and functions of TRPV1, TRPA1, and TRPM8 in primary extrinsic afferent nerves innervating the esophagus, stomach, intestine, and colon, and briefly discuss their potential roles in relevant GI disorders.
The development of hub architecture in the human functional brain network.
Hwang, Kai; Hallquist, Michael N; Luna, Beatriz
2013-10-01
Functional hubs are brain regions that play a crucial role in facilitating communication among parallel, distributed brain networks. The developmental emergence and stability of hubs, however, is not well understood. The current study used measures of network topology drawn from graph theory to investigate the development of functional hubs in 99 participants, 10-20 years of age. We found that hub architecture was evident in late childhood and was stable from adolescence to early adulthood. Connectivity between hub and non-hub ("spoke") regions, however, changed with development. From childhood to adolescence, the strength of connections between frontal hubs and cortical and subcortical spoke regions increased. From adolescence to adulthood, hub-spoke connections with frontal hubs were stable, whereas connectivity between cerebellar hubs and cortical spoke regions increased. Our findings suggest that a developmentally stable functional hub architecture provides the foundation of information flow in the brain, whereas connections between hubs and spokes continue to develop, possibly supporting mature cognitive function.
Topology optimized design of functionally graded piezoelectric ultrasonic transducers
NASA Astrophysics Data System (ADS)
Rubio, Wilfredo Montealegre; Buiochi, Flávio; Adamowski, Julio Cezar; Silva, Emílio C. N.
2010-01-01
This work presents a new approach to systematically design piezoelectric ultrasonic transducers based on the Topology Optimization Method (TOM) and Functionally Graded Material (FGM) concepts. The main goal is to find the optimal material distribution of Functionally Graded Piezoelectric Ultrasonic Transducers, to achieve the following requirements: (i) the transducer must be designed to have a multi-modal or uni-modal frequency response, which defines the kind of generated acoustic wave, either short pulse or continuous wave, respectively; (ii) the transducer is required to oscillate in a thickness extensional mode or piston-like mode, aiming at acoustic wave generation applications. Two kinds of piezoelectric materials are mixed for producing the FGM transducer. Material type 1 represents a PZT-5A piezoelectric ceramic and material type 2 represents a PZT-5H piezoelectric ceramic. To illustrate the proposed method, two Functionally Graded Piezoelectric Ultrasonic Transducers are designed. The TOM has proven to be a useful tool for designing Functionally Graded Piezoelectric Ultrasonic Transducers with uni-modal or multi-modal dynamic behavior.
PDF turbulence modeling and DNS
NASA Technical Reports Server (NTRS)
Hsu, A. T.
1992-01-01
The problem of time discontinuity (or the jump condition) in the coalescence/dispersion (C/D) mixing model is addressed within probability density function (pdf) turbulence modeling. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.
NASA Astrophysics Data System (ADS)
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising accuracy. The multi-stage scheme further allows the approximate results to converge systematically to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
Correlated continuous time random walk and option pricing
NASA Astrophysics Data System (ADS)
Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao
2016-04-01
In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. Then, we apply this process to the option pricing problem. Supposing that the price of the underlying is driven by this CCTRW, we find that this model captures the subdiffusive characteristic of financial markets. By using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. At last, comparing the obtained model with the classical Black-Scholes model, we find that the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also presented to confirm that the obtained results fit the real data well.
A BGK model for reactive mixtures of polyatomic gases with continuous internal energy
NASA Astrophysics Data System (ADS)
Bisi, M.; Monaco, R.; Soares, A. J.
2018-03-01
In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.
Effects of Reinforcer Magnitude and Distribution on Preference for Work Schedules
ERIC Educational Resources Information Center
Ward-Horner, John C.; Pittenger, Alexis; Pace, Gary; Fienup, Daniel M.
2014-01-01
When the overall magnitude of reinforcement is matched between 2 alternative work schedules, some students prefer to complete all of their work for continuous access to a reinforcer (continuous work) rather than distributed access to a reinforcer while they work (discontinuous work). We evaluated a student's preference for continuous work by…
M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples
2013-01-01
Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...
76 FR 31019 - Distribution of Continued Dumping and Subsidy Offset to Affected Domestic Producers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-27
...Pursuant to the Continued Dumping and Subsidy Offset Act of 2000, this document is U.S. Customs and Border Protection's notice of intent to distribute assessed antidumping or countervailing duties (known as the continued dumping and subsidy offset) for Fiscal Year 2011 in connection with countervailing duty orders, antidumping duty orders, or findings under the Antidumping Act of 1921. This document sets forth the case name and number of each order or finding for which funds may become available for distribution, together with the list of affected domestic producers, based on the list supplied by the United States International Trade Commission (USITC) associated with each order or finding, who are potentially eligible to receive a distribution. This document also provides the instructions for affected domestic producers (and anyone alleging eligibility to receive a distribution) to file certifications to claim a distribution in relation to the listed orders or findings.
75 FR 30529 - Distribution of Continued Dumping and Subsidy Offset to Affected Domestic Producers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
...Pursuant to the Continued Dumping and Subsidy Offset Act of 2000, this document is U.S. Customs and Border Protection's notice of intent to distribute assessed antidumping or countervailing duties (known as the continued dumping and subsidy offset) for Fiscal Year 2010 in connection with countervailing duty orders, antidumping duty orders, or findings under the Antidumping Act of 1921. This document sets forth the case name and number of each order or finding for which funds may become available for distribution, together with the list of affected domestic producers, based on the list supplied by the United States International Trade Commission (USITC) associated with each order or finding, who are potentially eligible to receive a distribution. This document also provides the instructions for affected domestic producers (and anyone alleging eligibility to receive a distribution) to file certifications to claim a distribution in relation to the listed orders or findings.
77 FR 32717 - Distribution of Continued Dumping and Subsidy Offset to Affected Domestic Producers
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
...Pursuant to the Continued Dumping and Subsidy Offset Act of 2000, this document is U.S. Customs and Border Protection's notice of intent to distribute assessed antidumping or countervailing duties (known as the continued dumping and subsidy offset) for Fiscal Year 2012 in connection with countervailing duty orders, antidumping duty orders, or findings under the Antidumping Act of 1921. This document sets forth the case name and number of each order or finding for which funds may become available for distribution, together with the list of affected domestic producers, based on the list supplied by the United States International Trade Commission (USITC) associated with each order or finding, who are potentially eligible to receive a distribution. This document also provides the instructions for affected domestic producers (and anyone alleging eligibility to receive a distribution) to file certifications to claim a distribution in relation to the listed orders or findings.
78 FR 32713 - Distribution of Continued Dumping and Subsidy Offset to Affected Domestic Producers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
...Pursuant to the Continued Dumping and Subsidy Offset Act of 2000, this document is U.S. Customs and Border Protection's notice of intent to distribute assessed antidumping or countervailing duties (known as the continued dumping and subsidy offset) for Fiscal Year 2013 in connection with countervailing duty orders, antidumping duty orders, or findings under the Antidumping Act of 1921. This document sets forth the case name and number of each order or finding for which funds may become available for distribution, together with the list of affected domestic producers, based on the list supplied by the United States International Trade Commission (USITC) associated with each order or finding, who are potentially eligible to receive a distribution. This document also provides the instructions for affected domestic producers (and anyone alleging eligibility to receive a distribution) to file certifications to claim a distribution in relation to the listed orders or findings.
Peng, Jie; He, Xiang; Ye, Hanming
2015-01-01
Vacuum preloading is an effective method that is widely used in ground treatment. In consolidation analysis, the soil around a prefabricated vertical drain (PVD) is traditionally divided into a smear zone and an undisturbed zone, both with constant permeability. In reality, the permeability of soil changes continuously within the smear zone. In this study, the horizontal permeability coefficient of soil within the smear zone is described by an exponential function of radial distance. A solution for vacuum preloading consolidation that considers the nonlinear distribution of horizontal permeability within the smear zone is presented and compared with previous analytical results as well as a numerical solution. The results show that the presented solution correlates well with the numerical solution and is more precise than the previous analytical solution.
Lateral density anomalies and the earth's gravitational field
NASA Technical Reports Server (NTRS)
Lowrey, B. E.
1978-01-01
The interpretation of gravity is valuable for understanding lithospheric plate motion and mantle convection. Postulated models of anomalous mass distributions in the earth and the observed geopotential as expressed in the spherical harmonic expansion are compared. In particular, models of the anomalous density as a function of radius are found which can closely match the average magnitude of the spherical harmonic coefficients of a degree. These models include: (1) a two-component model consisting of an anomalous layer at 200 km depth (below the earth's surface) and one at 1500 km depth; (2) a two-component model where the upper component is distributed in the region between 1000 and 2800 km depth; and (3) a model with density anomalies which continuously increase with depth by more than an order of magnitude.
Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses
NASA Astrophysics Data System (ADS)
Rutkowska, Eva; Baker, Colin; Nahum, Alan
2010-04-01
A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs and it may help investigators interpret the results from clinical studies.
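A heavily simplified sketch of this type of tissue-architecture model is given below, assuming a 1-D organ, a uniform dose per FSU, single-shot LQ cell kill and a sliding-window CFV; all parameter values are illustrative and none are taken from the paper.

```python
# Highly simplified, hedged sketch of the type of model described above:
# a 1-D organ of functional subunits (FSUs), each holding N0 stem cells that
# survive a (uniform-per-FSU) dose D with LQ probability exp(-(alpha*D + beta*D**2)).
# An FSU is "dead" if no stem cell survives; a complication is scored when the
# surviving-FSU fraction inside any sliding critical functioning volume (CFV)
# drops below a threshold.  All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def complication(dose_per_fsu, n0=100, alpha=0.35, beta=0.035,
                 cfv_fraction=0.25, threshold=0.5):
    sf = np.exp(-(alpha * dose_per_fsu + beta * dose_per_fsu**2))
    survivors = rng.binomial(n0, np.clip(sf, 0.0, 1.0))
    alive = survivors > 0                       # FSU survives if any stem cell does
    w = max(1, int(cfv_fraction * alive.size))  # CFV = sliding window of FSUs
    csum = np.concatenate(([0], np.cumsum(alive)))
    window_density = (csum[w:] - csum[:-w]) / w
    return bool((window_density < threshold).any())

dose = np.full(200, 2.0) * 30                   # 30 x 2 Gy, uniform; illustrative
print("complication:", complication(dose))
```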
Continuous-wave lasing in an organic-inorganic lead halide perovskite semiconductor
NASA Astrophysics Data System (ADS)
Jia, Yufei; Kerner, Ross A.; Grede, Alex J.; Rand, Barry P.; Giebink, Noel C.
2017-12-01
Hybrid organic-inorganic perovskites have emerged as promising gain media for tunable, solution-processed semiconductor lasers. However, continuous-wave operation has not been achieved so far [1-3]. Here, we demonstrate that optically pumped continuous-wave lasing can be sustained above threshold excitation intensities of 17 kW cm^-2 for over an hour in methylammonium lead iodide (MAPbI3) distributed feedback lasers that are maintained below the MAPbI3 tetragonal-to-orthorhombic phase transition temperature of T ≈ 160 K. In contrast with the lasing death phenomenon that occurs for pure tetragonal-phase MAPbI3 at T > 160 K (ref. 4), we find that continuous-wave gain becomes possible at T ≈ 100 K from tetragonal-phase inclusions that are photogenerated by the pump within the normally existing, larger-bandgap orthorhombic host matrix. In this mixed-phase system, the tetragonal inclusions function as carrier recombination sinks that reduce the transparency threshold, in loose analogy to inorganic semiconductor quantum wells, and may serve as a model for engineering improved perovskite gain media.
NASA Technical Reports Server (NTRS)
Merz, A. W.; Hague, D. S.
1975-01-01
An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64₁-212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order ε₁ at the leading edge, and a polynomial of order ε₂ at the trailing edge. ε₂ is a constant and ε₁ is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying ε₁ and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two-dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.
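The report's exact function is not reproduced in the abstract, so the sketch below assumes one simple form with the stated properties: an added-thickness distribution that vanishes at both edges, behaves like x^ε1 near the leading edge and like (1 - x)^ε2 near the trailing edge, and is scaled so that its peak equals the magnitude parameter y.

```python
# Hedged sketch of an added-thickness distribution with the properties described
# above; the specific functional form is an assumption for illustration, not the
# function used in the report.
import numpy as np

def added_thickness(x, y_max, eps1, eps2):
    """x: chordwise coordinate in [0, 1]; returns the extra upper-surface thickness."""
    shape = x**eps1 * (1.0 - x)**eps2
    x_peak = eps1 / (eps1 + eps2)              # analytic maximum of the shape term
    return y_max * shape / (x_peak**eps1 * (1.0 - x_peak)**eps2)

x = np.linspace(0.0, 1.0, 101)
extra = added_thickness(x, y_max=0.01, eps1=0.5, eps2=3.0)
print(extra.max())  # ~0.01 by construction
```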
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Xiangfang; Lancelle, Chelsea; Thurber, Clifford
A field test that was conducted at Garner Valley, California, on 11 and 12 September 2013 using distributed acoustic sensing (DAS) to sense ground vibrations provided a continuous overnight record of ambient noise. Furthermore, the energy of ambient noise was concentrated between 5 and 25 Hz, which falls into the typical traffic noise frequency band. A standard procedure (Bensen et al., 2007) was adopted to calculate noise cross-correlation functions (NCFs) for 1-min intervals. The 1-min-long NCFs were stacked using the time–frequency domain phase-weighted-stacking method, which significantly improves signal quality. The obtained NCFs were asymmetrical, which was a result of the nonuniformly distributed noise sources. A precursor appeared on NCFs along one segment, which was traced to a strong localized noise source or a scatterer at a nearby road intersection. The NCF for the radial component of two surface accelerometers along a DAS profile gave similar results to those from DAS channels. Here, we calculated the phase velocity dispersion from DAS NCFs using the multichannel analysis of surface waves technique, and the result agrees with active-source results. We then conclude that ambient noise sources and the high spatial sampling of DAS can provide the same subsurface information as traditional active-source methods.
The orbital PDF: general inference of the gravitational potential from steady-state tracers
NASA Astrophysics Data System (ADS)
Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.
2016-02-01
We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans theorem, and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method (`phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes to show that they are unbiased as well as efficient. A fully documented C/PYTHON code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.
Linear dispersion properties of ring velocity distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandas, Marek; Hellinger, Petr
2015-06-15
Linear properties of ring velocity distribution functions are investigated. The dispersion tensor in a form similar to the case of a Maxwellian distribution function, but for a general distribution function separable in velocities, is presented. Analytical forms of the dispersion tensor are derived for two cases of ring velocity distribution functions: one obtained from physical arguments and one for the usual, ad hoc ring distribution. The analytical expressions involve generalized hypergeometric, Kampé de Fériet functions of two arguments. For a set of plasma parameters, the two ring distribution functions are compared. At the parallel propagation with respect to the ambient magnetic field, the two ring distributions give the same results identical to the corresponding bi-Maxwellian distribution. At oblique propagation, the two ring distributions give similar results only for strong instabilities, whereas for weak growth rates their predictions are significantly different; the two ring distributions have different marginal stability conditions.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
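One of the weight constructions compared above, stabilized weights built from normal (homoscedastic) densities, can be sketched in a few lines; the simulated confounders, exposure model and variable names below are illustrative assumptions rather than the study's actual data-generating mechanism.

```python
# Hedged sketch: stabilized inverse probability weights for a continuous exposure
# built from normal densities.  The exposure model is ordinary least squares on
# the confounders; all data here are simulated for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5_000
L = rng.normal(size=(n, 2))                      # confounders
A = 1.0 + L @ np.array([0.5, -0.3]) + rng.normal(scale=1.0, size=n)  # exposure

# Denominator: conditional density f(A | L) from an OLS exposure model
X = np.column_stack([np.ones(n), L])
beta, *_ = np.linalg.lstsq(X, A, rcond=None)
resid = A - X @ beta
dens_cond = norm.pdf(A, loc=X @ beta, scale=resid.std(ddof=X.shape[1]))

# Numerator: marginal density f(A), also assumed normal
dens_marg = norm.pdf(A, loc=A.mean(), scale=A.std(ddof=1))

sw = dens_marg / dens_cond                       # stabilized weights
print("mean weight:", sw.mean(), " max weight:", sw.max())
```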
Lim, Jing; Chong, Mark Seow Khoon; Chan, Jerry Kok Yen; Teoh, Swee-Hin
2014-06-25
Synthetic polymers used in tissue engineering require functionalization with bioactive molecules to elicit specific physiological reactions. These additives must be homogeneously dispersed in order to achieve enhanced composite mechanical performance and a uniform cellular response. This work demonstrates the use of a solvent-free powder processing technique to form osteoinductive scaffolds from cryomilled polycaprolactone (PCL) and tricalcium phosphate (TCP). Cryomilling is performed to achieve a micrometer-sized distribution of PCL and to reduce melt viscosity, thus improving TCP distribution and structural integrity. A breakthrough is achieved in the successful fabrication of 70 weight percent TCP into a continuous film structure. Following compaction and melting, PCL/TCP composite scaffolds are found to display uniform distribution of TCP throughout the PCL matrix regardless of composition. Homogeneous spatial distribution is also achieved in fabricated 3D scaffolds. When seeded onto powder-processed PCL/TCP films, mesenchymal stem cells are found to undergo robust and uniform osteogenic differentiation, indicating the potential of this approach to biofunctionalize scaffolds for tissue engineering applications.
ON THE BRIGHTNESS AND WAITING-TIME DISTRIBUTIONS OF A TYPE III RADIO STORM OBSERVED BY STEREO/WAVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastwood, J. P.; Hudson, H. S.; Krucker, S.
2010-01-10
Type III solar radio storms, observed at frequencies below ≈16 MHz by space-borne radio experiments, correspond to the quasi-continuous, bursty emission of electron beams onto open field lines above active regions. The mechanisms by which a storm can persist in some cases for more than a solar rotation whilst exhibiting considerable radio activity are poorly understood. To address this issue, the statistical properties of a type III storm observed by the STEREO/WAVES radio experiment are presented, examining both the brightness distribution and (for the first time) the waiting-time distribution (WTD). Single power-law behavior is observed in the number distribution as a function of brightness; the power-law index is ≈2.1 and is largely independent of frequency. The WTD is found to be consistent with a piecewise-constant Poisson process. This indicates that during the storm individual type III bursts occur independently and suggests that the storm dynamics are consistent with avalanche-type behavior in the underlying active region.
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.
Program summary
Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB-1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions.
Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
Running time: 60 s-8 h per iteration.
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progresses in long-distance quantum key distribution based on discrete variables have led to the secure quantum communication in real-world conditions being available. However, the alternative approach implemented with continuous variables has not yet reached the secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long distance continuous-variable quantum key distribution experiment. Our result paves the road to the large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for quantum network.
SU-E-I-16: Scan Length Dependency of the Radial Dose Distribution in a Long Polyethylene Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakalyar, D; McKenney, S; Feng, W
Purpose: The area-averaged dose in the central plane of a long cylinder following a CT scan depends upon the radial dose distribution and the length of the scan. The ICRU/TG200 phantom, a polyethylene cylinder 30 cm in diameter and 60 cm long, was the subject of this study. The purpose was to develop an analytic function that could determine the dose for a scan length L at any point in the central plane of this phantom. Methods: Monte Carlo calculations were performed on a simulated ICRU/TG200 phantom under cylindrically symmetric conditions of irradiation. Thus, the radial dose distribution function must be an even function that accounts for two competing effects: the direct beam makes its weakest contribution at the center, while the scatter begins abruptly at the outer radius and grows as the center is approached. The scatter contribution also increases with scan length, with the increase approaching its limiting value at the periphery faster than along the central axis. An analytic function was developed that fit the data and possessed these features. Results: Symmetry and continuity dictate a local extremum at the center, which is a minimum for the ICRU/TG200 phantom. The relative depth of the minimum decreases as the scan length grows, and an absolute maximum can occur between the center and the outer edge of the cylinder. As the scan length grows, the relative dip in the center decreases, so that for very long scan lengths the dose profile is relatively flat. Conclusion: An analytic function characterizes the radial and scan length dependency of dose for long cylindrical phantoms. The function can be integrated with the results expressed in closed form. One use for this is to help determine the average dose distribution over the central cylinder plane for any scan length.
Pore Pressure and Stress Distributions Around a Hydraulic Fracture in Heterogeneous Rock
NASA Astrophysics Data System (ADS)
Gao, Qian; Ghassemi, Ahmad
2017-12-01
One of the most significant characteristics of unconventional petroleum bearing formations is their heterogeneity, which affects the stress distribution, hydraulic fracture propagation and also fluid flow. This study focuses on the stress and pore pressure redistributions during hydraulic stimulation in a heterogeneous poroelastic rock. Lognormal random distributions of Young's modulus and permeability are generated to simulate the heterogeneous distributions of material properties. A 3D fully coupled poroelastic model based on the finite element method is presented utilizing a displacement-pressure formulation. In order to verify the model, numerical results are compared with analytical solutions showing excellent agreements. The effects of heterogeneities on stress and pore pressure distributions around a penny-shaped fracture in poroelastic rock are then analyzed. Results indicate that the stress and pore pressure distributions are more complex in a heterogeneous reservoir than in a homogeneous one. The spatial extent of stress reorientation during hydraulic stimulations is a function of time and is continuously changing due to the diffusion of pore pressure in the heterogeneous system. In contrast to the stress distributions in homogeneous media, irregular distributions of stresses and pore pressure are observed. Due to the change of material properties, shear stresses and nonuniform deformations are generated. The induced shear stresses in heterogeneous rock cause the initial horizontal principal stresses to rotate out of horizontal planes.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
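The kind of model-mimicry check described above can be sketched as follows: bout durations are generated from an exponential distribution, a power-law model is fitted to them by maximum likelihood anyway, and the Kolmogorov-Smirnov distance to that fitted model is computed. The sample size and time constant are illustrative, not values from the paper.

```python
# Hedged sketch: generate bout durations from an exponential distribution, fit a
# (deliberately wrong) power-law model by maximum likelihood, and measure the
# Kolmogorov-Smirnov distance to the fitted model.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(3)
bouts = rng.exponential(scale=5.0, size=2_000)    # "wake bout" durations, minutes

x_min = bouts.min()
alpha = 1.0 + bouts.size / np.sum(np.log(bouts / x_min))   # Hill/Pareto MLE

def powerlaw_cdf(x):
    return 1.0 - (x_min / x) ** (alpha - 1.0)

ks_stat, p_value = kstest(bouts, powerlaw_cdf)
print(f"fitted exponent {alpha:.2f}, KS statistic {ks_stat:.3f}, p = {p_value:.2g}")
```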
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Long memory behavior of returns after intraday financial jumps
NASA Astrophysics Data System (ADS)
Behfar, Stefan Kambiz
2016-11-01
In this paper, the characterization of intraday financial jumps and the time dynamics of returns after jumps are investigated; it is shown both analytically and empirically that intraday jumps are power-law distributed with exponent 1 < μ < 2 and that returns after jumps exhibit long-memory behavior. In the theory of finance, it is important to be able to distinguish between jumps and continuous sample-path price movements, and this can be achieved with a statistical test based on sums of products of returns over small time intervals. When a jump is present, the null hypothesis of the normality test is rejected; this rests on the idea that returns are a mixture of normally distributed and power-law distributed data (∼ 1/r^(1+μ)). The probability of rejecting the null hypothesis is a function of μ and equals one for 1 < μ < 2 in the limit of large intraday sample size M. To test this idea empirically, we downloaded S&P500 index data for the periods 1997-1998 and 2014-2015 and showed that the complementary cumulative distribution function of jump returns is power-law distributed with exponent 1 < μ < 2. There are far more jumps in 1997-1998 than in 2015-2016, and the power-law exponent in 2015-2016 is greater than that in 1997-1998. Assuming that i.i.d. returns generally follow a Poisson distribution, if a jump is the causal factor then the high returns after it are the effect; we show that returns triggered by a jump decay as a power law. To test this empirically, we average the post-jump time dynamics over all days; the superposed dynamics follow a power law, indicating long memory with a power-law distribution of returns after a jump.
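A minimal sketch of one step of this analysis, assuming synthetic Pareto-distributed jump sizes in place of the S&P500 data: the tail exponent μ is estimated from the largest observations with the Hill estimator (the paper's exact estimator is not specified here).

```python
# Sketch: Hill estimate of the tail exponent mu from synthetic jump returns.
import numpy as np

rng = np.random.default_rng(1)
mu_true = 1.5
jumps = rng.pareto(mu_true, 10000) + 1.0   # Pareto tail ~ x^-(1+mu), xmin = 1

def hill_exponent(x, k):
    """Hill estimator of the tail exponent using the k largest observations."""
    x_sorted = np.sort(x)[::-1]            # descending order statistics
    tail = x_sorted[:k]
    return k / np.sum(np.log(tail / x_sorted[k]))

print(f"Hill estimate of mu: {hill_exponent(jumps, k=500):.2f}")
```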
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and either a transformation procedure must be used to obtain such a distribution or a more complex distribution has to be employed. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and two-stage transformations (modulus-exponential-normal) are applied in order to render Gaussian distributions. Fractional polynomials are employed to model functions for the mean and standard deviation dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
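As an illustration of the outlier step only, the sketch below applies a standard 1.5×IQR Tukey fence; the multiplier, the hypothetical CA15.3 values, and the lack of covariate-dependent reiteration are all simplifying assumptions.

```python
# Sketch: Tukey's fence for outlier removal before parametric estimation.
import numpy as np

def tukey_fence(values, k=1.5):
    """Return a boolean mask of observations kept after removing outliers."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (values >= lower) & (values <= upper)

serum = np.array([12.1, 14.3, 11.8, 13.0, 55.2, 12.7, 13.9, 12.2])  # hypothetical CA15.3 values
kept = serum[tukey_fence(serum)]
print(kept)   # the extreme value 55.2 is excluded
```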
47 CFR 54.611 - Distributing support.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) UNIVERSAL SERVICE Universal Service Support for Health Care Providers § 54.611 Distributing support. (a) A telecommunications carrier providing services eligible for support under this subpart to eligible health care...
26 CFR 1.734-2 - Adjustment after distribution to transferee partner.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Distributions by A Partnership § 1.734-2 Adjustment after... by the following example: Example. Upon the death of his father, partner S acquires by inheritance a...
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
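A hedged sketch of the largest order value rule mentioned above, assuming the common convention that the job with the largest component of the continuous vector is scheduled first (the paper's exact decoding may differ).

```python
# Sketch: largest order value (LOV) rule mapping a continuous EDA individual to a job permutation.
import numpy as np

def largest_order_value(vector):
    """The job with the largest value is scheduled first, the next largest second, and so on."""
    return np.argsort(-np.asarray(vector))

x = np.array([0.3, 1.7, -0.2, 0.9])   # continuous individual sampled from the probabilistic model
print(largest_order_value(x))         # -> [1 3 0 2]
```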
Infrared and Raman spectroscopic features of plant cuticles: a review
Heredia-Guerrero, José A.; Benítez, José J.; Domínguez, Eva; Bayer, Ilker S.; Cingolani, Roberto; Athanassiou, Athanassia; Heredia, Antonio
2014-01-01
The cuticle is one of the most important plant barriers. It is an external and continuous lipid membrane that covers the surface of epidermal cells and whose main function is to prevent the massive loss of water. The spectroscopic characterization of the plant cuticle and its components (cutin, cutan, waxes, polysaccharides and phenolics) by infrared and Raman spectroscopies has provided significant advances in the knowledge of the functional groups present in the cuticular matrix and on their structural role, interaction and macromolecular arrangement. Additionally, these spectroscopies have been used in the study of cuticle interaction with exogenous molecules, degradation, distribution of components within the cuticle matrix, changes during growth and development and characterization of fossil plants. PMID:25009549
NASA Technical Reports Server (NTRS)
Bradford, D. F.; Kelejian, H. H.
1974-01-01
The results of an investigation of the value of improving information for forecasting future crop harvests are described. A theoretical model is developed to calculate the value of increased speed of availability of that information. The analysis was applied to U.S. domestic wheat consumption. It involved new estimates of a demand function for wheat and of a cost-of-storage function, along with a Monte Carlo simulation of the wheat spot and futures markets and a model of the market determination of wheat inventories. Results are shown to depend critically on the accuracy of current and proposed measurement techniques.
Shannon entropy in the research on stationary regimes and the evolution of complexity
NASA Astrophysics Data System (ADS)
Eskov, V. M.; Eskov, V. V.; Vochmina, Yu. V.; Gorbunov, D. V.; Ilyashenko, L. K.
2017-05-01
The questions of the identification of complex biological systems (complexity) as special self-organizing systems, or systems of the third type first defined by W. Weaver in 1948, continue to be of interest. No reports on the evaluation of entropy for systems of the third type were found among the publications currently available to the authors. The present study addresses the parameters of muscle biopotentials recorded using surface interference electromyography and presents the results of calculating the Shannon entropy, autocorrelation functions, and statistical distribution functions for electromyograms of subjects in different physiological states (rest and muscle tension). The results do not allow statistically reliable discrimination between the functional states of muscles. However, the data obtained by calculating electromyogram quasi-attractor parameters and matrices of paired comparisons of electromyogram samples (calculation of the number k of "coinciding" pairs among the electromyogram samples) provide an integral characteristic that allows the identification of substantial differences between the state of rest and the different states of functional activity. Modification and implementation of new methods, in combination with the novel methods of the theory of chaos and self-organization, are clearly essential. The stochastic approach paradigm is not applicable to systems of the third type because of the continuous and chaotic changes of the parameters of the state vector x(t) of an organism, or the contrasting constancy of these parameters (in the case of entropy).
Higher-Order Theory for Functionally Graded Materials
NASA Technical Reports Server (NTRS)
Aboudi, J.; Pindera, M. J.; Arnold, Steven M.
2001-01-01
Functionally graded materials (FGM's) are a new generation of engineered materials wherein the microstructural details are spatially varied through nonuniform distribution of the reinforcement phase(s). Engineers accomplish this by using reinforcements with different properties, sizes, and shapes, as well as by interchanging the roles of the reinforcement and matrix phases in a continuous manner (ref. 1). The result is a microstructure that produces continuously or discretely changing thermal and mechanical properties at the macroscopic or continuum scale. This new concept of engineering the material's microstructure marks the beginning of a revolution both in the materials science and mechanics of materials areas since it allows one, for the first time, to fully integrate the material and structural considerations into the final design of structural components. Functionally graded materials are ideal candidates for applications involving severe thermal gradients, ranging from thermal structures in advanced aircraft and aerospace engines to computer circuit boards. Owing to the many variables that control the design of functionally graded microstructures, full exploitation of the FGM's potential requires the development of appropriate modeling strategies for their response to combined thermomechanical loads. Previously, most computational strategies for the response of FGM's did not explicitly couple the material's heterogeneous microstructure with the structural global analysis. Rather, local effective or macroscopic properties at a given point within the FGM were first obtained through homogenization based on a chosen micromechanics scheme and then subsequently used in a global thermomechanical analysis.
Lam, H K; Leung, Frank H F
2007-10-01
This correspondence presents the stability analysis and performance design of continuous-time fuzzy-model-based control systems. The idea of nonparallel-distributed-compensation (non-PDC) control laws is extended to continuous-time fuzzy-model-based control systems. A nonlinear controller with non-PDC control laws is proposed to stabilize continuous-time nonlinear systems in Takagi-Sugeno form. To derive the stability-analysis result, a parameter-dependent Lyapunov function (PDLF) is employed. However, two difficulties are usually encountered: 1) the time-derivative terms produced by the PDLF will complicate the stability analysis and 2) the stability conditions are not in the form of linear-matrix inequalities (LMIs) that aid the design of feedback gains. To tackle the first difficulty, the time-derivative terms are represented by some weighted-sum terms in some existing approaches, which will increase the number of stability conditions significantly. In view of the second difficulty, some positive-definite terms are added in order to cast the stability conditions into LMIs. In this correspondence, the favorable properties of the membership functions and nonlinear control laws, which allow the introduction of some free matrices, are employed to alleviate the two difficulties while retaining the favorable properties of the PDLF-based approach. LMI-based stability conditions are derived to ensure the system stability. Furthermore, based on a common scalar performance index, LMI-based performance conditions are derived to guarantee the system performance. Simulation examples are given to illustrate the effectiveness of the proposed approach.
Flight measurements of surface pressures on a flexible supercritical research wing
NASA Technical Reports Server (NTRS)
Eckstrom, C. V.
1985-01-01
A flexible supercritical research wing, designated as ARW-1, was flight-tested as part of the NASA Drones for Aerodynamic and Structural Testing Program. Aerodynamic loads, in the form of wing surface pressure measurements, were obtained during flights at altitudes of 15,000, 20,000, and 25,000 feet at Mach numbers from 0.70 to 0.91. Surface pressure coefficients determined from pressure measurements at 80 orifice locations are presented individually as nearly continuous functions of angle of attack for constant values of Mach number. The surface pressure coefficients are also presented individually as a function of Mach number for an angle of attack of 2.0 deg. The nearly continuous values of the pressure coefficient clearly show details of the pressure gradient, which occurred in a rather narrow Mach number range. The effects of changes in angle of attack, Mach number, and dynamic pressure are also shown by chordwise pressure distributions for the range of test conditions experienced. Reynolds numbers for the tests ranged from 5.7 x 10^6 to 8.4 x 10^6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puerari, Ivânio; Elmegreen, Bruce G.; Block, David L., E-mail: puerari@inaoep.mx
2014-12-01
We examine 8 μm IRAC images of the grand design two-arm spiral galaxies M81 and M51 using a new method whereby pitch angles are locally determined as a function of scale and position, in contrast to traditional Fourier transform spectral analyses which fit to average pitch angles for whole galaxies. The new analysis is based on a correlation between pieces of a galaxy in circular windows of (lnR,θ) space and logarithmic spirals with various pitch angles. The diameter of the windows is varied to study different scales. The result is a best-fit pitch angle to the spiral structure as a function of position and scale, or a distribution function of pitch angles as a function of scale for a given galactic region or area. We apply the method to determine the distribution of pitch angles in the arm and interarm regions of these two galaxies. In the arms, the method reproduces the known pitch angles for the main spirals on a large scale, but also shows higher pitch angles on smaller scales resulting from dust feathers. For the interarms, there is a broad distribution of pitch angles representing the continuation and evolution of the spiral arm feathers as the flow moves into the interarm regions. Our method shows a multiplicity of spiral structures on different scales, as expected from gas flow processes in a gravitating, turbulent and shearing interstellar medium. We also present results for M81 using classical 1D and 2D Fourier transforms, together with a new correlation method, which shows good agreement with conventional 2D Fourier transforms.
Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke
2007-01-21
Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy for incorporating organ motion, but the applicability of such an approach depends strongly on the reproducibility of the PDF. In this paper, we investigated the dependence of PDF reproducibility on the image acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane at a frame rate of approximately 10 frames per second, and the experiments were repeated after an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked, and the reproducibility of their displacement PDFs was evaluated as a function of scan time and frame rate. The PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of frame rate; however, the PDF reproducibility was less sensitive to frame rate, presumably because the randomness of breathing dominates the effect. As the key component of PDF-based treatment planning, the reproducibility of the PDF substantially affects the dosimetric accuracy. This study provides a reference for acquiring MR-based PDFs of structures in the lung.
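A sketch of how a displacement PDF might be built from tracked positions and compared between two scans; the synthetic displacements and the total-variation overlap metric are assumptions, not the paper's exact reproducibility error.

```python
# Sketch: displacement PDFs from two scans and a simple reproducibility measure.
import numpy as np

def displacement_pdf(displacements, bins):
    hist, _ = np.histogram(displacements, bins=bins, density=True)
    return hist * np.diff(bins)          # probability per bin, sums to ~1

rng = np.random.default_rng(2)
bins = np.linspace(-15, 15, 31)                      # mm
scan1 = rng.normal(0.0, 4.0, 3000)                   # hypothetical 5-min scan
scan2 = rng.normal(0.5, 4.2, 3000)                   # repeat scan weeks later

p1, p2 = displacement_pdf(scan1, bins), displacement_pdf(scan2, bins)
reproducibility_error = 0.5 * np.abs(p1 - p2).sum()  # total variation distance
print(f"PDF reproducibility error: {reproducibility_error:.3f}")
```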
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Treatment of distributions where substantially all contributions are employee contributions (temporary). 1.72(e)-1T Section 1.72(e)-1T Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Items Specifically Included in...
Joint measurement of complementary observables in moment tomography
NASA Astrophysics Data System (ADS)
Teo, Yong Siah; Müller, Christian R.; Jeong, Hyunseok; Hradil, Zdeněk; Řeháček, Jaroslav; Sánchez-Soto, Luis L.
Wigner and Husimi quasi-distributions, owing to their functional regularity, give the two archetypal and equivalent representations of all observable parameters in continuous-variable quantum information. Balanced homodyning (HOM) and heterodyning (HET), which correspond to their associated sampling procedures, on the other hand fare very differently concerning their state or parameter reconstruction accuracies. We present a general theory of the now-known fact that HET can be tomographically more powerful than balanced homodyning for many interesting classes of single-mode quantum states, and discuss the treatment for two-mode sources.
Imaging structural and functional brain networks in temporal lobe epilepsy
Bernhardt, Boris C.; Hong, SeokJun; Bernasconi, Andrea; Bernasconi, Neda
2013-01-01
Early imaging studies in temporal lobe epilepsy (TLE) focused on the search for mesial temporal sclerosis, as its surgical removal results in clinically meaningful improvement in about 70% of patients. Nevertheless, a considerable subgroup of patients continues to suffer from post-operative seizures. Although the reasons for surgical failure are not fully understood, electrophysiological and imaging data suggest that anomalies extending beyond the temporal lobe may have negative impact on outcome. This hypothesis has revived the concept of human epilepsy as a disorder of distributed brain networks. Recent methodological advances in non-invasive neuroimaging have led to quantify structural and functional networks in vivo. While structural networks can be inferred from diffusion MRI tractography and inter-regional covariance patterns of structural measures such as cortical thickness, functional connectivity is generally computed based on statistical dependencies of neurophysiological time-series, measured through functional MRI or electroencephalographic techniques. This review considers the application of advanced analytical methods in structural and functional connectivity analyses in TLE. We will specifically highlight findings from graph-theoretical analysis that allow assessing the topological organization of brain networks. These studies have provided compelling evidence that TLE is a system disorder with profound alterations in local and distributed networks. In addition, there is emerging evidence for the utility of network properties as clinical diagnostic markers. Nowadays, a network perspective is considered to be essential to the understanding of the development, progression, and management of epilepsy. PMID:24098281
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
NASA Astrophysics Data System (ADS)
Ghadiri, Majid; Shafiei, Navvab
2016-04-01
In this study, the thermal vibration of a rotary functionally graded Timoshenko microbeam has been analyzed based on modified couple stress theory, considering temperature change for four types of temperature distribution in the thermal environment. The material properties of the FG microbeam are assumed to be temperature dependent and to vary continuously along the thickness according to a power-law form. The axial forces are also included in the model, as the thermal and true spatial variation due to rotation. Governing equations and boundary conditions have been derived by employing Hamilton's principle. The differential quadrature method is employed to solve the governing equations for cantilever and propped cantilever boundary conditions. Validation is performed by comparing the obtained results with the available literature, which indicates the accuracy of the applied method. The results show the effects of temperature changes, different boundary conditions, nondimensional angular velocity, length scale parameter, FG index and beam thickness on the fundamental, second and third nondimensional frequencies. The results determine critical values of temperature changes and other essential parameters, which can be applied to the design of micromachines such as micromotors and microturbines.
Acoustic design by topology optimization
NASA Astrophysics Data System (ADS)
Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole
2008-11-01
Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustics problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling, or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to the design of outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and of almost 30 dB when using two barriers, is achieved compared with conventional sound barriers.
Serious games for elderly continuous monitoring.
Lemus-Zúñiga, Lenin-G; Navarro-Pardo, Esperanza; Moret-Tatay, Carmen; Pocinho, Ricardo
2015-01-01
Information technology (IT) and serious games allow the older population to remain independent for longer. Hence, when designing technology for this population, developmental changes, such as changes in attention and/or perception, should be considered. For instance, a crucial developmental change relates to cognitive speed in terms of reaction time (RT). However, this variable has a skewed distribution that complicates data analysis. An alternative strategy is to fit the data to an ex-Gaussian function. Furthermore, this procedure provides parameters that have been related to underlying cognitive processes in the literature. Another issue to be considered is optimal data recording, storage and processing. For that purpose, mobile devices (smartphones and tablets) are a good option for targeting serious games where valuable information can be stored (time spent in the application, reaction time, frequency of use, and so on). The data stored on smartphones and tablets can be sent to a central computer (cloud storage), not only to fit the distribution of reaction times with mathematical functions, but also to estimate parameters that may reflect cognitive processes underlying language, aging, and decision making.
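A brief sketch of an ex-Gaussian fit to reaction times using SciPy's exponnorm distribution; the RT values are synthetic, and the parameter mapping (μ = loc, σ = scale, τ = K·scale) follows SciPy's parameterization rather than any particular study.

```python
# Sketch: fitting an ex-Gaussian to skewed reaction times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rt = rng.normal(400, 40, 2000) + rng.exponential(120, 2000)  # synthetic RTs in ms

K, loc, scale = stats.exponnorm.fit(rt)
mu, sigma, tau = loc, scale, K * scale
print(f"ex-Gaussian parameters: mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
```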
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matlis, N. H., E-mail: nmatlis@gmail.com; Gonsalves, A. J.; Steinke, S.
We present an analysis of the gas dynamics and density distributions within a capillary-discharge waveguide with an embedded supersonic jet. This device provides a target for a laser plasma accelerator which uses longitudinal structuring of the gas-density profile to enable control of electron trapping and acceleration. The functionality of the device depends sensitively on the details of the density profile, which are determined by the interaction between the pulsed gas in the jet and the continuously-flowing gas in the capillary. These dynamics are captured by spatially resolving recombination light from several emission lines of the plasma as a function of the delay between the jet and the discharge. We provide a phenomenological description of the gas dynamics as well as a quantitative evaluation of the density evolution. In particular, we show that the pressure difference between the jet and the capillary defines three regimes of operation with qualitatively different longitudinal density profiles and show that jet timing provides a sensitive method for tuning between these regimes.
Stochastic effects in a thermochemical system with Newtonian heat exchange.
Nowakowski, B; Lemarchand, A
2001-12-01
We develop a mesoscopic description of stochastic effects in the Newtonian heat exchange between a diluted gas system and a thermostat. We explicitly study the homogeneous Semenov model involving a thermochemical reaction and neglecting consumption of reactants. The master equation includes a transition rate for the thermal transfer process, which is derived on the basis of the statistics for inelastic collisions between gas particles and walls of the thermostat. The main assumption is that the perturbation of the Maxwellian particle velocity distribution can be neglected. The transition function for the thermal process admits a continuous spectrum of temperature changes, and consequently, the master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in the Semenov system in the explosive regime. The dispersion of ignition times is calculated as a function of system size. For sufficiently small systems, the probability distribution of temperature displays transient bimodality during the ignition period. The results of the stochastic description are successfully compared with those of direct simulations of microscopic particle dynamics.
Priority queues with bursty arrivals of incoming tasks
NASA Astrophysics Data System (ADS)
Masuda, N.; Kim, J. S.; Kahng, B.
2009-03-01
Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ∼ τ^{-δ}, where τ is the interevent time, and δ ≃ 1 or 3/2. The universal behaviors have been modeled via the waiting-time distribution of a task in the queue operating based on priority; the waiting time follows a power-law distribution P_w(τ) ∼ τ^{-α} with either α = 1 or 3/2 depending on the detail of queuing dynamics. In these models, the number of incoming tasks in a unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, the number of emails delivered to a mail box in a unit time we measured follows a power-law distribution with general exponent γ. For this case, we obtain analytically the exponent α, which is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop the generating function formalism to obtain the exponent α, which is distinct from the continuous time approximation used in the previous studies.
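A simplified simulation in the spirit of this model, with illustrative assumptions: bursty arrivals drawn from a discrete power law with exponent γ, uniform random priorities, and one execution per time step.

```python
# Sketch: priority queue with power-law distributed arrival bursts; waiting times are recorded.
import heapq
import numpy as np

rng = np.random.default_rng(4)

def simulate(steps=100_000, gamma=2.5, nmax=100):
    n = np.arange(1, nmax + 1)
    p = n ** (-float(gamma))
    p /= p.sum()                                      # P(n) ~ n^-gamma, n = 1..nmax
    queue, waits = [], []
    for t in range(steps):
        for _ in range(rng.choice(n, p=p)):           # bursty arrivals this time step
            heapq.heappush(queue, (-rng.random(), t)) # max-priority via negated key
        if queue:
            _, arrival = heapq.heappop(queue)         # execute the highest-priority task
            waits.append(t - arrival)
    return np.array(waits)

waits = simulate()
print("mean waiting time of executed tasks:", waits.mean())
```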
NASA Astrophysics Data System (ADS)
Moebius, F.; Or, D.
2012-12-01
Dynamics of fluid fronts in porous media shape transport properties of the unsaturated zone and affect management of petroleum reservoirs and their storage properties. What appears macroscopically as smooth and continuous motion of a displacement fluid front may involve numerous rapid interfacial jumps often resembling avalanches of invasion events. Direct observations using high-speed camera and pressure sensors in sintered glass micro-models provide new insights on the influence of flow rates, pore size, and gravity on invasion events and on burst size distribution. Fundamental differences emerge between geometrically-defined pores and "functional" pores invaded during a single burst (invasion event). The waiting times distribution of individual invasion events and decay times of inertial oscillations (following a rapid interfacial jump) are characteristics of different displacement regimes. An invasion percolation model with gradients and including the role of inertia provide a framework for linking flow regimes with invasion sequences and phase entrapment. Model results were compared with measurements and with early studies on invasion burst sizes and waiting times distribution during slow drainage processes by Måløy et al. [1992]. The study provides new insights into the discrete invasion events and their weak links with geometrically-deduced pore geometry. Results highlight factors controlling pore invasion events that exert strong influence on macroscopic phenomena such as front morphology and residual phase entrapment shaping hydraulic properties after the passage of a fluid front.
Elucidation of spin echo small angle neutron scattering correlation functions through model studies.
Shew, Chwen-Yang; Chen, Wei-Ren
2012-02-14
Several single-modal Debye correlation functions to approximate part of the overall Debye correlation function of liquids are closely examined for elucidating their behavior in the corresponding spin echo small angle neutron scattering (SESANS) correlation functions. We find that the maximum length scale of a Debye correlation function is identical to that of its SESANS correlation function. For discrete Debye correlation functions, the peak of the SESANS correlation function emerges at their first discrete point, whereas for continuous Debye correlation functions with greater width, the peak position shifts to a greater value. In both cases, the intensity and shape of the peak of the SESANS correlation function are determined by the width of the Debye correlation functions. Furthermore, we mimic the intramolecular and intermolecular Debye correlation functions of liquids composed of interacting particles based on a simple model to elucidate their competition in the SESANS correlation function. Our calculations show that the first local minimum of a SESANS correlation function can be negative or positive. By adjusting the spatial distribution of the intermolecular Debye function in the model, the calculated SESANS spectra exhibit a profile consistent with that of hard-sphere and sticky-hard-sphere liquids predicted by more sophisticated liquid state theory and computer simulation. © 2012 American Institute of Physics.
Detecting Genetic Interactions for Quantitative Traits Using m-Spacing Entropy Measure
Yee, Jaeyong; Kwon, Min-Seok; Park, Taesung; Park, Mira
2015-01-01
A number of statistical methods for detecting gene-gene interactions have been developed in genetic association studies with binary traits. However, many phenotype measures are intrinsically quantitative and categorizing continuous traits may not always be straightforward and meaningful. Association of gene-gene interactions with an observed distribution of such phenotypes needs to be investigated directly without categorization. Information gain based on entropy measure has previously been successful in identifying genetic associations with binary traits. We extend the usefulness of this information gain by proposing a nonparametric evaluation method of conditional entropy of a quantitative phenotype associated with a given genotype. Hence, the information gain can be obtained for any phenotype distribution. Because any functional form, such as Gaussian, is not assumed for the entire distribution of a trait or a given genotype, this method is expected to be robust enough to be applied to any phenotypic association data. Here, we show its use to successfully identify the main effect, as well as the genetic interactions, associated with a quantitative trait. PMID:26339620
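A hedged sketch of the m-spacing idea: the Vasicek m-spacing estimator of differential entropy is applied to a quantitative trait, and information gain is computed for a hypothetical genotype grouping; the estimator form and data are illustrative, not the authors' exact implementation.

```python
# Sketch: m-spacing entropy of a quantitative trait and the information gain of a genotype grouping.
import numpy as np

def mspacing_entropy(x, m=None):
    """Vasicek m-spacing estimate of differential entropy (nats)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

rng = np.random.default_rng(5)
trait = rng.normal(0, 1, 900)                 # quantitative phenotype
genotype = rng.integers(0, 3, 900)            # hypothetical SNP genotypes 0/1/2

h_total = mspacing_entropy(trait)
h_cond = sum((genotype == g).mean() * mspacing_entropy(trait[genotype == g])
             for g in np.unique(genotype))
print(f"information gain: {h_total - h_cond:.4f} nats")
```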
A Network Scheduling Model for Distributed Control Simulation
NASA Technical Reports Server (NTRS)
Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot
2016-01-01
Distributed engine control is a hardware technology that radically alters the architecture for aircraft engine control systems. Of its own accord, it does not change the function of control, rather it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures through virtue of analog signaling, moving forward, it will be transmitted digitally in serial fashion over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller, therefore it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.
NASA Technical Reports Server (NTRS)
Silk, J. K.; Kahler, S. W.; Krieger, A. S.; Vaiana, G. S.
1976-01-01
The X-ray flare of 9 August 1973 was characterized by a spatially small kernel structure which persisted throughout its duration. The decay phase of this flare was observed in the objective grating mode of the X-ray telescope aboard the Skylab. Data analysis was carried out by scanning the images with a microdensitometer, converting the density arrays to energy using laboratory film calibration data and taking cross sections of the energy images. The 9 August flare shows two distinct periods in its decay phase, involving both cooling and material loss. The objective grating observations reveal that the two phenomena are separated in time. During the earlier phase of the flare decay, the distribution of emission measure as a function of temperature is changing, the high temperature component of the distribution being depleted relative to the cooler body of plasma. As the decay continues, the emission measure distribution stabilizes and the flux diminishes as the amount of material at X-ray emitting temperatures decreases.
NASA Astrophysics Data System (ADS)
Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.
2017-04-01
A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance and extreme monthly values. A 20-year moving time window, shifted in successive 1-year steps, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonally windowed data were fitted with the Gamma distribution function. Scale and shape parameters were computed using maximum likelihood estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. Such increases did not occur simultaneously throughout the region. Some locations exhibited continuous increases only in the minimum QT.
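A sketch of the fitting step, assuming synthetic monthly discharges in place of the Colombian records: Gamma parameters are obtained by maximum likelihood and bootstrapped with 1000 resamples to give confidence intervals.

```python
# Sketch: Gamma MLE fit of a 20-year monthly streamflow window with bootstrap confidence intervals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
flow = rng.gamma(shape=2.0, scale=150.0, size=240)    # 20 years of monthly discharge (m^3/s), synthetic

shape, _, scale = stats.gamma.fit(flow, floc=0)       # MLE with location fixed at 0
boot = np.array([stats.gamma.fit(rng.choice(flow, flow.size, replace=True), floc=0)
                 for _ in range(1000)])
shape_ci = np.percentile(boot[:, 0], [2.5, 97.5])
scale_ci = np.percentile(boot[:, 2], [2.5, 97.5])
print(f"shape={shape:.2f} (95% CI {shape_ci.round(2)}), scale={scale:.1f} (95% CI {scale_ci.round(1)})")
```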
NASA Astrophysics Data System (ADS)
Martinez, R.; Larouche, D.; Cailletaud, G.; Guillot, I.; Massinon, D.
2015-06-01
The precipitation of Al2Cu particles in a 319 T7 aluminum alloy has been modeled. A theoretical approach enables the concomitant computation of nucleation, growth and coarsening. The framework is based on an implicit finite-difference scheme. The continuity equation is discretized in time and space to obtain a matrix form, and the inversion of a tridiagonal matrix determines the evolution of the size distribution of Al2Cu particles at t + Δt. The fluxes between the boundaries, as well as the fluxes at the boundaries, are computed so as to conserve the mass of the system. The essential results of the model are compared with TEM measurements. Simulations provide quantitative insight into the impact of the cooling rate on the particle size distribution, and their results agree with the TEM measurements. This kind of multiscale approach opens new perspectives in the design of highly loaded components such as cylinder heads, enabling a more precise prediction of the microstructure and its evolution as a function of continuous cooling rate.
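The implicit update described above amounts to solving a tridiagonal system at each time step. The sketch below shows a standard Thomas-algorithm solver with placeholder coefficients; it is not the paper's actual discretized continuity equation.

```python
# Sketch: Thomas algorithm for the tridiagonal system arising from an implicit size-distribution step.
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system; lower/upper are the sub- and super-diagonals."""
    n = diag.size
    b = diag.astype(float).copy()
    c = upper.astype(float).copy()
    d = rhs.astype(float).copy()
    for i in range(1, n):                      # forward elimination
        w = lower[i - 1] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Placeholder system standing in for the discretized update of the particle size distribution
n = 6
dist_new = thomas_solve(np.full(n - 1, -0.5), np.full(n, 2.0), np.full(n - 1, -0.5), np.ones(n))
print(dist_new)
```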
A complete set of two-dimensional harmonic vortices on a spherical surface
NASA Astrophysics Data System (ADS)
Esparza, Christian; Rendón, Pablo Luis; Ley Koo, Eugenio
2018-03-01
The solutions of the Euler equations on a spherical surface are constructed, starting from a vector velocity potential A in the radial direction and with a two-dimensional spherical harmonic variation of order m and well-defined parity under φ ↦ -φ. The solutions are well-behaved on the entire surface and continuous at the position of a parallel circle θ = θ_0, where the vorticity is shown to be harmonically distributed. The velocity field is evaluated as the curl of the vector potential: it is shown that the velocity is divergenceless and distributed on the spherical surface. Its polar components at the parallel circle are shown to be continuous, confirming its divergenceless nature, while its azimuthal components are discontinuous at the circle, and their discontinuity is a measure of the vorticity in the radial direction. A closed form for the velocity field lines is also obtained in terms of fixed values of the scalar harmonic function associated with the vector potential. Additionally, the connections of the solutions on a spherical surface with their circular, elliptic and bipolar counterparts on the equatorial plane are implemented via stereographic projections.
Escamilla-Martínez, Elena; Martínez-Nova, Alfonso; Gómez-Martín, Beatriz; Sánchez-Rodríguez, Raquel; Fernández-Seguín, Lourdes María
2013-01-01
Fatigue due to running has been shown to contribute to changes in plantar pressure distribution. However, little is known about changes in foot posture after running. We sought to compare the foot posture index before and after moderate exercise and to relate any changes to plantar pressure patterns. A baropodometric evaluation was made, using the FootScan platform (RSscan International, Olen, Belgium), of 30 men who were regular runners and their foot posture was examined using the Foot Posture Index before and after a 60-min continuous run at a moderate pace (3.3 m/sec). Foot posture showed a tendency toward pronation after the 60-min run, gaining 2 points in the foot posture index. The total support and medial heel contact areas increased, as did pressures under the second metatarsal head and medial heel. Continuous running at a moderate speed (3.3 m/sec) induced changes in heel strike related to enhanced pronation posture, indicative of greater stress on that zone after physical activity. This observation may help us understand the functioning of the foot, prevent injuries, and design effective plantar orthoses in sport.
Kim, Namje; Han, Sang-Pil; Ryu, Han-Cheol; Ko, Hyunsung; Park, Jeong-Woo; Lee, Donghun; Jeon, Min Yong; Park, Kyung Hyun
2012-07-30
A widely tunable dual mode laser diode with a single cavity structure is demonstrated. This novel device consists of a distributed feedback (DFB) laser diode and distributed Bragg reflector (DBR). Micro-heaters are integrated on the top of each section for continuous and independent wavelength tuning of each mode. By using a single gain medium in the DFB section, an effective common optical cavity and common modes are realized. The laser diode shows a wide tunability of the optical beat frequency, from 0.48 THz to over 2.36 THz. Continuous wave THz radiation is also successfully generated with low-temperature grown InGaAs photomixers from 0.48 GHz to 1.5 THz.
Huang, Jie; Lan, Xinwei; Luo, Ming; Xiao, Hai
2014-07-28
This paper reports a spatially continuous distributed fiber optic sensing technique using optical carrier based microwave interferometry (OCMI), in which many optical interferometers with the same or different optical path differences are interrogated in the microwave domain and their locations can be unambiguously determined. The concept is demonstrated using cascaded weak optical reflectors along a single optical fiber, where any two arbitrary reflectors are paired to define a low-finesse Fabry-Perot interferometer. While spatially continuous (i.e., no dark zone), fully distributed strain measurement was used as an example to demonstrate the capability, the proposed concept may also be implemented on other types of waveguide or free-space interferometers and used for distributed measurement of various physical, chemical and biological quantities.
Dynamic frailty models based on compound birth-death processes.
Putter, Hein; van Houwelingen, Hans C
2015-07-01
Frailty models are used in survival analysis to model unobserved heterogeneity. They accommodate such heterogeneity by the inclusion of a random term, the frailty, which is assumed to multiply the hazard of a subject (individual frailty) or the hazards of all subjects in a cluster (shared frailty). Typically, the frailty term is assumed to be constant over time. This is a restrictive assumption and extensions to allow for time-varying or dynamic frailties are of interest. In this paper, we extend the auto-correlated frailty models of Henderson and Shimakura and of Fiocco, Putter and van Houwelingen, developed for longitudinal count data and discrete survival data, to continuous survival data. We present a rigorous construction of the frailty processes in continuous time based on compound birth-death processes. When the frailty processes are used as mixtures in models for survival data, we derive the marginal hazards and survival functions and the marginal bivariate survival functions and cross-ratio function. We derive distributional properties of the processes, conditional on observed data, and show how to obtain the maximum likelihood estimators of the parameters of the model using a (stochastic) expectation-maximization algorithm. The methods are applied to a publicly available data set. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Kondakci, Yasar; Zayim, Merve; Beycioglu, Kadir; Sincar, Mehmet; Ugurlu, Celal T
2016-01-01
This study aims at building a theoretical base for continuous change in education and using this base to test the mediating roles of two key contextual variables, knowledge sharing and trust, in the relationship between the distributed leadership perceptions and continuous change behaviours of teachers. Data were collected from 687 public school…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Probst, Alexander J.; Ladd, Bethany; Jarett, Jessica K.
An enormous diversity of previously unknown bacteria and archaea has been discovered recently, yet their functional capacities and distributions in the terrestrial subsurface remain uncertain. Here, we continually sampled a CO2-driven geyser (Colorado Plateau, Utah, USA) over its 5-day eruption cycle to test the hypothesis that stratified, sandstone-hosted aquifers sampled over three phases of the eruption cycle have microbial communities that differ both in membership and function. Genome-resolved metagenomics, single-cell genomics and geochemical analyses confirmed this hypothesis and linked microorganisms to groundwater compositions from different depths. Autotrophic Candidatus “Altiarchaeum sp.” and phylogenetically deep-branching nanoarchaea dominate the deepest groundwater. A nanoarchaeon with limited metabolic capacity is inferred to be a potential symbiont of the Ca. “Altiarchaeum”. Candidate Phyla Radiation bacteria are also present in the deepest groundwater and they are relatively abundant in water from intermediate depths. During the recovery phase of the geyser, microaerophilic Fe- and S-oxidizers have high in situ genome replication rates. Autotrophic Sulfurimonas sustained by aerobic sulfide oxidation and with the capacity for N2 fixation dominate the shallow aquifer. Overall, 104 different phylum-level lineages are present in water from these subsurface environments, with uncultivated archaea and bacteria partitioned to the deeper subsurface.
Effect of posttranslational modifications on enzyme function and assembly.
Ryšlavá, Helena; Doubnerová, Veronika; Kavan, Daniel; Vaněk, Ondřej
2013-10-30
The detailed examination of enzyme molecules by mass spectrometry and other techniques continues to identify hundreds of distinct PTMs. Recently, global analyses of enzymes using methods of contemporary proteomics revealed widespread distribution of PTMs on many key enzymes distributed in all cellular compartments. Critically, patterns of multiple enzymatic and nonenzymatic PTMs within a single enzyme are now functionally evaluated providing a holistic picture of a macromolecule interacting with low molecular mass compounds, some of them being substrates, enzyme regulators, or activated precursors for enzymatic and nonenzymatic PTMs. Multiple PTMs within a single enzyme molecule and their mutual interplays are critical for the regulation of catalytic activity. Full understanding of this regulation will require detailed structural investigation of enzymes, their structural analogs, and their complexes. Further, proteomics is now integrated with molecular genetics, transcriptomics, and other areas leading to systems biology strategies. These allow the functional interrogation of complex enzymatic networks in their natural environment. In the future, one might envisage the use of robust high throughput analytical techniques that will be able to detect multiple PTMs on a global scale of individual proteomes from a number of carefully selected cells and cellular compartments. This article is part of a Special Issue entitled: Posttranslational Protein modifications in biology and Medicine. Copyright © 2013 Elsevier B.V. All rights reserved.
Borghero, Francesco; Demontis, Francesco
2016-09-01
In the framework of geometrical optics, we consider the following inverse problem: given a two-parameter family of curves (congruence) (i.e., f(x,y,z)=c1, g(x,y,z)=c2), construct the refractive-index distribution function n=n(x,y,z) of a 3D continuous transparent inhomogeneous isotropic medium, allowing for the creation of the given congruence as a family of monochromatic light rays. We solve this problem by following two different procedures: 1. By applying Fermat's principle, we establish a system of two first-order linear nonhomogeneous PDEs in the single unknown function n=n(x,y,z) relating the assigned congruence of rays to all possible refractive-index profiles compatible with this family. Moreover, we furnish an analytical proof that the family of rays must be a normal congruence. 2. By applying the eikonal equation, we establish a second system of two first-order linear homogeneous PDEs whose solutions give the equation S(x,y,z)=const. of the geometric wavefronts and, consequently, all pertinent refractive-index distribution functions n=n(x,y,z). Finally, we compare the two procedures, discussing appropriate examples having exact solutions.
Yasumatsu, Naoya; Watanabe, Shinichi
2012-02-01
We propose and develop a method to quickly and precisely determine the polarization direction of coherent terahertz electromagnetic waves generated by femtosecond laser pulses. The measurement system consists of a conventional terahertz time-domain spectroscopy system with the electro-optic (EO) sampling method, but we add a new functionality by continuously rotating the EO crystal with the angular frequency ω. We find a simple yet useful formulation of the EO signal as a function of the crystal orientation, which enables a lock-in-like detection of both the electric-field amplitude and the absolute polarization direction of the terahertz waves with respect to the probe laser pulse polarization direction at the same time. A single measurement finishes within around two periods of the crystal rotation (∼21 ms), and we experimentally prove that the accuracy of the polarization measurement does not suffer from the long-term amplitude fluctuation of the terahertz pulses. The distribution of polarization directions measured in repeated runs is well fitted by a Gaussian distribution function with a standard deviation of σ = 0.56°. The developed technique is useful for fast, direct determination of the polarization state of terahertz electromagnetic waves for polarization imaging applications as well as for precise terahertz Faraday or Kerr rotation spectroscopy.
Probst, Alexander J.; Ladd, Bethany; Jarett, Jessica K.; ...
2018-01-29
An enormous diversity of previously unknown bacteria and archaea has been discovered recently, yet their functional capacities and distributions in the terrestrial subsurface remain uncertain. Here, we continually sampled a CO2-driven geyser (Colorado Plateau, Utah, USA) over its 5-day eruption cycle to test the hypothesis that stratified, sandstone-hosted aquifers sampled over three phases of the eruption cycle have microbial communities that differ both in membership and function. Genome-resolved metagenomics, single-cell genomics and geochemical analyses confirmed this hypothesis and linked microorganisms to groundwater compositions from different depths. Autotrophic Candidatus “Altiarchaeum sp.” and phylogenetically deep-branching nanoarchaea dominate the deepest groundwater. A nanoarchaeon with limited metabolic capacity is inferred to be a potential symbiont of the Ca. “Altiarchaeum”. Candidate Phyla Radiation bacteria are also present in the deepest groundwater and they are relatively abundant in water from intermediate depths. During the recovery phase of the geyser, microaerophilic Fe- and S-oxidizers have high in situ genome replication rates. Autotrophic Sulfurimonas sustained by aerobic sulfide oxidation and with the capacity for N2 fixation dominate the shallow aquifer. Overall, 104 different phylum-level lineages are present in water from these subsurface environments, with uncultivated archaea and bacteria partitioned to the deeper subsurface.
Stochastic theory of size exclusion chromatography by the characteristic function approach.
Dondi, Francesco; Cavazzini, Alberto; Remelli, Maurizio; Felinger, Attila; Martin, Michel
2002-01-18
A general stochastic theory of size exclusion chromatography (SEC), able to account for size dependence in both the pore ingress and egress processes, moving zone dispersion and pore size distribution, was developed. The relationship between stochastic-chromatographic and batch equilibrium conditions is discussed, and the fundamental role of the 'ergodic' hypothesis in establishing a link between them is emphasized. SEC models are solved by means of the characteristic function method, and chromatographic parameters such as plate height, peak skewness and excess are derived. The peak shapes are obtained by numerical inversion of the characteristic function under the most general conditions of the exploited models. Separate size effects on the pore ingress and pore egress processes are investigated, and their effects on both retention selectivity and efficiency are clearly shown. The peak splitting phenomenon and the peak tailing due to incomplete sample sorption near the exclusion limit are discussed. An SEC model for columns with two types of pores is discussed, and several effects on retention selectivity and efficiency arising from pore size differences and their relative abundance are singled out. The relevance of moving zone dispersion to separation is investigated. The present approach proves to be general and able to account for more complex SEC conditions such as continuous pore size distributions and mixed retention mechanisms.
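A hedged sketch of the numerical inversion step, using a toy exponentially modified Gaussian characteristic function rather than the full SEC model; the inversion integral is the textbook Fourier formula, truncated at a finite upper limit.

```python
# Sketch: recovering a peak shape by numerical inversion of a characteristic function.
import numpy as np
from scipy.integrate import quad

def cf(t, mu=10.0, sigma=0.5, tau=2.0):
    """Characteristic function of a Gaussian convolved with an exponential (toy peak model)."""
    return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2) / (1.0 - 1j * tau * t)

def pdf_from_cf(x, cf, tmax=50.0):
    """p(x) = (1/pi) * integral_0^inf Re[cf(t) exp(-i t x)] dt, truncated at tmax."""
    integrand = lambda t: (cf(t) * np.exp(-1j * t * x)).real
    value, _ = quad(integrand, 0.0, tmax, limit=400)
    return value / np.pi

xs = np.linspace(5, 25, 5)
print([round(pdf_from_cf(x, cf), 4) for x in xs])
```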
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Accumulation distributions of trusts other than certain foreign trusts; in general. 1.665(b)-1 Section 1.665(b)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Treatment of Excess Distributions of Trusts Applicable to Taxable...
An annular superposition integral for axisymmetric radiators.
Kelly, James F; McGough, Robert J
2007-02-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.
Shehla, Romana; Khan, Athar Ali
2016-01-01
Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, whose hazard can be increasing as well as bathtub-shaped, is studied. This article presents a Bayesian analysis of the model and shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under weakly informative priors for the parameters. In addition, inference focuses on the posterior distribution of nonlinear functions of the parameters. The model is also extended to include continuous explanatory variables, and the R code is illustrated throughout. Two real data sets are considered for illustrative purposes.
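A small sketch of the bathtub behavior, assuming the Smith-Bain parameterization of the exponential power hazard (the paper's exact parameterization may differ); it is written in Python rather than R purely for illustration.

```python
# Sketch: hazard of the exponential power model, assumed Smith-Bain form
# h(t) = a*l*(l*t)**(a-1)*exp((l*t)**a); for a < 1 the hazard is bathtub-shaped.
import numpy as np

def hazard(t, a=0.5, l=0.1):
    return a * l * (l * t) ** (a - 1.0) * np.exp((l * t) ** a)

t = np.array([0.5, 2.0, 5.0, 10.0, 20.0, 40.0])
print(hazard(t).round(4))   # decreases, flattens, then rises: a bathtub shape
```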
NASA Astrophysics Data System (ADS)
Touil, B.; Bendib, A.; Bendib-Kalache, K.
2017-02-01
The longitudinal dielectric function is derived analytically from the relativistic Vlasov equation for arbitrary values of the relevant parameter z = mc^2/T, where m is the rest electron mass, c is the speed of light, and T is the electron temperature in energy units. A new analytical approach based on the Legendre polynomial expansion and continued fractions was used. An analytical expression for the electron distribution function was derived. The real part of the dispersion relation and the damping rate of electron plasma waves are calculated both analytically and numerically over the whole range of the parameter z. The results obtained significantly improve the previous results reported in the literature. For practical purposes, explicit expressions for the real part of the dispersion relation and the damping rate in the range z > 30 and in the strongly relativistic regime are also proposed.
Cerebral White Matter Integrity and Cognitive Aging: Contributions from Diffusion Tensor Imaging
Madden, David J.; Bennett, Ilana J.; Song, Allen W.
2009-01-01
The integrity of cerebral white matter is critical for efficient cognitive functioning, but little is known regarding the role of white matter integrity in age-related differences in cognition. Diffusion tensor imaging (DTI) measures the directional displacement of molecular water and as a result can characterize the properties of white matter that combine to restrict diffusivity in a spatially coherent manner. This review considers DTI studies of aging and their implications for understanding adult age differences in cognitive performance. Decline in white matter integrity contributes to a disconnection among distributed neural systems, with a consistent effect on perceptual speed and executive functioning. The relation between white matter integrity and cognition varies across brain regions, with some evidence suggesting that age-related effects exhibit an anterior-posterior gradient. With continued improvements in spatial resolution and integration with functional brain imaging, DTI holds considerable promise, both for theories of cognitive aging and for translational application. PMID:19705281
Differential Location and Distribution of Hepatic Immune Cells
Freitas-Lopes, Maria Alice; Mafra, Kassiana; David, Bruna A.; Carvalho-Gontijo, Raquel; Menezes, Gustavo B.
2017-01-01
The liver is one of the main organs in the body, performing several metabolic and immunological functions that are indispensable to the organism. The liver is strategically positioned in the abdominal cavity between the intestine and the systemic circulation. Due to its location, the liver is continually exposed to nutritional insults, microbiota products from the intestinal tract, and to toxic substances. Hepatocytes are the major functional constituents of the hepatic lobes, and perform most of the liver’s secretory and synthesizing functions, although another important cell population sustains the vitality of the organ: the hepatic immune cells. Liver immune cells play a fundamental role in host immune responses and exquisite mechanisms are necessary to govern the density and the location of the different hepatic leukocytes. Here we discuss the location of these pivotal cells within the different liver compartments, and how their frequency and tissular location can dictate the fate of liver immune responses. PMID:29215603
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ (x), whose probability distribution P [ ρ ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
Penetration of magnetosonic waves into the plasmasphere observed by the Van Allen Probes
Xiao, Fuliang; Zhou, Qinghua; He, Yihua; ...
2015-09-11
During the small storm on 14–15 April 2014, Van Allen Probe A measured a continuously distinct proton ring distribution and enhanced magnetosonic (MS) waves along its orbit outside the plasmapause. Inside the plasmasphere, strong MS waves were still present but the distinct proton ring distribution was falling steeply with distance. We adopt a sum of subtracted bi-Maxwellian components to model the observed proton ring distribution and simulate the wave trajectory and growth. MS waves at first propagate toward lower L shells outside the plasmasphere, with rapidly increasing path gains related to the continuous proton ring distribution. The waves then gradually cross the plasmapause into the deep plasmasphere, with almost unchanged path gains due to the falling proton ring distribution and higher ambient density. These results present the first report on how MS waves penetrate into the plasmasphere with the aid of the continuous proton ring distributions during weak geomagnetic activities.
Statistical distribution of blood serotonin as a predictor of early autistic brain abnormalities
Janušonis, Skirmantas
2005-01-01
Background A wide range of abnormalities has been reported in autistic brains, but these abnormalities may be the result of an earlier underlying developmental alteration that may no longer be evident by the time autism is diagnosed. The most consistent biological finding in autistic individuals has been their statistically elevated levels of 5-hydroxytryptamine (5-HT, serotonin) in blood platelets (platelet hyperserotonemia). The early developmental alteration of the autistic brain and the autistic platelet hyperserotonemia may be caused by the same biological factor expressed in the brain and outside the brain, respectively. Unlike the brain, blood platelets are short-lived and continue to be produced throughout the life span, suggesting that this factor may continue to operate outside the brain years after the brain is formed. The statistical distributions of the platelet 5-HT levels in normal and autistic groups have characteristic features and may contain information about the nature of this yet unidentified factor. Results The identity of this factor was studied by using a novel, quantitative approach that was applied to published distributions of the platelet 5-HT levels in normal and autistic groups. It was shown that the published data are consistent with the hypothesis that a factor that interferes with brain development in autism may also regulate the release of 5-HT from gut enterochromaffin cells. Numerical analysis revealed that this factor may be non-functional in autistic individuals. Conclusion At least some biological factors, the abnormal function of which leads to the development of the autistic brain, may regulate the release of 5-HT from the gut years after birth. If the present model is correct, it will allow future efforts to be focused on a limited number of gene candidates, some of which have not been suspected to be involved in autism (such as the 5-HT4 receptor gene) based on currently available clinical and experimental studies. PMID:16029508
Magnetorheological Finishing for Imprinting Continuous Phase Plate Structure onto Optical Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menapace, J A; Dixit, S N; Genin, F Y
2004-01-05
Magnetorheological finishing (MRF) techniques have been developed to manufacture continuous phase plates (CPPs) and custom phase corrective structures on polished fused silica surfaces. These phase structures are important for laser applications requiring precise manipulation and control of beam-shape, energy distribution, and wavefront profile. The MRF's unique deterministic sub-aperture polishing characteristics make it possible to imprint complex topographical information onto optical surfaces at spatial scale-lengths approaching 1 mm. In this study, we present the results of experiments and model calculations that explore imprinting two-dimensional sinusoidal structures. Results show how the MRF removal function impacts and limits imprint fidelity and what must be done to arrive at a high-quality surface. We also present several examples of this imprinting technology for fabrication of phase correction plates and CPPs for use at high fluences.
Crack propagation in functionally graded strip under thermal shock
NASA Astrophysics Data System (ADS)
Ivanov, I. V.; Sadowski, T.; Pietras, D.
2013-09-01
The thermal shock problem in a strip made of a functionally graded composite with an interpenetrating network micro-structure of Al2O3 and Al is analysed numerically. The material considered here could be used in brake disks or cylinder liners. In both applications it is subjected to thermal shock. The description of the position-dependent properties of the considered functionally graded material is based on experimental data. Continuous functions were constructed for the Young's modulus, thermal expansion coefficient, thermal conductivity and thermal diffusivity and implemented as user-defined material properties in user-defined subroutines of the commercial finite element software ABAQUS™. The distributions inside the strip of the thermal stress and of the residual stress from the manufacturing process are considered. The solution of the transient heat conduction problem for thermal shock is used for crack propagation simulation using the XFEM method. The crack length developed during the thermal shock is the criterion for crack resistance of the different gradation profiles, as a step towards optimization of the composition gradient with respect to thermal shock sensitivity.
NASA Astrophysics Data System (ADS)
Reimberg, Paulo; Bernardeau, Francis
2018-01-01
We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We are able to precisely quantify how this latter approximation affects the statistical properties of Map. In particular, we derive the corrective term for the skewness of Map and reconstruct its one-point PDF.
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Optimal solution and optimality condition of the Hunter-Saxton equation
NASA Astrophysics Data System (ADS)
Shen, Chunyu
2018-02-01
This paper is devoted to the optimal distributed control problem governed by the Hunter-Saxton equation with constraints on the control. We first investigate the existence and uniqueness of weak solution for the controlled system with appropriate initial value and boundary conditions. In contrast with our previous research, the proof of solution mapping is local Lipschitz continuous, which is one big improvement. Second, based on the well-posedness result, we find a unique optimal control and optimal solution for the controlled system with the quadratic cost functional. Moreover, we establish the sufficient and necessary optimality condition of an optimal control by means of the optimal control theory, not limited to the necessary condition, which is another major novelty of this paper. We also discuss the optimality conditions corresponding to two physical meaningful distributed observation cases.
Continuous Problem of Function Continuity
ERIC Educational Resources Information Center
Jayakody, Gaya; Zazkis, Rina
2015-01-01
We examine different definitions presented in textbooks and other mathematical sources for "continuity of a function at a point" and "continuous function" in the context of introductory level Calculus. We then identify problematic issues related to definitions of continuity and discontinuity: inconsistency and absence of…
The turbomachine blading design using S2-S1 approach
NASA Technical Reports Server (NTRS)
Luu, T. S.; Bencherif, L.; Viney, B.; Duc, J. M. Nguyen
1991-01-01
The boundary conditions corresponding to the design problem when the blades being simulated by the bound vorticity distribution are presented. The 3D flow is analyzed by the two steps S2 - S1 approach. In the first step, the number of blades is supposed to be infinite, the vortex distribution is transformed into an axisymmetric one, so that the flow field can be analyzed in a meridional plane. The thickness distribution of the blade producing the flow channel striction is taken into account by the modification of metric tensor in the continuity equation. Using the meridional stream function to define the flow field, the mass conservation is satisfied automatically. The governing equation is deduced from the relation between the azimuthal component of the vorticity and the meridional velocity. The value of the azimuthal component of the vorticity is provided by the hub to shroud equilibrium condition. This step leads to the determination of the axisymmetric stream sheets as well as the approximate camber surface of the blade. In the second step, the finite number of blades is taken into account, the inverse problem corresponding to the blade to blade flow confined in each stream sheet is analyzed. The momentum equation implies that the free vortex of the absolute velocity must be tangential to the stream sheet. The governing equation for the blade to blade flow stream function is deduced from this condition. At the beginning, the upper and the lower surfaces of the blades are created from the camber surface obtained from the first step with the assigned thickness distribution. The bound vorticity distribution and the penetrating flux conservation applied on the presumed blade surface constitute the boundary conditions of the inverse problem. The detection of this flux leads to the rectification of the geometry of the blades.
High statistical heterogeneity is more frequent in meta-analysis of continuous than binary outcomes.
Alba, Ana C; Alexander, Paul E; Chang, Joanne; MacIsaac, John; DeFry, Samantha; Guyatt, Gordon H
2016-02-01
We compared the distribution of heterogeneity in meta-analyses of binary and continuous outcomes. We searched citations in MEDLINE and Cochrane databases for meta-analyses of randomized trials published in 2012 that reported a measure of heterogeneity of either binary or continuous outcomes. Two reviewers independently performed eligibility screening and data abstraction. We evaluated the distribution of I² in meta-analyses of binary and continuous outcomes and explored hypotheses explaining the difference in distributions. After full-text screening, we selected 671 meta-analyses evaluating 557 binary and 352 continuous outcomes. Heterogeneity as assessed by I² proved higher in continuous than in binary outcomes: the proportion of continuous and binary outcomes reporting an I² of 0% was 34% vs. 52%, respectively, and reporting an I² of 60-100% was 39% vs. 14%. In continuous but not binary outcomes, I² increased with a larger number of studies included in a meta-analysis. Increased precision and sample size do not explain the larger I² found in meta-analyses of continuous outcomes with a larger number of studies. Meta-analyses evaluating continuous outcomes showed substantially higher I² than meta-analyses of binary outcomes. Results suggest that differing standards for interpreting I² in continuous vs. binary outcomes may be appropriate. Copyright © 2016 Elsevier Inc. All rights reserved.
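For reference, I² is computed from Cochran's Q with fixed-effect weights; the sketch below shows the calculation on hypothetical effect estimates and within-study variances.

```python
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect estimates
    and their within-study variances (fixed-effect weights)."""
    w = 1.0 / np.asarray(variances)
    y = np.asarray(effects)
    pooled = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled) ** 2)
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# five hypothetical mean differences (a continuous outcome) and their variances
q, i2 = i_squared([0.2, 0.5, -0.1, 0.8, 0.3], [0.04, 0.05, 0.03, 0.06, 0.04])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```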
Lau, Phei Li; Allen, Ray W K
2013-01-01
Summary The palladium metal catalysed Heck reaction of 4-iodoanisole with styrene or methyl acrylate has been studied in a continuous plug flow reactor (PFR) using supercritical carbon dioxide (scCO2) as the solvent, with THF and methanol as modifiers. The catalyst was 2% palladium on silica and the base was diisopropylethylamine due to its solubility in the reaction solvent. No phosphine co-catalysts were used so the work-up procedure was simplified and the green credentials of the reaction were enhanced. The reactions were studied as a function of temperature, pressure and flow rate and in the case of the reaction with styrene compared against a standard, stirred autoclave reaction. Conversion was determined and, in the case of the reaction with styrene, the isomeric product distribution was monitored by GC. In the case of the reaction with methyl acrylate the reactor was scaled from a 1.0 mm to 3.9 mm internal diameter and the conversion and turnover frequency determined. The results show that the Heck reaction can be effectively performed in scCO2 under continuous flow conditions with a palladium metal, phosphine-free catalyst, but care must be taken when selecting the reaction temperature in order to ensure the appropriate isomer distribution is achieved. Higher reaction temperatures were found to enhance formation of the branched terminal alkene isomer as opposed to the linear trans-isomer. PMID:24367454
Distribution-Connected PV's Response to Voltage Sags at Transmission-Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry; Ding, Fei
The ever-increasing amount of residential- and commercial-scale distribution-connected PV generation being installed and operated on the U.S.'s electric power system necessitates the use of increased-fidelity representative distribution system models for transmission stability studies in order to ensure the continued safe and reliable operation of the grid. This paper describes a distribution model-based analysis that determines the amount of distribution-connected PV that trips off-line for a given voltage sag seen at the distribution circuit's substation. Such sags are what could potentially be experienced over a wide area of an interconnection during a transmission-level line fault. The results of this analysis show that the voltage diversity of the distribution system does cause different amounts of PV generation to be lost for differing severities of voltage sags. The variation of the response is most directly a function of the loading of the distribution system. At low load levels the inversion of the circuit's voltage profile results in considerable differences in the aggregated response of distribution-connected PV. Less variation is seen in the response to specific PV deployment scenarios, unless pushed to extremes, and in the total amount of PV penetration attained. A simplified version of the combined CMPLDW and PVD1 models is compared to the results from the model-based analysis. Furthermore, the parameters of the simplified model are tuned to better match the determined response. The resulting tuning parameters do not match the expected physical model of the distribution system and PV systems and thus may indicate that another modeling approach would be warranted.
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When multiple and complicated models, multisource structured and unstructured data, and complex requirements analyses are encountered, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, is established.
Distributed neural control of a hexapod walking vehicle
NASA Technical Reports Server (NTRS)
Beer, R. D.; Sterling, L. S.; Quinn, R. D.; Chiel, H. J.; Ritzmann, R.
1989-01-01
There has been a long-standing interest in the design of controllers for multilegged vehicles. The approach is to apply distributed control to this problem, rather than using parallel computing of a centralized algorithm. Researchers describe a distributed neural network controller for hexapod locomotion which is based on the neural control of locomotion in insects. The model considers the simplified kinematics with two degrees of freedom per leg, but the model includes the static stability constraint. Through simulation, it is demonstrated that this controller can generate a continuous range of statically stable gaits at different speeds by varying a single control parameter. In addition, the controller is extremely robust, and can continue to function even after several of its elements have been disabled. Researchers are building a small hexapod robot whose locomotion will be controlled by this network. Researchers intend to extend their model to the dynamic control of legs with more than two degrees of freedom by using data on the control of multisegmented insect legs. Another immediate application of this neural control approach also comes from biology: the escape reflex. Advanced robots are being equipped with tactile sensing and machine vision so that the sensory inputs to the robot controller are vast and complex. Neural networks are ideal for a lower-level safety reflex controller because of their extremely fast response time. The combination of robotics, computer modeling, and neurobiology has been remarkably fruitful, and is likely to lead to deeper insights into the problems of real-time sensorimotor control.
NASA Astrophysics Data System (ADS)
Miller, Alina; Pertassek, Thomas; Steins, Andreas; Durner, Wolfgang; Göttlein, Axel; Petrik, Wolfgang; von Unold, Georg
2017-04-01
The particle-size distribution (PSD) is a key property of soils. The reference method for determining the PSD is based on gravitational sedimentation of particles in an initially homogeneous suspension. Traditional methods measure manually (i) the uplift of a floating body in the suspension at different times (Hydrometer method) or (ii) the mass of solids in extracted suspension aliquots at predefined sampling depths and times (Pipette method). Both methods lead to a disturbance of the sedimentation process and provide only discrete data on the PSD. Durner et al. (2017) recently developed a new automated method to determine particle-size distributions of soils and sediments from gravitational sedimentation (Durner, W., S.C. Iden, and G. von Unold: The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830, 2017). The so-called integral suspension pressure (ISP) method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after start, and thus no specialized training of the lab personnel, and avoids any disturbance of the sedimentation process. The required technology to perform these experiments was developed by the UMS company, Munich, and is now available as an instrument called PARIO, traded by the METER Group. In this poster, the basic functioning of PARIO is shown and key components and parameters of the technology are explained.
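The sedimentation physics behind both the traditional methods and the ISP evaluation is Stokes' law. The sketch below computes the largest particle diameter still present above a given measurement depth after a given settling time, with illustrative property values for quartz in water; PARIO's actual evaluation additionally inverts the suspension-pressure record, which is not shown here.

```python
import numpy as np

def stokes_diameter(depth_m, time_s, rho_s=2650.0, rho_f=1000.0, eta=1.0e-3, g=9.81):
    """Largest particle diameter (m) that can still be found above `depth_m`
    after settling for `time_s`, from Stokes' law (illustrative property values
    for quartz particles in water at ~20 degC)."""
    return np.sqrt(18.0 * eta * depth_m / ((rho_s - rho_f) * g * time_s))

# diameters (micrometres) still in suspension above a 20 cm measurement depth
for t_h in (0.1, 1.0, 8.0, 24.0):
    d = stokes_diameter(0.20, t_h * 3600.0) * 1e6
    print(f"{t_h:5.1f} h -> D_max ~ {d:6.1f} um")
```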
Renormalizability of quasiparton distribution functions
Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...
2017-11-21
Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions but can also be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.
NASA Technical Reports Server (NTRS)
Cios, K. J.; Vary, A.; Berke, L.; Kautz, H. E.
1992-01-01
Two types of neural networks were used to evaluate acousto-ultrasonic (AU) data for material characterization and mechanical response prediction. The neural networks included a simple feedforward network (backpropagation) and a radial basis function network. Comparisons of results in terms of accuracy and training time are given. Acousto-ultrasonic (AU) measurements were performed on a series of tensile specimens composed of eight laminated layers of continuous SiC fiber-reinforced Ti-15-3 matrix. The frequency spectrum was dominated by frequencies of longitudinal wave resonance through the thickness of the specimen at the sending transducer. The magnitude of the frequency spectrum of the AU signal was used for calculating a stress-wave factor based on integrating the spectral distribution function and used for comparison with neural network results.
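One plausible reading of the stress-wave factor used as the network input is the integral of the magnitude spectrum over a frequency band of interest; the sketch below shows that computation on a toy AU waveform. The sampling rate, band edges and waveform are assumptions, not the study's values.

```python
import numpy as np

def stress_wave_factor(signal, fs, band=(0.5e6, 5.0e6)):
    """One plausible reading of the AU stress-wave factor: the integral of the
    magnitude spectrum of the received signal over a frequency band of interest
    (band edges here are illustrative, not the paper's values)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mag = np.abs(np.fft.rfft(signal))
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(mag[sel], freqs[sel])

fs = 50e6                                             # 50 MHz sampling, hypothetical
t = np.arange(2048) / fs
sig = np.exp(-t * 2e6) * np.sin(2 * np.pi * 2.25e6 * t)   # toy AU waveform
print("SWF =", stress_wave_factor(sig, fs))
```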
26 CFR 1.668(b)-1A - Tax on distribution.
Code of Federal Regulations, 2010 CFR
2010-04-01
...)-1A Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Treatment of Excess Distributions of Trusts Applicable to Taxable Years Beginning... the section 666 amounts in the beneficiary's gross income and the tax for such year computed without...
49 CFR 191.12 - Distribution Systems: Mechanical Fitting Failure Reports
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution Systems: Mechanical Fitting Failure Reports 191.12 Section 191.12 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER...
49 CFR 191.11 - Distribution system: Annual report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution system: Annual report. 191.11 Section 191.11 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE;...
49 CFR 191.9 - Distribution system: Incident report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution system: Incident report. 191.9 Section 191.9 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE;...
Potential energy distribution function and its application to the problem of evaporation
NASA Astrophysics Data System (ADS)
Gerasimov, D. N.; Yurin, E. I.
2017-10-01
The distribution function of potential energy in a strongly correlated system can be calculated analytically. In an equilibrium system (for instance, in the bulk of the liquid) this distribution function depends only on the temperature and the mean potential energy, which can be found through the specific heat of vaporization. At the surface of the liquid this distribution function differs significantly, but its shape still follows an analytical relation. The distribution function of potential energy near the evaporation surface can be used instead of the work function of an atom of the liquid.
Unifying distribution functions: some lesser known distributions.
Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P
2008-08-01
We show that there is a way to unify distribution functions that simultaneously describe a classical signal in space and (spatial) frequency, or position and momentum for a quantum system. Probably the best known of these is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and the Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.
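For orientation, the standard s-ordered (Cahill-Glauber) family shows how a single ordering parameter interpolates between the Glauber-Sudarshan P, Wigner and Husimi Q functions; this is the usual textbook construction, not necessarily the authors' exact unification.

```latex
% Standard s-ordered (Cahill--Glauber) family; shown for orientation only,
% not necessarily the exact construction used by the authors.
\begin{align}
  C_s(\xi)    &= \operatorname{Tr}\!\left[\hat\rho\,
                 e^{\xi \hat a^{\dagger} - \xi^{*}\hat a + \tfrac{s}{2}|\xi|^{2}}\right],\\
  W_s(\alpha) &= \frac{1}{\pi^{2}}\int
                 C_s(\xi)\, e^{\alpha\xi^{*}-\alpha^{*}\xi}\, \mathrm{d}^{2}\xi .
\end{align}
% s = +1: Glauber--Sudarshan P;  s = 0: Wigner;  s = -1: Husimi Q.
```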
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
2014-01-01
Background The UK continues to experience a rise in the number of anabolic steroid-using clients attending harm reduction services such as needle and syringe programmes. Methods The present study uses interviews conducted with harm reduction service providers as well as illicit users of anabolic steroids from different areas of England and Wales to explore harm reduction for this group of drug users, focussing on needle distribution policies and harm reduction interventions developed specifically for this population of drug users. Results The article addresses the complexity of harm reduction service delivery, highlighting different models of needle distribution, such as peer-led distribution networks, as well as interventions available in steroid clinics, including liver function testing of anabolic steroid users. Aside from providing insights into the function of interventions available to steroid users, along with principles adopted by service providers, the study found significant tensions and dilemmas in policy implementation due to differing perspectives between service providers and service users relating to practices, risks and effective interventions. Conclusion The overarching finding of the study was the tremendous variability across harm reduction delivery sites in terms of available measures and mode of operation. Further research into the effectiveness of different policies directed towards people who use anabolic steroids is critical to the development of harm reduction. PMID:24986546
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
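Two of the monomodal forms named above are easy to state explicitly. The sketch below evaluates the Rosin-Rammler and log-normal cumulative undersize curves on an illustrative size grid; the parameter values are arbitrary, and the forward ADA/Lambert-Beer calculation is not reproduced.

```python
import numpy as np
from scipy.special import erf

def rosin_rammler_cdf(d, d_bar, n):
    """Rosin-Rammler cumulative undersize fraction: F(d) = 1 - exp[-(d/d_bar)^n]."""
    return 1.0 - np.exp(-(d / d_bar) ** n)

def lognormal_cdf(d, d_g, sigma_g):
    """Log-normal cumulative undersize with geometric mean d_g and geometric
    standard deviation sigma_g."""
    z = (np.log(d) - np.log(d_g)) / (np.sqrt(2.0) * np.log(sigma_g))
    return 0.5 * (1.0 + erf(z))

d = np.linspace(0.1, 10.0, 50)   # particle size grid (micrometres, illustrative)
print(np.round(rosin_rammler_cdf(d, d_bar=2.0, n=1.8), 3)[:5])
print(np.round(lognormal_cdf(d, d_g=2.0, sigma_g=1.6), 3)[:5])
```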
NASA Astrophysics Data System (ADS)
Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.
2005-04-01
In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.
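A simple stand-in for recovering a continuous distribution of time constants from a step response is non-negative least squares over a fixed logarithmic grid of candidate TCs. The sketch below applies it to a synthetic two-compartment inflation curve; the grid, noise level and model form are assumptions, not the study's algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def recover_tc_spectrum(t, y, tau_grid):
    """Recover a non-negative distribution of time constants w(tau) such that
    y(t) ~= sum_i w_i * (1 - exp(-t / tau_i)); a simplified stand-in for the
    continuous-TC analysis described in the abstract."""
    A = 1.0 - np.exp(-np.outer(t, 1.0 / tau_grid))
    w, resid = nnls(A, y)
    return w, resid

# synthetic inflation curve with two underlying compartments plus noise
t = np.linspace(0.05, 10.0, 120)                       # seconds
y_true = 0.6 * (1 - np.exp(-t / 0.3)) + 0.4 * (1 - np.exp(-t / 2.5))
y_obs = y_true + np.random.default_rng(0).normal(0, 0.005, t.size)

tau_grid = np.logspace(-1.5, 1.5, 40)                  # ~0.03 s to ~30 s
w, resid = recover_tc_spectrum(t, y_obs, tau_grid)
print("dominant time constants (s):", tau_grid[w > 0.05])
```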
How biological soil crusts became recognized as a functional unit: a selective history
Lange, Otto L.; Belnap, Jayne
2016-01-01
It is surprising that despite the world-wide distribution and general importance of biological soil crusts (biocrusts), scientific recognition and functional analysis of these communities is a relatively young field of science. In this chapter, we sketch the historical lines that led to the recognition of biocrusts as a community with important ecosystem functions. The idea of biocrusts as a functional ecological community has come from two main scientific branches: botany and soil science. For centuries, botanists have long recognized that multiple organisms colonize the soil surface in the open and often dry areas occurring between vascular plants. Much later, after the initial taxonomic and phyto-sociological descriptions were made, soil scientists and agronomists observed that these surface organisms interacted with soils in ways that changed the soil structure. In the 1970’s, research on these communities as ecological units that played an important functional role in drylands began in earnest, and these studies have continued to this day. Here, we trace the history of these studies from the distant past until 1990, when biocrusts became well-known to scientists and the public.
NASA Astrophysics Data System (ADS)
Keilbach, D.; Drews, C.; Berger, L.; Marsch, E.; Wimmer-Schweingruber, R. F.
2017-12-01
Using a test particle approach, we have investigated how an oxygen pickup ion torus velocity distribution is modified by continuous and intermittent Alfvénic waves on timescales where the gyro trajectory of each particle can be traced. We exposed the test particles to single-frequency waves that extended through the whole simulation in time and space. The general behavior of the pitch angle distribution is found to be stationary and a nonlinear function of the wave frequency, the amplitude, and the initial angle between the wave elongation and the field-perpendicular particle velocity vector. The figure shows the time-averaged pitch angle distributions as a function of the Doppler-shifted wave frequency (where the Doppler shift was calculated with respect to the particles' initial velocity) for three different wave amplitudes (labeled in each panel). The background field is chosen to be 5 nT, and the 500 test particles were initially distributed on a torus with a 120° pitch angle at a solar wind velocity of 450 km/s. Each y-slice of the histogram (which has been normalized to its respective maximum) represents an individual run of the simulation. The frequency-dependent behavior of the test particles is found to be classifiable into the regimes of very low/high frequencies and frequencies close to first-order resonance. We have found that only in the latter regime do the particles interact strongly with the wave; there, a branch structure is found in the time-averaged histograms, which was identified as a trace of particles co-moving with the wave phase. The magnitude of the pitch angle change of these particles, as well as the frequency margin where the branch structure is found, increases with the wave amplitude. We have also investigated the interaction with single-frequency intermittent waves. Exposed to such waves, a torus distribution is scattered in pitch angle space, and the pitch angle distribution is broadened systematically over time, similar to pitch angle diffusion. The framework of our simulations is a first step toward understanding wave-particle interactions at the most basic level and is readily expandable to, e.g., the inclusion of multiple wave frequencies, intermittent wave activity, gradients in the background magnetic field, or collisions with solar wind particles.
de Beyl, Celine Zegers; Kilian, Albert; Brown, Andrea; Sy-Ar, Mohamad; Selby, Richmond Ato; Randriamanantenasoa, Felicien; Ranaivosoa, Jocelyn; Zigirumugabe, Sixte; Gerberg, Lilia; Fotheringham, Megan; Lynch, Matthew; Koenker, Hannah
2017-08-10
Continuous distribution of insecticide-treated nets (ITNs) is thought to be an effective mechanism to maintain ITN ownership and access between or in the absence of mass campaigns, but evidence is limited. A community-based ITN distribution pilot was implemented and evaluated in Toamasina II District, Madagascar, to assess this new channel for continuous ITN distribution. Beginning 9 months after the December 2012 mass campaign, a community-based distribution pilot ran for an additional 9 months, from September 2013 to June 2014. Households requested ITN coupons from community agents in their village. After verification by the agents, households exchanged the coupon for an ITN at a distribution point. The evaluation was a two-stage cluster survey with a sample size of 1125 households. Counterfactual ITN ownership and access were calculated by excluding ITNs received through the community pilot. At the end of the pilot, household ownership of any ITN was 96.5%, population access to ITN was 81.5 and 61.5% of households owned at least 1 ITN for every 2 people. Without the ITNs provided through the community channel, household ownership of any ITN was estimated at 74.6%, population access to an ITN at 55.5%, and households that owned at least 1 ITN for 2 people at only 34.7%, 18 months after the 2012 campaign. Ownership of community-distributed ITNs was higher among the poorest wealth quintiles. Over 80% of respondents felt the community scheme was fair and simple to use. Household ITN ownership and population ITN access exceeded RBM targets after the 9-month community distribution pilot. The pilot successfully provided coupons and ITNs to households requesting them, particularly for the least poor wealth quintiles, and the scheme was well-perceived by communities. Further research is needed to determine whether community-based distribution can sustain ITN ownership and access over the long term, how continuous availability of ITNs affects household net replacement behaviour, and whether community-based distribution is cost-effective when combined with mass campaigns, or if used with other continuous channels instead of mass campaigns.
Lee, Heung-Rae
1997-01-01
A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.
NASA Technical Reports Server (NTRS)
Lazaro, Ester; Escarmis, Cristina; Perez-Mercader, Juan; Manrubia, Susanna C.; Domingo, Esteban
2003-01-01
RNA viruses display high mutation rates and their populations replicate as dynamic and complex mutant distributions, termed viral quasispecies. Repeated genetic bottlenecks, which experimentally are carried out through serial plaque-to-plaque transfers of the virus, lead to fitness decrease (measured here as diminished capacity to produce infectious progeny). Here we report an analysis of fitness evolution of several low fitness foot-and-mouth disease virus clones subjected to 50 plaque-to-plaque transfers. Unexpectedly, fitness decrease, rather than being continuous and monotonic, displayed a fluctuating pattern, which was influenced by both the virus and the state of the host cell as shown by effects of recent cell passage history. The amplitude of the fluctuations increased as fitness decreased, resulting in a remarkable resistance of virus to extinction. Whereas the frequency distribution of fitness in control (independent) experiments follows a log-normal distribution, the probability of fitness values in the evolving bottlenecked populations fitted a Weibull distribution. We suggest that multiple functions of viral genomic RNA and its encoded proteins, subjected to high mutational pressure, interact with cellular components to produce this nontrivial, fluctuating pattern.
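Comparing a log-normal against a Weibull description of a fitness sample can be done directly with maximum-likelihood fits and an information criterion. The sketch below does this on placeholder data; the actual fitness values and the fitting procedure used in the paper are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
fitness = rng.lognormal(mean=0.0, sigma=0.8, size=200)   # placeholder fitness values

# fit both candidate distributions (location fixed at 0) and compare by AIC
wb_shape, _, wb_scale = stats.weibull_min.fit(fitness, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(fitness, floc=0)

aic = lambda logl, k: 2 * k - 2 * logl
ll_wb = np.sum(stats.weibull_min.logpdf(fitness, wb_shape, 0, wb_scale))
ll_ln = np.sum(stats.lognorm.logpdf(fitness, ln_shape, 0, ln_scale))
print("AIC Weibull  :", aic(ll_wb, 2))
print("AIC lognormal:", aic(ll_ln, 2))
```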
26 CFR 1.1368-2 - Accumulated adjustments account (AAA).
Code of Federal Regulations, 2011 CFR
2011-04-01
... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-2... earnings and profits or previously taxed income pursuant to an election made under section 1368(e)(3) and... AAA for redemptions and distributions in the year of a redemption. (c) Distribution of money and loss...
26 CFR 1.1368-2 - Accumulated adjustments account (AAA).
Code of Federal Regulations, 2014 CFR
2014-04-01
... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-2... earnings and profits or previously taxed income pursuant to an election made under section 1368(e)(3) and... AAA for redemptions and distributions in the year of a redemption. (c) Distribution of money and loss...
26 CFR 1.1368-2 - Accumulated adjustments account (AAA).
Code of Federal Regulations, 2012 CFR
2012-04-01
... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-2... earnings and profits or previously taxed income pursuant to an election made under section 1368(e)(3) and... AAA for redemptions and distributions in the year of a redemption. (c) Distribution of money and loss...
26 CFR 1.1368-2 - Accumulated adjustments account (AAA).
Code of Federal Regulations, 2013 CFR
2013-04-01
... TAX (CONTINUED) INCOME TAXES (CONTINUED) Small Business Corporations and Their Shareholders § 1.1368-2... earnings and profits or previously taxed income pursuant to an election made under section 1368(e)(3) and... AAA for redemptions and distributions in the year of a redemption. (c) Distribution of money and loss...
32 CFR 644.361 - Distribution of report of excess.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 4 2011-07-01 2011-07-01 false Distribution of report of excess. 644.361 Section 644.361 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) REAL PROPERTY REAL ESTATE HANDBOOK Disposal Reports of Excess Real Property and Related Personal Property to...
32 CFR 644.361 - Distribution of report of excess.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 4 2012-07-01 2011-07-01 true Distribution of report of excess. 644.361 Section 644.361 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) REAL PROPERTY REAL ESTATE HANDBOOK Disposal Reports of Excess Real Property and Related Personal Property to...
32 CFR 644.361 - Distribution of report of excess.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 4 2013-07-01 2013-07-01 false Distribution of report of excess. 644.361 Section 644.361 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) REAL PROPERTY REAL ESTATE HANDBOOK Disposal Reports of Excess Real Property and Related Personal Property to...
32 CFR 644.361 - Distribution of report of excess.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true Distribution of report of excess. 644.361 Section 644.361 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) REAL PROPERTY REAL ESTATE HANDBOOK Disposal Reports of Excess Real Property and Related Personal Property to...
32 CFR 644.361 - Distribution of report of excess.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 4 2014-07-01 2013-07-01 true Distribution of report of excess. 644.361 Section 644.361 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) REAL PROPERTY REAL ESTATE HANDBOOK Disposal Reports of Excess Real Property and Related Personal Property to...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Distribution Pipeline Integrity Management (IM...
Efficient Evaluation Functions for Multi-Rover Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Tumer, Kagan
2004-01-01
Evolutionary computation can be a powerful tool in creating a control policy for a single agent receiving local continuous input. This paper extends single-agent evolutionary computation to multi-agent systems, where a collection of agents strives to maximize a global fitness evaluation function that rates the performance of the entire system. This problem is solved in a distributed manner, where each agent evolves its own population of neural networks that are used as the control policies for the agent. Each agent evolves its population using its own agent-specific fitness evaluation function. We propose to create these agent-specific evaluation functions using the theory of collectives to avoid the coordination problem where each agent evolves a population that maximizes its own fitness function, yet the system as a whole achieves low values of the global fitness function. Instead we ensure that each fitness evaluation function is both "aligned" with the global evaluation function and "learnable," i.e., the agents can readily see how their behavior affects their evaluation function. We then show how these agent-specific evaluation functions outperform global evaluation methods by up to 600% in a domain where a set of rovers attempt to maximize the amount of information observed while navigating through a simulated environment.
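A common way to build such agent-specific evaluations from the theory of collectives is the difference evaluation D_i = G(z) - G(z with agent i's contribution replaced by a counterfactual), which is aligned with G by construction. The sketch below is a generic version with a toy global evaluation, not the rover-domain shaping used in the paper.

```python
def difference_evaluations(global_eval, joint_action, counterfactual=0.0):
    """Agent-specific evaluations in the spirit of the theory of collectives:
    D_i = G(z) - G(z with agent i's contribution replaced by a counterfactual).
    A generic sketch, not the exact shaping used for the rover domain."""
    g = global_eval(joint_action)
    evals = []
    for i in range(len(joint_action)):
        z_cf = list(joint_action)
        z_cf[i] = counterfactual           # remove agent i's contribution
        evals.append(g - global_eval(z_cf))
    return evals

# toy global evaluation: diminishing returns on total information gathered
G = lambda z: sum(z) - 0.02 * sum(z) ** 2
print(difference_evaluations(G, [3.0, 1.0, 0.5]))
```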
Exploring the Dynamics of Transit Times and Subsurface Mixing in a Small Agricultural Catchment
NASA Astrophysics Data System (ADS)
Yang, Jie; Heidbüchel, Ingo; Musolff, Andreas; Reinstorf, Frido; Fleckenstein, Jan H.
2018-03-01
The analysis of transit/residence time distributions (TTDs and RTDs) provides important insights into the dynamics of stream-water ages and subsurface mixing. These insights have significant implications for water quality. For a small agricultural catchment in central Germany, we use a 3D fully coupled surface-subsurface hydrological model to simulate water flow and perform particle tracking to determine flow paths and transit times. The TTDs of discharge, RTDs of storage and fractional StorAge Selection (fSAS) functions are computed and analyzed on a daily basis for a period of 10 years. Results show strong seasonal fluctuations of the median transit time of discharge and the median residence time, with the former being strongly related to the catchment wetness. Computed fSAS functions suggest systematic shifts of the discharge selection preference over four main periods: in the wet period, the youngest water in storage is preferentially selected, and this preference shifts gradually toward older ages of stored water when the catchment transitions into the drying, dry and wetting periods. These changes are driven by distinct shifts in the dominance of deeper flow paths and fast shallow flow paths. Changes in the shape of the fSAS functions can be captured by changes in the two parameters of the approximating Beta distributions, allowing the generation of continuous fSAS functions representing the general catchment behavior. These results improve our understanding of the seasonal dynamics of TTDs and fSAS functions for a complex real-world catchment and are important for interpreting solute export to the stream in a spatially implicit manner.
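The Beta-distribution parameterization mentioned above can be written down directly. In the sketch below the fSAS function is taken as the Beta CDF over the normalized age-ranked storage, and the two parameter sets merely illustrate a young-water versus an older-water selection preference; they are not the fitted values from the study.

```python
import numpy as np
from scipy import stats

def fsas_beta(p, a, b):
    """Fractional SAS function modelled as a Beta CDF: the fraction of discharge
    drawn from the youngest fraction p of water in storage (0 <= p <= 1).
    With b = 1, a < 1 implies a preference for young water and a > 1 for older water."""
    return stats.beta.cdf(p, a, b)

p = np.linspace(0, 1, 11)
print("wet period (young-water preference): ", np.round(fsas_beta(p, 0.6, 1.0), 2))
print("dry period (older-water preference): ", np.round(fsas_beta(p, 1.5, 1.0), 2))
```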
HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain
Huppert, Theodore J.; Diamond, Solomon G.; Franceschini, Maria A.; Boas, David A.
2009-01-01
Near-infrared spectroscopy (NIRS) is a noninvasive neuroimaging tool for studying evoked hemodynamic changes within the brain. By this technique, changes in the optical absorption of light are recorded over time and are used to estimate the functionally evoked changes in cerebral oxyhemoglobin and deoxyhemoglobin concentrations that result from local cerebral vascular and oxygen metabolic effects during brain activity. Over the past three decades this technology has continued to grow, and today NIRS studies have found many niche applications in the fields of psychology, physiology, and cerebral pathology. The growing popularity of this technique is in part associated with a lower cost and increased portability of NIRS equipment when compared with other imaging modalities, such as functional magnetic resonance imaging and positron emission tomography. With this increasing number of applications, new techniques for the processing, analysis, and interpretation of NIRS data are continually being developed. We review some of the time-series and functional analysis techniques that are currently used in NIRS studies, we describe the practical implementation of various signal processing techniques for removing physiological, instrumental, and motion-artifact noise from optical data, and we discuss the unique aspects of NIRS analysis in comparison with other brain imaging modalities. These methods are described within the context of the MATLAB-based graphical user interface program, HomER, which we have developed and distributed to facilitate the processing of optical functional brain data. PMID:19340120
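A core step in converting the recorded optical absorption changes into oxy- and deoxyhemoglobin concentration changes is the modified Beer-Lambert law. The sketch below shows that generic two-wavelength inversion; the extinction coefficients and pathlength values are illustrative placeholders, not the values shipped with HomER.

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = (eps_HbO * dHbO + eps_HbR * dHbR) * L * DPF
# Two wavelengths give a 2x2 linear system for the concentration changes.
ext = np.array([[0.56, 1.55],    # eps at ~690 nm: [HbO, HbR] in 1/(mM*cm) (assumed values)
                [1.10, 0.78]])   # eps at ~830 nm
L, dpf = 3.0, 6.0                # source-detector separation (cm), differential pathlength factor

def od_to_concentration(delta_od):
    """delta_od: optical density changes at the two wavelengths, shape (2,).
    Returns (dHbO, dHbR) in mM."""
    return np.linalg.solve(ext * L * dpf, delta_od)

print(od_to_concentration(np.array([0.01, 0.02])))
```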
Stender, Michael E.; Raub, Christopher B.; Yamauchi, Kevin A.; Shirazi, Reza; Vena, Pasquale; Sah, Robert L.; Hazelwood, Scott J.; Klisch, Stephen M.
2013-01-01
A continuum mixture model with distinct collagen (COL) and glycosaminoglycan (GAG) elastic constituents was developed for the solid matrix of immature bovine articular cartilage. A continuous COL fiber volume fraction distribution function and a true COL fiber elastic modulus (Ef) were used. Quantitative polarized light microscopy (qPLM) methods were developed to account for the relatively high cell density of immature articular cartilage and were used with a novel algorithm that constructs a 3D distribution function from 2D qPLM data. For untreated specimens and specimens cultured in vitro, most model parameters were specified from qPLM analysis and biochemical assay results; consequently, Ef was predicted by optimization against mechanical properties measured in uniaxial tension and unconfined compression. Analysis of qPLM data revealed a highly anisotropic fiber distribution, with principal fiber orientation parallel to the surface layer. For untreated samples, predicted Ef values were 175 and 422 MPa for the superficial (S) and middle (M) zone layers, respectively. TGF-β1 treatment was predicted to increase and decrease Ef values for the S and M layers to 281 and 309 MPa, respectively. IGF-1 treatment was predicted to decrease Ef values for the S and M layers to 22 and 26 MPa, respectively. A novel finding was that distinct native depth-dependent fiber modulus properties were modulated to nearly homogeneous values by TGF-β1 and IGF-1 treatments, with the modulated values strongly dependent on treatment. PMID:23266906
Arik, Sabri
2005-05-01
This paper presents a sufficient condition for the existence, uniqueness and global asymptotic stability of the equilibrium point for bidirectional associative memory (BAM) neural networks with distributed time delays. The results impose constraint conditions on the network parameters of the neural system independently of the delay parameter, and they are applicable to all continuous nonmonotonic neuron activation functions. It is shown that in some special cases of the results, the stability criteria can be easily checked. Some examples are also given to compare the results with previous results derived in the literature.
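For orientation, a BAM network with distributed delays is typically written in a form like the one below, with delay kernels k, h and activation functions f, g that need not be monotonic. This is a generic textbook formulation given as context, not the exact system or notation of the paper.

```latex
\begin{aligned}
\dot{x}_i(t) &= -a_i\, x_i(t) + \sum_{j=1}^{m} w_{ji}\, f_j\!\Big(\int_{0}^{\infty} k_{ji}(s)\, y_j(t-s)\, ds\Big) + I_i, \\
\dot{y}_j(t) &= -b_j\, y_j(t) + \sum_{i=1}^{n} v_{ij}\, g_i\!\Big(\int_{0}^{\infty} h_{ij}(s)\, x_i(t-s)\, ds\Big) + J_j .
\end{aligned}
```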
The acceleration of charged particles in interplanetary shock waves
NASA Technical Reports Server (NTRS)
Pesses, M. E.; Decker, R. B.; Armstrong, T. P.
1982-01-01
Consideration of the theoretical and observational literature on energetic ion acceleration in interplanetary shock waves is the basis for the present discussion of the shock acceleration of the solar wind plasma and particle transport effects. It is suggested that ISEE data be used to construct data sets for shock events that extend continuously from solar wind to galactic cosmic ray energies, including data for electrons, protons, alphas and ions with Z values greater than 2.0, and that the temporal and spatial evolution of two- and three-dimensional particle distribution functions be studied by means of two or more spacecraft.
Capstone Depleted Uranium Aerosols: Generation and Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkhurst, MaryAnn; Szrom, Fran; Guilmette, Ray
2004-10-19
In a study designed to provide an improved scientific basis for assessing possible health effects from inhaling depleted uranium (DU) aerosols, a series of DU penetrators was fired at an Abrams tank and a Bradley fighting vehicle. A robust sampling system was designed to collect aerosols in this difficult environment and continuously monitor the sampler flow rates. Aerosols collected were analyzed for uranium concentration and particle size distribution as a function of time. They were also analyzed for uranium oxide phases, particle morphology, and dissolution in vitro. The resulting data provide input useful in human health risk assessments.
Dynamic modeling and parameter estimation of a radial and loop type distribution system network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Qui; Heng Chen; Girgis, A.A.
1993-05-01
This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are also presented in this paper.
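Discrete-time input-output identification of this kind is often introduced through a least-squares ARX fit. The sketch below shows that generic idea for a single-input, single-output case; it is a simplified illustration, not the MIMO algorithm of the paper.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
       y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n0 = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
    return theta[:na], theta[na:]   # AR coefficients, input coefficients

# Toy usage: recover a known first-order discrete-time system from noisy data.
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
print(fit_arx(u, y, na=1, nb=1))   # approximately array([0.8]), array([0.5])
```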
The 1983 tail-era series. Volume 1: ISEE 3 plasma
NASA Technical Reports Server (NTRS)
Fairfield, D. H.; Phillips, J. L.
1991-01-01
Observations from the ISEE 3 electron analyzer are presented in plots. Electrons were measured in 15 continuous energy levels between 8.5 and 1140 eV during individual 3-sec spacecraft spins. Times associated with each data point are the beginning time of the 3 sec data collection interval. Moments calculated from the measured distribution function are shown as density, temperature, velocity, and velocity azimuthal angle. Spacecraft ephemeris is shown at the bottom in GSE and GSM coordinates in units of Earth radii, with vertical ticks on the time axis corresponding to the printed positions.
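Moments such as density and temperature are obtained from a measured distribution function by numerical integration over velocity space. The isotropic, Maxwellian test case below is a schematic illustration of that step with assumed values, not the ISEE 3 reduction pipeline.

```python
import numpy as np

# Schematic moment calculation for an isotropic electron distribution f(v).
m_e, k_B = 9.109e-31, 1.381e-23          # electron mass (kg), Boltzmann constant (J/K)
n0, T0 = 1.0e7, 2.0e5                    # assumed density (m^-3) and temperature (K)

v = np.linspace(1e4, 3e7, 20000)         # speed grid (m/s)
dv = v[1] - v[0]
f = n0 * (m_e / (2 * np.pi * k_B * T0)) ** 1.5 * np.exp(-m_e * v**2 / (2 * k_B * T0))

density = np.sum(4 * np.pi * v**2 * f) * dv                    # zeroth moment: n
energy = np.sum(0.5 * m_e * v**2 * 4 * np.pi * v**2 * f) * dv  # kinetic energy density
temperature = 2 * energy / (3 * density * k_B)                 # from <E> = (3/2) n k_B T

print(f"n = {density:.3e} m^-3, T = {temperature:.3e} K")      # recovers n0 and T0
```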
Electronic structures of graphane with vacancies and graphene adsorbed with fluorine atoms
NASA Astrophysics Data System (ADS)
Wu, Bi-Ru; Yang, Chih-Kai
2012-03-01
We investigate the electronic structure of graphane with hydrogen vacancies, which are supposed to occur in the process of hydrogenation of graphene. A variety of configurations is considered and defect states are derived by density functional calculation. We find that a continuous chain-like distribution of hydrogen vacancies will result in conduction of linear dispersion, much like the transport on a superhighway cutting through the jungle of hydrogen. The same conduction also occurs for chain-like vacancies in an otherwise fully fluorine-adsorbed graphene. These results should be very useful in the design of graphene-based electronic circuits.
General design method for three-dimensional potential flow fields. 1: Theory
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1980-01-01
A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.
Continuation Power Flow Analysis for PV Integration Studies at Distribution Feeders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiyu; Zhu, Xiangqi; Lubkeman, David L.
2017-10-30
This paper presents a method for conducting continuation power flow simulation on high-solar-penetration distribution feeders. A load disaggregation method is developed to disaggregate the daily feeder load profiles collected at substations down to each load node, where the electricity consumption of residential houses and commercial buildings is modeled using actual data collected from single-family houses and commercial buildings. This allows the power flow and voltage profile along a distribution feeder to be modeled in a continuous fashion over a 24-hour period at minute-by-minute resolution. By separating the feeder into load zones based on the distance between the load node and the feeder head, we studied the impact of PV penetration on distribution grid operation in different seasons and under different weather conditions for different PV placements.
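One simple way to disaggregate a feeder-head load profile to individual nodes is to allocate each time step in proportion to node-level reference profiles. The sketch below shows that generic proportional-allocation idea; it is an assumption for illustration, not the paper's specific disaggregation algorithm.

```python
import numpy as np

def disaggregate(feeder_profile, node_reference_profiles):
    """Split a feeder-head load time series among nodes, time step by time step,
    in proportion to per-node reference profiles (e.g. typical house/building shapes).

    feeder_profile:          shape (T,)    total feeder load per minute
    node_reference_profiles: shape (N, T)  reference consumption shape per node
    returns:                 shape (N, T)  per-node load that sums to the feeder profile
    """
    shares = node_reference_profiles / node_reference_profiles.sum(axis=0, keepdims=True)
    return shares * feeder_profile

# Toy usage: 3 nodes, 1440 minutes (one day at minute resolution).
T = 1440
t = np.arange(T)
refs = np.vstack([1 + 0.5 * np.sin(2 * np.pi * (t - s) / T) for s in (0, 300, 600)])
feeder = 90 + 30 * np.sin(2 * np.pi * t / T)
node_loads = disaggregate(feeder, refs)
assert np.allclose(node_loads.sum(axis=0), feeder)
```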
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
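The Monte Carlo comparison mentioned in the abstract can be reproduced in outline: simulate the p = 3 walk with uniform step directions and exponential step lengths for a fixed number of steps, then examine the empirical distribution of the end-to-origin distance. The sketch below is a generic simulation of that setup, not the saddlepoint computation itself.

```python
import numpy as np

def simulate_distances(n_steps=10, n_walks=100_000, rate=1.0, seed=0):
    """End-to-origin distances of random walks in R^3 with uniformly distributed
    step directions and exponential(rate) step lengths."""
    rng = np.random.default_rng(seed)
    # Uniform directions on the sphere: normalized standard Gaussian vectors.
    directions = rng.standard_normal((n_walks, n_steps, 3))
    directions /= np.linalg.norm(directions, axis=2, keepdims=True)
    lengths = rng.exponential(1.0 / rate, size=(n_walks, n_steps, 1))
    end_points = (directions * lengths).sum(axis=1)
    return np.linalg.norm(end_points, axis=1)

d = simulate_distances()
# Empirical tail probability, e.g. P(distance > 5), to compare against a
# saddlepoint approximation of the same quantity.
print((d > 5).mean())
```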
NASA Astrophysics Data System (ADS)
Howitt, R. E.
2016-12-01
Hydro-economic models have been used to analyze optimal supply management and groundwater use for the past 25 years. They are characterized by an objective function that usually maximizes economic measures, such as consumer and producer surplus, subject to hydrologic equations of motion or water distribution systems. The hydrologic and economic components are sometimes fully integrated; alternatively, they may interact through an iterative process. Environmental considerations have been included in hydro-economic models as inequality constraints. Representing environmental requirements as constraints is a rigid approximation of the range of management alternatives that could be used to implement environmental objectives. The next generation of hydro-economic models, currently being developed, requires that the environmental alternatives be represented by continuous or semi-continuous functions which relate the water resources allocated to the environment to the probabilities of achieving environmental objectives. These functions will be generated by process models of environmental and biological systems, which are now advanced to the point that they can realistically represent environmental systems and interact flexibly with economic models. Examples are crop growth models, climate models, and biological models of forest, fish, and fauna systems. These process models can represent environmental outcomes in a form that is similar to economic production functions. When combined with economic models, the interacting process models can reproduce a range of trade-offs between economic and environmental objectives, and thus optimize the social value of many water and environmental resources. Some examples of this next generation of hydro-enviro-economic models are reviewed. In these models, implicit production functions for environmental goods are combined with hydrologic equations of motion and economic response functions. We discuss models that show interaction between environmental goods and agricultural production, and others that address alternative climate change policies or habitat provision.
NASA Astrophysics Data System (ADS)
Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu
2017-04-01
Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the joint effect of sea level variations and wind waves is one means of making a more comprehensive flooding hazard analysis, and may at first seem like a straightforward task. Nevertheless, challenges and limitations such as the availability of time series for the sea level and wave height components, the quality of the data, the significant locational variability of coastal wave height, as well as assumptions that must be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to account for the waves more accurately in coastal flooding hazard analysis than the common approach of adding a separate fixed wave action height on top of sea-level-based flood risk estimates. As a result of our new method, we obtain maximum elevation heights, with different return periods, of the continuous water mass caused by a combination of both phenomena ("the green water"). We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on using theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we obtain information on how the different wave height conditions and the shape of the wave height distribution influence the joint results. The method presented here can be used as an advanced tool to minimize over- and underestimation of the combined effect of sea level variations and wind waves, and to help coastal infrastructure planning and support the smooth and safe operation of coastal cities in a changing climate.
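If the sea level and wave run-up components are treated as independent, the distribution of their sum can be obtained by numerically convolving the two probability densities. The sketch below illustrates that generic idea; the chosen densities and the independence assumption are illustrative, not the study's fitted data or method.

```python
import numpy as np

dz = 0.01                                   # height resolution (m)
z = np.arange(0.0, 4.0, dz)

# Illustrative densities (per metre): sea level variation and wave run-up.
sea_level = np.exp(-0.5 * ((z - 0.8) / 0.3) ** 2)   # roughly Gaussian around 0.8 m
runup = np.exp(-z / 0.5)                            # exponentially decaying run-up
sea_level /= sea_level.sum() * dz                   # normalize to unit probability
runup /= runup.sum() * dz

# Density of the combined elevation: for independent components, a convolution.
combined = np.convolve(sea_level, runup) * dz
z_comb = np.arange(combined.size) * dz

# Exceedance probability of the combined level, e.g. the 1% exceedance height.
exceedance = 1.0 - np.cumsum(combined) * dz
print(f"1% exceedance level: {z_comb[np.argmax(exceedance <= 0.01)]:.2f} m")
```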
European seaweeds under pressure: Consequences for communities and ecosystem functioning
NASA Astrophysics Data System (ADS)
Mineur, Frédéric; Arenas, Francisco; Assis, Jorge; Davies, Andrew J.; Engelen, Aschwin H.; Fernandes, Francisco; Malta, Erik-jan; Thibaut, Thierry; Van Nguyen, Tu; Vaz-Pinto, Fátima; Vranken, Sofie; Serrão, Ester A.; De Clerck, Olivier
2015-04-01
Seaweed assemblages represent the dominant autotrophic biomass in many coastal environments, playing a central structural and functional role in several ecosystems. In Europe, seaweed assemblages are highly diverse systems. The combined seaweed flora of the different European regions holds around 1,550 species (belonging to nearly 500 genera), with new species continuously being uncovered thanks to the emergence of molecular tools. In this manuscript we review the effects of global and local stressors on European seaweeds, their communities, and ecosystem functioning. Following a brief review of the present knowledge on European seaweed diversity and distribution, and the role of seaweed communities in biodiversity and ecosystem functioning, we discuss the effects of biotic homogenization (invasive species) and global climate change (shifts in bioclimatic zones and ocean acidification) on the distribution of individual species and their effect on the structure and functioning of seaweed communities. The arrival of newly introduced species (which already account for 5-10% of the European seaweeds) and the regional extirpation of native species resulting from ocean climate change are creating new diversity scenarios with undetermined functional consequences. Anthropogenic local stressors create additional disruption, often dramatically altering assemblage structure. Hence, we discuss ecosystem-level effects of stressors such as harvesting, trampling, habitat modification, overgrazing and eutrophication that impact coastal communities at local scales. Lastly, we conclude by highlighting significant knowledge gaps that need to be addressed to anticipate the combined effects of global and local stressors on seaweed communities. With physical and biological changes occurring at an unexpected pace, marine phycologists should now integrate and join their research efforts to contribute efficiently to the conservation and management of coastal systems.
Robustness of quantum key distribution with discrete and continuous variables to channel noise
NASA Astrophysics Data System (ADS)
Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.
2017-06-01
We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce the model of such noise, based on coupling of the signal to a thermal reservoir and typical for continuous-variable quantum key distribution, to the discrete-variable case. We then compare the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, in the case of an otherwise perfect implementation. The obtained results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the probability of single-photon production that would have to be fulfilled by a practical source of photons in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field.
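For orientation, coupling of the signal to a thermal reservoir in a lossy channel is commonly described by the standard beam-splitter-type model below (in shot-noise units). This is a generic parametrization offered as background, not a restatement of the paper's exact formulation.

```latex
\hat{x}_{\mathrm{out}} = \sqrt{T}\,\hat{x}_{\mathrm{in}} + \sqrt{1-T}\,\hat{x}_{\mathrm{res}},
\qquad
V_{\mathrm{out}} = T\,V_{\mathrm{in}} + (1-T)\,V_{\mathrm{res}},
```

where T is the channel transmittance and V_res the variance of the thermal reservoir mode; the excess noise referred to the channel input is then commonly defined as epsilon = (1 - T)(V_res - 1)/T.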
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed that enables the agents to achieve the optimal consensus while estimating the unknown frequencies and rejecting the bounded disturbances in the semi-global sense. Based on convex optimization analysis and an adaptive internal model approach, the exact optimal solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
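As background, the disturbance-free core of distributed gradient-based optimal consensus is often written in an integral-feedback form like the one below, where a_ij are the weights of a connected communication graph. The internal-model and disturbance-rejection terms studied in the paper are omitted, so this is only a generic sketch of the baseline dynamics.

```latex
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{N} f_i(x), \qquad
\dot{x}_i = -\nabla f_i(x_i) - \sum_{j=1}^{N} a_{ij}\,(x_i - x_j) - v_i, \qquad
\dot{v}_i = \sum_{j=1}^{N} a_{ij}\,(x_i - x_j).
```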