An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
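The shrinkage idea behind such estimators can be illustrated with a small simulation. The sketch below uses a parametric gamma-Poisson empirical Bayes estimator and a Monte Carlo mean-squared-error comparison against the maximum likelihood estimate; the gamma prior, the exposure time, and the sample size are illustrative assumptions, not the smooth estimator derived in the paper.

```python
# Minimal gamma-Poisson parametric empirical Bayes sketch (assumed setup; the paper
# derives a *smooth* nonparametric estimator, not this parametric shrinkage rule).
import numpy as np

rng = np.random.default_rng(0)
true_lambda = rng.gamma(shape=3.0, scale=0.5, size=500)   # hypothetical unit hazard rates
t = 10.0                                                  # common exposure time (assumed)
x = rng.poisson(true_lambda * t)                          # observed failure counts

# Method-of-moments hyperparameter estimates from the marginal counts.
m, v = x.mean(), x.var(ddof=1)
beta_hat = t * m / max(v - m, 1e-9)       # prior rate
alpha_hat = m * beta_hat / t              # prior shape

mle = x / t                               # conventional (maximum likelihood) estimator
eb = (x + alpha_hat) / (t + beta_hat)     # posterior-mean (shrinkage) estimator

print("MSE MLE:", np.mean((mle - true_lambda) ** 2))
print("MSE EB :", np.mean((eb - true_lambda) ** 2))
```

On data generated this way the shrinkage estimator typically shows the kind of mean-squared-error reduction over the maximum likelihood estimate that the abstract reports.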
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
A study on some urban bus transport networks
NASA Astrophysics Data System (ADS)
Chen, Yong-Zhou; Li, Nan; He, Da-Ren
2007-03-01
In this paper, we present empirical results on the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops, and two nodes are connected by an edge when the stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential forms. Two other statistical properties of BTNs are also considered: the distribution of the number of stops in a bus route (denoted S) and the distribution of the number of bus routes a stop joins (denoted R). The distributions of R also take exponential forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and to simulate a possible evolution process of a BTN, we introduce a model whose analytic and numerical results agree well with the empirical facts. Finally, we also discuss other possible evolution cases, in which the degree distribution shows a power law or an interpolation between a power law and an exponential decay.
An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.
ERIC Educational Resources Information Center
Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles
1999-01-01
Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchings, L.; Wu, F.
1990-02-10
Seismograms from 52 aftershocks of the 1971 San Fernando earthquake recorded at 25 stations distributed across the San Fernando Valley are examined to identify empirical Green's functions, and characterize the dependence of their waveforms on moment, focal mechanism, source and recording site spatial variations, recording site geology, and recorded frequency band. Recording distances ranged from 3.0 to 33.0 km, hypocentral separations ranged from 0.22 to 28.4 km, and recording site separations ranged from 0.185 to 24.2 km. The recording site geologies are diorite gneiss, marine and nonmarine sediments, and alluvium of varying thicknesses. Waveforms of events with moment below about 1.5 × 10^21 dyn cm are independent of the source-time function and are termed empirical Green's functions. Waveforms recorded at a particular station from events located within 1.0 to 3.0 km of each other, depending upon site geology, with very similar focal mechanism solutions are nearly identical for frequencies up to 10 Hz. There is no correlation to waveforms between recording sites at least 1.2 km apart, and waveforms are clearly distinctive for two sites 0.185 km apart. The geologic conditions of the recording site dominate the character of empirical Green's functions. Even for source separations of up to 20.0 km, the empirical Green's functions at a particular site are consistent in frequency content, amplification, and energy distribution. Therefore, it is shown that empirical Green's functions can be used to obtain site response functions. The observations of empirical Green's functions are used as a basis for developing the theory for using empirical Green's functions in deconvolution for source pulses and synthesis of seismograms of larger earthquakes.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox, while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is given to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
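As a hedged illustration of fitting a beta model to fractional cloud-cover data and checking it against the empirical distribution, the following sketch uses synthetic sky-cover fractions and SciPy; the shape parameters and sample size are assumptions, not the report's data.

```python
# Fit a beta distribution to fractional cloud cover in [0, 1] and compare with the
# empirical distribution via a Kolmogorov-Smirnov test. Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cloud_cover = rng.beta(0.6, 0.4, size=1000)          # hypothetical sky-cover fractions

a, b, loc, scale = stats.beta.fit(cloud_cover, floc=0, fscale=1)
d, p = stats.kstest(cloud_cover, "beta", args=(a, b, loc, scale))
print(f"fitted shape parameters a={a:.2f}, b={b:.2f}, KS statistic={d:.3f}, p-value={p:.3f}")
```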
Alternative Approaches to Evaluation in Empirical Microeconomics
ERIC Educational Resources Information Center
Blundell, Richard; Dias, Monica Costa
2009-01-01
This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…
Estimation of Reliability Coefficients Using the Test Information Function and Its Modifications.
ERIC Educational Resources Information Center
Samejima, Fumiko
1994-01-01
The reliability coefficient is predicted from the test information function (TIF) or two modified TIF formulas and a specific trait distribution. Examples illustrate the variability of the reliability coefficient across different trait distributions, and results are compared with empirical reliability coefficients. (SLD)
Ehrenfest model with large jumps in finance
NASA Astrophysics Data System (ADS)
Takahashi, Hisanao
2004-02-01
Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. To explain this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced; it reproduces the empirical density function of price changes for both short and long sampling intervals.
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
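A minimal sketch of the classical two-percentile estimator for the Pareto (Type I) parameters is shown below; the paper's modifications replace one anchor with the median, the geometric mean, or the expectation of the empirical CDF of the first-order statistic, which are not reproduced here. The percentile choices and the synthetic sample are assumptions.

```python
# Classical two-percentile estimator for the Pareto Type I distribution,
# F(x) = 1 - (x_m / x)^alpha for x >= x_m. Quantiles: q_p = x_m (1 - p)^(-1/alpha).
import numpy as np

def pareto_percentile_fit(sample, p1=0.25, p2=0.75):
    q1, q2 = np.quantile(sample, [p1, p2])
    shape = np.log((1 - p1) / (1 - p2)) / np.log(q2 / q1)   # tail index alpha
    scale = q1 * (1 - p1) ** (1 / shape)                    # minimum value x_m
    return shape, scale

rng = np.random.default_rng(2)
data = (rng.pareto(3.0, size=2000) + 1) * 2.0               # synthetic sample, alpha=3, x_m=2
print(pareto_percentile_fit(data))
```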
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Marciniak, Michael A.
2014-09-01
Since the development of the Torrance-Sparrow bidirectional reflectance distribution function (BRDF) model in 1967, several BRDF models have been created. Previous attempts to categorize BRDF models have relied upon somewhat vague descriptors, such as empirical, semi-empirical, and experimental. Our approach is to instead categorize BRDF models based on functional form: microfacet normal distribution, geometric attenuation, directional-volumetric and Fresnel terms, and cross section conversion factor. Several popular microfacet models are compared to a standardized notation for a microfacet BRDF model. A library of microfacet model components is developed, allowing for creation of unique microfacet models driven by experimentally measured BRDFs.
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections, together with experimental cross sections, of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for the parameters of the barrier distribution works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
Application of the weibull distribution function to the molecular weight distribution of cellulose
A. Broido; Hsiukang Yow
1977-01-01
The molecular weight distribution of a linear homologous polymer is usually obtained empirically for any particular sample. Sample-to-sample comparisons are made in terms of the weight- or number-average molecular weights and graphic displays of the distribution curves. Such treatment generally precludes data interpretations in which a distribution can be described in...
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
Benford's law and the FSD distribution of economic behavioral micro data
NASA Astrophysics Data System (ADS)
Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George
2017-11-01
In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information-theoretic, entropy-based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
Dominant role of many-body effects on the carrier distribution function of quantum dot lasers
NASA Astrophysics Data System (ADS)
Peyvast, Negin; Zhou, Kejia; Hogg, Richard A.; Childs, David T. D.
2016-03-01
The effects of free-carrier-induced shift and broadening on the carrier distribution function are studied considering different extreme cases for carrier statistics (Fermi-Dirac and random carrier distributions) as well as quantum dot (QD) ensemble inhomogeneity and state separation using a Monte Carlo model. Using this model, we show that the dominant factor determining the carrier distribution function is the free carrier effects and not the choice of carrier statistics. By using empirical values of the free-carrier-induced shift and broadening, good agreement is obtained with experimental data of QD materials obtained under electrical injection for both extreme cases of carrier statistics.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, for a given autocorrelation function, the mean and the variance of the number of overshoots, and hence a frequency distribution for overshoots, can be estimated.
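A minimal simulation in the same spirit, assuming an AR(1) autocorrelation structure purely for illustration, generates records of a stationary Gaussian process and tabulates the empirical frequency distribution of threshold overshoots:

```python
# Simulate a stationary Gaussian AR(1) process (assumed autocorrelation form) and
# count level upcrossings ("overshoots") per record; tabulate their frequencies.
import numpy as np

rng = np.random.default_rng(3)
phi, n, n_records, level = 0.9, 2000, 500, 1.0

counts = []
for _ in range(n_records):
    e = rng.normal(scale=np.sqrt(1 - phi**2), size=n)   # innovations give unit process variance
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    up = np.sum((x[:-1] < level) & (x[1:] >= level))     # upcrossings of the chosen level
    counts.append(up)

vals, freq = np.unique(counts, return_counts=True)
print(dict(zip(vals.tolist(), (freq / n_records).round(3).tolist())))
```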
NASA Astrophysics Data System (ADS)
Fioretti, Guido
2007-02-01
The production function maps the inputs of a firm or a productive system onto its outputs. This article expounds generalizations of the production function that include state variables, organizational structures and increasing returns to scale. These extensions are needed in order to explain the regularities of the empirical distributions of certain economic variables.
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
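The article works in a spreadsheet; an equivalent Monte Carlo propagation can be sketched in a few lines of Python. The functional relationship, the input values, and their standard uncertainties below are illustrative assumptions only.

```python
# Monte Carlo propagation of input uncertainties through a functional relationship,
# mirroring the spreadsheet MCS procedure described. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Hypothetical inputs: an analyte result A and an empirically derived 'constant' k,
# each with a standard uncertainty (e.g., taken from IQC data), assumed Gaussian.
A = rng.normal(loc=5.2, scale=0.15, size=N)
k = rng.normal(loc=1.87, scale=0.04, size=N)

y = k * A            # assumed functional relationship for the measurand
print(f"y = {y.mean():.3f}, combined standard uncertainty u(y) = {y.std(ddof=1):.3f}")
print("95% interval:", np.percentile(y, [2.5, 97.5]).round(3))
```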
Characterizing short-term stability for Boolean networks over any distribution of transfer functions
Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...
2016-07-05
Here we present a characterization of short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof for this formula and empirically show that its predictions are accurate. Previous characterizations apply only to the special case of balanced families; it has been observed that they fail for unbalanced families, yet such families are widespread in real biological networks.
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the probability distribution function shape (empirical distribution functions and fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters, is studied. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter is described here using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are of significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters.
Foundationalism and Neuroscience; Silence and Language
ERIC Educational Resources Information Center
Keestra, Machiel; Cowley, Stephen J.
2009-01-01
Neuroscience offers more than new empirical evidence about the details of cognitive functions such as language, perception and action. Since it also shows many functions to be highly distributed, interconnected and dependent on mechanisms at different levels of processing, it challenges concepts that are traditionally used to describe these…
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
Kendler, K S
2012-04-01
Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind-brain system (brain=hardware, mind=software). These mutually reinforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how 'difference-makers' (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which, when they dysfunction, can be cleanly divided into those with software versus hardware problems? I propose 11 categories of difference-makers for psychiatric illness, from molecular genetics through culture, and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic-functional/hardware-software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but on how the difference-makers for psychiatric illness are in fact distributed.
NASA Technical Reports Server (NTRS)
Crane, R. K.
1975-01-01
An experiment was conducted to study the relations between the empirical distribution functions of reflectivity at specified locations above the surface and the corresponding functions at the surface. A bistatic radar system was used to measure continuously the scattering cross section per unit volume at heights of 3 and 6 km. A frequency of 3.7 GHz was used in the tests. It was found that the distribution functions for reflectivity may significantly change with height at heights below the level of the melting layer.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
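The Monte Carlo p-value technique mentioned as method 1 can be sketched generically as follows; the density-based empirical likelihood statistic itself is not reproduced, and the toy permutation test of a two-sample mean difference is purely an assumed example.

```python
# Generic Monte Carlo p-value: simulate the test statistic under the null many times
# and take the (add-one corrected) exceedance fraction. Statistic and null sampler
# here are illustrative stand-ins, not the paper's density-based EL ratio.
import numpy as np

def mc_pvalue(stat_obs, stat_fn, null_sampler, n_mc=10_000, rng=None):
    rng = rng or np.random.default_rng()
    sims = np.array([stat_fn(null_sampler(rng)) for _ in range(n_mc)])
    return (1 + np.sum(sims >= stat_obs)) / (1 + n_mc)

# Toy example: permutation null for a two-sample difference in means.
rng = np.random.default_rng(10)
x, y = rng.normal(0.3, 1, 40), rng.normal(0.0, 1, 40)
pooled = np.concatenate([x, y])
obs = abs(x.mean() - y.mean())
p = mc_pvalue(obs,
              lambda d: abs(d[:40].mean() - d[40:].mean()),
              lambda r: r.permutation(pooled),
              rng=rng)
print(f"Monte Carlo p-value ~ {p:.3f}")
```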
NASA Astrophysics Data System (ADS)
Nerantzaki, Sofia; Papalexiou, Simon Michael
2017-04-01
Identifying precisely the distribution tail of a geophysical variable is tough, or even impossible. First, the tail is the part of the distribution for which we have the least empirical information available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records were examined, from all over the world and with sample sizes over 100 years, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
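A hedged sketch of the MEF diagnostic, using a synthetic lognormal "rainfall" sample and a simple regression slope (without the confidence intervals constructed in the study), might look like this:

```python
# Mean Excess Function diagnostic: mean exceedance above a range of thresholds; the
# slope of a fitted line indicates tail type (an exponential tail gives zero slope).
# The lognormal sample and threshold grid are assumptions for illustration.
import numpy as np

def mean_excess(sample, thresholds):
    return np.array([sample[sample > u].mean() - u for u in thresholds])

rng = np.random.default_rng(5)
rain = rng.lognormal(mean=1.0, sigma=1.0, size=5000)          # synthetic "daily rainfall"
u = np.quantile(rain, np.linspace(0.5, 0.95, 30))
me = mean_excess(rain, u)
slope = np.polyfit(u, me, 1)[0]
print(f"MEF slope = {slope:.3f}  (> 0 suggests a heavy, sub-exponential tail)")
```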
Ozone data and mission sampling analysis
NASA Technical Reports Server (NTRS)
Robbins, J. L.
1980-01-01
A methodology was developed to analyze discrete data obtained from the global distribution of ozone. Statistical analysis techniques were applied to describe the distribution of data variance in terms of empirical orthogonal functions and components of spherical harmonic models. The effects of uneven data distribution and missing data were considered. Data fill based on the autocorrelation structure of the data is described. Computer coding of the analysis techniques is included.
Scaling in the distribution of intertrade durations of Chinese stocks
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing
2008-10-01
The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
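A minimal sketch of the Weibull fit to normalized durations, compared here against a plain exponential by log-likelihood (the q-exponential comparison is not reproduced), with synthetic data standing in for the limit order book records:

```python
# Maximum-likelihood Weibull fit to normalized waiting times, with a log-likelihood
# comparison against an exponential fit. Durations are synthetic, not exchange data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
durations = rng.weibull(0.7, size=10_000)                  # synthetic intertrade durations
durations /= durations.mean()                              # normalize as in the paper

c, loc, scale = stats.weibull_min.fit(durations, floc=0)
ll_weibull = stats.weibull_min.logpdf(durations, c, loc, scale).sum()
ll_expon = stats.expon.logpdf(durations, 0, durations.mean()).sum()
print(f"Weibull shape={c:.2f}, log-likelihood gain over exponential = {ll_weibull - ll_expon:.1f}")
```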
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
ERIC Educational Resources Information Center
Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose
2004-01-01
Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…
Flavor dependence of the pion and kaon form factors and parton distribution functions
Hutauruk, Parada T. P.; Cloët, Ian C.; Thomas, Anthony W.
2016-09-01
The separate quark flavor contributions to the pion and kaon valence quark distribution functions are studied, along with the corresponding electromagnetic form factors in the space-like region. The calculations are made using the solution of the Bethe-Salpeter equation for the model of Nambu and Jona-Lasinio with proper-time regularization. Both the pion and kaon form factors and the valence quark distribution functions reproduce many features of the available empirical data. The larger mass of the strange quark naturally explains the empirical fact that the ratio u_{K+}(x)/u_{π+}(x) drops below unity at large x, with a value of approximately M_u^2/M_s^2 as x → 1. With regard to the elastic form factors we report a large flavor dependence, with the u-quark contribution to the kaon form factor being an order of magnitude smaller than that of the s-quark at large Q^2, which may be a sensitive measure of confinement effects in QCD. Surprisingly though, the total K+ and π+ form factors differ by only 10%. Lastly, in general we find that flavor breaking effects are typically around 20%.
Estimation of two ordered mean residual lifetime functions.
Ebrahimi, N
1993-06-01
In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
Long memory behavior of returns after intraday financial jumps
NASA Astrophysics Data System (ADS)
Behfar, Stefan Kambiz
2016-11-01
In this paper, the characterization of intraday financial jumps and the time dynamics of returns after jumps are investigated, and it is shown analytically and empirically that intraday jumps are power-law distributed with exponent 1 < μ < 2; in addition, returns after jumps show long-memory behavior. In the theory of finance, it is important to be able to distinguish between jumps and continuous sample path price movements, and this can be achieved by introducing a statistical test via calculating sums of products of returns over small periods of time. In the case of having a jump, the null hypothesis of the normality test is rejected; this is based on the idea that returns are composed of a mixture of normally distributed and power-law distributed data (∼ 1/r^{1+μ}). The probability of rejection of the null hypothesis is a function of μ, which is equal to one for 1 < μ < 2 within a large intraday sample size M. To test this idea empirically, we downloaded S&P500 index data for both periods of 1997-1998 and 2014-2015, and showed that the complementary cumulative distribution function of jump returns is power-law distributed with exponent 1 < μ < 2. There are far more jumps in 1997-1998 as compared to 2015-2016, and the power-law exponent in 2015-2016 is greater than the one in 1997-1998. Assuming that i.i.d. returns generally follow a Poisson distribution, if the jump is a causal factor, high returns after jumps are the effect; we show that returns caused by a jump decay as a power-law distribution. To test this idea empirically, we average over the time dynamics of all days; the superposed time dynamics after a jump therefore represent a power law, which indicates that there is a long memory with a power-law distribution of returns after jumps.
Observations of the directional distribution of the wind energy input function over swell waves
NASA Astrophysics Data System (ADS)
Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.
2016-02-01
Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β∝cos3.6θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function, and predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
Systems of frequency distributions for water and environmental engineering
NASA Astrophysics Data System (ADS)
Singh, Vijay P.
2018-09-01
A wide spectrum of frequency distributions are used in hydrologic, hydraulic, environmental and water resources engineering. These distributions may have different origins, are based on different hypotheses, and belong to different generating systems. Review of literature suggests that different systems of frequency distributions employed in science and engineering in general and environmental and water engineering in particular have been derived using different approaches which include (1) differential equations, (2) distribution elasticity, (3) genetic theory, (4) generating functions, (5) transformations, (6) Bessel function, (7) expansions, and (8) entropy maximization. This paper revisits these systems of distributions and discusses the hypotheses that are used for deriving these systems. It also proposes, based on empirical evidence, another general system of distributions and derives a number of distributions from this general system that are used in environmental and water engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.R.; Heger, A.S.; Koen, B.V.
1984-04-01
This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or through a parametric class of prior distributions.
Methodological Article: A Brief Taxometrics Primer
ERIC Educational Resources Information Center
Beauchaine, Theodore P.
2007-01-01
Taxometric procedures provide an empirical means of determining which psychiatric disorders are typologically distinct from normal behavioral functioning. Although most disorders reflect extremes along continuously distributed behavioral traits, identifying those that are discrete has important implications for accurate diagnosis, effective…
A Geometric View of the Mean of a Set of Numbers
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2016-01-01
The sample mean is sometimes depicted as a fulcrum placed under the Dot plot. We provide an alternative geometric visualization of the sample mean using the empirical cumulative distribution function or the cumulative histogram data.
Model-Free Feature Screening for Ultrahigh Dimensional Discriminant Analysis
Cui, Hengjian; Li, Runze
2014-01-01
This work is concerned with marginal sure independence feature screening for ultra-high dimensional discriminant analysis. The response variable is categorical in discriminant analysis. This enables us to use the conditional distribution function to construct a new index for feature screening. In this paper, we propose a marginal feature screening procedure based on the empirical conditional distribution function. We establish the sure screening and ranking consistency properties for the proposed procedure without assuming any moment condition on the predictors. The proposed procedure enjoys several appealing merits. First, it is model-free in that its implementation does not require specification of a regression model. Second, it is robust to heavy-tailed distributions of predictors and the presence of potential outliers. Third, it allows the categorical response to have a diverging number of classes in the order of O(n^κ) with some κ ≥ 0. We assess the finite sample property of the proposed procedure by Monte Carlo simulation studies and numerical comparison. We further illustrate the proposed methodology by empirical analyses of two real-life data sets. PMID:26392643
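The screening idea can be illustrated with a rough sketch: for each feature, measure how far the class-conditional empirical CDFs deviate from the pooled empirical CDF, and rank features by that deviation. This construction is illustrative and not the paper's exact statistic; the data are synthetic.

```python
# Illustrative ECDF-based screening index (not the paper's statistic): a weighted
# mean-squared difference between class-conditional and pooled empirical CDFs.
import numpy as np

def screening_index(x, y):
    grid = np.sort(x)
    F_all = np.searchsorted(grid, grid, side="right") / x.size     # pooled ECDF at the data points
    idx = 0.0
    for cls in np.unique(y):
        xc = x[y == cls]
        F_cls = np.searchsorted(np.sort(xc), grid, side="right") / xc.size
        idx += (xc.size / x.size) * np.mean((F_cls - F_all) ** 2)
    return idx

rng = np.random.default_rng(11)
y = rng.integers(0, 2, size=300)
informative = rng.normal(loc=y, scale=1.0)        # distribution shifts with the class label
noise = rng.normal(size=300)                      # unrelated to the class label
print(screening_index(informative, y), ">", screening_index(noise, y))
```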
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
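The two proposed ECDF-based statistics are easy to compute; a minimal sketch with a synthetic, non-zero-centered error sample and an assumed threshold is given below.

```python
# From the empirical CDF of unsigned errors: (1) the probability that a new
# calculation has |error| below a chosen threshold, and (2) the error amplitude
# not exceeded at a chosen confidence level. Errors and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(7)
errors = rng.normal(loc=0.3, scale=1.0, size=400)     # synthetic signed model errors
abs_err = np.sort(np.abs(errors))

eta = 1.0                                             # chosen accuracy threshold (assumed units)
p_below = np.searchsorted(abs_err, eta, side="right") / abs_err.size
q95 = np.quantile(abs_err, 0.95)
print(f"P(|error| < {eta}) = {p_below:.2f};  95% of errors are below {q95:.2f}")
```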
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
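A hedged sketch of the estimation idea follows, minimizing the Kolmogorov-Smirnov discrepancy over the three Weibull parameters with SciPy's Powell option standing in for the Powell procedure used in the paper; the Anderson-Darling variant and the paper's failure data are not reproduced.

```python
# Minimize an EDF discrepancy (here the KS statistic) over the three-parameter
# Weibull (location, scale, shape) with a derivative-free optimizer.
import numpy as np
from scipy import stats, optimize

def ks_stat(params, data):
    loc, scale, shape = params
    if scale <= 0 or shape <= 0 or loc >= data.min():
        return 1e6                                   # penalize infeasible parameter sets
    return stats.kstest(data, "weibull_min", args=(shape, loc, scale)).statistic

rng = np.random.default_rng(8)
failures = stats.weibull_min.rvs(2.0, loc=1.0, scale=3.0, size=60, random_state=rng)  # synthetic data

x0 = np.array([0.5 * failures.min(), failures.std(), 1.5])   # rough starting guess
res = optimize.minimize(ks_stat, x0, args=(failures,), method="Powell")
print("estimated (location, scale, shape):", np.round(res.x, 3))
```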
Empirical study on dyad act-degree distribution in some collaboration networks
NASA Astrophysics Data System (ADS)
Chang, Hui; Zhang, Pei-Pei; He, Yue; He, Da-Ren
2006-03-01
We (and cooperators) suggest studying the evolution of the extended collaboration networks by a dyad-act organizing model. The analytic and numeric studies of the model lead to a conclusion that most of the collaboration networks should show a dyad act-degree distribution (how many acts a dyad belongs to) between a power law and an exponential function, which can be described by a shifted power law. We have done an empirical study on dyad act-degree distribution in some collaboration networks. They are: the train networks in China, the bus network of Beijing, and traditional Chinese medical prescription formulation network. The results show good agreement with this conclusion. We also discuss what dyad act-degree implies in these networks and what are the possible applications of the study. The details will be published elsewhere.
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
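For comparison, a minimal one-sided confidence band built from the empirical CDF and a one-sided Kolmogorov-Smirnov (DKW-type) critical distance is sketched below; the generalized weighted distance and the Poisson-process approximation of the paper are not reproduced, and the exponential sample is synthetic.

```python
# One-sided lower confidence band for a CDF from the empirical CDF and the
# one-sided DKW inequality: P(sup(F_n - F) > eps) <= exp(-2 n eps^2).
import numpy as np

rng = np.random.default_rng(9)
x = np.sort(rng.exponential(size=200))               # synthetic i.i.d. sample
n = x.size
ecdf = np.arange(1, n + 1) / n

alpha = 0.05
d_alpha = np.sqrt(-np.log(alpha) / (2 * n))          # one-sided critical distance
lower_band = np.clip(ecdf - d_alpha, 0, 1)           # F(x) >= lower_band with confidence 1 - alpha
print(f"n={n}, one-sided band width d={d_alpha:.3f}")
```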
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter is extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
2012-03-01
Empirical Analysis of Optical Attenuator Performance in Quantum Key Distribution Systems Using a ... (AFIT/GCS/ENG/12-01; distribution is unlimited). ... challenging as the complexity of actual implementation specifics is considered. Two components common to most quantum key distribution ...
[Rank distributions in community ecology from the statistical viewpoint].
Maksimov, V N
2004-01-01
Traditional statistical methods for defining empirical distribution functions of species abundance (population size, biomass, production, etc.) in a community are applicable to processing the multivariate data contained in these quantitative community indices. In particular, evaluating the moments of the distribution suffices to condense the data contained in a list of species and their abundances. At the same time, the species in the list should be ranked in ascending rather than descending order of abundance, and the distribution models should be analyzed on the basis of data on abundant species only.
NASA Technical Reports Server (NTRS)
Freilich, Michael H.; Dunbar, R. Scott
1993-01-01
Calculation of accurate vector winds from scatterometers requires knowledge of the relationship between backscatter cross-section and the geophysical variable of interest. As the detailed dynamics of wind generation of centimetric waves and radar-sea surface scattering at moderate incidence angles are not well known, empirical scatterometer model functions relating backscatter to winds must be developed. Less well appreciated is the fact that, given an accurate model function and some knowledge of the dominant scattering mechanisms, significant information on the amplitudes and directional distributions of centimetric roughness elements on the sea surface can be inferred. Accurate scatterometer model functions can thus be used to investigate wind generation of short waves under realistic conditions. The present investigation involves developing an empirical model function for the C-band (5.3 GHz) ERS-1 scatterometer and comparing Ku-band model functions with the C-band model to infer information on the two-dimensional spectrum of centimetric roughness elements in the ocean. The C-band model function development is based on collocations of global backscatter measurements with operational surface analyses produced by meteorological agencies. Strengths and limitations of the method are discussed, and the resulting model function is validated in part through comparison with the actual distributions of backscatter cross-section triplets. Details of the directional modulation as well as the wind speed sensitivity at C-band are investigated. Analysis of persistent outliers in the data is used to infer the magnitudes of non-wind effects (such as atmospheric stratification, swell, etc.). The ERS-1 C-band instrument and the Seasat Ku-band (14.6 GHz) scatterometer both imaged waves of approximately 3.4 cm wavelength, assuming that Bragg scattering is the dominant mechanism. Comparisons of the C-band and Ku-band model functions are used both to test the validity of the postulated Bragg mechanism and to investigate the directional distribution of the imaged waves under a variety of conditions where Bragg scatter is dominant.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
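The classical statistic discussed above (Kuiper's variant of the Kolmogorov-Smirnov approach), as opposed to the paper's proposed test, can be computed as in the following sketch; the normal reference distribution and the sample are illustrative.

    import numpy as np
    from scipy.stats import norm

    def kuiper_statistic(sample, cdf):
        """Kuiper statistic V = D+ + D- of a sample against a specified CDF."""
        x = np.sort(sample)
        n = len(x)
        F = cdf(x)
        d_plus = np.max(np.arange(1, n + 1) / n - F)
        d_minus = np.max(F - np.arange(0, n) / n)
        return d_plus + d_minus

    draws = np.random.default_rng(2).normal(size=500)
    print(kuiper_statistic(draws, norm.cdf))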
Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning
NASA Astrophysics Data System (ADS)
Sagun, Levent
This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions defined on high-dimensional domains have a narrow band of values whose pre-image contains the bulk of their critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold, justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, a comparative analysis of gradient descent and its stochastic version shows that in high-dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data are sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses, and a Gaussian-like distribution that appears in the conjugate gradient method, deep learning with MNIST, and random input data. Following the universality phenomenon, the Hessian of deep learning loss functions is studied. The spectrum is seen to be composed of two parts: the bulk, which is concentrated around zero, and the edges, which are scattered away from zero. Empirical evidence is presented that the bulk indicates how over-parametrized the system is, and that the edges depend on the input data. Furthermore, an algorithm is proposed to explore such large-dimensional, degenerate landscapes and locate solutions with decent generalization properties. Finally, it is demonstrated how the new method can explain the empirical success of some recent methods proposed for distributed deep learning. In the second part, two applied machine learning problems are studied that are complementary to the machine learning problems of part I. First, US asylum application cases are studied using random forests on data from the past twenty years. Using only features available up to when a case opens, the algorithm can predict the outcome of the case with 80% accuracy. Next, a particular question-and-answer system is studied: questions collected from the Jeopardy! show are fed to Google, and the results are parsed by a recurrent neural network to produce the answer to the original question. Close to 50% accuracy is achieved, whereas the human-level benchmark is just above 60%.
Pion and kaon valence-quark parton quasidistributions
NASA Astrophysics Data System (ADS)
Xu, Shu-Sheng; Chang, Lei; Roberts, Craig D.; Zong, Hong-Shi
2018-05-01
Algebraic Ansätze for the Poincaré-covariant Bethe-Salpeter wave functions of the pion and kaon are used to calculate their light-front wave functions, parton distribution amplitudes, parton quasidistribution amplitudes, valence parton distribution functions, and parton quasidistribution functions (PqDFs). The light-front wave functions are broad, concave functions, and the scale of flavor-symmetry violation in the kaon is roughly 15%, being set by the ratio of emergent masses in the s- and u-quark sectors. Parton quasidistribution amplitudes computed with longitudinal momentum Pz=1.75 GeV provide a semiquantitatively accurate representation of the objective parton distribution amplitude, but even with Pz=3 GeV, they cannot provide information about this amplitude's endpoint behavior. On the valence-quark domain, similar outcomes characterize PqDFs. In this connection, however, the ratio of kaon-to-pion u-quark PqDFs is found to provide a good approximation to the true parton distribution function ratio on 0.4 ≲ x ≲ 0.8, suggesting that with existing resources computations of ratios of parton quasidistributions can yield results that support empirical comparison.
Bounds of memory strength for power-law series.
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
Tests of Fit for Asymmetric Laplace Distributions with Applications on Financial Data
NASA Astrophysics Data System (ADS)
Fragiadakis, Kostas; Meintanis, Simos G.
2008-11-01
New goodness-of-fit tests for the family of asymmetric Laplace distributions are constructed. The proposed tests are based on a weighted integral incorporating the empirical characteristic function of suitably standardized data, and can be written in a closed form appropriate for computer implementation. Monte Carlo results show that the new procedures are competitive with classical goodness-of-fit methods. Applications with financial data are also included.
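A heavily simplified sketch of a weighted-integral statistic built from the empirical characteristic function is given below; it uses the symmetric standard Laplace characteristic function 1/(1+t^2) as a stand-in for the asymmetric Laplace family, with an assumed Gaussian weight, so it is not the test proposed in the paper.

    import numpy as np

    def ecf_laplace_statistic(sample, t_max=10.0, n_t=2001):
        """Weighted L2 distance between empirical and Laplace characteristic functions."""
        b = np.mean(np.abs(sample - np.median(sample)))       # Laplace scale estimate
        x = (sample - np.median(sample)) / b                  # standardize to scale ~1
        t = np.linspace(-t_max, t_max, n_t)
        ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)        # empirical characteristic function
        cf0 = 1.0 / (1.0 + t ** 2)                            # standard Laplace CF
        weight = np.exp(-t ** 2)                              # assumed weight function
        return len(x) * np.trapz(np.abs(ecf - cf0) ** 2 * weight, t)

    print(ecf_laplace_statistic(np.random.default_rng(3).laplace(size=300)))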
Statistical Tests Black swans or dragon-kings? A simple test for deviations from the power law★
NASA Astrophysics Data System (ADS)
Janczura, J.; Weron, R.
2012-05-01
We develop a simple test for deviations from power-law tails and, in fact, from the tails of any distribution. We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or 'only' as black swans.
Impact and explosion crater ejecta, fragment size, and velocity
NASA Technical Reports Server (NTRS)
Okeefe, J. D.; Ahrens, T. J.
1983-01-01
A model was developed for the mass distribution of fragments that are ejected at a given velocity for impact and explosion craters. The model is semi-empirical in nature and is derived from (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, and (4) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is nearly monodisperse, e.g., 20% of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times the mass of the largest fragment moving at that velocity. Using this model, the largest fragment that can be ejected from asteroids, the moon, Mars, and Earth is calculated as a function of crater diameter. In addition, the internal energy of ejecta versus ejecta velocity is found. The internal energy of fragments having velocities exceeding the escape velocity of the moon will exceed the energy required for incipient melting for solid silicates and thus constrains the maximum ejected solid fragment size.
Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process
Finley, Benjamin J.; Kilkki, Kalevi
2014-01-01
The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621
Stevens, Richard D.; Tello, J. Sebastián; Gavilanez, María Mercedes
2013-01-01
Inference involving diversity gradients typically is gathered by mechanistic tests involving single dimensions of biodiversity such as species richness. Nonetheless, because traits such as geographic range size, trophic status or phenotypic characteristics are tied to a particular species, mechanistic effects driving broad diversity patterns should manifest across numerous dimensions of biodiversity. We develop an approach of stronger inference based on numerous dimensions of biodiversity and apply it to evaluate one such putative mechanism: the mid-domain effect (MDE). Species composition of 10,000-km2 grid cells was determined by overlaying geographic range maps of 133 noctilionoid bat taxa. We determined empirical diversity gradients in the Neotropics by calculating species richness and three indices each of phylogenetic, functional and phenetic diversity for each grid cell. We also created 1,000 simulated gradients of each examined metric of biodiversity based on a MDE model to estimate patterns expected if species distributions were randomly placed within the Neotropics. For each simulation run, we regressed the observed gradient onto the MDE-expected gradient. If a MDE drives empirical gradients, then coefficients of determination from such an analysis should be high, the intercept no different from zero and the slope no different than unity. Species richness gradients predicted by the MDE fit empirical patterns. The MDE produced strong spatially structured gradients of taxonomic, phylogenetic, functional and phenetic diversity. Nonetheless, expected values generated from the MDE for most dimensions of biodiversity exhibited poor fit to most empirical patterns. The MDE cannot account for most empirical patterns of biodiversity. Fuller understanding of latitudinal gradients will come from simultaneous examination of relative effects of random, environmental and historical mechanisms to better understand distribution and abundance of the current biota. PMID:23451099
Sun, J
1995-09-01
In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. Also, the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of empirical data aggregation are considered: improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is a demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution, and the density function concept is used to study its properties. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregation are proposed.
An exactly solvable coarse-grained model for species diversity
NASA Astrophysics Data System (ADS)
Suweis, Samir; Rinaldo, Andrea; Maritan, Amos
2012-07-01
We present novel analytical results concerning ecosystem species diversity that stem from a proposed coarse-grained neutral model based on birth-death processes. The relevance of the problem lies in the urgency for understanding and synthesizing both theoretical results from ecological neutral theory and empirical evidence on species diversity preservation. The neutral model of biodiversity deals with ecosystems at the same trophic level, where per capita vital rates are assumed to be species independent. Closed-form analytical solutions for the neutral theory are obtained within a coarse-grained model, where the only input is the species persistence time distribution. Our results pertain to: the probability distribution function of the number of species in the ecosystem, both in transient and in stationary states; the n-point connected time correlation function; and the survival probability, defined as the distribution of time spans to local extinction for a species randomly sampled from the community. Analytical predictions are also tested on empirical data from an estuarine fish ecosystem. We find that emerging properties of the ecosystem are very robust and do not depend on specific details of the model, with implications for biodiversity and conservation biology.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using the predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
Empirical likelihood-based tests for stochastic ordering
BARMI, HAMMOU EL; MCKEAGUE, IAN W.
2013-01-01
This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to the physical model, and the statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Reconstruction of far-field tsunami amplitude distributions from earthquake sources
Geist, Eric L.; Parsons, Thomas E.
2016-01-01
The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
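The first stage described above (maximum-likelihood fitting of a tapered Pareto distribution to station amplitudes) can be sketched as follows, using the survival function S(a) = (a_t/a)^beta exp((a_t - a)/a_c); the threshold, data, and starting values are illustrative assumptions rather than the paper's tide gauge catalogs.

    import numpy as np
    from scipy.optimize import minimize

    a_t = 0.05                                        # threshold amplitude in metres (assumed)
    rng = np.random.default_rng(4)
    amps = a_t * (1.0 + rng.pareto(1.0, size=300))    # stand-in amplitude observations

    def neg_log_likelihood(params):
        beta, a_c = params                            # power-law exponent, corner amplitude
        if beta <= 0 or a_c <= 0:
            return np.inf
        log_pdf = (np.log(beta / amps + 1.0 / a_c)
                   + beta * np.log(a_t / amps) + (a_t - amps) / a_c)
        return -np.sum(log_pdf)

    fit = minimize(neg_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
    print("beta, corner amplitude =", fit.x)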
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1986-01-01
Directional ocean wave spectra were derived from Shuttle Imaging Radar (SIR-B) imagery in regions where nearly simultaneous aircraft-based measurements of the wave spectra were also available as part of the NASA Shuttle Mission 41G experiments. The SIR-B response to a coherently speckled scene is used to estimate the stationary system transfer function in the 15 even terms of an eighth-order two-dimensional polynomial. Surface elevation contours are assigned to SIR-B ocean scenes Fourier filtered using an empirical model of the modulation transfer function calibrated with independent measurements of wave height. The empirical measurements of the wave height distribution are illustrated for a variety of sea states.
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
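One way to estimate parameters from the empirical characteristic function, as described above, is to fit a model characteristic function to it by least squares on a grid of frequencies. The sketch below assumes a gamma-distributed stationary amplitude model phi(t) = (1 - i*s*t)^(-g) and synthetic data; it is not the full shot-noise formulation of the paper.

    import numpy as np
    from scipy.optimize import least_squares

    signal = np.random.default_rng(5).gamma(shape=2.0, scale=1.5, size=10000)  # synthetic signal
    t = np.linspace(0.05, 5.0, 100)
    ecf = np.exp(1j * np.outer(t, signal)).mean(axis=1)        # empirical characteristic function

    def residuals(params):
        g, s = params                                          # shape and scale of the model
        model = (1.0 - 1j * s * t) ** (-g)
        diff = model - ecf
        return np.concatenate([diff.real, diff.imag])

    fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([1e-3, 1e-3], [50.0, 50.0]))
    print("shape, scale =", fit.x)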
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
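A minimal sketch of the kind of Monte Carlo mean-squared-error comparison described above is shown below, contrasting the posterior-mean Bayes estimator under a gamma prior with the maximum-likelihood estimator; the prior hyperparameters, sample size, and true intensity are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    a, b = 2.0, 1.0            # gamma prior: shape a, rate b (assumed)
    n, trials = 10, 20000      # sample size per trial and number of Monte Carlo replications
    lam_true = 1.5             # true Poisson intensity (assumed)

    mse_bayes = mse_mle = 0.0
    for _ in range(trials):
        x = rng.poisson(lam_true, size=n)
        lam_mle = x.mean()                         # maximum-likelihood estimate
        lam_bayes = (a + x.sum()) / (b + n)        # posterior mean under the gamma prior
        mse_mle += (lam_mle - lam_true) ** 2
        mse_bayes += (lam_bayes - lam_true) ** 2

    print("MSE (MLE):  ", mse_mle / trials)
    print("MSE (Bayes):", mse_bayes / trials)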
Staver, A Carla; Archibald, Sally; Levin, Simon
2011-05-01
Savannas are known as ecosystems with tree cover below climate-defined equilibrium values. However, a predictive framework for understanding constraints on tree cover is lacking. We present (a) a spatially extensive analysis of tree cover and fire distribution in sub-Saharan Africa, and (b) a model, based on empirical results, demonstrating that savanna and forest may be alternative stable states in parts of Africa, with implications for understanding savanna distributions. Tree cover does not increase continuously with rainfall, but rather is constrained to low (<50%, "savanna") or high tree cover (>75%, "forest"). Intermediate tree cover rarely occurs. Fire, which prevents trees from establishing, differentiates high and low tree cover, especially in areas with rainfall between 1000 mm and 2000 mm. Fire is less important at low rainfall (<1000 mm), where rainfall limits tree cover, and at high rainfall (>2000 mm), where fire is rare. This pattern suggests that complex interactions between climate and disturbance produce emergent alternative states in tree cover. The relationship between tree cover and fire was incorporated into a dynamic model including grass, savanna tree saplings, and savanna trees. Only recruitment from sapling to adult tree varied depending on the amount of grass in the system. Based on our empirical analysis and previous work, fires spread only at tree cover of 40% or less, producing a sigmoidal fire probability distribution as a function of grass cover and therefore a sigmoidal sapling to tree recruitment function. This model demonstrates that, given relatively conservative and empirically supported assumptions about the establishment of trees in savannas, alternative stable states for the same set of environmental conditions (i.e., model parameters) are possible via a fire feedback mechanism. Integrating alternative stable state dynamics into models of biome distributions could improve our ability to predict changes in biome distributions and in carbon storage under climate and global change scenarios.
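A minimal sketch of a grass-sapling-tree system with a sigmoidal sapling-to-tree recruitment term of the kind described above is given below; the rate constants, sigmoid parameters, and initial conditions are illustrative assumptions, not the authors' calibrated model.

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, mu, nu = 0.5, 0.1, 0.02                    # birth and mortality rates (assumed)
    w_hi, w_lo, g_star, k = 0.9, 0.05, 0.4, 20.0     # recruitment sigmoid parameters (assumed)

    def recruitment(grass):
        # High sapling-to-tree recruitment when grass (fuel) is scarce, low when it is abundant
        return w_lo + (w_hi - w_lo) / (1.0 + np.exp(k * (grass - g_star)))

    def rhs(t, y):
        saplings, trees = y
        grass = max(1.0 - saplings - trees, 0.0)
        d_saplings = beta * grass * trees - recruitment(grass) * saplings - mu * saplings
        d_trees = recruitment(grass) * saplings - nu * trees
        return [d_saplings, d_trees]

    sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.2])
    print("final sapling and tree cover:", sol.y[:, -1])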
NASA Astrophysics Data System (ADS)
Cianciara, Aleksander
2016-09-01
The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in an analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying a correct model of the statistical distribution of the data, because these methods do not use the measurement data directly but rather their statistical distributions, e.g., in the method based on hazard analysis or in the one that uses maximum-value statistics.
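The verification step described above can be sketched as follows: fit a two-parameter Weibull to filtered inter-event times and apply the Kolmogorov-Smirnov test. The data below are synthetic, and because the parameters are estimated from the same sample, the reported p-value is only approximate.

    import numpy as np
    from scipy import stats

    # Stand-in inter-event times: gaps of a homogeneous Poisson process
    events = np.sort(np.random.default_rng(7).uniform(0.0, 1000.0, size=400))
    dt = np.diff(events)

    shape, loc, scale = stats.weibull_min.fit(dt, floc=0.0)    # two-parameter Weibull fit
    ks_stat, p_value = stats.kstest(dt, "weibull_min", args=(shape, loc, scale))
    print(f"D = {ks_stat:.3f}, approximate p = {p_value:.3f}")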
NASA Astrophysics Data System (ADS)
Contreras, Carlos; Blake, Chris; Poole, Gregory B.; Marin, Felipe
2013-04-01
We use high-resolution N-body simulations to develop a new, flexible empirical approach for measuring the growth rate from redshift-space distortions in the 2-point galaxy correlation function. We quantify the systematic error in measuring the growth rate in a 1 h⁻³ Gpc³ volume over a range of redshifts, from the dark matter particle distribution and a range of halo-mass catalogues with a number density comparable to the latest large-volume galaxy surveys such as the WiggleZ Dark Energy Survey and the Baryon Oscillation Spectroscopic Survey. Our simulations allow us to span halo masses with bias factors ranging from unity (probed by emission-line galaxies) to more massive haloes hosting luminous red galaxies. We show that the measured growth rate is sensitive to the model adopted for the small-scale real-space correlation function, and in particular that the 'standard' assumption of a power-law correlation function can result in a significant systematic error in the growth-rate determination. We introduce a new, empirical fitting function that produces results with a lower (5-10 per cent) amplitude of systematic error. We also introduce a new technique which permits the galaxy pairwise velocity distribution, the quantity which drives the non-linear growth of structure, to be measured as a non-parametric stepwise function. Our (model-independent) results agree well with an exponential pairwise velocity distribution, expected from theoretical considerations, and are consistent with direct measurements of halo velocity differences from the parent catalogues. In a companion paper, we present the application of our new methodology to the WiggleZ Survey data set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plavchan, Peter; Bilinski, Christopher
The discovery of "hot Jupiters" very close to their parent stars confirmed that Jovian planets migrate inward via several potential mechanisms. We present empirical constraints on planet migration halting mechanisms. We compute model density functions of close-in exoplanets in the orbital semi-major axis-stellar mass plane to represent planet migration that is halted via several mechanisms, including the interior 1:2 resonance with the magnetospheric disk truncation radius, the interior 1:2 resonance with the dust sublimation radius, and several scenarios for tidal halting. The models differ in the predicted power-law dependence of the exoplanet orbital semi-major axis as a function of stellar mass, and thus we also include a power-law model with the exponent as a free parameter. We use a Bayesian analysis to assess the model success in reproducing empirical distributions of confirmed exoplanets and Kepler candidates that orbit interior to 0.1 AU. Our results confirm a correlation of the halting distance with stellar mass. Tidal halting provides the best fit to the empirical distribution of confirmed Jovian exoplanets at a statistically robust level, consistent with the Kozai mechanism and the spin-orbit misalignment of a substantial fraction of hot Jupiters. We can rule out migration halting at the interior 1:2 resonances with the magnetospheric disk truncation radius and the interior 1:2 resonance with the dust disk sublimation radius, a uniform random distribution, and a distribution with no dependence on stellar mass. Note that our results do not rule out Type-II migration, but rather eliminate the role of a circumstellar disk in stopping exoplanet migration. For Kepler candidates, which have a more restricted range in stellar mass compared to confirmed planets, we are unable to discern between the tidal dissipation and magnetospheric disk truncation braking mechanisms at a statistically significant level. The power-law model favors exponents in the range of 0.38-0.9. This is larger than that predicted for tidal halting (0.23-0.33), which suggests that additional physics may be missing in the tidal halting theory.
NASA Astrophysics Data System (ADS)
Moon, Joon-Young; Kim, Junhyeok; Ko, Tae-Wook; Kim, Minkyung; Iturria-Medina, Yasser; Choi, Jee-Hyun; Lee, Joseph; Mashour, George A.; Lee, Uncheol
2017-04-01
Identifying how spatially distributed information becomes integrated in the brain is essential to understanding higher cognitive functions. Previous computational and empirical studies suggest a significant influence of brain network structure on brain network function. However, there have been few analytical approaches to explain the role of network structure in shaping regional activities and directionality patterns. In this study, analytical methods are applied to a coupled oscillator model implemented in inhomogeneous networks. We first derive a mathematical principle that explains the emergence of directionality from the underlying brain network structure. We then apply the analytical methods to the anatomical brain networks of human, macaque, and mouse, successfully predicting simulation and empirical electroencephalographic data. The results demonstrate that the global directionality patterns in resting state brain networks can be predicted solely by their unique network structures. This study forms a foundation for a more comprehensive understanding of how neural information is directed and integrated in complex brain networks.
An Empirical Mass Function Distribution
NASA Astrophysics Data System (ADS)
Murray, S. G.; Robotham, A. S. G.; Power, C.
2018-03-01
The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L⋆ galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10¹⁰-10¹³ h⁻¹ M⊙. In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as L⋆. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.
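The sketch below illustrates a four-parameter, generalized-gamma-like form of the kind the abstract describes (normalization A, characteristic mass Hs, low-mass slope alpha, cut-off steepness beta); the exact definition should be taken from the mrpy documentation, and the parameter values shown are placeholders, not fitted constraints.

    import numpy as np

    def mass_function(m, A=1e-3, Hs=1e14, alpha=-1.9, beta=0.75):
        """Illustrative dn/dm with a power-law slope and an exponential-like high-mass cut-off."""
        x = m / Hs
        return A * x ** alpha * np.exp(-x ** beta)

    m = np.logspace(10, 15, 6)          # halo masses in h^-1 Msun (placeholder grid)
    print(mass_function(m))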
EMD-WVD time-frequency distribution for analysis of multi-component signals
NASA Astrophysics Data System (ADS)
Chai, Yunzi; Zhang, Xudong
2016-10-01
Time-frequency distribution (TFD) is a two-dimensional function that indicates the time-varying frequency content of one-dimensional signals. The Wigner-Ville distribution (WVD) is an important and effective time-frequency analysis method that can efficiently show the characteristics of a mono-component signal. However, a major drawback is the appearance of extra cross-terms when multi-component signals are analyzed by the WVD. In order to eliminate the cross-terms, we first decompose signals into single-frequency components - intrinsic mode functions (IMFs) - using empirical mode decomposition (EMD), and then use the WVD to analyze each IMF. In this paper, we define this new time-frequency distribution as EMD-WVD. The experimental results show that the proposed time-frequency method can solve the cross-terms problem effectively and improve the accuracy of WVD time-frequency analysis.
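A minimal sketch of the EMD-WVD idea follows: decompose the signal into IMFs (here assuming the third-party PyEMD package) and apply a naive discrete Wigner-Ville distribution to each mono-component IMF. The test signal and the unoptimized WVD implementation are illustrative, not the authors' code.

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD                      # assumes the third-party PyEMD package

    def wigner_ville(x):
        """Naive discrete Wigner-Ville distribution of a real signal."""
        z = hilbert(x)                         # analytic signal
        n = len(z)
        wvd = np.zeros((n, n))
        for t in range(n):
            tau_max = min(t, n - 1 - t)
            tau = np.arange(-tau_max, tau_max + 1)
            kernel = np.zeros(n, dtype=complex)
            kernel[tau % n] = z[t + tau] * np.conj(z[t - tau])
            wvd[:, t] = np.real(np.fft.fft(kernel))
        return wvd

    t = np.linspace(0.0, 1.0, 256, endpoint=False)
    signal = np.cos(2 * np.pi * 20 * t) + np.cos(2 * np.pi * 60 * t ** 2)   # two components
    imfs = EMD().emd(signal)                   # decompose into mono-component IMFs
    tfds = [wigner_ville(imf) for imf in imfs] # cross-term-free time-frequency maps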
The size distributions of fragments ejected at a given velocity from impact craters
NASA Technical Reports Server (NTRS)
O'Keefe, John D.; Ahrens, Thomas J.
1987-01-01
The mass distribution of fragments that are ejected at a given velocity for impact craters is modeled to allow extrapolation of laboratory, field, and numerical results to large scale planetary events. The model is semi-empirical in nature and is derived from: (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, (4) measurements and theory of maximum ejecta size versus ejecta velocity, and (5) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is broad, e.g., 68 percent of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times a mass of the largest fragment moving at that velocity. The broad distribution suggests that in impact processes, additional comminution of ejecta occurs after the upward initial shock has passed in the process of the ejecta velocity vector rotating from an initially downward orientation. This additional comminution produces the broader size distribution in impact ejecta as compared to that obtained in simple brittle failure experiments.
Covariations in ecological scaling laws fostered by community dynamics.
Zaoli, Silvia; Giometto, Andrea; Maritan, Amos; Rinaldo, Andrea
2017-10-03
Scaling laws in ecology, intended both as functional relationships among ecologically relevant quantities and the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances, and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked has been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply, and a comprehensive empirical verification are still unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such a framework is consistent with the stationary-state statistics of a broad class of resource-limited community dynamics models, regardless of parameterization and model assumptions. We verify predicted theoretical covariations by contrasting empirical data and provide testable hypotheses for yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations.
Dolan, Paul; Tsuchiya, Aki
2009-01-01
The literature on income distribution has attempted to evaluate different degrees of inequality using a social welfare function (SWF) approach. However, it has largely ignored the source of such inequalities, and has thus failed to consider different degrees of inequity. The literature on egalitarianism has addressed issues of equity, largely in relation to individual responsibility. This paper builds upon these two literatures, and introduces individual responsibility into the SWF. Results from a small-scale study of people's preferences in relation to the distribution of health benefits are presented to illustrate how the parameter values of a SWF might be determined.
Effects of the crustal magnetic fields on the Martian atmospheric ion escape rate
NASA Astrophysics Data System (ADS)
Ramstad, R.; Barbash, S.; Futaana, Y.; Nilsson, H.; Holmstrom, M.
2015-12-01
Eight years (2007-2015) of ion flux measurements from Mars Express are used to empirically investigate the influence of the Martian crustal magnetic fields on the atmospheric ion escape rate. We combine ASPERA-3/IMA (Analyzer of Space Plasmas and Energetic Atoms/Ion Mass Analyzer) measurements taken during nominal upstream solar wind and solar Extreme Ultraviolet (EUV) conditions to compute global average ion distribution functions for varying solar zenith angles (SZA) of the strongest crustal field. Escape rates are subsequently calculated from each of the average distribution functions. A statistically significant increase in escape rate is found for high dayside SZA, compared to low SZA.
Default contagion risks in Russian interbank market
NASA Astrophysics Data System (ADS)
Leonidov, A. V.; Rumyantsev, E. L.
2016-06-01
Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on considering the bow-tie structure of the weighted oriented graph describing the structure of the interbank loans. A probabilistic model of interbank contagion explicitly taking into account the empirical bow-tie structure reflecting functionality of the corresponding nodes (borrowers, lenders, borrowers and lenders simultaneously), degree distributions and disassortativity of the interbank network under consideration based on empirical data is developed. The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests.
The Self-Organization of a Spoken Word
Holden, John G.; Rajaraman, Srinivasan
2012-01-01
Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over more simple empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean state, empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the ROSETTA spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
Survivorship analysis when cure is a possibility: a Monte Carlo study.
Goldman, A I
1984-01-01
Parametric survivorship analysis of clinical trials commonly involves the assumption of a hazard function constant with time. When the empirical curve obviously levels off, one can modify the hazard function model by use of a Gompertz or Weibull distribution with hazard decreasing over time. Some cancer treatments are thought to cure some patients within a short time of initiation. Then, instead of all patients having the same hazard, decreasing over time, a biologically more appropriate model assumes that an unknown proportion (1 - pi) has a constant high risk whereas the remaining proportion (pi) has essentially no risk. This paper discusses the maximum likelihood estimation of pi and the power curves of the likelihood ratio test. Monte Carlo studies provide results for a variety of simulated trials; empirical data illustrate the methods.
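A sketch of the maximum-likelihood estimation of the cure fraction pi in such a mixture model is given below, assuming a constant hazard for the uncured group and synthetic, right-censored data; it is not the paper's simulation design.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    n, true_pi, true_lam, followup = 300, 0.4, 0.05, 60.0   # synthetic trial settings
    cured = rng.random(n) < true_pi
    raw_times = rng.exponential(1.0 / true_lam, size=n)
    times = np.where(cured, followup, np.minimum(raw_times, followup))
    event = (~cured) & (raw_times < followup)               # True if a death is observed

    def neg_log_likelihood(params):
        pi, lam = params
        if not (0.0 < pi < 1.0) or lam <= 0.0:
            return np.inf
        log_f = np.log(1 - pi) + np.log(lam) - lam * times          # density, observed deaths
        log_s = np.log(pi + (1 - pi) * np.exp(-lam * times))        # survival, censored cases
        return -np.sum(np.where(event, log_f, log_s))

    fit = minimize(neg_log_likelihood, x0=[0.5, 0.1], method="Nelder-Mead")
    print("estimated cure fraction and hazard:", fit.x)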
NASA Astrophysics Data System (ADS)
Zerpa, L.; Gao, F.; Wang, S.
2017-12-01
There are two major types of natural gas hydrate distributions in porous media: pore filling and contact cementing. The difference between these two distribution types is related to hydrate nucleation and growth processes. In the pore-filling distribution, hydrate nucleates from a gas-dissolved aqueous phase at the grain boundary and grows away from grain contacts and surfaces into the pore space. In the contact-cementing distribution, hydrate nucleates and grows at the gas-water interface and at intergranular contacts. Previous attempts to correlate changes in porosity and permeability during hydrate formation/dissociation were based on the length difference between the pore body and pore throat, and only considered the contact-cementing hydrate distribution. This work consists of a study of mathematical models of permeability and porosity as a function of gas hydrate saturation during formation and dissociation of gas hydrates in porous media. We first derive the permeability equation for pore-filling hydrate deposition as a function of hydrate saturation. Then, a more comprehensive model considering both types of gas hydrate deposition is developed to represent changes in permeability and porosity during hydrate formation and dissociation. This results in a model that combines pore-filling and contact-cementing deposition types in the same reservoir. Finally, the TOUGH+Hydrate numerical reservoir simulator was modified to include these models to analyze the response of production and saturation during a depressurization process, considering different combinations of pore-filling and contact-cementing hydrate distributions. The empirical exponent used in the permeability adjustment factor model influences both the production profile and the saturation results. This empirical factor describes the dependence of permeability on changes in porosity caused by solid-phase formation in the porous medium. The use of the permeability exponent decreases the permeability of the system for a given hydrate saturation, which affects the hydraulic performance of the system. However, from published experimental work, there is only a rough estimation of this permeability exponent. This factor could be represented with an empirical equation if more laboratory and field data become available.
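As an illustration of a permeability adjustment factor with an empirical exponent of the kind discussed above, the sketch below uses a simple Masuda-type relation k = k0 (1 - Sh)^N; the values of k0 and N are assumptions, and using two different exponents only stands in for the pore-filling versus contact-cementing split rather than reproducing the authors' derived equations.

    import numpy as np

    def permeability(sh, k0=1.0e-13, n_exponent=10.0):
        """Effective permeability (m^2) at hydrate saturation sh (0 to 1)."""
        return k0 * (1.0 - sh) ** n_exponent

    sh = np.linspace(0.0, 0.8, 5)
    print("exponent N = 10:", permeability(sh, n_exponent=10.0))
    print("exponent N = 15:", permeability(sh, n_exponent=15.0))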
Predicting protein complex geometries with a neural network.
Chae, Myong-Ho; Krull, Florian; Lorenzen, Stephan; Knapp, Ernst-Walter
2010-03-01
A major challenge of the protein docking problem is to define scoring functions that can distinguish near-native protein complex geometries from a large number of non-native geometries (decoys) generated with noncomplexed protein structures (unbound docking). In this study, we have constructed a neural network that employs the information from atom-pair distance distributions of a large number of decoys to predict protein complex geometries. We found that docking prediction can be significantly improved using two different types of polar hydrogen atoms. To train the neural network, 2000 near-native decoys of even distance distribution were used for each of the 185 considered protein complexes. The neural network normalizes the information from different protein complexes using an additional protein complex identity input neuron for each complex. The parameters of the neural network were determined such that they mimic a scoring funnel in the neighborhood of the native complex structure. The neural network approach avoids the reference state problem, which occurs in deriving knowledge-based energy functions for scoring. We show that a distance-dependent atom pair potential performs much better than a simple atom-pair contact potential. We have compared the performance of our scoring function with other empirical and knowledge-based scoring functions such as ZDOCK 3.0, ZRANK, ITScore-PP, EMPIRE, and RosettaDock. In spite of the simplicity of the method and its functional form, our neural network-based scoring function achieves a reasonable performance in rigid-body unbound docking of proteins.
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
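A minimal sketch of the nearest-neighbor selection idea described above, assuming each document is summarized by a feature vector and that per-technique OCR error rates are known for a set of training documents. The feature representation and error table are hypothetical, and the 1-NN shortcut stands in for (it is not) the paper's empirical error minimization method.

    import numpy as np

    def select_restoration(doc_features, train_features, train_errors):
        # train_errors[i, j] = OCR error of technique j on training document i
        # (toy data; in practice these would come from measured OCR error rates).
        choices = []
        for x in doc_features:
            i = np.argmin(np.linalg.norm(train_features - x, axis=1))  # nearest training doc
            choices.append(np.argmin(train_errors[i]))                 # its best technique
        return np.array(choices)

    rng = np.random.default_rng(0)
    train_f = rng.normal(size=(50, 4))
    train_e = rng.uniform(size=(50, 3))   # 3 candidate restoration techniques
    new_f = rng.normal(size=(5, 4))
    print(select_restoration(new_f, train_f, train_e))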
NASA Astrophysics Data System (ADS)
Lillo, F.
2007-02-01
I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight into the origin of the recently empirically observed power law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power law heterogeneity of traders' investment time horizons.
Huang, Daizheng; Wu, Zhihui
2017-01-01
Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. Monthly outpatient-visit data from January 2005 to December 2013 are first retrieved as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
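The decompose–forecast–superpose structure can be sketched as follows. For brevity the sketch replaces the back-propagation network and particle swarm optimization with a plain linear autoregression per component, and uses a placeholder decomposition; both substitutions and all names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def fit_ar_forecast(x, order=3, horizon=1):
        # Least-squares autoregression of given order; iterated one-step-ahead
        # forecasts (a stand-in for the BP-ANN + PSO component model).
        X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
        y = x[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        hist = list(x[-order:])
        out = []
        for _ in range(horizon):
            nxt = float(np.dot(coef, hist[-order:]))
            out.append(nxt)
            hist.append(nxt)
        return np.array(out)

    def forecast_by_superposition(series, decompose, horizon=1):
        # Decompose the series, forecast each component, sum the forecasts
        # (the superposition step described in the abstract).
        comps = decompose(series)
        return sum(fit_ar_forecast(c, horizon=horizon) for c in comps)

    def naive_decompose(x, win=12):
        # Placeholder decomposition: a smooth trend plus a residual,
        # standing in for empirical mode decomposition.
        trend = np.convolve(x, np.ones(win) / win, mode="same")
        return [trend, x - trend]

    t = np.arange(108)
    visits = (1000 + 5 * t + 80 * np.sin(2 * np.pi * t / 12)
              + np.random.default_rng(1).normal(0, 20, t.size))
    print(forecast_by_superposition(visits, naive_decompose, horizon=3))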
Kepner, Gordon R
2014-08-27
This study uses dimensional analysis to derive the general second-order differential equation that underlies numerous physical and natural phenomena described by common mathematical functions. It eschews assumptions about empirical constants and mechanisms. It relies only on the data plot's mathematical properties to provide the conditions and constraints needed to specify a second-order differential equation that is free of empirical constants for each phenomenon. A practical example of each function is analyzed using the general form of the underlying differential equation and the observable unique mathematical properties of each data plot, including boundary conditions. This yields a differential equation that describes the relationship among the physical variables governing the phenomenon's behavior. Complex phenomena such as the Standard Normal Distribution, the Logistic Growth Function, and Hill Ligand binding, which are characterized by data plots of distinctly different sigmoidal character, are readily analyzed by this approach. It provides an alternative, simple, unifying basis for analyzing each of these varied phenomena from a common perspective that ties them together and offers new insights into the appropriate empirical constants for describing each phenomenon.
Sakschewski, Boris; von Bloh, Werner; Boit, Alice; Rammig, Anja; Kattge, Jens; Poorter, Lourens; Peñuelas, Josep; Thonicke, Kirsten
2015-01-22
Functional diversity is critical for ecosystem dynamics, stability and productivity. However, dynamic global vegetation models (DGVMs), which are increasingly used to simulate ecosystem functions under global change, condense functional diversity to plant functional types (PFTs) with constant parameters. Here, we develop an individual- and trait-based version of the DGVM LPJmL (Lund-Potsdam-Jena managed Land), called LPJmL-FIT (LPJmL with flexible individual traits), which we apply to generate plant trait maps for the Amazon basin. LPJmL-FIT incorporates empirical ranges of five traits of tropical trees extracted from the TRY global plant trait database, namely specific leaf area (SLA), leaf longevity (LL), leaf nitrogen content per area (Narea), the maximum carboxylation rate of Rubisco per leaf area (vcmax,area), and wood density (WD). To scale the individual growth performance of trees, the leaf traits are linked by trade-offs based on the leaf economics spectrum, whereas wood density is linked to tree mortality. No preselection of growth strategies takes place, because individuals with unique trait combinations are uniformly distributed at tree establishment. We validate the modeled trait distributions against empirical trait data and the modeled biomass against a remote sensing product along a climatic gradient. Including trait variability and trade-offs successfully predicts natural trait distributions and achieves a more realistic representation of functional diversity at the local to regional scale. As sites of high climatic variability, the fringes of the Amazon promote trait divergence and the coexistence of multiple tree growth strategies, while lower plant trait diversity is found in the species-rich center of the region with relatively low climatic variability. LPJmL-FIT makes it possible to test hypotheses on the effects of functional biodiversity on ecosystem functioning and to apply the DGVM to current challenges in ecosystem management from local to global scales, such as deforestation and climate change effects. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Duarte Queirós, S. M.
2005-08-01
This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with a power-law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was first proposed empirically to fit high-frequency stock traded volume distributions in financial markets and verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour presented by financial traders. Finally, NASDAQ 1 and 2 minute stock traded volume sequences and probability density functions are numerically reproduced.
NASA Astrophysics Data System (ADS)
Xia, Xintao; Wang, Zhongyu
2008-10-01
For some statistical methods of system stability analysis, it is difficult to resolve the problems of unknown probability distribution and small sample size. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear and periodic systematic errors, and some mixed systems. The method of analysis for system stability is thereby validated.
Universality classes of fluctuation dynamics in hierarchical complex systems
NASA Astrophysics Data System (ADS)
Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.
2017-03-01
A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.
The Empirical Distribution of Singletons for Geographic Samples of DNA Sequences.
Cubry, Philippe; Vigouroux, Yves; François, Olivier
2017-01-01
Rare variants are important for drawing inference about past demographic events in a species' history. A singleton is a rare variant for which genetic variation is carried by a single chromosome in a sample. How singletons are distributed across geographic space provides a local measure of genetic diversity that can be measured at the individual level. Here, we define the empirical distribution of singletons in a sample of chromosomes as the proportion of the total number of singletons that each chromosome carries, and we present a theoretical background for studying this distribution. Next, we use computer simulations to evaluate the potential for the empirical distribution of singletons to provide a description of genetic diversity across geographic space. In a Bayesian framework, we show that the empirical distribution of singletons leads to accurate estimates of the geographic origin of range expansions. We apply the Bayesian approach to estimating the origin of the cultivated plant species Pennisetum glaucum [L.] R. Br. (pearl millet) in Africa, and find support for range expansion having started from Northern Mali. Overall, we report that the empirical distribution of singletons is a useful measure to analyze results of sequencing projects based on large scale sampling of individuals across geographic space.
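A small sketch of the empirical distribution of singletons as defined above, assuming a 0/1 genotype matrix with one row per chromosome and one column per variant site; the matrix here is random toy data, not from the study.

    import numpy as np

    def singleton_distribution(genotypes):
        # genotypes: (n_chromosomes, n_sites) 0/1 array.
        # Returns each chromosome's share of the total number of singletons,
        # i.e. sites where the derived allele occurs on exactly one chromosome.
        counts = genotypes.sum(axis=0)
        singleton_sites = counts == 1
        per_chrom = genotypes[:, singleton_sites].sum(axis=1)
        return per_chrom / per_chrom.sum()

    rng = np.random.default_rng(42)
    g = (rng.random((10, 200)) < 0.05).astype(int)   # toy sample of 10 chromosomes
    print(np.round(singleton_distribution(g), 3))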
ERIC Educational Resources Information Center
Nemova, Olga A.; Retivina, Veronika V.; Kutepova, Lubov I.; Vinnikova, Irina S.; Kuznetsova, Ekaterina A.
2016-01-01
The paper considers the issue of functioning of the mechanism of formation and translation of values of labor in family. Fundamental labor values and main channels of their distribution are revealed based on empiric material. Family influence on motivation of today's Russian youth's labor behavior was determined. An intergenerational comparative…
IRT-LR-DIF with Estimation of the Focal-Group Density as an Empirical Histogram
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Item response theory-likelihood ratio-differential item functioning (IRT-LR-DIF) is used to evaluate the degree to which items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Usually, the latent distribution is presumed normal for both…
USDA-ARS?s Scientific Manuscript database
Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...
Very High-Frequency (VHF) ionospheric scintillation fading measurements at Lima, Peru
NASA Technical Reports Server (NTRS)
Blank, H. A.; Golden, T. S.
1972-01-01
During the spring equinox of 1970, scintillating signals at VHF (136.4 MHz) were observed at Lima, Peru. The transmission originated from ATS 3 and was observed through a pair of antennas spaced 1200 feet apart on an east-west baseline. The empirical data were digitized, reduced, and analyzed. The results include amplitude probability density and distribution functions, time autocorrelation functions, cross correlation functions for the spaced antennas, and appropriate spectral density functions. Results show estimates of the statistics of the ground diffraction pattern to gain insight into gross ionospheric irregularity size, and irregularity velocity in the antenna planes.
Single crystal to polycrystal neutron transmission simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dessieux, Luc Lucius; Stoica, Alexandru Dan; Bingham, Philip R.
A collection of routines for calculation of the total cross section that determines the attenuation of neutrons by crystalline solids is presented. The total cross section is calculated semi-empirically as a function of crystal structure, neutron energy, temperature, and crystal orientation. The semi-empirical formula includes the contribution of parasitic Bragg scattering to the total cross section using both the crystal's mosaic spread value and its orientation with respect to the neutron beam direction as parameters. These routines allow users to enter a distribution of crystal orientations for calculation of total cross sections of user defined powder or pseudo powder distributions, which enables simulation of non-uniformities such as texture and strain. In conclusion, the spectra for neutron transmission simulations in the neutron thermal energy range (2 meV–100 meV) are presented for single crystal and polycrystal samples and compared to measurements.
Nonparametric Bayesian inference for mean residual life functions in survival analysis.
Poynor, Valerie; Kottas, Athanasios
2018-01-19
Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
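For reference, the mean residual life function described here has the standard form (a textbook identity, not a result specific to this paper), with S denoting the survival function:

\[
m(t) \;=\; \mathbb{E}\left[\,T - t \mid T > t\,\right] \;=\; \frac{\int_t^{\infty} S(u)\,du}{S(t)}, \qquad S(t) > 0 .
\]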
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
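A compact way to state the binormal result described above (standard ROC theory; the notation is ours): if the explanatory variable is N(\mu_1, \sigma_1^2) in those with the condition and N(\mu_0, \sigma_0^2) in those without, then

\[
c \;=\; P(X_1 > X_0) \;=\; \Phi\!\left(\frac{\mu_1 - \mu_0}{\sqrt{\sigma_0^2 + \sigma_1^2}}\right),
\]

which for equal variances \(\sigma\) reduces to \(\Phi\big((\mu_1-\mu_0)/(\sigma\sqrt{2})\big) = \Phi\big(\beta\sigma/\sqrt{2}\big)\), i.e. a function of the product of the standard deviation and the log-odds ratio \(\beta\), consistent with the dependence stated in the abstract.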
NASA Astrophysics Data System (ADS)
Sazuka, Naoya
2007-03-01
We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the view that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
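For reference, the Weibull waiting-time law referred to here has the standard survival form (the exponential case is recovered when the shape parameter m = 1):

\[
P(\tau > t) \;=\; \exp\!\left[-\left(\frac{t}{\eta}\right)^{m}\right], \qquad
f(t) \;=\; \frac{m}{\eta}\left(\frac{t}{\eta}\right)^{m-1}\exp\!\left[-\left(\frac{t}{\eta}\right)^{m}\right].
\]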
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Nino-Southern Oscillation - ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, for efficient reduction of observational data sets we use complex-valued (Hilbert) empirical orthogonal functions, which by their nature are appropriate for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
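A minimal sketch of complex (Hilbert) empirical orthogonal functions of the kind used for the reduction step, assuming a real anomaly matrix with time along the first axis. It uses the analytic signal followed by an SVD, which is one standard way to compute Hilbert EOFs; it is not taken from the authors' code, and the toy data are illustrative.

    import numpy as np
    from scipy.signal import hilbert

    def hilbert_eofs(field, n_modes=3):
        # field: (n_time, n_space) real anomaly matrix.
        # Returns complex spatial patterns (rows) and complex principal
        # components of the leading modes, able to represent propagation.
        analytic = hilbert(field, axis=0)            # complexify each grid-point series
        u, s, vh = np.linalg.svd(analytic, full_matrices=False)
        pcs = u[:, :n_modes] * s[:n_modes]
        patterns = vh[:n_modes]
        return patterns, pcs

    rng = np.random.default_rng(0)
    t = np.arange(240)[:, None]
    x = np.arange(60)[None, :]
    wave = np.cos(2 * np.pi * (t / 24.0 - x / 30.0))     # toy propagating wave
    data = wave + 0.1 * rng.normal(size=wave.shape)
    patterns, pcs = hilbert_eofs(data, n_modes=2)
    print(patterns.shape, pcs.shape)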
NASA Astrophysics Data System (ADS)
Ho, Yen-Ching; Wang, Yi-Siang; Chao, Sheng D.
2017-08-01
Modeling fluid cycloalkanes with molecular dynamics simulations has proven to be a very challenging task partly because of lacking a reliable force field based on quantum chemistry calculations. In this paper, we construct an ab initio force field for fluid cyclopropane using the second-order Møller-Plesset perturbation theory. We consider 15 conformers of the cyclopropane dimer for the orientation sampling. Single-point energies at important geometries are calibrated by the coupled cluster with single, double, and perturbative triple excitation method. Dunning's correlation consistent basis sets (up to aug-cc-pVTZ) are used in extrapolating the interaction energies at the complete basis set limit. The force field parameters in a 9-site Lennard-Jones model are regressed by the calculated interaction energies without using empirical data. With this ab initio force field, we perform molecular dynamics simulations of fluid cyclopropane and calculate both the structural and dynamical properties. We compare the simulation results with those using an empirical force field and obtain a quantitative agreement for the detailed atom-wise radial distribution functions. The experimentally observed gross radial distribution function (extracted from the neutron scattering measurements) is well reproduced in our simulation. Moreover, the calculated self-diffusion coefficients and shear viscosities are in good agreement with the experimental data over a wide range of thermodynamic conditions. To the best of our knowledge, this is the first ab initio force field which is capable of competing with empirical force fields for simulating fluid cyclopropane.
NASA Astrophysics Data System (ADS)
Mu, G.-H.; Chen, W.; Kertész, J.; Zhou, W.-X.
2009-03-01
The distributions of trade sizes and trading volumes are investigated based on the limit order book data of 22 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. We observe that the size distribution of trades for individual stocks exhibits jumps, which is caused by the number preference of traders when placing orders. We analyze the applicability of the “q-Gamma” function for fitting the distribution by the Cramér-von Mises criterion. The empirical PDFs of trading volumes at different timescales Δt ranging from 1 min to 240 min can be well modeled. The applicability of the q-Gamma functions for multiple trades is restricted to the transaction numbers Δn ≤ 8. We find that all the PDFs have power-law tails for large volumes. Using careful estimation of the average tail exponents α of the distributions of trade sizes and trading volumes, we get α > 2, well outside the Lévy regime.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
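In the notation below (ours, consistent with the description above but not taken from the paper), the empirical likelihood ratio model under conditional independence multiplies univariate likelihood ratios of the predictor layers, whereas the logistic discriminant model expresses the odds through a logistic (linear-exponential) form:

\[
\Lambda(\mathbf{x}) \;=\; \prod_{k} \frac{f_k(x_k \mid \text{landslide})}{f_k(x_k \mid \text{no landslide})}
\qquad \text{versus} \qquad
\frac{p(\mathbf{x})}{1 - p(\mathbf{x})} \;=\; \exp\!\Big(\beta_0 + \sum_k \beta_k x_k\Big),
\]

where the \(x_k\) are the gridded predictors (elevation, slope angle, slope aspect, lithology class).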
Density-functional theory based on the electron distribution on the energy coordinate
NASA Astrophysics Data System (ADS)
Takahashi, Hideaki
2018-03-01
We developed an electronic density functional theory utilizing a novel electron distribution n(ε) as a basic variable to compute the ground-state energy of a system. n(ε) is obtained by projecting the electron density n(r), defined on the space coordinate r, onto the energy coordinate ε specified by the external potential v_ext(r) of interest. It was demonstrated that the Kohn-Sham equation can also be formulated with an exchange-correlation functional E_xc[n(ε)] that employs the density n(ε) as an argument. It turned out that an exchange functional proposed in our preliminary development suffices to describe properly the potential energies of several types of chemical bonds, with accuracies comparable to the corresponding functional based on the local density approximation. As a remarkable feature of the distribution n(ε), it inherently involves the spatially non-local information of the exchange hole at the bond dissociation limit, in contrast to conventional approximate functionals. By taking advantage of this property we also developed a prototype of the static correlation functional E_sc including no empirical parameters, which showed marked improvements in describing the dissociations of covalent bonds in H2, C2H4 and CH4 molecules.
Limits of the memory coefficient in measuring correlated bursts
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Hiraoka, Takayuki
2018-03-01
Temporal inhomogeneities in event sequences of natural and social phenomena have been characterized in terms of interevent times and correlations between interevent times. The inhomogeneities of interevent times have been extensively studied, while the correlations between interevent times, often called correlated bursts, are far from being fully understood. For measuring the correlated bursts, two relevant approaches were suggested, i.e., memory coefficient and burst size distribution. Here a burst size denotes the number of events in a bursty train detected for a given time window. Empirical analyses have revealed that the larger memory coefficient tends to be associated with the heavier tail of the burst size distribution. In particular, empirical findings in human activities appear inconsistent, such that the memory coefficient is close to 0, while burst size distributions follow a power law. In order to comprehend these observations, by assuming the conditional independence between consecutive interevent times, we derive the analytical form of the memory coefficient as a function of parameters describing interevent time and burst size distributions. Our analytical result can explain the general tendency of the larger memory coefficient being associated with the heavier tail of burst size distribution. We also find that the apparently inconsistent observations in human activities are compatible with each other, indicating that the memory coefficient has limits to measure the correlated bursts.
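For concreteness, the memory coefficient discussed here is commonly defined (following Goh and Barabási) as the Pearson correlation between consecutive interevent times \(\tau_i\) and \(\tau_{i+1}\):

\[
M \;=\; \frac{1}{n_\tau - 1}\sum_{i=1}^{n_\tau - 1}
\frac{(\tau_i - m_1)(\tau_{i+1} - m_2)}{\sigma_1\,\sigma_2},
\]

where \(m_{1,2}\) and \(\sigma_{1,2}\) are the means and standard deviations of the first and last \(n_\tau - 1\) interevent times, respectively. The analytical form derived in the paper relates this quantity to the parameters of the interevent time and burst size distributions and is not reproduced here.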
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
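For context, the parameter-free radiation model that the scaling parameter α extends predicts the flux \(T_{ij}\) between an origin with mass \(m_i\) and a destination with mass \(n_j\), given the mass \(s_{ij}\) inside the circle of radius \(r_{ij}\) centred on the origin (excluding origin and destination); in the paper these masses are built from population and facility densities, and how α enters the extended model is specified there and not reproduced here:

\[
T_{ij} \;=\; T_i\,\frac{m_i\, n_j}{(m_i + s_{ij})(m_i + n_j + s_{ij})},
\]

with \(T_i\) the total number of trips leaving location \(i\).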
Invariance in the recurrence of large returns and the validation of models of price dynamics
NASA Astrophysics Data System (ADS)
Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey
2013-08-01
Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L
2017-09-22
In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of Type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model in order to account for the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assumed that there exists a cut-off value such that smaller counts than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on screening process so that the number of mutations exceeding a certain value should be considered as significant mutations. Simulated and protein domain data sets are used to illustrate this procedure in estimation of the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
NASA Astrophysics Data System (ADS)
Perrone, D.; Bourouaine, S.; Valentini, F.; Marsch, E.; Veltri, P.
2014-04-01
Solar wind "in situ" measurements from the Helios spacecraft in regions of the Heliosphere close to the Sun (˜0.3 AU), at which typical values of the proton plasma beta are observed to be lower than unity, show that the alpha particle distribution functions depart from the equilibrium Maxwellian configuration, displaying significant elongations in the direction perpendicular to the background magnetic field. In the present work, we made use of multi-ion hybrid Vlasov-Maxwell simulations to provide theoretical support and interpretation to the empirical evidences above. Our numerical results show that, at variance with the case of βp≃1 discussed in Perrone et al. (2011), for βp=0.1 the turbulent cascade in the direction parallel to the ambient magnetic field is not efficient in transferring energy toward scales shorter than the proton inertial length. Moreover, our numerical analysis provides new insights for the theoretical interpretation of the empirical evidences obtained from the Helios spacecraft, concerning the generation of temperature anisotropy in the particle velocity distributions.
A unified econophysics explanation for the power-law exponents of stock market activity
NASA Astrophysics Data System (ADS)
Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki; Stanley, Eugene
2007-08-01
We survey a theory (first sketched in Nature in 2003, then fleshed out in the Quarterly Journal of Economics in 2006) of the economic underpinnings of the fat-tailed distributions of a number of financial variables, such as returns and trading volume. Our theory posits that they have a common origin in the strategic trading behavior of very large financial institutions in a relatively illiquid market. We show how the fat-tailed distribution of fund sizes can indeed generate extreme returns and volumes, even in the absence of fundamental news. Moreover, we are able to replicate the individually different empirical values of the power-law exponents for each distribution: 3 for returns, 3/2 for volumes, 1 for the assets under management of large investors. Large investors moderate their trades to reduce their price impact; coupled with a concave price impact function, this leads to volumes being more fat-tailed than returns but less fat-tailed than fund sizes. The trades of large institutions also offer a unified explanation for apparently disconnected empirical regularities that are otherwise a challenge for economic theory.
NASA Astrophysics Data System (ADS)
Nesbet, Robert K.
2018-05-01
Velocities in stable circular orbits about galaxies, a measure of centripetal gravitation, exceed the expected Kepler/Newton velocity as orbital radius increases. Standard Λ cold dark matter (ΛCDM) attributes this anomaly to galactic dark matter. McGaugh et al. have recently shown for 153 disc galaxies that observed radial acceleration is an apparently universal function of classical acceleration computed for observed galactic baryonic mass density. This is consistent with the empirical modified Newtonian dynamics (MOND) model, not requiring dark matter. It is shown here that suitably constrained ΛCDM and conformal gravity (CG) also produce such a universal correlation function. ΛCDM requires a very specific dark matter distribution, while the implied CG non-classical acceleration must be independent of galactic mass. All three constrained radial acceleration functions agree with the empirical baryonic v^4 Tully-Fisher relation. Accurate rotation data in the nominally flat velocity range could distinguish between MOND, ΛCDM, and CG.
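The universal correlation reported by McGaugh et al. is usually written with a single acceleration scale \(g_\dagger\) (quoted here only as background; the constrained ΛCDM and CG functions derived in this paper are not reproduced):

\[
g_{\mathrm{obs}} \;=\; \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}},
\]

which tends to \(g_{\mathrm{bar}}\) at high accelerations and to \(\sqrt{g_{\mathrm{bar}}\,g_\dagger}\) in the low-acceleration limit, the latter being the behaviour that yields the baryonic v^4 Tully-Fisher relation.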
Source analysis using regional empirical Green's functions: The 2008 Wells, Nevada, earthquake
Mendoza, C.; Hartzell, S.
2009-01-01
We invert three-component, regional broadband waveforms recorded for the 21 February 2008 Wells, Nevada, earthquake using a finite-fault methodology that prescribes subfault responses using eight MW∼4 aftershocks as empirical Green's functions (EGFs) distributed within a 20-km by 21.6-km fault area. The inversion identifies a seismic moment of 6.2 x 10^24 dyne-cm (MW 5.8) with slip concentrated in a compact 6.5-km by 4-km region updip from the hypocenter. The peak slip within this localized area is 88 cm and the stress drop is 72 bars, which is higher than expected for Basin and Range normal faults in the western United States. The EGF approach yields excellent fits to the complex regional waveforms, accounting for strong variations in wave propagation and site effects. This suggests that the procedure is useful for studying moderate-size earthquakes with limited teleseismic or strong-motion data and for examining uncertainties in slip models obtained using theoretical Green's functions.
Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data
NASA Astrophysics Data System (ADS)
Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.
2016-12-01
We present an approach to empirical reconstruction of the evolution operator in stochastic form from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics, which consequently leads to a more robust model and a better quality of reconstruction. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g. an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is construction of an evolution operator from the principal components (PCs) - the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. Results of applying the method to climate data (sea surface temperature, sea level pressure), and a comparison with the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^-1 functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
[EMD Time-Frequency Analysis of Raman Spectrum and NIR].
Zhao, Xiao-yu; Fang, Yi-ming; Tan, Feng; Tong, Liang; Zhai, Zhe
2016-02-01
This paper analyzes the Raman spectrum and the near-infrared spectrum (NIR) with a time-frequency method. Empirical mode decomposition splits each spectrum into intrinsic mode functions; the energy-proportion calculation reveals that the Raman spectral energy is uniformly distributed across the components, while the NIR's low-order intrinsic mode functions carry only a small part of the primary effective spectroscopic information. Both real spectra and numerical experiments show that empirical mode decomposition (EMD) treats the Raman spectrum as an amplitude-modulated signal with high-frequency absorption character, and treats the NIR as a frequency-modulated signal for which high-frequency narrow-band demodulation is best realized in the first-order intrinsic mode function. The Hilbert transform of the first-order intrinsic mode function reveals that modal aliasing occurs when EMD is applied to the Raman spectrum. In further time-frequency analysis of a corn leaf's NIR, the low-energy first- and second-order components obtained after EMD are removed and the spectral signal is reconstructed from the remaining intrinsic mode functions; the root-mean-square error is 1.0011 and the correlation coefficient is 0.9813, both indicating high reconstruction accuracy. The decomposition trend term indicates that absorbance increases with decreasing wavelength in the near-infrared band, and the Hilbert transform of the characteristic modal component shows that 657 cm⁻¹ is a specific frequency of the corn leaf stress spectrum, which can be regarded as a characteristic frequency for identification.
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
DOT National Transportation Integrated Search
2009-06-25
Over the last decades the Inland Empire has emerged as a global distribution center with over 700 million square feet of distribution and warehouses under roof. Along with this phenomenal growth, the transportation infrastructure of the region has be...
Risk and utility in portfolio optimization
NASA Astrophysics Data System (ADS)
Cohen, Morrel H.; Natoli, Vincent D.
2003-06-01
Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
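In symbols (our paraphrase of the two criteria described, not the paper's exact notation): with terminal wealth \(W_T\) and a pre-established goal \(G\), the risk criterion is the shortfall probability and the utility criterion is the expected utility under a concave \(U\),

\[
\mathcal{R} \;=\; P\big(W_T < G\big), \qquad
\mathcal{U} \;=\; \mathbb{E}\big[\,U(W_T)\,\big], \quad U' > 0,\; U'' < 0 ,
\]

with the analytic results of the paper obtained for a Gaussian probability distribution of returns.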
Hydration of Caffeine at High Temperature by Neutron Scattering and Simulation Studies.
Tavagnacco, L; Brady, J W; Bruni, F; Callear, S; Ricci, M A; Saboungi, M L; Cesàro, A
2015-10-22
The solvation of caffeine in water is examined with neutron diffraction experiments at 353 K. The experimental data, obtained by taking advantage of isotopic H/D substitution in water, were analyzed by empirical potential structure refinement (EPSR) in order to extract partial structure factors and site-site radial distribution functions. In parallel, molecular dynamics (MD) simulations were carried out to interpret the data and gain insight into the intermolecular interactions in the solutions and the solvation process. The results obtained with the two approaches evidence differences in the individual radial distribution functions, although both confirm the presence of caffeine stacks at this temperature. The two approaches point to different accessibility of water to the caffeine sites due to different stacking configurations.
Statistical self-similarity of width function maxima with implications to floods
Veitzer, S.A.; Gupta, V.K.
2001-01-01
Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
A generalization of the power law distribution with nonlinear exponent
NASA Astrophysics Data System (ADS)
Prieto, Faustino; Sarabia, José María
2017-01-01
The power law distribution is usually used to fit data in the upper tail of the distribution. However, it is commonly not valid for modeling data over the whole range. In this paper, we present a new family of distributions, the so-called Generalized Power Law (GPL), which can be useful for modeling data over the whole range and possesses power-law tails. To do that, we model the exponent of the power law using a non-linear function which depends on the data and two parameters. Then, we provide some basic properties and some specific models of that new family of distributions. After that, we study a relevant model of the family, with special emphasis on the quantile and hazard functions, and the corresponding estimation and testing methods. Finally, as empirical evidence, we study how debt is distributed across municipalities in Spain. We check that the power-law model is only valid in the upper tail; we show analytically and graphically the suitability of the new model for municipal debt data over the whole range; and we compare the new distribution with other well-known distributions including the Lognormal, the Generalized Pareto, the Fisk, the Burr type XII and the Dagum models.
Connecting Research to Teaching: Using Data to Motivate the Use of Empirical Sampling Distributions
ERIC Educational Resources Information Center
Lee, Hollylynne S.; Starling, Tina T.; Gonzalez, Marggie D.
2014-01-01
Research shows that students often struggle with understanding empirical sampling distributions. Using hands-on and technology models and simulations of problems generated by real data helps students begin to make connections between repeated sampling, sample size, distribution, variation, and center. A task to assist teachers in implementing…
Detection of Person Misfit in Computerized Adaptive Tests with Polytomous Items.
ERIC Educational Resources Information Center
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.
2002-01-01
Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…
ERIC Educational Resources Information Center
HJELM, HOWARD; NORRIS, RAYMOND C.
The study empirically determined the effects of nonnormality upon some sampling distributions of the product moment correlation coefficient (PMCC). Sampling distributions of the PMCC were obtained by drawing numerous samples from control and experimental populations having various degrees of nonnormality and by calculating correlation coefficients…
Selection biases in empirical p(z) methods for weak lensing
Gruen, D.; Brimioulle, F.
2017-02-23
To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this paper, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and colour. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogues, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal to noise. For intermediate-to-high redshift clusters, and for depths and filter combinations appropriate for ongoing lensing surveys, we find that (1) spectroscopic selection can cause biases above the 10 per cent level, which can be reduced to ≈5 per cent by optimal lensing weighting, while (2) selection effects in the shape catalogue bias mass estimates at or below the 2 per cent level. Finally, this illustrates the importance of completeness of the reference catalogues for empirical redshift estimation.
Modeling species-abundance relationships in multi-species collections
Peng, S.; Yin, Z.; Ren, H.; Guo, Q.
2003-01-01
Species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe the feature of community structure, ecologists have developed many other models to fit the species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, Poisson-lognormal distribution, (2) niche models, i.e., geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, Zipf or Zipf-Mandelbrot model, and (3) dynamic models describing community dynamics and restrictive function of environment on community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.
NASA Astrophysics Data System (ADS)
Shizgal, Bernie D.
2018-05-01
This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that well describe the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference
Olea, R.A.; Pardo-Iguzquiza, E.
2011-01-01
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and a third, empirical sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
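A minimal sketch of the resampling step is given below. In geostatistics the LU-decomposition simulation method factorizes the model covariance matrix into triangular factors; the sketch uses a Cholesky factorization (a special case of that idea), a one-dimensional grid, and an exponential covariance model, all of which are assumptions rather than the paper's setup.

```python
# Minimal sketch of bootstrap resampling of a spatially correlated field via a
# triangular (Cholesky-type) factorization of the model covariance, in the
# spirit of LU-decomposition simulation.  Grid and covariance model are assumed.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 60)                      # 1-D sample locations
h = np.abs(x[:, None] - x[None, :])                 # pairwise distances
C = 1.0 * np.exp(-h / 3.0)                          # exponential covariance model
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # lower-triangular factor

def emp_semivariogram(z, lag_edges):
    """Classical (Matheron) estimator of the semivariogram for 1-D data."""
    gam = []
    d2 = (z[:, None] - z[None, :]) ** 2
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (h > lo) & (h <= hi)
        gam.append(0.5 * d2[mask].mean())
    return np.array(gam)

lag_edges = np.linspace(0.0, 5.0, 11)
boot = np.array([emp_semivariogram(L @ rng.standard_normal(len(x)), lag_edges)
                 for _ in range(500)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)  # percentile intervals
```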
On precipitation monitoring with theoretical statistical distributions
NASA Astrophysics Data System (ADS)
Cindrić, Ksenija; Juras, Josip; Pasarić, Zoran
2018-04-01
A common practice in meteorological drought monitoring is to transform the observed precipitation amounts to the standardised precipitation index (SPI). Though the gamma distribution is usually employed for this purpose, some other distribution may be used, particularly in regions where zero precipitation amounts are recorded frequently. In this study, two distributions are considered alongside with the gamma distribution: the compound Poisson exponential distribution (CPE) and the square root normal distribution (SRN). They are fitted to monthly precipitation amounts measured at 24 stations in Croatia in the 55-year-long period (1961-2015). At five stations, long-term series (1901-2015) are available and they have been used for a more detailed investigation. The accommodation of the theoretical distributions to empirical ones is tested by comparison of the corresponding empirical and theoretical ratios of the skewness and the coefficient of variation. Furthermore, following the common approach to precipitation monitoring (CLIMAT reports), the comparison of the empirical and theoretical quintiles in the two periods (1961-1990 and 1991-2015) is examined. The results from the present study reveal that it would be more appropriate to implement theoretical distributions in such climate reports, since they provide better evaluation for monitoring purposes than the current empirical distribution. Nevertheless, deciding on an optimal theoretical distribution for different climate regimes and for different time periods is not easy to accomplish. With regard to Croatian stations (covering different climate regimes), the CPE or SRN distribution could also be the right choice in the climatological practice, in addition to the gamma distribution.
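For concreteness, the standard gamma-based SPI transformation with an explicit probability mass at zero precipitation can be sketched as follows; this is only an illustration of the common practice described above, and the CPE and SRN alternatives discussed in the abstract would replace the gamma step.

```python
# Hedged sketch of the standard SPI transformation: fit a gamma distribution to
# wet months, combine it with the dry-month probability, and map to a standard
# normal quantile.  Input data are assumed monthly precipitation totals.
import numpy as np
from scipy import stats

def spi_gamma(precip):
    """Map monthly precipitation totals to SPI values via a gamma fit."""
    precip = np.asarray(precip, dtype=float)
    q = np.mean(precip == 0.0)                       # probability of a dry month
    wet = precip[precip > 0.0]
    a, loc, scale = stats.gamma.fit(wet, floc=0.0)   # fit gamma to wet months
    cdf = q + (1.0 - q) * stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1.0 - 1e-6)             # keep the normal quantile finite
    return stats.norm.ppf(cdf)                       # standardised precipitation index
```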
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
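The simplest member of this class of calibration weights, exponential tilting, can be illustrated with a short sketch. The code below only balances the control-group covariate means to a target vector (here the combined-sample means) and does not reproduce the paper's three-way balance or efficiency theory; the data and tolerances are synthetic assumptions.

```python
# Illustrative exponential-tilting (entropy-balancing style) weights: reweight
# the control group so its covariate means match a chosen target vector.
# Synthetic data; only a sketch of one special case of calibration weighting.
import numpy as np
from scipy.optimize import minimize

def tilting_weights(X_control, target_means):
    Xc = X_control - target_means             # centre covariates at the target
    def dual(lam):                             # convex dual of the entropy problem
        return np.log(np.exp(Xc @ lam).sum())
    res = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.sum()                         # weights sum to one and balance means

# usage sketch with synthetic data
rng = np.random.default_rng(1)
X_treat, X_ctrl = rng.normal(0.3, 1, (200, 2)), rng.normal(0.0, 1, (400, 2))
target = np.vstack([X_treat, X_ctrl]).mean(axis=0)
w = tilting_weights(X_ctrl, target)
# after convergence, w @ X_ctrl should be close to target
```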
Distribution of tsunami interevent times
NASA Astrophysics Data System (ADS)
Geist, Eric L.; Parsons, Tom
2008-01-01
The distribution of tsunami interevent times is analyzed using global and site-specific (Hilo, Hawaii) tsunami catalogs. An empirical probability density distribution is determined by binning the observed interevent times during a period in which the observation rate is approximately constant. The empirical distributions for both catalogs exhibit non-Poissonian behavior in which there is an abundance of short interevent times compared to an exponential distribution. Two types of statistical distributions are used to model this clustering behavior: (1) long-term clustering described by a universal scaling law, and (2) Omori law decay of aftershocks and triggered sources. The empirical and theoretical distributions all imply an increased hazard rate after a tsunami, followed by a gradual decrease with time approaching a constant hazard rate. Examination of tsunami sources suggests that many of the short interevent times are caused by triggered earthquakes, though the triggered events are not necessarily on the same fault.
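The binning comparison described above is simple to reproduce in outline. The sketch below uses a synthetic placeholder catalogue of event times; a real tsunami catalogue would supply them, and the bin choices are assumptions.

```python
# Sketch of the empirical interevent-time density and its exponential
# (Poissonian) reference with the same mean rate.  Event times are synthetic.
import numpy as np

rng = np.random.default_rng(2)
event_times = np.sort(rng.uniform(0.0, 100.0, 300))       # placeholder catalogue (years)
dt = np.diff(event_times)                                  # interevent times
mean_rate = 1.0 / dt.mean()

bins = np.logspace(np.log10(dt.min()), np.log10(dt.max()), 20)
density, edges = np.histogram(dt, bins=bins, density=True)
centres = np.sqrt(edges[:-1] * edges[1:])
exponential = mean_rate * np.exp(-mean_rate * centres)     # same-mean Poisson reference
excess = density / exponential                             # >1 at short lags indicates clustering
```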
Taylor, Adam G.
2018-01-01
New solutions of potential functions for the bilinear vertical traction boundary condition are derived and presented. The discretization and interpolation of higher-order tractions and the superposition of the bilinear solutions provide a method of forming approximate and continuous solutions for the equilibrium state of a homogeneous and isotropic elastic half-space subjected to arbitrary normal surface tractions. Past experimental measurements of contact pressure distributions in granular media are reviewed in conjunction with the application of the proposed solution method to analysis of elastic settlement in shallow foundations. A numerical example is presented for an empirical ‘saddle-shaped’ traction distribution at the contact interface between a rigid square footing and a supporting soil medium. Non-dimensional soil resistance is computed as the reciprocal of normalized surface displacements under this empirical traction boundary condition, and the resulting internal stresses are compared to classical solutions to uniform traction boundary conditions. PMID:29892456
Zipf's law from scale-free geometry.
Lin, Henry W; Loeb, Abraham
2016-03-01
The spatial distribution of people exhibits clustering across a wide range of scales, from household (∼10^-2 km) to continental (∼10^4 km) scales. Empirical data indicate simple power-law scalings for the size distribution of cities (known as Zipf's law) and the population density fluctuations as a function of scale. Using techniques from random field theory and statistical physics, we show that these power laws are fundamentally a consequence of the scale-free spatial clustering of human populations and the fact that humans inhabit a two-dimensional surface. In this sense, the symmetries of scale invariance in two spatial dimensions are intimately connected to urban sociology. We test our theory by empirically measuring the power spectrum of population density fluctuations and show that the logarithmic slope α=2.04 ± 0.09, in excellent agreement with our theoretical prediction α=2. The model enables the analytic computation of many new predictions by importing the mathematical formalism of random fields.
Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure
Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas
2015-01-01
Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
Origins and properties of kappa distributions in space plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, George
2016-07-01
Classical particle systems reside at thermal equilibrium with their velocity distribution function stabilized into a Maxwell distribution. On the contrary, collisionless and correlated particle systems, such as the space and astrophysical plasmas, are characterized by a non-Maxwellian behavior, typically described by the so-called kappa distributions. Empirical kappa distributions have become increasingly widespread across space and plasma physics. However, a breakthrough in the field came with the connection of kappa distributions to the solid statistical framework of Tsallis non-extensive statistical mechanics. Understanding the statistical origin of kappa distributions was the cornerstone of further theoretical developments and applications, some of which will be presented in this talk: (i) The physical meaning of thermal parameters, e.g., temperature and kappa index; (ii) the multi-particle description of kappa distributions; (iii) the phase-space kappa distribution of a Hamiltonian with non-zero potential; (iv) the Sackur-Tetrode entropy for kappa distributions, and (v) the new quantization constant, h_* ≈ 10^{-22} J s.
Synoptic, Global Mhd Model For The Solar Corona
NASA Astrophysics Data System (ADS)
Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.
2007-05-01
The common techniques for mimicking the solar corona heating and the solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for the Alfvén wave turbulence; 2) some empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion and assuming the conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can get the surface distribution of γ as a function of the final speed originating from this point. By interpolating γ to a spherically uniform value at the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady state MHD solution for the solar corona. We present model results for different Carrington Rotations.
Role of Demographic Dynamics and Conflict in the Population-Area Relationship for Human Languages
Manrubia, Susanna C.; Axelsen, Jacob B.; Zanette, Damián H.
2012-01-01
Many patterns displayed by the distribution of human linguistic groups are similar to the ecological organization described for biological species. It remains a challenge to identify simple and meaningful processes that describe these patterns. The population size distribution of human linguistic groups, for example, is well fitted by a log-normal distribution that may arise from stochastic demographic processes. As we show in this contribution, the distribution of the area size of home ranges of those groups also agrees with a log-normal function. Further, size and area are significantly correlated: the number of speakers and the area spanned by linguistic groups follow an allometric (power-law) relation, with an exponent varying across different world regions. The empirical evidence presented leads to the hypothesis that the distributions of population size and area, and their mutual dependence, rely on demographic dynamics and on the result of conflicts over territory due to group growth. To substantiate this point, we introduce a two-variable stochastic multiplicative model whose analytical solution recovers the empirical observations. Applied to different world regions, the model reveals that the retreat in home range is sublinear with respect to the decrease in population size, and that the population-area exponent grows with the typical strength of conflicts. While the shape of the population size and area distributions, and their allometric relation, seem unavoidable outcomes of demography and inter-group contact, the precise value of the exponent could give insight into the cultural organization of those human groups in the last thousand years. PMID:22815726
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
2016-01-01
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to multiple empirical data. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for the research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools, programmed in Python, developed for numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most literature in the area). The analysis done allows one to identify outliers in the empirical datasets and judiciously determine if there is a need for data trimming and at which points it should be done. PMID:29765345
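An ex-Gaussian maximum-likelihood fit can also be sketched with SciPy's exponnorm distribution, which is a re-parametrised ex-Gaussian; this is only an independent illustration with synthetic reaction times, not a demonstration of the ExGUtils API itself.

```python
# Brief sketch of an ex-Gaussian maximum-likelihood fit using scipy.stats.exponnorm.
# Synthetic reaction times (ms); ExGUtils provides its own fitting routines.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rt = rng.normal(400.0, 40.0, 1000) + rng.exponential(120.0, 1000)  # Gaussian + exponential

K, loc, scale = stats.exponnorm.fit(rt)
mu, sigma, tau = loc, scale, K * scale        # back to the usual (mu, sigma, tau)
loglik = np.sum(stats.exponnorm.logpdf(rt, K, loc=loc, scale=scale))
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms, logL={loglik:.1f}")
```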
Earth Observing System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
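The goodness-of-fit step can be made concrete with a small sketch: squared Mahalanobis distances of position errors relative to their propagated covariances are compared with the 3-DoF chi-squared distribution. The inputs below are synthetic placeholders, not mission data, and the 0.05 significance level is an assumption.

```python
# Conceptual sketch of the ECDF goodness-of-fit check described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 200
covs = np.array([np.diag(rng.uniform(0.5, 2.0, 3)) for _ in range(n)])      # propagated covariances
errors = np.array([rng.multivariate_normal(np.zeros(3), C) for C in covs])  # definitive-minus-predicted states

d2 = np.array([e @ np.linalg.solve(C, e) for e, C in zip(errors, covs)])    # squared Mahalanobis distances
stat, p_value = stats.kstest(d2, "chi2", args=(3,))    # ECDF vs hypothesized chi-squared(3)
realistic = p_value > 0.05                              # covariance sized consistently with the errors
```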
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
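The windowing step of the compounding approach can be sketched briefly: split the series into short windows where Gaussian statistics are plausible, collect the local variances, and inspect their distribution, which is what the compounding ansatz integrates over. The series and window length below are synthetic assumptions.

```python
# Sketch of extracting local variances from windows of a nonstationary series.
import numpy as np

rng = np.random.default_rng(5)
# synthetic nonstationary series: Gaussian increments with slowly varying variance
local_sigma = np.exp(0.3 * np.cumsum(rng.standard_normal(200)) / np.sqrt(200))
series = np.concatenate([rng.normal(0.0, s, 50) for s in local_sigma])

window = 50
n_win = len(series) // window
local_var = series[: n_win * window].reshape(n_win, window).var(axis=1, ddof=1)

# normalising each window by its own standard deviation should restore
# near-Gaussian statistics; the spread of local_var drives the heavy tails
normalised = (series[: n_win * window].reshape(n_win, window)
              / np.sqrt(local_var)[:, None]).ravel()
print(np.var(normalised), np.var(local_var))
```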
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
An Empirical Examination of Counterdrug Interdiction Program Effectiveness.
1997-01-01
inversely correlated with the street price index. Chapter IV examines the time dependence of the street price index and argues that interdiction activities... essentially asymptotic behavior in which the cumulative distribution function, for large values of the independent variable, converges to an inverse power law... log(S)/log(M). Such an inverse power-law relation between unit purchase price and purchase volume is indeed observed within the STRIDE data.
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
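The equal-variance binormal case can be checked numerically in a few lines. The sketch below is not the paper's code: it simulates a normal explanatory variable with equal variances in the two groups and compares the rank-based empirical c-statistic with the closed-form value Phi(d/sqrt(2)), where d is the standardized difference; the parameter values are arbitrary assumptions.

```python
# Numerical check of the equal-variance binormal expression for the c-statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mu0, mu1, sigma = 0.0, 1.2, 1.5
x0 = rng.normal(mu0, sigma, 50_000)           # explanatory variable, without the condition
x1 = rng.normal(mu1, sigma, 50_000)           # explanatory variable, with the condition

# empirical c-statistic = P(x1 > x0), estimated from the Mann-Whitney U statistic
u, _ = stats.mannwhitneyu(x1, x0, alternative="greater")
c_empirical = u / (len(x0) * len(x1))

d = (mu1 - mu0) / sigma                        # standardized difference
c_theory = stats.norm.cdf(d / np.sqrt(2.0))
print(c_empirical, c_theory)                   # the two values should agree closely
```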
ERIC Educational Resources Information Center
Xiaoyu, Chen
2015-01-01
The author uses sampling survey data from students at higher education institutions and performs an empirical analysis of the distribution of different types of higher education opportunities for residents with different socioeconomic backgrounds. The study finds that there is a significant relationship between the distribution of different…
NASA Astrophysics Data System (ADS)
Farzaneh, Saeed; Forootan, Ehsan
2018-03-01
The computerized ionospheric tomography is a method for imaging the Earth's ionosphere using a sounding technique and computing the slant total electron content (STEC) values from data of the global positioning system (GPS). The most common approach for ionospheric tomography is the voxel-based model, in which (1) the ionosphere is divided into voxels, (2) the STEC is then measured along (many) satellite signal paths, and finally (3) an inversion procedure is applied to reconstruct the electron density distribution of the ionosphere. In this study, a computationally efficient approach is introduced, which improves the inversion procedure of step 3. Our proposed method combines empirical orthogonal functions and spherical Slepian basis functions to describe the vertical and horizontal distribution of electron density, respectively. Thus, it can be applied to regional and global case studies. Numerical application is demonstrated using the ground-based GPS data over South America. Our results are validated against ionospheric tomography obtained from the constellation observing system for meteorology, ionosphere, and climate (COSMIC) observations and the global ionosphere map estimated by international centers, as well as by comparison with STEC derived from independent GPS stations. Using the proposed approach, we find that while using 30 GPS measurements in South America, one can achieve accuracy comparable to that of the COSMIC data within the reported accuracy (1 × 10^11 el/cm^3) of the product. Comparisons with real observations of two GPS stations indicate that the absolute difference is less than 2 TECU (where 1 total electron content unit, TECU, is 10^16 electrons/m^2).
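The empirical-orthogonal-function part of such a scheme is a generic decomposition that can be sketched with the SVD; the spherical Slepian horizontal basis and the tomographic inversion are beyond this sketch, and the profile matrix below is synthetic.

```python
# Generic EOF decomposition via the SVD, shown for a matrix of synthetic
# vertical electron-density profiles (rows = profiles, columns = heights).
import numpy as np

rng = np.random.default_rng(7)
n_profiles, n_heights = 500, 80
profiles = rng.lognormal(mean=0.0, sigma=0.3, size=(n_profiles, n_heights))

mean_profile = profiles.mean(axis=0)
anomalies = profiles - mean_profile                  # centre the data
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = vt            # rows: orthogonal vertical basis functions
coeffs = u * s       # expansion coefficients of each profile
explained = s**2 / np.sum(s**2)                      # variance fraction per EOF
k = np.searchsorted(np.cumsum(explained), 0.99) + 1  # EOFs needed for 99% variance
reconstruction = mean_profile + coeffs[:, :k] @ eofs[:k]
```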
A generalised significance test for individual communities in networks.
Kojaku, Sadamori; Masuda, Naoki
2018-05-09
Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structure of networks, communities are generally heterogeneous in various aspects such as the size, density of edges, connectivity to other communities and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to the previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that a quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community detection algorithms including modularity maximisation and graph partitioning meet this criterion. Our method estimates a distribution of the quality function for randomised networks to calculate a likelihood of each community in the given network. We illustrate our algorithm by synthetic and empirical networks.
Santos, Josilene C; Tomal, Alessandra; Mariano, Leandro; Costa, Paulo R
2015-06-01
The aim of this study was to estimate barite mortar attenuation curves using X-ray spectra weighted by a workload distribution. A semi-empirical model was used for the evaluation of transmission properties of this material. Since ambient dose equivalent, H*(10), is the radiation quantity adopted by IAEA for dose assessment, the variation of H*(10) as a function of barite mortar thickness was calculated using primary experimental spectra. A CdTe detector was used for the measurement of these spectra. The resulting spectra were adopted for estimating the optimized thickness of protective barrier needed for shielding an area in an X-ray imaging facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jackson, R. J.; Jeffries, R. D.; Lewis, J.; Koposov, S. E.; Sacco, G. G.; Randich, S.; Gilmore, G.; Asplund, M.; Binney, J.; Bonifacio, P.; Drew, J. E.; Feltzing, S.; Ferguson, A. M. N.; Micela, G.; Neguerela, I.; Prusti, T.; Rix, H.-W.; Vallenari, A.; Alfaro, E. J.; Allende Prieto, C.; Babusiaux, C.; Bensby, T.; Blomme, R.; Bragaglia, A.; Flaccomio, E.; Francois, P.; Hambly, N.; Irwin, M.; Korn, A. J.; Lanzafame, A. C.; Pancino, E.; Recio-Blanco, A.; Smiljanic, R.; Van Eck, S.; Walton, N.; Bayo, A.; Bergemann, M.; Carraro, G.; Costado, M. T.; Damiani, F.; Edvardsson, B.; Franciosini, E.; Frasca, A.; Heiter, U.; Hill, V.; Hourihane, A.; Jofré, P.; Lardo, C.; de Laverny, P.; Lind, K.; Magrini, L.; Marconi, G.; Martayan, C.; Masseron, T.; Monaco, L.; Morbidelli, L.; Prisinzano, L.; Sbordone, L.; Sousa, S. G.; Worley, C. C.; Zaggia, S.
2015-08-01
Context. The Gaia-ESO Survey (GES) is a large public spectroscopic survey at the European Southern Observatory Very Large Telescope. Aims: A key aim is to provide precise radial velocities (RVs) and projected equatorial velocities (vsini) for representative samples of Galactic stars, which will complement information obtained by the Gaia astrometry satellite. Methods: We present an analysis to empirically quantify the size and distribution of uncertainties in RV and vsini using spectra from repeated exposures of the same stars. Results: We show that the uncertainties vary as simple scaling functions of signal-to-noise ratio (S/N) and vsini, that the uncertainties become larger with increasing photospheric temperature, but that the dependence on stellar gravity, metallicity and age is weak. The underlying uncertainty distributions have extended tails that are better represented by Student's t-distributions than by normal distributions. Conclusions: Parametrised results are provided, which enable estimates of the RV precision for almost all GES measurements, and estimates of the vsini precision for stars in young clusters, as a function of S/N, vsini and stellar temperature. The precision of individual high S/N GES RV measurements is 0.22-0.26 km s-1, dependent on instrumental configuration. Based on observations collected with the FLAMES spectrograph at VLT/UT2 telescope (Paranal Observatory, ESO, Chile), for the Gaia- ESO Large Public Survey (188.B-3002).Full Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/580/A75
Zipf's law and city size distribution: A survey of the literature and future research agenda
NASA Astrophysics Data System (ADS)
Arshad, Sidra; Hu, Shougeng; Ashraf, Badar Nadeem
2018-02-01
This study provides a systematic review of the existing literature on Zipf's law for city size distribution. Existing empirical evidence suggests that Zipf's law is not always observable even for the upper-tail cities of a territory. However, the controversy with empirical findings arises due to sample selection biases, methodological weaknesses and data limitations. The hypothesis of Zipf's law is more likely to be rejected for the entire city size distribution and, in such case, alternative distributions have been suggested. On the contrary, the hypothesis is more likely to be accepted if better empirical methods are employed and cities are properly defined. The debate is still far from to be conclusive. In addition, we identify four emerging areas in Zipf's law and city size distribution research including the size distribution of lower-tail cities, the size distribution of cities in sub-national regions, the alternative forms of Zipf's law, and the relationship between Zipf's law and the coherence property of the urban system.
Increasing power-law range in avalanche amplitude and energy distributions
NASA Astrophysics Data System (ADS)
Navas-Portella, Víctor; Serra, Isabel; Corral, Álvaro; Vives, Eduard
2018-02-01
Power-law-type probability density functions spanning several orders of magnitude are found for different avalanche properties. We propose a methodology to overcome empirical constraints that limit the range of truncated power-law distributions. By considering catalogs of events that cover different observation windows, the maximum likelihood estimation of a global power-law exponent is computed. This methodology is applied to amplitude and energy distributions of acoustic emission avalanches in failure-under-compression experiments of a nanoporous silica glass, finding in some cases global exponents in an unprecedented broad range: 4.5 decades for amplitudes and 9.5 decades for energies. In the latter case, however, strict statistical analysis suggests experimental limitations might alter the power-law behavior.
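The joint fit of a single exponent to catalogues with different observation windows can be sketched as a summed truncated power-law likelihood. The catalogues below are synthetic placeholders drawn with a known exponent, and the bounds on the search interval are assumptions; the sketch is only meant to make the maximum-likelihood step concrete.

```python
# Hedged sketch: joint maximum-likelihood fit of one global power-law exponent
# to several catalogues, each truncated to its own window [a_i, b_i].
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(alpha, catalogues):
    """Summed negative log-likelihood of x^(-alpha) truncated to [a, b]."""
    total = 0.0
    for x, a, b in catalogues:
        if abs(alpha - 1.0) < 1e-9:
            log_norm = np.log(np.log(b / a))
        else:
            log_norm = np.log((a**(1.0 - alpha) - b**(1.0 - alpha)) / (alpha - 1.0))
        total -= np.sum(-alpha * np.log(x) - log_norm)
    return total

rng = np.random.default_rng(8)
def sample_powerlaw(alpha, a, b, n):          # inverse-CDF sampling on [a, b]
    u = rng.uniform(size=n)
    return (a**(1 - alpha) + u * (b**(1 - alpha) - a**(1 - alpha)))**(1 / (1 - alpha))

catalogues = [(sample_powerlaw(1.8, 1.0, 1e3, 5000), 1.0, 1e3),
              (sample_powerlaw(1.8, 1e2, 1e6, 5000), 1e2, 1e6)]
res = minimize_scalar(lambda a: neg_loglik(a, catalogues),
                      bounds=(1.01, 4.0), method="bounded")
print(res.x)   # global exponent estimate, close to 1.8 by construction
```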
Distribution of permanent plots to evaluate silvicultural treatments in the Inland Empire
John C. Byrne; Albert R. Stage; David L. Renner
1988-01-01
To assess the adequacy of a permanent-plot data base for estimating growth and yield, one first needs to know how the plots in the data base are distributed in relation to the population they are presumed to represent. The distribution of permanent plots to study forest growth in the Inland Empire (northeastern Washington, northern Idaho, and western Montana) is...
Bažant, Zdeněk P.; Le, Jia-Liang; Bazant, Martin Z.
2009-01-01
The failure probability of engineering structures such as aircraft, bridges, dams, nuclear structures, and ships, as well as microelectronic components and medical implants, must be kept extremely low, typically < 10^-6. The safety factors needed to ensure it have so far been assessed empirically. For perfectly ductile and perfectly brittle structures, the empirical approach is sufficient because the cumulative distribution function (cdf) of random material strength is known and fixed. However, such an approach is insufficient for structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared with the structure size. The reason is that the strength cdf of quasibrittle structure varies from Gaussian to Weibullian as the structure size increases. In this article, a recently proposed theory for the strength cdf of quasibrittle structure is refined by deriving it from fracture mechanics of nanocracks propagating by small, activation-energy-controlled, random jumps through the atomic lattice. This refinement also provides a plausible physical justification of the power law for subcritical creep crack growth, hitherto considered empirical. The theory is further extended to predict the cdf of structural lifetime at constant load, which is shown to be size- and geometry-dependent. The size effects on structure strength and lifetime are shown to be related and the latter to be much stronger. The theory fits previously unexplained deviations of experimental strength and lifetime histograms from the Weibull distribution. Finally, a boundary layer method for numerical calculation of the cdf of structural strength and lifetime is outlined. PMID:19561294
Waiting time distribution in public health care: empirics and theory.
Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S
2015-12-01
Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospital characteristics that determine waiting times? By developing a model of healthcare provision and analysing empirically the entire waiting time distribution we attempt to shed some light on those issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity-constrained hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997-2005. We observe important differences in the 'scale' and in the 'shape' of admission rates. Scale refers to how quickly patients are treated and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in the benefit and cost structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short and long duration patients differently. JEL Classification I11; I18; H51.
NASA Technical Reports Server (NTRS)
He, Maosheng; Vogt, Joachim; Luehr, Hermann; Sorbalo, Eugen; Blagau, Adrian; Le, Guan; Lu, Gang
2012-01-01
Ten years of CHAMP magnetic field measurements are integrated into MFACE, a model of field-aligned currents (FACs) using empirical orthogonal functions (EOFs). EOF1 gives the basic Region-1/Region-2 pattern varying mainly with the interplanetary magnetic field Bz component. EOF2 captures separately the cusp current signature and By-related variability. Compared to existing models, MFACE yields significantly better spatial resolution, reproduces typically observed FAC thickness and intensity, improves on the magnetic local time (MLT) distribution, and gives the seasonal dependence of FAC latitudes and the NBZ current signature. MFACE further reveals systematic dependences on By, including 1) Region-1/Region-2 topology modifications around noon; 2) imbalance between upward and downward maximum current density; 3) MLT location of the Harang discontinuity. Furthermore, our procedure allows quantifying response times of FACs to solar wind driving at the bow shock nose: we obtain 20 minutes and 35-40 minutes lags for the FAC density and latitude, respectively.
D. M. Jimenez; B. W. Butler; J. Reardon
2003-01-01
Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
Wang, Jung-Han; Abdel-Aty, Mohamed; Wang, Ling
2017-07-01
Many studies have applied different methods, for example empirical Bayes before-after methods, to obtain accurate estimates of crash modification factors (CMFs). All of them make different assumptions about the crash counts that would have occurred had there been no treatment. Additionally, another major assumption is that multiple sites share the same true CMF. Under this assumption, the CMF at an individual intersection is randomly drawn from a normally distributed population of CMFs at all intersections. Since CMFs are non-zero values, the population of all CMFs might not follow a normal distribution, and even if it does, the true mean of CMFs at some intersections may differ from that at others. Therefore, a bootstrap method based on before-after empirical Bayes theory was proposed to estimate CMFs without making distributional assumptions. This bootstrap procedure has the added benefit of producing a measure of CMF stability. Furthermore, based on the bootstrapped CMF, a new CMF precision rating method was proposed to evaluate the reliability of CMFs. This study chose 29 urban four-legged intersections as treated sites, whose control was changed from stop-controlled to signal-controlled. Meanwhile, 124 urban four-legged stop-controlled intersections were selected as reference sites. First, different safety performance functions (SPFs) were applied to five crash categories, and it was found that each crash category had a different optimal SPF form. Then, the CMFs of these five crash categories were estimated using the bootstrap empirical Bayes method. The results of the bootstrapped method showed that signalization significantly decreased Angle+Left-Turn crashes, and its CMF had the highest precision. In contrast, the CMF for Rear-End crashes was unreliable. For KABCO, KABC, and KAB crashes, the CMFs were shown to be reliable for the majority of intersections, but the estimated effect of signalization may not be accurate at some sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
Empirical tests of Zipf's law mechanism in open source Linux distribution.
Maillart, T; Sornette, D; Spaeth, S; von Krogh, G
2008-11-21
Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
[An EMD based time-frequency distribution and its application in EEG analysis].
Li, Xiaobing; Chu, Meng; Qiu, Tianshuang; Bao, Haiping
2007-10-01
Hilbert-Huang transform (HHT) is a new time-frequency analytic method to analyze the nonlinear and the non-stationary signals. The key step of this method is the empirical mode decomposition (EMD), with which any complicated signal can be decomposed into a finite and small number of intrinsic mode functions (IMF). In this paper, a new EMD based method for suppressing the cross-term of Wigner-Ville distribution (WVD) is developed and is applied to analyze the epileptic EEG signals. The simulation data and analysis results show that the new method suppresses the cross-term of the WVD effectively with an excellent resolution.
About an adaptively weighted Kaplan-Meier estimate.
Plante, Jean-François
2009-09-01
The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performances of the weighted Kaplan-Meier estimate on finite samples exceed that of the usual Kaplan-Meier estimate. A case study is also presented.
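For readers unfamiliar with the underlying estimator, a compact Kaplan-Meier product-limit implementation is sketched below; it only makes the quantities in the abstract concrete and does not reproduce the adaptive weighting across m populations proposed in the paper. The toy data are assumptions.

```python
# A compact Kaplan-Meier (product-limit) survival estimator.
import numpy as np

def kaplan_meier(times, observed):
    """Return event times and the product-limit survival estimate.
    `observed` is 1 for an event, 0 for right-censoring."""
    times, observed = np.asarray(times, float), np.asarray(observed, int)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    uniq = np.unique(times[observed == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (observed == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

t, d = [2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 1]   # toy right-censored data
print(kaplan_meier(t, d))
```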
Two stochastic models useful in petroleum exploration
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1972-01-01
A model of the petroleum exploration process that tests empirically the hypothesis that at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of areal extent, the geographic location, and shape of oil deposits.
Experimental Research on Boundary Shear Stress in Typical Meandering Channel
NASA Astrophysics Data System (ADS)
Chen, Kai-hua; Xia, Yun-feng; Zhang, Shi-zhao; Wen, Yun-cheng; Xu, Hua
2018-06-01
A novel instrument, a Micro-Electro-Mechanical System (MEMS) flexible hot-film shear stress sensor, was used to study the boundary shear stress distribution in a generalized natural meandering open channel; the mean sidewall shear stress distribution along the meandering channel and the lateral boundary shear stress distribution in typical cross-sections were analysed. Based on the measurement of the boundary shear stress, a semi-empirical, semi-theoretical approach for computing the boundary shear stress was derived, including the effects of the secondary flow, sidewall roughness factor, eddy viscosity and the additional Reynolds stress; more importantly, for the first time, it combined the effects of the cross-section central angle and the Reynolds number into the expressions. A comparison between previous research and this study was then carried out. The comparison shows that the semi-empirical, semi-theoretical boundary shear stress distribution algorithm can predict the boundary shear stress distribution precisely. Finally, a single factor analysis was conducted on the relationship between the average sidewall shear stress on the convex and concave banks and the flow rate, water depth, slope ratio, or the cross-section central angle of the open channel bend. The functional relationship with each of the above factors was established, and then the distance from the location of the extreme sidewall shear stress to the bottom of the open channel was deduced based on statistical theory.
Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C
2013-01-01
In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377
Signatures of Currency Vertices
NASA Astrophysics Data System (ADS)
Holme, Petter
2009-03-01
Many real-world networks have broad degree distributions. For some systems, this means that the functional significance of the vertices is also broadly distributed, in other cases the vertices are equally significant, but in different ways. One example of the latter case is metabolic networks, where the high-degree vertices — the currency metabolites — supply the molecular groups to the low-degree metabolites, and the latter are responsible for the higher-order biological function, of vital importance to the organism. In this paper, we propose a generalization of currency metabolites to currency vertices. We investigate the network structural characteristics of such systems, both in model networks and in some empirical systems. In addition to metabolic networks, we find that a network of music collaborations and a network of e-mail exchange could be described by a division of the vertices into currency vertices and others.
Basic features of the pion valence-quark distribution function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Lei; Mezrag, Cédric; Moutarde, Hervé
2014-10-07
The impulse-approximation expression used hitherto to define the pion's valence-quark distribution function is flawed because it omits contributions from the gluons which bind quarks into the pion. A corrected leading-order expression produces the model-independent result that quarks dressed via the rainbow–ladder truncation, or any practical analogue, carry all the pion's light-front momentum at a characteristic hadronic scale. Corrections to the leading contribution may be divided into two classes, responsible for shifting dressed-quark momentum into glue and sea-quarks. Working with available empirical information, we use an algebraic model to express the principal impact of both classes of corrections. This enables a realistic comparison with experiment that allows us to highlight the basic features of the pion's measurable valence-quark distribution, q_π(x); namely, at a characteristic hadronic scale, q_π(x) ~ (1-x)^2 for x ≳ 0.85, and the valence quarks carry approximately two-thirds of the pion's light-front momentum.
Empirical study on human acupuncture point network
NASA Astrophysics Data System (ADS)
Li, Jian; Shen, Dan; Chang, Hui; He, Da-Ren
2007-03-01
Chinese medical theory is ancient and profound, but it remains confined to qualitative and imprecise understanding. The effect of Chinese acupuncture in clinical practice is unique and effective, and the human acupuncture points play a mysterious and special role; however, there is still no modern scientific understanding of human acupuncture points. For this reason, we attempt to use complex network theory, one of the frontiers of statistical physics, to describe the human acupuncture points and their connections. In the network, nodes are defined as the acupuncture points, and two nodes are connected by an edge when they are used in a medical treatment of a common disease. A disease is defined as an act. Some statistical properties have been obtained. The results show that the degree distribution, the act degree distribution, and the dependence of the clustering coefficient on both of them obey a shifted power law (SPL) distribution function, which interpolates between a power law and an exponential decay. The results may be helpful for understanding Chinese medical theory.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
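A minimal sketch of the normal-setting optimisation, with assumed marker distributions, prevalence and decision costs (all hypothetical, not the paper's data): the threshold is chosen to minimise the expected cost of false positives and false negatives.

    # Minimal sketch (assumed normal setting and costs): threshold minimising an
    # expected-cost function that combines false-positive and false-negative costs.
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize_scalar

    mu0, sd0 = 0.0, 1.0     # hypothetical marker distribution, non-diseased
    mu1, sd1 = 1.5, 1.2     # hypothetical marker distribution, diseased
    prev = 0.3              # assumed prevalence
    c_fp, c_fn = 1.0, 4.0   # assumed decision costs

    def expected_cost(th):
        fp = (1 - prev) * c_fp * stats.norm.sf(th, mu0, sd0)   # non-diseased above threshold
        fn = prev * c_fn * stats.norm.cdf(th, mu1, sd1)        # diseased below threshold
        return fp + fn

    res = minimize_scalar(expected_cost, bounds=(-3, 5), method="bounded")
    print(f"optimum threshold: {res.x:.2f}")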
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
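An illustrative sketch, not the paper's MCM implementation: with two independent, hypothetical Gaussian body measures, adding their 95th percentiles (the CP approach) overestimates the 95th percentile of the combined element, which is the adverse effect that motivates correlation-aware combination.

    # Illustrative sketch: why Combining Percentiles (CP) by addition overestimates.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(450.0, 25.0, 100_000)   # hypothetical element 1 (mm)
    b = rng.normal(250.0, 20.0, 100_000)   # hypothetical element 2 (mm)

    p95_sum_cp = np.percentile(a, 95) + np.percentile(b, 95)   # CP: add the percentiles
    p95_sum_true = np.percentile(a + b, 95)                    # percentile of the combined element

    print(f"CP estimate : {p95_sum_cp:.1f} mm")
    print(f"True 95th   : {p95_sum_true:.1f} mm (CP overestimates)")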
Online probabilistic learning with an ensemble of forecasts
NASA Astrophysics Data System (ADS)
Thorey, Jean; Mallet, Vivien; Chaussin, Christophe
2016-04-01
Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (derived from the weighted ensemble) may introduce a bias, so that minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated with the members in the forecast empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
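A minimal sketch of the kernel (energy) form of the CRPS for a weighted empirical distribution, with hypothetical members and weights; the cluster-based unbiased variant and the online weight updates described above are not reproduced here.

    # Minimal sketch: CRPS(F, y) = E|X - y| - 0.5 E|X - X'| for a weighted ensemble,
    # with weights assumed non-negative and summing to one.
    import numpy as np

    def crps_weighted(members: np.ndarray, weights: np.ndarray, obs: float) -> float:
        w = weights / weights.sum()
        term1 = np.sum(w * np.abs(members - obs))
        term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(members[:, None] - members[None, :]))
        return term1 - term2

    members = np.array([2.1, 2.4, 3.0, 3.3])   # hypothetical ensemble forecasts
    weights = np.array([0.4, 0.3, 0.2, 0.1])   # hypothetical learned weights
    print(crps_weighted(members, weights, obs=2.8))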
Frequency-Dependent Rupture Processes for the 2011 Tohoku Earthquake
NASA Astrophysics Data System (ADS)
Miyake, H.
2012-12-01
The 2011 Tohoku earthquake is characterized by a frequency-dependent rupture process [e.g., Ide et al., 2011; Wang and Mori, 2011; Yao et al., 2011]. For understanding the rupture dynamics of this earthquake, it is extremely important to investigate wave-based source inversions for various frequency bands. The above frequency-dependent characteristics have been derived from teleseismic analyses. This study attempts to infer frequency-dependent rupture processes from strong-motion waveforms of K-NET and KiK-net stations. The observations suggested three or more S-wave phases, and ground velocities at several near-source stations showed different arrivals of their long- and short-period components. We performed complex source spectral inversions with frequency-dependent phase weighting developed by Miyake et al. [2002]. The technique idealizes both the coherent and stochastic summation of waveforms using empirical Green's functions. Due to the limited signal-to-noise ratio of the empirical Green's functions, the analyzed frequency bands were set within 0.05-10 Hz. We assumed a fault plane 480 km long by 180 km wide with a single time window for rupture, following Koketsu et al. [2011] and Asano and Iwata [2012]. The inversion revealed source ruptures expanding from the hypocenter and generating sharp slip-velocity intensities at the down-dip edge. In addition to testing the effects of empirical/hybrid Green's functions and of with/without rupture-front constraints on the inverted solutions, we will discuss distributions of slip-velocity intensity and the progression of wave generation with increasing frequency.
Matthews, A P; Garenne, M L
2013-09-01
A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on the empirical trading estimates, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
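An illustrative sketch with assumed parameters (not those estimated in the paper): waiting times drawn from a two-component mixture of exponentials, the renewal-process ingredient used for order generation; a squared coefficient of variation above one marks the departure from a simple Poisson process.

    # Illustrative sketch: order waiting times from a mixture of exponentials.
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_waiting_times(n, p=0.7, scale_fast=1.0, scale_slow=10.0):
        """Mixture: with prob. p an 'active' exponential, else a 'quiet' one (seconds)."""
        fast = rng.exponential(scale_fast, n)
        slow = rng.exponential(scale_slow, n)
        pick_fast = rng.random(n) < p
        return np.where(pick_fast, fast, slow)

    tau = sample_waiting_times(50_000)
    print("mean waiting time:", tau.mean())                   # ~ 0.7*1 + 0.3*10 = 3.7 s
    print("squared CV       :", tau.var() / tau.mean() ** 2)  # > 1, unlike a pure Poisson process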
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, its fold change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
The General Evolving Model for Energy Supply-Demand Network with Local-World
NASA Astrophysics Data System (ADS)
Sun, Mei; Han, Dun; Li, Dandan; Fang, Cuicui
2013-10-01
In this paper, two general bipartite network evolving models for energy supply-demand networks with local-world are proposed. The node weight distribution, the "shifting coefficient" and the scaling exponent of the two different kinds of nodes are derived by mean-field theory. The numerical results for the node weight distribution and the edge weight distribution are also investigated. The shifted power law (SPL) distribution of coal-enterprise production and the distribution of installed capacity of power plants in the US are obtained from empirical analysis. Numerical simulations and empirical results are given to verify the theoretical results.
The distribution of first-passage times and durations in FOREX and future markets
NASA Astrophysics Data System (ADS)
Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico
2009-07-01
Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is adopted. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) depends strongly on the choice of a cut-off parameter t_max, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as the average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage times in a market: the Weibull distribution with a power-law tail. This distribution compensates for the gap between theoretical and empirical results more efficiently than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory. Moreover, we discuss the limitations of our distribution by applying it to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.
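A minimal sketch with hypothetical parameters: a survival function that is Weibull below a crossover time and a power law above it, matched for continuity at the crossover, in the spirit of the proposed first-passage-time distribution.

    # Minimal sketch: Weibull body with a power-law tail, continuous at t_c.
    import numpy as np

    def survival(t, m=0.6, a=1.0, alpha=2.5, t_c=50.0):
        """P(T > t): exp(-(t/a)**m) for t < t_c, then C * t**(-alpha) beyond t_c."""
        t = np.asarray(t, dtype=float)
        s_c = np.exp(-(t_c / a) ** m)          # Weibull survival at the crossover
        C = s_c * t_c ** alpha                 # continuity constant for the power-law tail
        return np.where(t < t_c, np.exp(-(t / a) ** m), C * t ** (-alpha))

    print(survival([1.0, 10.0, 50.0, 200.0]))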
Stochastic theory of fatigue corrosion
NASA Astrophysics Data System (ADS)
Hu, Haiyun
1999-10-01
A stochastic theory of corrosion has been constructed. The stochastic equations are described, giving the transportation corrosion rate and the fluctuation corrosion coefficient. In addition, the pit diameter distribution function, the average pit diameter and the most probable pit diameter, together with other related empirical formulae, have been derived. In order to clarify the effect of stress range on the initiation and growth behaviour of pitting corrosion, round smooth specimens were tested under cyclic loading in 3.5% NaCl solution.
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and kriging with external drift cannot be applied to incorporate secondary information. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure is shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
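An illustrative sketch on synthetic data (gauge records, weights and probabilities are all assumed): because the quantile ordering persists across non-exceedance probabilities, one set of non-negative spatial weights applied to neighbouring quantile functions yields a monotone interpolated distribution; the random mixing and elevation constraints are not reproduced here.

    # Illustrative sketch: interpolate a quantile function from two neighbouring gauges.
    import numpy as np

    rng = np.random.default_rng(2)
    probs = np.linspace(0.01, 0.99, 99)

    # Hypothetical hourly intensities at two neighbouring gauges (mm/h)
    gauge_a = rng.gamma(shape=0.6, scale=2.0, size=5000)
    gauge_b = rng.gamma(shape=0.8, scale=3.0, size=5000)

    q_a = np.quantile(gauge_a, probs)
    q_b = np.quantile(gauge_b, probs)

    w_a, w_b = 0.7, 0.3                      # assumed (positive) interpolation weights
    q_target = w_a * q_a + w_b * q_b         # monotone because the weights are non-negative
    print(q_target[[49, 89, 98]])            # interpolated median, 90th and 99th percentiles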
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Kaufmann, Eric; Levene, Mark; Loizou, George
Human dynamics and sociophysics suggest statistical models that may explain and provide us with better insight into social phenomena. Contextual and selection effects tend to produce extreme values in the tails of rank-ordered distributions of both census data and district-level election outcomes. Models that account for this nonlinearity generally outperform linear models. Fitting nonlinear functions based on rank-ordering census and election data therefore improves the fit of aggregate voting models. This may help improve ecological inference, as well as election forecasting in majoritarian systems. We propose a generative multiplicative decrease model that gives rise to a rank-order distribution and facilitates the analysis of the recent UK EU referendum results. We supply empirical evidence that the beta-like survival function, which can be generated directly from our model, is a close fit to the referendum results, and also may have predictive value when covariate data are available.
The emergence of different tail exponents in the distributions of firm size variables
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki
2013-05-01
We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
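An illustrative sketch on synthetic firm data (the exponents and noise are assumed, not estimated from the dataset above): fitting the plane log Y = c + α log K + β log L by ordinary least squares, i.e., a Cobb-Douglas production function in log space.

    # Illustrative sketch: least-squares fit of a Cobb-Douglas plane in log space.
    import numpy as np

    rng = np.random.default_rng(9)
    logK = rng.normal(10.0, 1.5, 5000)
    logL = rng.normal(4.0, 1.0, 5000)
    logY = 1.0 + 0.3 * logK + 0.7 * logL + rng.normal(0, 0.3, 5000)   # assumed true exponents

    X = np.column_stack([np.ones_like(logK), logK, logL])
    coef, *_ = np.linalg.lstsq(X, logY, rcond=None)
    print("c, alpha, beta =", np.round(coef, 2))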
Empirical effective temperatures and bolometric corrections for early-type stars
NASA Technical Reports Server (NTRS)
Code, A. D.; Bless, R. C.; Davis, J.; Brown, R. H.
1976-01-01
An empirical effective temperature for a star can be found by measuring its apparent angular diameter and absolute flux distribution. The angular diameters of 32 bright stars in the spectral range O5f to F8 have recently been measured with the stellar interferometer at Narrabri Observatory, and their absolute flux distributions have been found by combining observations of ultraviolet flux from the Orbiting Astronomical Observatory (OAO-2) with ground-based photometry. In this paper, these data have been combined to derive empirical effective temperatures and bolometric corrections for these 32 stars.
Nachman, Gösta
2006-01-01
The spatial distributions of two-spotted spider mites Tetranychus urticae and their natural enemy, the phytoseiid predator Phytoseiulus persimilis, were studied on six full-grown cucumber plants. Both mite species were very patchily distributed and P. persimilis tended to aggregate on leaves with abundant prey. The effects of non-homogenous distributions and degree of spatial overlap between prey and predators on the per capita predation rate were studied by means of a stage-specific predation model that averages the predation rates over all the local populations inhabiting the individual leaves. The empirical predation rates were compared with predictions assuming random predator search and/or an even distribution of prey. The analysis clearly shows that the ability of the predators to search non-randomly increases their predation rate. On the other hand, the prey may gain if it adopts a more even distribution when its density is low and a more patchy distribution when density increases. Mutual interference between searching predators reduces the predation rate, but the effect is negligible. The stage-specific functional response model was compared with two simpler models without explicit stage structure. Both unstructured models yielded predictions that were quite similar to those of the stage-structured model.
Full stellar kinematical profiles of central parts of nearby galaxies
NASA Astrophysics Data System (ADS)
Vudragović, A.; Samurović, S.; Jovanović, M.
2016-09-01
Context. We present the largest catalog of detailed stellar kinematics of the central parts of nearby galaxies, which includes higher moments of the line-of-sight velocity distribution (LOSVD) function represented by the Gauss-Hermite series. The kinematics is measured on a sample of galaxies selected from the Arecibo Legacy Fast ALFA (Alfalfa) survey using spectroscopy from the Sloan Digital Sky Survey (SDSS DR7). Aims: The SDSS DR7 offers measurements of the LOSVD based on the assumption of a pure Gaussian shape of the broadening function caused by the combination of rotational and random motion of the stars in galaxies. We discuss the consequences of this oversimplification since the velocity dispersion, one of the measured quantities, often serves as the proxy to important modeling parameters such as the black-hole mass and the virial mass of galaxies. Methods: The publicly available pPXF code is used to calculate the full kinematical profile for the sample galaxies including higher moments of their LOSVD. Both observed and synthetic stellar libraries were used and the related template mismatch problem is discussed. Results: For the whole sample of 2180 nearby galaxies, reflecting the morphological distribution characteristic of the local Universe, we successfully recovered the stellar kinematics of their central parts, including higher order moments of the LOSVD function, for signal-to-noise above 50. Conclusions: We show the consequences of the oversimplification of the LOSVD function with a Gaussian function on the velocity dispersion for the empirical and the synthetic stellar library. For the empirical stellar library, this approximation leads to an increase in the virial mass of 13% on average, while for the synthetic library the effect is weaker, with an increase of 9% on average. The systematic error in the velocity dispersion estimate that comes from the use of the synthetic stellar library instead of the empirical one is much larger than the error imposed by the use of the Gaussian function. Only after a careful analysis of the template mismatch problem does one need to address the issue of the deviation of the LOSVD from the Gaussian function. We also show that the kurtotic parameter describing symmetrical departures from the Gaussian seems to increase along the continuous morphological sequence from late- to early-type galaxies. The catalog is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/593/A40
A Dual Power Law Distribution for the Stellar Initial Mass Function
NASA Astrophysics Data System (ADS)
Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett
2018-05-01
We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
SUMMARY Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
EGG: hatching a mock Universe from empirical prescriptions⋆
NASA Astrophysics Data System (ADS)
Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.
2017-06-01
This paper introduces EGG, the Empirical Galaxy Generator, a tool designed within the ASTRODEEP collaboration to generate mock galaxy catalogs for deep fields with realistic fluxes and simple morphologies. The simulation procedure is based exclusively on empirical prescriptions - rather than first principles - to provide the most accurate match with current observations at 0
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
NASA Technical Reports Server (NTRS)
Huddleston, D.; Neugebauer, M.; Goldstein, B.
1994-01-01
The shape of the velocity distribution of water-group ions observed by the Giotto ion mass spectrometer on its approach to comet Halley is modeled to derive empirical values for the rates of ionization, energy diffusion, and loss in the mid-cometosheath.
ERIC Educational Resources Information Center
Charters, Margaret; And Others
The primary objective of the Syracuse project was to make an empirical determination of the effectiveness of a competency-based (CB) distributive education program by comparing student achievement in three of its major components with similar traditionally organized courses at Syracuse, Buffalo, and Baruch. The three components were retailing,…
NASA Astrophysics Data System (ADS)
Venzmer, M. S.; Bothmer, V.
2018-03-01
Context. The Parker Solar Probe (PSP; formerly Solar Probe Plus) mission will be humanity's first in situ exploration of the solar corona, with closest perihelia at 9.86 solar radii (R⊙) distance to the Sun. It will help answer hitherto unresolved questions on the heating of the solar corona and the source and acceleration of the solar wind and solar energetic particles. The scope of this study is to model the solar-wind environment for PSP's unprecedented distances in its prime mission phase during the years 2018 to 2025. The study is performed within the Coronagraphic German And US SolarProbePlus Survey (CGAUSS), which is the German contribution to the PSP mission as part of the Wide-field Imager for Solar PRobe. Aims: We present an empirical solar-wind model for the inner heliosphere which is derived from OMNI and Helios data. The German-US space probes Helios 1 and Helios 2 flew in the 1970s and observed solar wind in the ecliptic within heliocentric distances of 0.29 au to 0.98 au. The OMNI database consists of multi-spacecraft intercalibrated in situ data obtained near 1 au over more than five solar cycles. The international sunspot number (SSN) and its predictions are used to derive dependencies of the major solar-wind parameters on solar activity and to forecast their properties for the PSP mission. Methods: The frequency distributions for the solar-wind key parameters, magnetic field strength, proton velocity, density, and temperature, are represented by lognormal functions. In addition, we consider the bi-component shape of the velocity distribution, consisting of a slower and a faster part. Functional relations to solar activity are compiled using the OMNI data by correlating and fitting the frequency distributions with the SSN. Further, based on the combined data set from both Helios probes, the parameters' frequency distributions are fitted with respect to solar distance to obtain power-law dependencies. Thus an empirical solar-wind model for the inner heliosphere, confined to the ecliptic region, is derived, accounting for solar activity and for solar distance through adequate shifts of the lognormal distributions. Finally, the inclusion of SSN predictions and the extrapolation down to PSP's perihelion region enable us to estimate the solar-wind environment for PSP's planned trajectory during its mission duration. Results: The CGAUSS empirical solar-wind model for PSP yields dependencies on solar activity and solar distance for the solar-wind parameters' frequency distributions. The estimated solar-wind median values for PSP's first perihelion in 2018, at a solar distance of 0.16 au, are 87 nT, 340 km s-1, 214 cm-3, and 503 000 K. The estimates for PSP's closest perihelion, occurring in 2024 at 0.046 au (9.86 R⊙), are 943 nT, 290 km s-1, 2951 cm-3, and 1 930 000 K. Since the modeled velocity and temperature values below approximately 20 R⊙ appear overestimated in comparison with existing observations, this suggests that PSP will directly measure solar-wind acceleration and heating processes below 20 R⊙ as planned.
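An illustrative sketch with assumed numbers, not the CGAUSS fit: the lognormal median of the magnetic field strength at 1 au is scaled inward with an assumed power-law exponent in solar distance, in the spirit of the Helios-based radial fits.

    # Illustrative sketch: power-law radial scaling of a 1-au median field strength.
    import numpy as np

    b_median_1au = 5.0        # hypothetical median |B| at 1 au (nT)
    gamma = -1.66             # assumed power-law exponent for the radial decrease of |B|

    for r_au in (1.0, 0.29, 0.16, 0.046):   # 1 au, Helios innermost, PSP 2018, PSP 2024 perihelia
        b_median = b_median_1au * r_au ** gamma
        print(f"r = {r_au:5.3f} au  ->  median |B| ~ {b_median:7.1f} nT")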
An Empirical Cumulus Parameterization Scheme for a Global Spectral Model
NASA Technical Reports Server (NTRS)
Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.
2004-01-01
Realistic vertical heating and drying profiles in a cumulus scheme are important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme, based on a procedure to improve the vertical distribution of heating and moistening over the tropics, is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique through rotated principal component analysis (RPCA) is performed on the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics, to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as a linear combination of the first three dominant principal components, which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score, which quantifies the contribution of each PC to the corresponding loading profile, is estimated through a multiple screening regression method which yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained are found to match well with the observed profiles. The impact of the ECPS is investigated in a series of short range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short range ECPS forecasts and those with the modified Kuo scheme show a marked improvement in skill for the ECPS forecasts. This improvement in forecast skill with the ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. This also suggests that in the absence of explicit models for convection, the proposed statistical scheme improves the modeling of the vertical distribution of heating and moistening in areas of deep convection.
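A minimal sketch on synthetic profiles (the data and level count are invented, and the PC scores are computed directly rather than regressed on large-scale variables as in the ECPS): a vertical heating profile is expressed as the mean plus a linear combination of its three leading principal components.

    # Minimal sketch: reconstruct Q1 profiles from three leading principal components.
    import numpy as np

    rng = np.random.default_rng(3)
    true_scores = rng.normal(size=(500, 3))          # hypothetical large-scale controls
    true_loadings = rng.normal(size=(3, 14))         # three dominant vertical modes
    profiles = true_scores @ true_loadings + 0.1 * rng.normal(size=(500, 14))  # Q1 on 14 levels

    mean = profiles.mean(axis=0)
    U, S, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
    pcs = Vt[:3]                                     # leading loading profiles
    scores = (profiles - mean) @ pcs.T               # PC scores (the ECPS regresses these on large-scale variables)
    reconstructed = mean + scores @ pcs              # Q1 ~ mean + sum_k score_k * PC_k

    residual = np.linalg.norm(reconstructed - profiles) / np.linalg.norm(profiles - mean)
    print(f"relative residual with 3 PCs: {residual:.3f}")   # small, since three modes dominate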
Ryu, Jihye; Torres, Elizabeth B.
2018-01-01
The field of enacted/embodied cognition has emerged as a contemporary attempt to connect the mind and body in the study of cognition. However, there has been a paucity of methods that enable a multi-layered approach tapping into different levels of functionality within the nervous systems (e.g., continuously capturing in tandem multi-modal biophysical signals in naturalistic settings). The present study introduces a new theoretical and statistical framework to characterize the influences of cognitive demands on biophysical rhythmic signals harnessed from deliberate, spontaneous and autonomic activities. In this study, nine participants performed a basic pointing task to communicate a decision while they were exposed to different levels of cognitive load. Within these decision-making contexts, we examined the moment-by-moment fluctuations in the peak amplitude and timing of the biophysical time series data (e.g., continuous waveforms extracted from hand kinematics and heart signals). These spike-trains data offered high statistical power for personalized empirical statistical estimation and were well-characterized by a Gamma process. Our approach enabled the identification of different empirically estimated families of probability distributions to facilitate inference regarding the continuous physiological phenomena underlying cognitively driven decision-making. We found that the same pointing task revealed shifts in the probability distribution functions (PDFs) of the hand kinematic signals under study and were accompanied by shifts in the signatures of the heart inter-beat-interval timings. Within the time scale of an experimental session, marked changes in skewness and dispersion of the distributions were tracked on the Gamma parameter plane with 95% confidence. The results suggest that traditional theoretical assumptions of stationarity and normality in biophysical data from the nervous systems are incongruent with the true statistical nature of empirical data. This work offers a unifying platform for personalized statistical inference that goes far beyond those used in conventional studies, often assuming a “one size fits all model” on data drawn from discrete events such as mouse clicks, and observations that leave out continuously co-occurring spontaneous activity taking place largely beneath awareness. PMID:29681805
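An illustrative sketch on synthetic data (the Gamma parameters and sample sizes are assumed): inter-beat intervals from two hypothetical cognitive-load conditions are fitted by maximum likelihood and tracked as points on the Gamma (shape, scale) parameter plane.

    # Illustrative sketch: Gamma fits of inter-beat intervals on the parameter plane.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    ibi_low  = rng.gamma(shape=20.0, scale=0.045, size=2000)   # hypothetical IBIs (s), low load
    ibi_high = rng.gamma(shape=9.0,  scale=0.100, size=2000)   # hypothetical IBIs (s), high load

    for name, ibi in (("low load", ibi_low), ("high load", ibi_high)):
        shape, loc, scale = stats.gamma.fit(ibi, floc=0)   # MLE with location fixed at 0
        print(f"{name:9s}: shape={shape:5.2f}, scale={scale:6.4f}  (point on the Gamma plane)")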
From Maximum Entropy Models to Non-Stationarity and Irreversibility
NASA Astrophysics Data System (ADS)
Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar
The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. 1- Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. 2- The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first and higher order phase transitions. 3- We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher order spatio-temporal correlation. RC supported by ERC advanced Grant ``Bridges'', BC: KEOPS ANR-CONICYT, Renvision and CM: CONICYT-FONDECYT No. 3140572.
Awazu, Akinori; Tanabe, Takahiro; Kamitani, Mari; Tezuka, Ayumi; Nagano, Atsushi J
2018-05-29
Gene expression levels exhibit stochastic variations among genetically identical organisms under the same environmental conditions. In many recent transcriptome analyses based on RNA sequencing (RNA-seq), variations in gene expression levels among replicates were assumed to follow a negative binomial distribution, although the physiological basis of this assumption remains unclear. In this study, RNA-seq data were obtained from Arabidopsis thaliana under eight conditions (21-27 replicates), and the characteristics of gene-dependent empirical probability density function (ePDF) profiles of gene expression levels were analyzed. For A. thaliana and Saccharomyces cerevisiae, various types of ePDF of gene expression levels were obtained that were classified as Gaussian, power law-like containing a long tail, or intermediate. These ePDF profiles were well fitted with a Gauss-power mixing distribution function derived from a simple model of a stochastic transcriptional network containing a feedback loop. The fitting function suggested that gene expression levels with long-tailed ePDFs would be strongly influenced by feedback regulation. Furthermore, the features of gene expression levels are correlated with their functions, with the levels of essential genes tending to follow a Gaussian-like ePDF while those of genes encoding nucleic acid-binding proteins and transcription factors exhibit long-tailed ePDF.
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
Collective Human Mobility Pattern from Taxi Trips in Urban Area
Peng, Chengbin; Jin, Xiaogang; Wong, Ka-Chun; Shi, Meixia; Liò, Pietro
2012-01-01
We analyze the passengers' traffic pattern for 1.58 million taxi trips of Shanghai, China. By employing the non-negative matrix factorization and optimization methods, we find that, people travel on workdays mainly for three purposes: commuting between home and workplace, traveling from workplace to workplace, and others such as leisure activities. Therefore, traffic flow in one area or between any pair of locations can be approximated by a linear combination of three basis flows, corresponding to the three purposes respectively. We name the coefficients in the linear combination as traffic powers, each of which indicates the strength of each basis flow. The traffic powers on different days are typically different even for the same location, due to the uncertainty of the human motion. Therefore, we provide a probability distribution function for the relative deviation of the traffic power. This distribution function is in terms of a series of functions for normalized binomial distributions. It can be well explained by statistical theories and is verified by empirical data. These findings are applicable in predicting the road traffic, tracing the traffic pattern and diagnosing the traffic related abnormal events. These results can also be used to infer land uses of urban area quite parsimoniously. PMID:22529917
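A minimal sketch on synthetic counts (the matrix shape and the choice of three components follow the idea of three travel purposes above; everything else is assumed): non-negative matrix factorization splits an area-by-time-slot flow matrix into basis flows and per-area traffic powers.

    # Minimal sketch: NMF of a traffic-flow matrix into three basis flows.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(5)
    flows = rng.poisson(lam=20.0, size=(100, 48)).astype(float)   # hypothetical counts: 100 areas x 48 half-hours

    model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    powers = model.fit_transform(flows)       # traffic powers: one 3-vector per area
    basis = model.components_                 # three basis flows over the day

    print(powers.shape, basis.shape)          # (100, 3) (3, 48)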
Delgado-Baquerizo, Manuel; Fry, Ellen L; Eldridge, David J; de Vries, Franciska T; Manning, Peter; Hamonts, Kelly; Kattge, Jens; Boenisch, Gerhard; Singh, Brajesh K; Bardgett, Richard D
2018-04-19
We lack strong empirical evidence for links between plant attributes (plant community attributes and functional traits) and the distribution of soil microbial communities at large spatial scales. Using datasets from two contrasting regions and ecosystem types in Australia and England, we report that aboveground plant community attributes, such as diversity (species richness) and cover, and functional traits can predict a unique portion of the variation in the diversity (number of phylotypes) and community composition of soil bacteria and fungi that cannot be explained by soil abiotic properties and climate. We further identify the relative importance and evaluate the potential direct and indirect effects of climate, soil properties and plant attributes in regulating the diversity and community composition of soil microbial communities. Finally, we deliver a list of examples of common taxa from Australia and England that are strongly related to specific plant traits, such as specific leaf area index, leaf nitrogen and nitrogen fixation. Together, our work provides new evidence that plant attributes, especially plant functional traits, can predict the distribution of soil microbial communities at the regional scale and across two hemispheres. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.
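An illustrative sketch of the first strategy, on invented class statistics: a Gaussian maximum likelihood classifier in which prior probabilities taken from a collateral layer shift the decision for an otherwise ambiguous pixel.

    # Illustrative sketch: maximum likelihood classification with collateral priors.
    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical per-class means/covariances estimated from training fields
    means = {"forest": np.array([40.0, 60.0]), "brush": np.array([55.0, 50.0])}
    covs  = {"forest": np.diag([30.0, 40.0]),  "brush": np.diag([35.0, 30.0])}
    priors = {"forest": 0.8, "brush": 0.2}     # from a collateral layer (e.g., elevation zone)

    def classify(pixel):
        scores = {c: multivariate_normal.logpdf(pixel, means[c], covs[c]) + np.log(priors[c])
                  for c in means}
        return max(scores, key=scores.get)

    print(classify(np.array([48.0, 55.0])))    # the prior tips an otherwise ambiguous pixel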
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
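A minimal sketch on synthetic signals (channel count, mode shape and noise level are assumed): the Hermitian velocity-space cross-correlation matrix is built from fluctuation time series at each velocity channel and decomposed by SVD to recover the dominant mode structure on the distribution function.

    # Minimal sketch: SVD of a synthetic velocity-space cross-correlation matrix.
    import numpy as np

    rng = np.random.default_rng(6)
    n_v, n_t = 32, 4096                              # velocity channels, time samples
    mode = np.sin(np.linspace(0, np.pi, n_v))        # hypothetical eigenmode on the distribution
    signal = np.outer(mode, rng.normal(size=n_t))    # coherent fluctuation
    data = signal + 0.5 * rng.normal(size=(n_v, n_t))

    C = (data @ data.conj().T) / n_t                 # Hermitian cross-correlation matrix
    U, s, Vh = np.linalg.svd(C)                      # empirical decomposition of the matrix

    print("dominant singular value:", s[0])
    print("overlap with true mode :", abs(np.dot(U[:, 0], mode)) / np.linalg.norm(mode))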
Choice of resolution by functional trait or taxonomy affects allometric scaling in soil food webs.
Sechi, Valentina; Brussaard, Lijbert; De Goede, Ron G M; Rutgers, Michiel; Mulder, Christian
2015-01-01
Belowground organisms often display a shift in their mass-abundance scaling relationships due to environmental factors such as soil chemistry and atmospheric deposition. Here we present new empirical data that show strong differences in allometric scaling according to whether the resolution at the local scale is based on a taxonomic or a functional classification, while only slight differences arise according to soil environmental conditions. For the first time, isometry (an inverse 1:1 proportion) is recognized in mass-abundance relationships, providing a functional signal for constant biomass distribution in soil biota regardless of discrete trophic levels. Our findings are in contrast to those from aquatic ecosystems, in that higher trophic levels in soil biota are not a direct function of increasing body mass.
The effect of El Chichon on the intensity and polarization of skylight at Mauna Loa
NASA Technical Reports Server (NTRS)
King, M. D.; Fraser, R. S.
1983-01-01
An empirical model of the stratospheric aerosol over Mauna Loa Observatory (MLO) has been developed, in order to study the effect of aerosol particles from the eruption of the El Chichon volcano on the intensity and degree of skylight polarization. The modeling computations were based on measurements of monthly mean optical thickness for July 1982, together with lidar measurements of the vertical distribution of aerosols. On the basis of the theoretical computations, it is shown that the number and location of polarization neutral points, and the location and magnitude of the peak polarization were both functions of the relatively narrow distribution of aerosol particle size near 0.4 microns.
Establishing the kinetics of ballistic-to-diffusive transition using directional statistics
NASA Astrophysics Data System (ADS)
Liu, Pai; Heinson, William R.; Sumlin, Benjamin J.; Shen, Kuan-Yu; Chakrabarty, Rajan K.
2018-04-01
We establish the kinetics of ballistic-to-diffusive (BD) transition observed in two-dimensional random walk using directional statistics. Directional correlation is parameterized using the walker's turning angle distribution, which follows the commonly adopted wrapped Cauchy distribution (WCD) function. During the BD transition, the concentration factor (ρ) governing the WCD shape is observed to decrease from its initial value. We next analytically derive the relationship between effective ρ and time, which essentially quantifies the BD transition rate. The prediction of our kinetic expression agrees well with the empirical datasets obtained from correlated random walk simulation. We further connect our formulation with the conventionally used scaling relationship between the walker's mean-square displacement and time.
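An illustrative sketch (step counts and concentration are assumed): a two-dimensional correlated random walk whose turning angles follow a wrapped Cauchy distribution, showing the ballistic-to-diffusive transition in the slope of the mean-square displacement.

    # Illustrative sketch: correlated random walk with wrapped Cauchy turning angles.
    import numpy as np

    rng = np.random.default_rng(7)

    def crw(n_steps, n_walkers, rho):
        """Turning angles: wrap a Cauchy with scale -ln(rho) onto the circle (WCD concentration rho)."""
        gamma = -np.log(rho)
        turns = rng.standard_cauchy((n_steps, n_walkers)) * gamma
        headings = np.cumsum(turns, axis=0) + rng.uniform(0, 2 * np.pi, n_walkers)
        x = np.cumsum(np.cos(headings), axis=0)
        y = np.cumsum(np.sin(headings), axis=0)
        return x, y

    x, y = crw(n_steps=2000, n_walkers=500, rho=0.9)
    msd = (x ** 2 + y ** 2).mean(axis=1)
    t = np.arange(1, 2001)
    # Early times are nearly ballistic (log-log slope close to 2), late times diffusive (slope close to 1)
    print(np.polyfit(np.log(t[:10]), np.log(msd[:10]), 1)[0],
          np.polyfit(np.log(t[-500:]), np.log(msd[-500:]), 1)[0])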
Atomistic simulations of TeO₂-based glasses: interatomic potentials and molecular dynamics.
Gulenko, Anastasia; Masson, Olivier; Berghout, Abid; Hamani, David; Thomas, Philippe
2014-07-21
In this work we present for the first time empirical interatomic potentials that are able to reproduce TeO2-based systems. Using these potentials in classical molecular dynamics simulations, we obtained first results for the pure TeO2 glass structure model. The calculated pair distribution function is in good agreement with the experimental one, which indicates a realistic glass structure model. We investigated the short- and medium-range TeO2 glass structures. The local environment of the Te atom strongly varies, so that the glass structure model has a broad Q polyhedral distribution. The glass network is described as weakly connected with a large number of terminal oxygen atoms.
Valence-quark distribution functions in the kaon and pion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Chang, Lei; Roberts, Craig D.
2016-04-18
We describe expressions for pion and kaon dressed-quark distribution functions that incorporate contributions from gluons which bind quarks into these mesons and hence overcome a flaw of the commonly used handbag approximation. The distributions therewith obtained are purely valence in character, ensuring that dressed quarks carry all the meson's momentum at a characteristic hadronic scale and vanish as (1-x)^2 when Bjorken-x → 1. Comparing such distributions within the pion and kaon, it is apparent that the size of SU(3)-flavor symmetry breaking in meson parton distribution functions is modulated by the flavor dependence of dynamical chiral symmetry breaking. Corrections to these leading-order formulas may be divided into two classes, responsible for shifting dressed-quark momentum into glue and sea quarks. Working with available empirical information, we build an algebraic framework that is capable of expressing the principal impact of both classes of corrections. This enables a realistic comparison with experiment which allows us to identify and highlight basic features of measurable pion and kaon valence-quark distributions. We find that, whereas roughly two thirds of the pion's light-front momentum is carried by valence dressed quarks at a characteristic hadronic scale, this fraction rises to 95% in the kaon; evolving distributions with these features to a scale typical of available Drell-Yan data produces a kaon-to-pion ratio of u-quark distributions that is in agreement with the single existing data set, and predicts a u-quark distribution within the pion that agrees with a modern reappraisal of πN Drell-Yan data. Precise new data are essential in order to validate this reappraisal and because a single modest-quality measurement of the kaon-to-pion ratio cannot be considered definitive.
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
To describe the phenomenon that people's interest in an activity is typically high at the beginning and then gradually decreases until it reaches a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the distribution of interarrival times is a mixed distribution with exponential and power-law features, namely a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival-time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of gamma distribution. These empirical results support the model and provide evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
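An illustrative sketch of the distributional claim above (with synthetic stand-in data, not the ScienceNet.cn corpus): interarrival times following a power law with an exponential cutoff can be fitted as a gamma distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
interarrivals = rng.gamma(shape=0.6, scale=3.0, size=5000)   # stand-in for empirical intervals

# fix loc=0 so the fit keeps the pure power-law-times-exponential form x^(a-1) * exp(-x/scale)
a_hat, loc_hat, scale_hat = stats.gamma.fit(interarrivals, floc=0)
print(f"power-law exponent ~ {a_hat - 1:.2f}, exponential cutoff scale ~ {scale_hat:.2f}")

# goodness of fit via a Kolmogorov-Smirnov test against the fitted distribution
ks = stats.kstest(interarrivals, 'gamma', args=(a_hat, loc_hat, scale_hat))
print(ks.statistic, ks.pvalue)
```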
Huestis, Diana L; Oppert, Brenda; Marshall, Jeremy L
2009-01-01
Background Geographic clines within species are often interpreted as evidence of adaptation to varying environmental conditions. However, clines can also result from genetic drift, and these competing hypotheses must therefore be tested empirically. The striped ground cricket, Allonemobius socius, is widely-distributed in the eastern United States, and clines have been documented in both life-history traits and genetic alleles. One clinally-distributed locus, isocitrate dehydrogenase (Idh-1), has been shown previously to exhibit significant correlations between allele frequencies and environmental conditions (temperature and rainfall). Further, an empirical study revealed a significant genotype-by-environmental interaction (GxE) between Idh-1 genotype and temperature which affected fitness. Here, we use enzyme kinetics to further explore GxE between Idh-1 genotype and temperature, and test the predictions of kinetic activity expected under drift or selection. Results We found significant GxE between temperature and three enzyme kinetic parameters, providing further evidence that the natural distributions of Idh-1 allele frequencies in A. socius are maintained by natural selection. Differences in enzyme kinetic activity across temperatures also mirror many of the geographic patterns observed in allele frequencies. Conclusion This study further supports the hypothesis that the natural distribution of Idh-1 alleles in A. socius is driven by natural selection on differential enzymatic performance. This example is one of several which clearly document a functional basis for both the maintenance of common alleles and observed clines in allele frequencies, and provides further evidence for the non-neutrality of some allozyme alleles. PMID:19460149
SU-E-I-07: An Improved Technique for Scatter Correction in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S; Wang, Y; Lue, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For the images reconstructed with our technique and with SSS, the normalized standard deviations were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
Thompson, Leigh L; Wang, Jiunwen; Gunia, Brian C
2010-01-01
Negotiation occurs whenever people cannot achieve their own goals without the cooperation of others. Our review highlights recent empirical research that investigates this ubiquitous social activity. We selectively review descriptive research emerging from social psychology and organizational behavior. This research examines negotiation behavior and outcomes at five levels of analysis: intrapersonal, interpersonal, group, organizational, and virtual. At each level, we review research on negotiation processes and outcomes, and we discuss the implications of various processes and outcomes for the two functions of negotiation: value creation (integrative negotiation) and value claiming (distributive negotiation).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...
2012-05-01
In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal sub spaces rather than deterministic waveforms.
Electron momentum density and Compton profile by a semi-empirical approach
NASA Astrophysics Data System (ADS)
Aguiar, Julio C.; Mitnik, Darío; Di Rocco, Héctor O.
2015-08-01
Here we propose a semi-empirical approach to describe with good accuracy the electron momentum densities and Compton profiles for a wide range of pure crystalline metals. In the present approach, we use an experimental Compton profile to fit an analytical expression for the momentum densities of the valence electrons. This expression is similar to a Fermi-Dirac distribution function with two parameters, one of which coincides with the ground state kinetic energy of the free-electron gas and the other resembles the electron-electron interaction energy. In the proposed scheme conduction electrons are neither completely free nor completely bound to the atomic nucleus. This procedure allows us to include correlation effects. We tested the approach for all metals with Z=3-50 and showed the results for three representative elements: Li, Be and Al from high-resolution experiments.
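A hedged sketch of the kind of fit described above: the paper states only that the expression is similar to a two-parameter Fermi-Dirac distribution, so the specific functional form, parameter names, and synthetic data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def momentum_density(p, mu, w):
    """Fermi-Dirac-like occupation of momentum states (atomic units assumed)."""
    return 1.0 / (np.exp((0.5 * p**2 - mu) / w) + 1.0)

# synthetic "experimental" valence profile: free-electron-like with a smeared edge plus noise
p = np.linspace(0.0, 2.0, 200)
rng = np.random.default_rng(2)
data = momentum_density(p, mu=0.25, w=0.05) + rng.normal(0, 0.01, p.size)

popt, pcov = curve_fit(momentum_density, p, data, p0=[0.2, 0.1])
print("fitted mu (kinetic-energy-like) and w (interaction-like):", popt)
```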
Model of Pressure Distribution in Vortex Flow Controls
NASA Astrophysics Data System (ADS)
Mielczarek, Szymon; Sawicki, Jerzy M.
2015-06-01
Vortex valves belong to the category of hydrodynamic flow controls. They are important and theoretically interesting devices, so complex from a hydraulic point of view that probably for this reason no rational concept of their operation has been proposed so far. In consequence, the functioning of vortex valves is described by CFD methods (computer-aided simulation of technical objects) or by means of simple empirical relations (using a discharge coefficient or a hydraulic loss coefficient). Such a rational model of the considered device is proposed in this paper. It has a simple algebraic form but is well grounded physically. The basic quantitative relationship that describes the valve operation, i.e. the dependence between the flow discharge and the circumferential pressure head caused by the rotation, has been verified empirically. The conformity between calculated and measured parameters of the device supports acceptance of the proposed concept.
NASA Technical Reports Server (NTRS)
Sittler, E. C., Jr.; Scudder, J. D.
1979-01-01
Empirical evidence is presented that solar wind thermal electrons obey a polytrope law with polytrope index gamma = 1.175 plus or minus 0.03. The Voyager 2 and Mariner 10 data used as evidence are compared and discussed. This result is compared with the theoretical prediction that solar wind thermal electrons in the asymptotic solar wind should obey a polytrope law with polytrope index gamma = 1.16. The widespread impression in the literature that solar wind electrons behave more like an isothermal than an adiabatic gas, and the argument that Coulomb collisions are the dominant stochastic process shaping observed electron distribution functions in the solar wind, are reexamined, reviewed and evaluated. The assignment of the interplanetary potential as equal to approximately seven times the temperature of the thermal electrons is discussed.
A study of a diffusive model of asset returns and an empirical analysis of financial markets
NASA Astrophysics Data System (ADS)
Alejandro Quinones, Angel Luis
A diffusive model for market dynamics is studied and the predictions of the model are compared to real financial markets. The model has a non-constant diffusion coefficient which depends both on the asset value and the time. A general solution for the distribution of returns is obtained and shown to match the results of computer simulations for two simple cases, piecewise linear and quadratic diffusion. The effects of discreteness in the market dynamics on the model are also studied. For the quadratic diffusion case, a type of phase transition leading to fat tails is observed as the discrete distribution approaches the continuum limit. It is also found that the model captures some of the empirical stylized facts observed in real markets, including fat-tails and scaling behavior in the distribution of returns. An analysis of empirical data for the EUR/USD currency exchange rate and the S&P 500 index is performed. Both markets show time scaling behavior consistent with a value of 1/2 for the Hurst exponent. Finally, the results show that the distribution of returns for the two markets is well fitted by the model, and the corresponding empirical diffusion coefficients are determined.
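An illustrative sketch (assumed parameterisation, not the dissertation's code) of the quadratic-diffusion case mentioned above: an Euler-Maruyama simulation of a diffusion whose coefficient depends quadratically on the asset value, followed by a check that the resulting return distribution develops fat tails.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt = 20000, 500, 1e-3
a, b = 0.5, 2.0                      # hypothetical quadratic-diffusion coefficients

x = np.zeros(n_paths)                # asset value (e.g. log-price), starting at 0
for _ in range(n_steps):
    sigma = np.sqrt(a + b * x**2)    # value-dependent diffusion coefficient
    x += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

returns = x
excess_kurtosis = ((returns - returns.mean())**4).mean() / returns.var()**2 - 3.0
print("excess kurtosis:", float(excess_kurtosis))   # > 0 signals fat tails
```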
NASA Astrophysics Data System (ADS)
Miladinovich, D.; Datta-Barua, S.; Bust, G. S.; Ramirez, U.
2017-12-01
Understanding physical processes during storm time in the ionosphere-thermosphere (IT) system is limited, in part, due to the inability to obtain accurate estimates of IT states on a global scale. One reason for this inability is the sparsity of spatially distributed high quality data sets. Data assimilation is showing promise toward enabling global estimates by blending high quality observational data sets with established climate models. We are continuing development of an algorithm called Estimating Model Parameters for Ionospheric Reverse Engineering (EMPIRE) to enable assimilation of global datasets for storm time estimates of IT drivers. EMPIRE is a data assimilation algorithm that uses a Kalman filtering routine to ingest model and observational data. The EMPIRE algorithm is based on spherical harmonics which provide a spherically symmetric, smooth, continuous, and orthonormal set of basis functions suitable for a spherical domain such as Earth's IT region (200-600 km altitude). Once the basis function coefficients are determined, the newly fitted function represents the disagreement between observational measurements and models. We apply spherical harmonics to study the March 17, 2015 storm. Data sources include Fabry-Perot interferometer neutral wind measurements and global Ionospheric Data Assimilation 4 Dimensional (IDA4D) assimilated total electron content (TEC). Models include Weimer 2000 electric potential, International Geomagnetic Reference Field (IGRF) magnetic field, and Horizontal Wind Model 2014 (HWM14) neutral winds. We present the EMPIRE assimilation results of Earth's electric potential and thermospheric winds. We also compare EMPIRE storm time E cross B ion drift estimates to measured drifts produced from the Super Dual Auroral Radar Network (SuperDARN) and Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) measurement datasets. The analysis from these results will enable the generation of globally assimilated storm time IT state estimates for future studies. In particular, the ability to provide data assimilated estimation of the drivers of the IT system from high to low latitudes is a critical step toward forecasting the influence of geomagnetic storms on the near Earth space environment.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
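An illustrative sketch only of the Monte Carlo step mentioned above: sampling a truncated Pareto distribution of seismic moments (the distribution underlying the Gutenberg-Richter law) to build synthetic catalogs whose total moment matches a prescribed budget. All numbers are assumptions, not Groningen values.

```python
import numpy as np

rng = np.random.default_rng(4)
b_value = 1.0
beta = 2.0 * b_value / 3.0                 # Pareto index for moment when magnitudes follow G-R
M_min, M_max = 1e11, 1e17                  # moment range in N*m (assumed)
total_budget = 5e16                        # available seismic moment from compaction (assumed)

def sample_truncated_pareto(n):
    """Inverse-CDF sampling of a Pareto distribution truncated at M_max."""
    u = rng.random(n)
    c = 1.0 - (M_min / M_max) ** beta
    return M_min * (1.0 - u * c) ** (-1.0 / beta)

def synthetic_catalog():
    moments = []
    while sum(moments) < total_budget:
        moments.append(sample_truncated_pareto(1)[0])
    return np.array(moments)

catalogs = [synthetic_catalog() for _ in range(100)]
max_mags = [2.0 / 3.0 * (np.log10(c.max()) - 9.1) for c in catalogs]   # moment magnitude
print("median of maximum magnitudes:", np.median(max_mags))
```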
[Crop geometry identification based on inversion of semiempirical BRDF models].
Huang, Wen-jiang; Wang, Jin-di; Mu, Xi-han; Wang, Ji-hua; Liu, Liang-yun; Liu, Qiang; Niu, Zheng
2007-10-01
Investigations were made into the identification of erect and horizontal canopy varieties from the bidirectional canopy reflected spectrum using semi-empirical bidirectional reflectance distribution function (BRDF) models. The qualitative effect of leaf area index (LAI) and average leaf angle (ALA) on the crop canopy reflected spectrum was studied. The structure parameter sensitive index (SPEI), based on the weight of the volumetric kernel (fvol), the weight of the geometric kernel (fgeo), and the weight of the constant corresponding to isotropic reflectance (fiso), was defined in the present study for crop geometry identification. However, the weights associated with the kernels of the semi-empirical BRDF model do not have a direct relationship with measurable biophysical parameters; efforts therefore focused on finding the relation between these kernel weights and various vegetation structures. SPEI proved more sensitive for identifying crop geometry structures than the structural scattering index (SSI) and the normalized difference f-index (NDFI), and could be used to distinguish erect and horizontal geometry varieties. It is therefore feasible to identify horizontal and erect varieties of wheat from the bidirectional canopy reflected spectrum.
Wang, Honglei; Yoshida, Masaya; Thompson, Cynthia K.
2015-01-01
Individuals with agrammatic aphasia exhibit restricted patterns of impairment of functional morphemes, however, syntactic characterization of the impairment is controversial. Previous studies have focused on functional morphology in clauses only. This study extends the empirical domain by testing functional morphemes in English nominal phrases in aphasia and comparing patients’ impairment to their impairment of functional morphemes in English clauses. In the linguistics literature, it is assumed that clauses and nominal phrases are structurally parallel but exhibit inflectional differences. The results of the present study indicated that aphasic speakers evinced similar impairment patterns in clauses and nominal phrases. These findings are consistent with the Distributed Morphology Hypothesis (DMH), suggesting that the source of functional morphology deficits among agrammatics relates to difficulty implementing rules that convert inflectional features into morphemes. Our findings, however, are inconsistent with the Tree Pruning Hypothesis (TPH), which suggests that patients have difficulty building complex hierarchical structures. PMID:26379370
Space evolution model and empirical analysis of an urban public transport network
NASA Astrophysics Data System (ADS)
Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing
2012-07-01
This study explores the spatial evolution of an urban public transport network, using empirical evidence and a simulation model validated on those data. Public transport patterns depend primarily on the spatial distribution of traffic, the demands of passengers, and the expected utility to investors. Evolution is an iterative process of satisfying the needs of passengers and investors given a particular spatial distribution of traffic. The temporal change of the urban public transport network is evaluated using both topological and spatial measures. The simulation model is validated using empirical data from nine large cities in China. Statistical analyses of topological and spatial attributes suggest that an evolved network whose traffic demands take power-law values distributed in a pattern of concentric circles agrees well with these nine cities.
Extended Empirical Roadside Shadowing model from ACTS mobile measurements
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Vogel, Wolfhard
1995-01-01
Employing multiple data bases derived from land-mobile satellite measurements using the Advanced Communications Technology Satellite (ACTS) at 20 GHz, MARECS B-2 at 1.5 GHz, and helicopter measurements at 870 MHz and 1.5 GHz, the Empirical Road Side Shadowing Model (ERS) has been extended. The new model (Extended Empirical Roadside Shadowing Model, EERS) may now be employed at frequencies from UHF to 20 GHz, at elevation angles from 7 to 60 deg and at percentages from 1 to 80 percent (0 dB fade). The EERS distributions are validated against measured ones and fade deviations associated with the model are assessed. A model is also presented for estimating the effects of foliage (or non-foliage) on 20 GHz distributions, given distributions from deciduous trees devoid of leaves (or in full foliage).
Hindcast of extreme sea states in North Atlantic extratropical storms
NASA Astrophysics Data System (ADS)
Ponce de León, Sonia; Guedes Soares, Carlos
2015-02-01
This study examines the variability of freak wave parameters around the eye of northern hemisphere extratropical cyclones. The data was obtained from a hindcast performed with the WAve Model (WAM) model forced by the wind fields of the Climate Forecast System Reanalysis (CFSR). The hindcast results were validated against the wave buoys and satellite altimetry data showing a good correlation. The variability of different wave parameters was assessed by applying the empirical orthogonal functions (EOF) technique on the hindcast data. From the EOF analysis, it can be concluded that the first empirical orthogonal function (V1) accounts for greater share of variability of significant wave height (Hs), peak period (Tp), directional spreading (SPR) and Benjamin-Feir index (BFI). The share of variance in V1 varies for cyclone and variable: for the 2nd storm and Hs V1 contains 96 % of variance while for the 3rd storm and BFI V1 accounts only for 26 % of variance. The spatial patterns of V1 show that the variables are distributed around the cyclones centres mainly in a lobular fashion.
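A generic sketch of the EOF technique mentioned above (not the authors' code), applied to a synthetic space-time field: the leading spatial pattern and its explained variance come directly from an SVD of the anomaly matrix. The grid and time dimensions are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n_time, n_grid = 240, 500                       # e.g. maps of Hs around a storm (assumed sizes)
field = rng.standard_normal((n_time, n_grid))   # stand-in for hindcast wave parameters

anomalies = field - field.mean(axis=0)          # remove the time mean at each grid point
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

explained = s**2 / np.sum(s**2)                 # fraction of variance per EOF mode
V1 = Vt[0]                                      # leading spatial pattern (EOF-1)
pc1 = U[:, 0] * s[0]                            # its principal-component time series
print(f"EOF-1 explains {100 * explained[0]:.1f}% of the variance")
```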
Universal patterns of inequality
NASA Astrophysics Data System (ADS)
Banerjee, Anand; Yakovenko, Victor M.
2010-07-01
Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.
Menzerath-Altmann law for distinct word distribution analysis in a large text
NASA Astrophysics Data System (ADS)
Eroglu, Sertac
2013-06-01
The empirical law uncovered by Menzerath and formulated by Altmann, known as the Menzerath-Altmann law (henceforth the MA law), reveals the statistical distribution behavior of human language in various organizational levels. Building on previous studies relating organizational regularities in a language, we propose that the distribution of distinct (or different) words in a large text can effectively be described by the MA law. The validity of the proposition is demonstrated by examining two text corpora written in different languages not belonging to the same language family (English and Turkish). The results show not only that distinct word distribution behavior can accurately be predicted by the MA law, but that this result appears to be language-independent. This result is important not only for quantitative linguistic studies, but also may have significance for other naturally occurring organizations that display analogous organizational behavior. We also deliberately demonstrate that the MA law is a special case of the probability function of the generalized gamma distribution.
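A hedged sketch of the fit implied above: the Menzerath-Altmann form y = a * x^b * exp(-c*x) fitted to synthetic counts of distinct words; the variable names, data, and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ma_law(x, a, b, c):
    return a * np.power(x, b) * np.exp(-c * x)

construct_size = np.arange(1, 51, dtype=float)                 # x: organizational level / size
rng = np.random.default_rng(6)
distinct_words = ma_law(construct_size, 120.0, 0.4, 0.05) * rng.lognormal(0, 0.05, 50)

popt, _ = curve_fit(ma_law, construct_size, distinct_words, p0=[100.0, 0.5, 0.1])
print("fitted (a, b, c):", popt)
# with b free and c > 0 this is the generalized-gamma-like special case noted above
```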
Dendritic growth model of multilevel marketing
NASA Astrophysics Data System (ADS)
Pang, James Christopher S.; Monterola, Christopher P.
2017-02-01
Biologically inspired dendritic network growth is utilized to model the evolving connections of a multilevel marketing (MLM) enterprise. Starting from agents at random spatial locations, a network is formed by minimizing a distance cost function controlled by a parameter, termed the balancing factor bf, that weighs the wiring and the path length costs of connection. The paradigm is compared to an actual MLM membership data and is shown to be successful in statistically capturing the membership distribution, better than the previously reported agent based preferential attachment or analytic branching process models. Moreover, it recovers the known empirical statistics of previously studied MLM, specifically: (i) a membership distribution characterized by the existence of peak levels indicating limited growth, and (ii) an income distribution obeying the 80 - 20 Pareto principle. Extensive types of income distributions from uniform to Pareto to a "winner-take-all" kind are also modeled by varying bf. Finally, the robustness of our dendritic growth paradigm to random agent removals is explored and its implications to MLM income distributions are discussed.
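A toy sketch of the connection rule described above, under an assumed form of the cost function: each new agent attaches to the existing node that minimises a mixture of wiring length and network path length to the root, weighted by the balancing factor bf. The exact cost used in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
bf = 0.7                                        # balancing factor (assumed value)
positions = [np.array([0.0, 0.0])]              # root agent
path_len = [0.0]                                # network path length from each node to the root
parents = [-1]

for _ in range(500):
    p = rng.uniform(-1.0, 1.0, size=2)          # new agent at a random location
    wiring = [np.linalg.norm(p - q) for q in positions]
    cost = [bf * w + (1.0 - bf) * (w + l) for w, l in zip(wiring, path_len)]
    j = int(np.argmin(cost))                    # cheapest attachment point
    parents.append(j)
    positions.append(p)
    path_len.append(path_len[j] + wiring[j])

counts = np.bincount(np.array(parents[1:]))     # direct recruits per member
print("largest downline (direct recruits):", counts.max())
```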
Superstatistical Energy Distributions of an Ion in an Ultracold Buffer Gas
NASA Astrophysics Data System (ADS)
Rouse, I.; Willitsch, S.
2017-04-01
An ion in a radio frequency ion trap interacting with a buffer gas of ultracold neutral atoms is a driven dynamical system which has been found to develop a nonthermal energy distribution with a power law tail. The exact analytical form of this distribution is unknown, but has often been represented empirically by q -exponential (Tsallis) functions. Based on the concepts of superstatistics, we introduce a framework for the statistical mechanics of an ion trapped in an rf field subject to collisions with a buffer gas. We derive analytic ion secular energy distributions from first principles both neglecting and including the effects of the thermal energy of the buffer gas. For a buffer gas with a finite temperature, we prove that Tsallis statistics emerges from the combination of a constant heating term and multiplicative energy fluctuations. We show that the resulting distributions essentially depend on experimentally controllable parameters paving the way for an accurate control of the statistical properties of ion-atom hybrid systems.
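A minimal sketch of the empirical representation mentioned above: a Tsallis (q-exponential) energy distribution, p(E) proportional to [1 + (q-1)E/T]^(-q/(q-1)), fitted to a synthetic heavy-tailed ion-energy sample. The normalisation and parameter conventions used in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def tsallis_shape(E, T, q):
    """Unnormalised q-exponential energy distribution."""
    return (1.0 + (q - 1.0) * E / T) ** (-q / (q - 1.0))

rng = np.random.default_rng(8)
energies = rng.pareto(3.0, size=20000) * 0.5          # stand-in heavy-tailed ion energies
hist, edges = np.histogram(energies, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(lambda E, A, T, q: A * tsallis_shape(E, T, q),
                    centers, hist, p0=[1.0, 0.5, 1.3],
                    bounds=([0.0, 1e-3, 1.001], [np.inf, np.inf, 3.0]))
print("fitted amplitude, temperature-like scale, q:", popt)
```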
Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.
Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi
2015-09-01
A conventional broad beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam line devices to form a 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on beam line parameters and should be calibrated by a measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm utilizing a linear function and a double Gaussian pencil beam distribution to express the range shifter dependence and the field aperture effect, respectively. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method successfully estimates d/MU with a difference of less than 1% with respect to the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is established in this research. This procedure frees beam time for more treatments, quality assurance, and other research endeavors.
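A hedged sketch of the kind of algorithm described above; the factorisation, parameter names, and all numbers below are illustrative placeholders, not the clinical calibration: d/MU is modelled as a reference value times a linear range-shifter factor times a field-aperture factor, the latter obtained by integrating a double-Gaussian pencil-beam profile over the aperture.

```python
import numpy as np

d_mu_ref = 1.000                      # d/MU for the reference condition (assumed)
slope = -0.002                        # linear range-shifter dependence per mm (assumed)

def range_shifter_factor(thickness_mm):
    return 1.0 + slope * thickness_mm

def aperture_factor(radius_mm, w=0.8, s1=3.0, s2=15.0):
    """Fraction of a double-Gaussian pencil beam (weights w, 1-w; sigmas s1, s2 in mm)
    contained inside a circular aperture of the given radius."""
    encircled = lambda r, s: 1.0 - np.exp(-r**2 / (2.0 * s**2))   # 2-D Gaussian encircled fraction
    return w * encircled(radius_mm, s1) + (1.0 - w) * encircled(radius_mm, s2)

def d_per_mu(range_shifter_mm, field_radius_mm):
    return d_mu_ref * range_shifter_factor(range_shifter_mm) * aperture_factor(field_radius_mm)

print(d_per_mu(range_shifter_mm=20.0, field_radius_mm=50.0))
```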
The distribution of genetic variance across phenotypic space and the response to selection.
Blows, Mark W; McGuigan, Katrina
2015-05-01
The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
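An illustrative Monte Carlo sketch (not the authors' population density method): a two-state gene with gamma-distributed (hence non-exponential) dwell times, Poisson transcription while active, and first-order mRNA decay, used to sample copy numbers over many switching cycles. Rates and dwell-time parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
k_tx, k_deg = 5.0, 0.1                    # transcription and decay rates (assumed)
dwell_off = lambda: rng.gamma(2.0, 1.0)   # non-exponential T0 (assumed gamma)
dwell_on  = lambda: rng.gamma(2.0, 0.5)   # non-exponential T1 (assumed gamma)

def decay(m, t):
    """Each mRNA independently survives an interval t with probability exp(-k_deg * t)."""
    return rng.binomial(m, np.exp(-k_deg * t))

samples, m = [], 0
for _ in range(20000):                    # alternate OFF / ON cycles
    m = decay(m, dwell_off())             # OFF: decay only
    t_on = dwell_on()                     # ON: decay plus Poisson number of surviving transcripts
    m = decay(m, t_on) + rng.poisson(k_tx / k_deg * (1.0 - np.exp(-k_deg * t_on)))
    samples.append(m)

samples = np.array(samples[2000:])        # discard the transient
print("mean copy number:", samples.mean(), "Fano factor:", samples.var() / samples.mean())
```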
NASA Astrophysics Data System (ADS)
Gastis, P.; Perdikakis, G.; Robertson, D.; Almus, R.; Anderson, T.; Bauder, W.; Collon, P.; Lu, W.; Ostdiek, K.; Skulski, M.
2016-04-01
Equilibrium charge state distributions of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm thick Mo foil were measured at beam energies of 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u, respectively. A 1-D position sensitive Parallel Grid Avalanche Counter detector (PGAC) was used at the exit of a spectrograph magnet, enabling us to measure the intensity of several charge states simultaneously. The number of charge states measured for each beam constituted more than 99% of the total equilibrium charge state distribution for that element. Currently, little experimental data exists for equilibrium charge state distributions for heavy ions with 19 ≲ Zp, Zt ≲ 54 (Zp and Zt are the projectile's and target's atomic numbers, respectively). Hence the success of the semi-empirical models in predicting typical characteristics of equilibrium CSDs (mean charge states and distribution widths) has not been thoroughly tested in the energy region of interest. A number of semi-empirical models from the literature were evaluated in this study, regarding their ability to reproduce the characteristics of the measured charge state distributions. The evaluated models were selected from the literature based on whether they are suitable for the given range of atomic numbers and on their frequent use by the nuclear physics community. Finally, an attempt was made to combine model predictions for the mean charge state, the distribution width and the distribution shape, to come up with a more reliable model. We discuss this new 'combinatorial' prescription and compare its results with our experimental data and with calculations using the other semi-empirical models studied in this work.
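A hedged illustration of how a semi-empirical prescription of this kind is typically used; the specific formulas below are generic placeholders, not the models evaluated in the paper: a mean charge state and a width are predicted, and a Gaussian-shaped charge state distribution is then built from them.

```python
import numpy as np

Zp = 28                                   # projectile atomic number (e.g. Ni)
E_per_u = 1.84                            # beam energy in MeV/u
v_ratio = np.sqrt(E_per_u / 0.025)        # crude velocity scaling (placeholder, not a published formula)

q_mean = Zp * (1.0 - np.exp(-0.6 * v_ratio / Zp**0.45))   # placeholder mean-charge formula
width = 0.27 * np.sqrt(q_mean)                             # placeholder distribution width

q = np.arange(0, Zp + 1)
fractions = np.exp(-0.5 * ((q - q_mean) / width) ** 2)
fractions /= fractions.sum()                               # normalised Gaussian-shaped CSD

for qi, f in zip(q, fractions):
    if f > 0.01:
        print(f"q = {qi}: {100 * f:.1f}%")
```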
Estimation and confidence intervals for empirical mixing distributions
Link, W.A.; Sauer, J.R.
1995-01-01
Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
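A minimal sketch of the normal-normal empirical Bayes setting mentioned above (synthetic data, not the bird trend estimates): prior parameters are estimated from the collection of estimates, and each estimate is then shrunk toward the prior mean; the shrunken values form the estimated EMD.

```python
import numpy as np

rng = np.random.default_rng(10)
true_trends = rng.normal(0.5, 1.0, size=60)          # unknown parameters (for illustration only)
se = np.full(60, 0.8)                                # known sampling standard errors
estimates = rng.normal(true_trends, se)              # observed trend estimates

# method-of-moments estimates of the prior mean and variance
mu_hat = estimates.mean()
tau2_hat = max(estimates.var(ddof=1) - np.mean(se**2), 0.0)

shrinkage = tau2_hat / (tau2_hat + se**2)
posterior_means = mu_hat + shrinkage * (estimates - mu_hat)   # points of the estimated EMD
print("prior mean, prior variance:", mu_hat, tau2_hat)
```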
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
ERIC Educational Resources Information Center
Owen, John D.
Empirical evidence is presented consistent with the hypothesis that instructional expenditures are distributed unequally, and that less is spent on non-white and poor students than on others in large American cities. The most experienced teachers are generally to be found in schools attended by the less poor white children. More important, the…
Big data prediction of durations for online collective actions based on peak's timing
NASA Astrophysics Data System (ADS)
Nie, Shizhao; Wang, Zheng; Pujia, Wangmo; Nie, Yuan; Lu, Peng
2018-02-01
The Peak Model states that each collective action has a life cycle consisting of four periods, "prepare", "outbreak", "peak", and "vanish", and that the peak determines the maximum energy and the whole process. Re-simulation of the peak model indicates that there seems to be a stable ratio between the peak's timing (TP) and the total span (T) or duration of collective actions, which needs further validation against empirical data of collective actions. Therefore, daily big data on online collective actions is applied to validate the model; the key is to check the ratio between the peak's timing and the total span. The big data is obtained from online recording and mining of websites. The empirical big data verifies that there is a stable ratio between TP and T; furthermore, it seems to be normally distributed. This rule holds both for the general case and for sub-types of collective actions. Given the distribution of the ratio, an estimated probability density function can be obtained, and therefore the span can be predicted via the peak's timing. Under the big data scenario, the instant span (how long the collective action lasts or when it ends) can be monitored and predicted in real time. With denser data (Big Data), the estimation of the ratio's distribution becomes more robust, and the prediction of collective actions' spans or durations will be more accurate.
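A simple sketch of the prediction step described above, using synthetic ratios rather than the collected web data: estimate the distribution of r = TP / T from past events, then predict the span of a new event from its observed peak timing.

```python
import numpy as np

rng = np.random.default_rng(11)
ratios = np.clip(rng.normal(0.4, 0.08, size=1000), 0.05, 0.95)   # stand-in for observed TP/T

r_mean, r_std = ratios.mean(), ratios.std(ddof=1)

def predict_span(tp_days, quantiles=(0.1, 0.5, 0.9)):
    """Predicted total duration T = TP / r, reported at selected quantiles of T."""
    r_q = np.quantile(ratios, quantiles)
    return tp_days / r_q[::-1]          # a small ratio corresponds to a long predicted span

print("ratio mean/std:", r_mean, r_std)
print("predicted spans for TP = 3 days:", predict_span(3.0))
```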
A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil
NASA Technical Reports Server (NTRS)
Lane, John E.; Metzger, Philip T.; Wilkinson, R. Allen
2010-01-01
As part of ongoing efforts to develop models of lunar soil mechanics, this report reviews two topics that are important to discrete element method (DEM) modeling the behavior of soils (such as lunar soils): (1) methods of modeling particle shapes and (2) analytical representations of particle size distribution. The choice of particle shape complexity is driven primarily by opposing tradeoffs with total number of particles, computer memory, and total simulation computer processing time. The choice is also dependent on available DEM software capabilities. For example, PFC2D/PFC3D and EDEM support clustering of spheres; MIMES incorporates superquadric particle shapes; and BLOKS3D provides polyhedra shapes. Most commercial and custom DEM software supports some type of complex particle shape beyond the standard sphere. Convex polyhedra, clusters of spheres and single parametric particle shapes such as the ellipsoid, polyellipsoid, and superquadric, are all motivated by the desire to introduce asymmetry into the particle shape, as well as edges and corners, in order to better simulate actual granular particle shapes and behavior. An empirical particle size distribution (PSD) formula is shown to fit desert sand data from Bagnold. Particle size data of JSC-1a obtained from a fine particle analyzer at the NASA Kennedy Space Center is also fitted to a similar empirical PSD function.
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
NASA Astrophysics Data System (ADS)
Bernhard, E.; Mullaney, J. R.; Aird, J.; Hickox, R. C.; Jones, M. L.; Stanley, F.; Grimmett, L. P.; Daddi, E.
2018-05-01
The lack of a strong correlation between AGN X-ray luminosity (LX; a proxy for AGN power) and the star formation rate (SFR) of their host galaxies has recently been attributed to stochastic AGN variability. Studies using population synthesis models have incorporated this by assuming a broad, universal (i.e. does not depend on the host galaxy properties) probability distribution for AGN specific X-ray luminosities (i.e. the ratio of LX to host stellar mass; a common proxy for Eddington ratio). However, recent studies have demonstrated that this universal Eddington ratio distribution fails to reproduce the observed X-ray luminosity functions beyond z ~ 1.2. Furthermore, empirical studies have recently shown that the Eddington ratio distribution may instead depend upon host galaxy properties, such as SFR and/or stellar mass. To investigate this further, we develop a population synthesis model in which the Eddington ratio distribution is different for star-forming and quiescent host galaxies. We show that, although this model is able to reproduce the observed X-ray luminosity functions out to z ~ 2, it fails to simultaneously reproduce the observed flat relationship between SFR and X-ray luminosity. We can solve this, however, by incorporating a mass dependency in the AGN Eddington ratio distribution for star-forming host galaxies. Overall, our models indicate that a relative suppression of low Eddington ratios (λEdd ≲ 0.1) in lower mass galaxies (M* ≲ 10^(10-11) M⊙) is required to reproduce both the observed X-ray luminosity functions and the observed flat SFR/X-ray relationship.
Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A
2008-09-01
The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).
NASA Astrophysics Data System (ADS)
Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.
2017-12-01
Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the relating SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of random age sampling scheme might only hold in the saturated region of homogeneous catchments resulting in an exponential TTD. This assumption is however violated when the vadose zone is included as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that larger (smaller) magnitude of effective precipitation shifts the scale of TTD towards younger (older) travel times, while the shape of the TTD remains untouched. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTD to study their assumptions and limitations, providing physical inferences for empirical parameters.
NASA Astrophysics Data System (ADS)
Xu, M., III; Liu, X.
2017-12-01
In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment-trapping dams, pasture, terraces) on the runoff and sediment load is among the key issues for guiding the implementation of water and soil conservation measures and for predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method and the soil and water conservation method, are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method widely used in hydrological research can also be classified as an empirical method, since it is mathematically equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. Conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical methods, because the runoff and sediment load simulated by distributed models (e.g. the Digital Yellow Integrated Model and the Geomorphology-Based Hydrological Model) are usually not very satisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical methods. In addition, we put forward an assessment framework for methods of studying runoff and sediment load variations in the Yellow River Basin, from the point of view of input data, model structure, and result output. The assessment framework is then applied to the Huangfuchuan River.
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
Bonnet-Lebrun, Anne-Sophie; Manica, Andrea; Eriksson, Anders; Rodrigues, Ana S L
2017-05-01
Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modeled communities (that is, with phylogenetic trees and species abundance distributions shaped similarly to those of typical empirical bird and mammal communities) from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in preequilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under preequilibrium conditions. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Joint scaling laws in functional and evolutionary categories in prokaryotic genomes
Grilli, J.; Bassetti, B.; Maslov, S.; Cosentino Lagomarsino, M.
2012-01-01
We propose and study a class-expansion/innovation/loss model of genome evolution taking into account the biological roles of genes and their constituent domains. In our model, the numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by the addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional 'recipe' for genome composition of the type 'a spoonful of sugar for each egg yolk'. The model jointly reproduces two known empirical laws: the distribution of family sizes and the non-linear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterizing these two scaling laws, establishing a direct quantitative connection between evolutionary and functional categories. It predicts that functional categories that grow faster than linearly with genome size are characterized by flatter-than-average family size distributions. This relation is confirmed by our bioinformatics analysis of prokaryotic genomes. This proves that the joint quantitative trends of functional and evolutionary classes can be understood in terms of evolutionary growth with proportional recipes. PMID:21937509
Maximum Entropy for the International Division of Labor
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country’s strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product’s complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country’s strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052
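A minimal numerical sketch of the entropy-maximization idea described in the abstract above: with the normalization constraint and a fixed expected product complexity, the maximizing export shares take a Boltzmann-like form s_p ∝ exp(-beta * c_p). The complexity scores, target mean, and the crude grid search over the multiplier beta below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def maxent_shares(complexity, target_mean, beta_grid=np.linspace(-50, 50, 20001)):
    """Export shares that maximize Shannon entropy subject to a fixed expected
    product complexity (hypothetical complexity scores). The maximizer has the
    Boltzmann form s_p ~ exp(-beta * c_p); beta is chosen so that <c> = target_mean."""
    c = np.asarray(complexity, dtype=float)
    best_beta, best_err = None, np.inf
    for beta in beta_grid:                       # crude 1-D search over the multiplier
        w = np.exp(-beta * (c - c.mean()))       # center c for numerical stability
        s = w / w.sum()
        err = abs(np.dot(s, c) - target_mean)
        if err < best_err:
            best_beta, best_err = beta, err
    w = np.exp(-best_beta * (c - c.mean()))
    return w / w.sum(), best_beta

# toy example: 5 exportable products with hypothetical complexity scores
shares, beta = maxent_shares([0.2, 0.5, 1.0, 1.8, 2.4], target_mean=1.0)
print(np.round(shares, 3), round(beta, 3))
```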
NASA Technical Reports Server (NTRS)
Sturrock, P. A.; Antiochos, S. K.; Klimchuk, J. A.; Roumeliotis, G.
1994-01-01
It is known from computer calculations that if a force-free magnetic field configuration is stressed progressively by footpoint displacements, the configuration expands and approaches the open configuration with the same surface flux distribution, and the energy of the field increases progressively. For configurations of translational symmetry, it has been found empirically that the energy tends asymptotically to a certain functional form. It is here shown that analysis of a simple model of the asymptotic form of force-free fields of translational symmetry leads to, and therefore justifies, this functional form. According to this model, the field evolves in a well-behaved manner with no indication of instability or loss of equilibrium.
Focal ratio degradation: a new perspective
NASA Astrophysics Data System (ADS)
Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss
2008-07-01
We have developed an alternative FRD empirical model for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify and potentially minimize the various sources of FRD, and optimise the fiber and instrument performance.
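A small sketch of the fitting approach described above: a Voigt profile (a Gaussian convolved with a Lorentzian, available as scipy.special.voigt_profile) is fitted to a synthetic far-field profile. The angles, amplitudes, and noise level are made-up illustrative values, not measured FRD data.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def voigt(theta, amp, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (sigma, modal diffusion)
    with a Lorentzian (gamma, scattering)."""
    return amp * voigt_profile(theta, sigma, gamma)

# synthetic far-field ring profile (illustrative numbers, not instrument data)
theta = np.linspace(-10, 10, 401)                     # output angle, degrees
truth = voigt(theta, 1.0, 1.5, 0.8)
data = truth + np.random.default_rng(0).normal(0, 0.005, theta.size)

popt, _ = curve_fit(voigt, theta, data, p0=[1.0, 1.0, 0.5])
print("amp, sigma (Gaussian), gamma (Lorentzian):", np.round(popt, 3))
```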
NASA Astrophysics Data System (ADS)
Yanti, Apriwida; Susilo, Bowo; Wicaksono, Pramaditya
2016-11-01
Gajahmungkur reservoir is administratively located in Wonogiri Regency, Central Java, and its main function is flood control in the upstream part of the Bengawan Solo River. The reservoir also serves hydroelectric power generation (PLTA), water supply, irrigation, fisheries, and tourism. Its economic utilization was planned for about 100 years, but this lifetime is beginning to be threatened by silting of the reservoir. Eroded material entering the water body is suspended and accumulates. Suspended material, or TSS (total suspended solids), increases the turbidity of the water, which degrades water quality and silts the reservoir. Remote sensing technology can be used to determine the spatial distribution of TSS. The purposes of this study were to 1) use and compare the accuracy of single Landsat 8 OLI bands for mapping the spatial distribution of TSS and 2) estimate the TSS of Gajahmungkur reservoir surface waters down to a depth of 30 cm. The TSS spatial distribution was modelled empirically by relating image pixel values to field data through correlation and regression analysis. The data used in the empirical modelling were the single visible, NIR, and SWIR bands of Landsat 8 OLI acquired on 8 May 2016, and field-measured TSS values collected on 12 April 2016. The results revealed that the distribution and estimated value of TSS in Gajahmungkur reservoir are mapped most accurately using band 4 (the red band). The coefficient of determination between field TSS and image-derived TSS using band 4 is 0.5431, and the standard error (SE) of the predicted TSS is 16.16 mg/L. The estimated total TSS for May 2016 according to band 4 is 1,087.56 tons, and the average estimated TSS value down to the depth of 30 cm is 61.61 mg/L. The highest TSS concentrations occur in the northern parts of the reservoir, dominated by eroded material from the Keduang River.
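A hedged sketch of the single-band empirical modelling step described above: a linear regression between band-4 (red) reflectance and field-measured TSS, followed by the coefficient of determination and the standard error of estimate. The reflectance and TSS values below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# field-measured TSS (mg/L) and matching band-4 (red) reflectance values
# (hypothetical numbers for illustration only)
tss = np.array([22.0, 35.0, 48.0, 60.0, 75.0, 90.0, 110.0])
red = np.array([0.04, 0.06, 0.08, 0.09, 0.12, 0.13, 0.16])

# simple linear empirical model TSS = a * reflectance + b, as used in
# single-band regression mapping
a, b = np.polyfit(red, tss, deg=1)
pred = a * red + b

r2 = 1 - np.sum((tss - pred)**2) / np.sum((tss - tss.mean())**2)
se = np.sqrt(np.sum((tss - pred)**2) / (tss.size - 2))   # standard error of estimate
print(f"TSS = {a:.1f} * R_red + {b:.1f},  R^2 = {r2:.3f},  SE = {se:.2f} mg/L")
```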
NASA Astrophysics Data System (ADS)
Emül, Y.; Erbahar, D.; Açıkgöz, M.
2014-11-01
The local structure around Cr3+ centers in the perovskite KMgF3 crystal has been investigated through both ab initio density functional theory (DFT) and semi-empirical superposition model (SPM) analyses. A supercell approach is used for the DFT calculations. All of the tetragonal (Cr3+-VMg and Cr3+-Li+), trigonal (Cr3+-VK), and CrF5O cluster centers have been considered with various structural models based on previously suggested experimental inferences. The significant structural changes around the Cr3+ centers induced by Mg2+ or K+ vacancies, and by Li substitution at those vacancy sites, have been determined and discussed in terms of the charge distribution. This study provides insight into the roles of Mg2+ and K+ vacancies and of the Li+ ion in the local structural properties around Cr3+ centers in KMgF3.
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for the analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data, which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by revealing the system's dominant time scales. The LDMs are used as new variables for the empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt, where the El Niño-Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs over the traditionally used empirical orthogonal function decomposition is demonstrated for these data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
An empirical model for global earthquake fatality estimation
Jaiswal, Kishor; Wald, David
2010-01-01
We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a given shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
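A compact sketch of the fatality-rate formulation described above: the rate is a two-parameter lognormal CDF of shaking intensity, and expected fatalities are the exposure-weighted sum over intensity levels. The exposure table and the parameter values theta and beta are illustrative assumptions, not actual PAGER coefficients.

```python
import numpy as np
from scipy.stats import norm

def fatality_rate(intensity, theta, beta):
    """Two-parameter lognormal CDF of shaking intensity:
    nu(S) = Phi( ln(S / theta) / beta )."""
    return norm.cdf(np.log(intensity / theta) / beta)

def expected_fatalities(population_by_mmi, theta, beta):
    """Sum over intensity bins of exposed population times the fatality rate."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in population_by_mmi.items())

# hypothetical exposure (MMI level -> people exposed) and country parameters
exposure = {6.0: 500_000, 7.0: 120_000, 8.0: 30_000, 9.0: 5_000}
print(round(expected_fatalities(exposure, theta=12.0, beta=0.25)))
```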
An integrated approach to characterize genetic interaction networks in yeast metabolism
Szappanos, Balázs; Kovács, Károly; Szamecz, Béla; Honti, Frantisek; Costanzo, Michael; Baryshnikova, Anastasia; Gelius-Dietrich, Gabriel; Lercher, Martin J.; Jelasity, Márk; Myers, Chad L.; Andrews, Brenda J.; Boone, Charles; Oliver, Stephen G.; Pál, Csaba; Papp, Balázs
2011-01-01
Intense experimental and theoretical efforts have been made to globally map genetic interactions, yet we still do not understand how gene-gene interactions arise from the operation of biomolecular networks. To bridge the gap between empirical and computational studies, we: i) quantitatively measure genetic interactions between ~185,000 metabolic gene pairs in Saccharomyces cerevisiae, ii) superpose the data on a detailed systems biology model of metabolism, and iii) introduce a machine-learning method to reconcile empirical interaction data with model predictions. We systematically investigate the relative impacts of functional modularity and metabolic flux coupling on the distribution of negative and positive genetic interactions. We also provide a mechanistic explanation for the link between the degree of genetic interaction, pleiotropy, and gene dispensability. Last, we demonstrate the feasibility of automated metabolic model refinement by correcting misannotations in NAD biosynthesis and confirming them by in vivo experiments. PMID:21623372
Fang, Huaming; Zhang, Peng; Huang, Lisa P.; Zhao, Zhengyi; Pi, Fengmei; Montemagno, Carlo; Guo, Peixuan
2014-01-01
Living systems produce ordered structures and nanomachines that inspire the development of biomimetic nanodevices such as chips, MEMS, actuators, sensors, sorters, and apparatuses for single-pore DNA sequencing, disease diagnosis, and drug or therapeutic RNA delivery. Determining the copy number of the subunits that build these machines is challenging due to their small size. Here we report a simple mathematical method to determine the stoichiometry, using the phi29 DNA-packaging nanomotor as a model to elucidate the application of the formula $\sum_{M=0}^{Z} \binom{Z}{M} p^{Z-M} q^{M}$, where p and q are the fractions of wild-type and inactive mutant subunits in the empirical assay, M is the copy number of mutants, and Z is the stoichiometry in question. Variable ratios of mutant and wild-type subunits were mixed to inhibit motor function. Empirical data were plotted over the theoretical curves to determine the stoichiometry and the value of K, the number of mutant copies needed in each machine to block its function, under the assumption that wild-type and mutant subunits have equal binding affinity. Both Z and K from 1-12 were investigated. The data precisely confirmed that the phi29 motor contains six copies (Z) of the motor ATPase gp16, and that K = 1. PMID:24650885
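A short sketch of the binomial reasoning behind the formula above: if a fraction q of subunits in the assembly pool is inactive mutant (p = 1 - q) and K mutant copies suffice to block a machine of stoichiometry Z, the fraction of machines that stay active is the sum of the binomial terms with fewer than K mutants. The (Z, K) pairs and mutant fractions below are illustrative only.

```python
import numpy as np
from math import comb

def active_fraction(q, Z, K):
    """Probability that a machine with Z subunits keeps working when a fraction q
    of subunits in the assembly pool is inactive mutant (p = 1 - q), assuming
    K mutant copies are enough to block the machine and equal binding affinity.
    Sums the binomial terms C(Z, M) p^(Z-M) q^M over M = 0 .. K-1."""
    p = 1.0 - q
    return sum(comb(Z, M) * p**(Z - M) * q**M for M in range(K))

q = np.linspace(0, 1, 6)
for Z, K in [(6, 1), (6, 3), (12, 1)]:
    print(Z, K, np.round([active_fraction(x, Z, K) for x in q], 3))
```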
Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements
NASA Astrophysics Data System (ADS)
Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.
2013-12-01
A Eulerian high-frequency sampling of the particle size distribution (PSD) was performed at 1 Hz during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel. The particle data are recorded using a LISST-100X type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), which records the volume concentrations of particles with diameters ranging from 2.5 to 500 µm in 32 logarithmically spaced size classes. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of the PSD is hyperbolic, so a time series of PSD slopes can be estimated. Power spectral analysis shows that the mean diameter of the suspended particles exhibits scaling at high frequencies (from 1 s to 1000 s). The scaling properties of particle sizes are studied by computing the moment function from the pdf of the size distribution. Moment functions at many different time scales (from 1 s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and related to the other parameters. The multiscaling properties of the turbidity (coefficient cp computed from the LISST) are also considered on the same time scales, using Empirical Mode Decomposition.
Empirical evidence for multi-scaled controls on wildfire size distributions in California
NASA Astrophysics Data System (ADS)
Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.
2014-12-01
Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' fire-size distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggest that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire-size distributions are multi-scaled and likely not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls that may be either exogenous or endogenous to the system.
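A minimal sketch of one ingredient named above, a continuous power-law fit by maximum likelihood above a threshold xmin (a Clauset-style estimator). The synthetic "fire sizes" are drawn from a Pareto distribution purely for illustration; they are not the California atlas data.

```python
import numpy as np

def powerlaw_mle(sizes, xmin):
    """Continuous power-law MLE for sizes >= xmin, p(x) ~ x^(-alpha).
    Returns alpha and its approximate standard error."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# toy fire-size sample (hectares); real analyses would use >12,000 records
rng = np.random.default_rng(1)
fires = 100.0 * (1.0 - rng.random(5000)) ** (-1.0 / 1.4)  # Pareto with alpha = 2.4
print(powerlaw_mle(fires, xmin=100.0))
```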
Phytoplankton pigment patterns and wind forcing off central California
NASA Technical Reports Server (NTRS)
Abbott, Mark R.; Barksdale, Brett
1991-01-01
Mesoscale variability in phytoplankton pigment distributions off central California during the spring-summer upwelling season is studied via a 4-yr time series of high-resolution Coastal Zone Color Scanner imagery. Empirical orthogonal functions are used to decompose the time series of spatial images into its dominant modes of variability, and the coupling between wind forcing of the upper ocean and phytoplankton distribution on the mesoscale is investigated. Wind forcing, in particular the curl of the wind stress, was found to play an important role in the distribution of phytoplankton pigment in the California Current. The spring transition varies in timing and intensity from year to year but appears to be a recurrent feature associated with the rapid onset of upwelling-favorable winds. Although the underlying dynamics may be dominated by processes other than forcing by wind stress curl, it appears that curl may force the variability of the filaments and hence the pigment patterns.
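A brief sketch of an empirical orthogonal function decomposition of a (time x space) image series via the SVD of the anomaly matrix, the kind of analysis named above. The random input array simply stands in for the pigment imagery.

```python
import numpy as np

def eof_decomposition(field, n_modes=3):
    """Empirical orthogonal functions of a (time, space) data matrix:
    remove the time mean at each pixel, then take the SVD. Rows of
    'patterns' are the spatial EOFs; 'amplitudes' are their time series."""
    anomalies = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s**2 / np.sum(s**2)
    patterns = vt[:n_modes]                      # spatial modes
    amplitudes = u[:, :n_modes] * s[:n_modes]    # principal-component time series
    return patterns, amplitudes, variance_fraction[:n_modes]

# synthetic stand-in for a time series of pigment images (time x pixels)
rng = np.random.default_rng(0)
data = rng.normal(size=(120, 500))
patterns, amplitudes, var_frac = eof_decomposition(data)
print(var_frac)
```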
Measures of dependence for multivariate Lévy distributions
NASA Astrophysics Data System (ADS)
Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.
2001-02-01
Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.
Method development estimating ambient mercury concentration from monitored mercury wet deposition
NASA Astrophysics Data System (ADS)
Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.
2013-05-01
Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far less than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using a Beta distribution fit of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution were generated using data collected in 2009 at 11 monitoring stations. A comparison of the normalized histogram and the fitted density function shows that the empirical and fitted Beta distributions of the ratio agree closely. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate-bound mercury using the linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate ambient mercury concentration at locations and/or times where such measurements are not available but wet deposition is monitored.
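A small sketch of the Beta-fitting step described above: fit a Beta distribution to ratio values rescaled to the unit interval and read off its moments. The synthetic ratios and shape parameters are placeholders, not the 2009 monitoring data.

```python
import numpy as np
from scipy import stats

# hypothetical ratios of ambient Hg concentration to wet deposition,
# rescaled to (0, 1) so a Beta distribution applies
rng = np.random.default_rng(2)
ratios = rng.beta(2.5, 6.0, size=500)

a, b, loc, scale = stats.beta.fit(ratios, floc=0, fscale=1)   # fix support to [0, 1]
fitted = stats.beta(a, b)
print("shape parameters:", round(a, 2), round(b, 2))
print("mean:", round(fitted.mean(), 3),
      " std:", round(fitted.std(), 3),
      " mode:", round((a - 1) / (a + b - 2), 3))
```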
Implementation of jump-diffusion algorithms for understanding FLIR scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1995-07-01
Our pattern-theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the locations and orientations of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as Silicon Graphics hardware. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.
Image deblurring based on nonlocal regularization with a non-convex sparsity constraint
NASA Astrophysics Data System (ADS)
Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi
2018-04-01
In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the output of the nonlocal difference operator. However, our experiments illustrate that the empirical distribution of the nonlocal difference operator output, especially in the seminal work of Kheradmand et al., should be characterized by an extremely heavy-tailed distribution rather than a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Cugerone, Katia; De Michele, Carlo; Ghezzi, Antonio; Gianelle, Vorne; Gilardoni, Stefania
2018-04-01
Particle number size distributions (PNSDs) have been collected periodically in the urban area of Milan, Italy, during 2011 and 2012 in winter and summer months. Moreover, comparable PNSD measurements were carried out in the rural mountain site of Oga-San Colombano (2250 m a.s.l.), Italy, during February 2005 and August 2011. The aerosol data have been measured through the use of optical particle counters in the size range 0.3-25 µm, with a time resolution of 1 min. The comparison of the PNSDs collected in the two sites has been done in terms of total number concentration, showing higher numbers in Milan (often exceeding 10^3 cm^-3 in the winter season) compared to Oga-San Colombano (not greater than 2×10^2 cm^-3), as expected. The skewness-kurtosis plane has been used in order to provide a synoptic view and select the best distribution family describing the empirical PNSD pattern. The four-parameter Johnson system-bounded distribution (called Johnson SB or JSB) has been tested for this aim, due to its great flexibility and ability to assume different shapes. The PNSD pattern has been found to be generally invariant under site and season changes. Nevertheless, several PNSDs belonging to the Milan winter season (generally more than 30 %) clearly deviate from the standard empirical pattern. The seasonal increase in the concentration of primary aerosols due to combustion processes in winter and the influence of weather variables throughout the year, such as precipitation and wind speed, could be considered plausible explanations of PNSD dynamics.
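A hedged sketch of the distribution-fitting step: scipy's johnsonsb implements the four-parameter Johnson SB family, and the skewness-kurtosis pair places a sample on the plane mentioned above. The simulated diameters and fixed support bounds are assumptions for illustration, not the Milan or Oga-San Colombano measurements.

```python
from scipy import stats

# hypothetical sample of particle diameters (micrometres) in the 0.3-25 range
diam = stats.johnsonsb(a=2.0, b=1.2, loc=0.3, scale=24.7).rvs(2000, random_state=3)

# four-parameter Johnson SB fit; loc and scale bound the support, a and b set the shape
a, b, loc, scale = stats.johnsonsb.fit(diam, floc=0.3, fscale=24.7)
print("fitted shape parameters a, b:", round(a, 2), round(b, 2))

# skewness-kurtosis pair used to place the sample on the (skewness, kurtosis) plane
print("skewness:", round(stats.skew(diam), 2),
      " kurtosis:", round(stats.kurtosis(diam, fisher=False), 2))
```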
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
Nonparametric bootstrap analysis with applications to demographic effects in demand functions.
Gozalo, P L
1997-12-01
"A new bootstrap proposal, labeled smooth conditional moment (SCM) bootstrap, is introduced for independent but not necessarily identically distributed data, where the classical bootstrap procedure fails.... A good example of the benefits of using nonparametric and bootstrap methods is the area of empirical demand analysis. In particular, we will be concerned with their application to the study of two important topics: what are the most relevant effects of household demographic variables on demand behavior, and to what extent present parametric specifications capture these effects." excerpt
Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures
NASA Technical Reports Server (NTRS)
James, Benjamin Wylie
1935-01-01
This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted, so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.
Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increments distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non universal exponents. Recent progress in the empirical study of the volatility suggests that the volatility results from some sort of multiplicative cascade. A convincing `microscopic' (i.e. trader based) model that explains this observation is however not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.
NASA Astrophysics Data System (ADS)
Xie, Yanan; Zhou, Mingliang; Pan, Dengke
2017-10-01
The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm uses the modified regression empirical algorithm rather than a linear regression formula to retrieve the near-surface rainfall rate. Half of the empirical parameters are eliminated in the weighted integral work, and a smaller average relative error is obtained while the rainfall rate is less than 100 mm/h. Therefore, the algorithm proposed in this paper can obtain high-precision rainfall information.
From the Cover: The growth of business firms: Theoretical framework and empirical evidence
NASA Astrophysics Data System (ADS)
Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S. V.; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H. Eugene
2005-12-01
We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent of 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships. Keywords: proportional growth, preferential attachment, Laplace distribution.
Effect of sub-pore scale morphology of biological deposits on porous media flow properties
NASA Astrophysics Data System (ADS)
Ghezzehei, T. A.
2012-12-01
Biological deposits often influence fluid flow by altering the pore space morphology and related hydrologic properties such as porosity, water retention characteristics, and permeability. In most coupled-processes models changes in porosity are inferred from biological process models using mass-balance. The corresponding evolution of permeability is estimated using (semi-) empirical porosity-permeability functions such as the Kozeny-Carman equation or power-law functions. These equations typically do not account for the heterogeneous spatial distribution and morphological irregularities of the deposits. As a result, predictions of permeability evolution are generally unsatisfactory. In this presentation, we demonstrate the significance of pore-scale deposit distribution on porosity-permeability relations using high resolution simulations of fluid flow through a single pore interspersed with deposits of varying morphologies. Based on these simulations, we present a modification to the Kozeny-Carman model that accounts for the shape of the deposits. Limited comparison with published experimental data suggests the plausibility of the proposed conceptual model.
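A short sketch comparing the two porosity-permeability forms named above, the Kozeny-Carman relation and a simple power law, both normalized to an initial clean-bed porosity. The porosity values and exponent are illustrative only.

```python
import numpy as np

def kozeny_carman_ratio(phi, phi0):
    """Relative permeability change predicted by the Kozeny-Carman relation,
    k ~ phi^3 / (1 - phi)^2, normalized by the initial (clean) state phi0."""
    return (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

def power_law_ratio(phi, phi0, n=3.0):
    """Alternative empirical power-law porosity-permeability model, k ~ phi^n."""
    return (phi / phi0) ** n

phi0 = 0.40
phi = np.linspace(0.40, 0.20, 5)     # porosity progressively lost to deposits
print(np.round(kozeny_carman_ratio(phi, phi0), 3))
print(np.round(power_law_ratio(phi, phi0), 3))
```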
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Gurgiolo, Chris A.; Nieves-Chinchilla, Teresa; Goldstein, Melvyn L.
2010-01-01
It has been suggested by a number of authors that the solar wind electron halo can be formed by scattering of the strahl. On frequent occasions we have observed, in electron angular skymaps (Phi/Theta plots) of the electron 3D velocity distribution functions, a bursty filament of particles connecting the strahl to the solar wind core-halo. These are seen over a very limited energy range. When the magnetic field is well off the nominal solar wind flow direction, such filaments are inconsistent with any local forces and are probably the result of strong scattering. Furthermore, observations indicate that the strahl component is frequently and significantly anisotropic (Tperp/Tpar ≈ 2). This provides a possible free-energy source for the excitation of whistler waves as a possible scattering mechanism. The empirical observational evidence relating the halo and the strahl suggests that the strahl population may be, at least in part, the source of the halo component.
Observations of thermal and suprathermal tail ions from WIND
NASA Astrophysics Data System (ADS)
Randol, B. M.; Christian, E. R.; Wilson, L. B., III
2016-12-01
The velocity distribution function (VDF) of solar wind protons (as well as of other ion populations) comprises a thermal Maxwellian core and an accelerated suprathermal tail, beginning at around 1 keV in the frame co-moving with the solar wind bulk velocity. The form of the suprathermal tail is a power law in phase-space density f versus speed v, such that f ∝ v^γ, where γ is the power-law index. This commonly observed index is of particular interest because no traditional theory predicts its existence, and more data are needed to test these theories. The general shape is of interest because it is kappa-like. We show combined observations from three different instruments on the WIND spacecraft: 3DP/PLSP, STICS, and 3DP/SST/Open. These data stretch from 10^2 to 10^7 eV in energy, encompassing both the thermal and suprathermal proton populations. We show further evidence for this kappa-like distribution and report on our progress in fitting empirical functions to these data.
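A sketch of one common isotropic kappa-distribution parameterization consistent with the "kappa-like" shape described above: a Maxwellian-like core with a high-speed power-law tail whose logarithmic slope tends to -2(kappa+1). The density, thermal speed, and kappa value are illustrative assumptions, not fitted WIND parameters.

```python
import numpy as np
from scipy.special import gamma

def kappa_vdf(v, n, theta, kappa):
    """One common isotropic kappa velocity distribution (phase-space density):
    a Maxwellian-like core with a power-law tail f ~ v^(-2*(kappa+1)) at high speed.
    v and theta in m/s, n in m^-3."""
    norm = (n / ((np.pi * kappa) ** 1.5 * theta ** 3)
            * gamma(kappa + 1) / gamma(kappa - 0.5))
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1))

v = np.logspace(4, 7, 7)              # 10 km/s to 10,000 km/s
f = kappa_vdf(v, n=5e6, theta=4e4, kappa=3.0)
# local logarithmic slope; approaches -2*(kappa+1) = -8 in the tail
print(np.round(np.diff(np.log(f)) / np.diff(np.log(v)), 2))
```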
On Detecting Influential Data and Selecting Regression Variables
1989-10-01
For a given subset A of the data, the empirical influence function for $\hat{\beta}$, denoted $IF_A$, is defined to be $IF_A = \hat{\beta}_A - \hat{\beta}$. For a given positive definite matrix M and a nonzero vector of interest, Cook and Weisberg (1980) treated the influence on the fitted values $X\hat{\beta}$ using the empirical influence function. (Cited references include: Characterizations of an empirical influence function for detecting influential cases in regression, Technometrics 22, 495-508; Gray, J. B. and Ling, R. F.)
Interevent time distributions of human multi-level activity in a virtual world
NASA Astrophysics Data System (ADS)
Mryglod, O.; Fuchs, B.; Szell, M.; Holovatch, Yu.; Thurner, S.
2015-02-01
Studying human behavior in virtual environments provides extraordinary opportunities for a quantitative analysis of social phenomena with levels of accuracy that approach those of the natural sciences. In this paper we use records of player activities in the massive multiplayer online game Pardus over 1238 consecutive days, and analyze dynamical features of the sequences of actions of players. We build on previous work where temporal structures of human actions of the same type were quantified, and provide an empirical understanding of human actions of different types. This study of multi-level human activity can be seen as a dynamic counterpart of static multiplex network analysis. We show that the interevent time distributions of actions in the Pardus universe follow highly non-trivial distribution functions, from which we extract action-type-specific characteristic 'decay constants'. We discuss characteristic features of the interevent time distributions, including periodic patterns on different time scales, bursty dynamics, and various functional forms on different time scales. We comment on gender differences of players in emotional actions, and find that while males and females act similarly when performing some positive actions, females are slightly faster for negative actions. We also observe effects of player age: more experienced players are generally faster in making decisions about engaging in and terminating enmity and friendship, respectively.
NASA Astrophysics Data System (ADS)
Quiroga, S.; Suárez, C.
2015-07-01
This paper examines the effects of climate change and drought on agricultural output in Spanish rural areas. To date, the effects of drought as a response to climate change or policy restrictions have been analyzed through response functions considering only direct effects on crop productivity and incomes. These changes also affect the income distribution in the region and therefore modify the social structure. Here we consider this complementary indirect effect on the social distribution of incomes, which is essential in the long term. We estimate crop production functions for a range of Mediterranean crops in Spain and use a decomposition of an inequality measure to estimate the impact of climate change and drought on yield disparities. This social aspect is important for climate change policies since it can be decisive for the public acceptance of certain adaptation measures in a context of drought. We provide empirical estimates of the marginal effects on the two impacts considered: average farm income and the social distribution of income. In our estimates we consider crop productivity responses to both bio-physical and socio-economic factors in order to analyze long-term implications for both competitiveness and social disparities. We find disparities in the adaptation priorities depending on the crop and the region analyzed.
Network topology and resilience analysis of South Korean power grid
NASA Astrophysics Data System (ADS)
Kim, Dong Hwan; Eisenberg, Daniel A.; Chun, Yeong Han; Park, Jeryang
2017-01-01
In this work, we present topological and resilience analyses of the South Korean power grid (KPG) over a broad range of voltage levels. While a topological analysis of the KPG restricted to high-voltage infrastructure shows an exponential degree distribution, providing further empirical evidence on power-grid topology, the inclusion of low-voltage components generates a distribution with a larger variance and a smaller average degree. This result suggests that the topology of a power grid may converge to a highly skewed degree distribution if more low-voltage data are considered. Moreover, when compared to ER random and BA scale-free networks, the KPG has lower efficiency and a higher clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Error and attack tolerance analysis, evaluated with efficiency, indicates that the KPG is more vulnerable to random or degree-based attacks than to betweenness-based intentional attacks. Cascading failure analysis with a recovery mechanism demonstrates that the resilience of the network depends on both tolerance capacity and recovery initiation time. Also, when the two factors are fixed, the KPG is the most vulnerable among the three networks. Based on our analysis, we propose that the topology of power grids should be designed so that loads are homogeneously distributed, or so that functional hubs and their neighbors have high tolerance capacity, to enhance resilience.
NASA Astrophysics Data System (ADS)
Pavlick, R.; Schimel, D.
2014-12-01
Dynamic Global Vegetation Models (DGVMs) typically employ only a small set of Plant Functional Types (PFTs) to represent the vast diversity of observed vegetation forms and functioning. There is growing evidence, however, that this abstraction may not adequately represent the observed variation in plant functional traits, which is thought to play an important role for many ecosystem functions and for ecosystem resilience to environmental change. The geographic distribution of PFTs in these models is also often based on empirical relationships between present-day climate and vegetation patterns. Projections of future climate change, however, point toward the possibility of novel regional climates, which could lead to no-analog vegetation compositions incompatible with the PFT paradigm. Here, we present results from the Jena Diversity-DGVM (JeDi-DGVM), a novel traits-based vegetation model, which simulates a large number of hypothetical plant growth strategies constrained by functional tradeoffs, thereby allowing for a more flexible temporal and spatial representation of the terrestrial biosphere. First, we compare simulated present-day geographical patterns of functional traits with empirical trait observations (in-situ and from airborne imaging spectroscopy). The observed trait patterns are then used to improve the tradeoff parameterizations of JeDi-DGVM. Finally, focusing primarily on the simulated leaf traits, we run the model with various amounts of trait diversity. We quantify the effects of these modeled biodiversity manipulations on simulated ecosystem fluxes and stocks for both present-day conditions and transient climate change scenarios. The simulation results reveal that the coarse treatment of plant functional traits by current PFT-based vegetation models may contribute substantial uncertainty regarding carbon-climate feedbacks. Further development of trait-based models and further investment in global in-situ and spectroscopic plant trait observations are needed.
NASA Astrophysics Data System (ADS)
Yang, Bo; Yoon, Ji Wei; Monterola, Christopher
We present a large-scale, detailed analysis of microscopic empirical data on congested traffic flow, focusing on the non-linear interactions between the components of the many-body traffic system. By implementing a systematic procedure that averages over relatively unimportant factors, we extract the effective dependence of the acceleration on the gap between vehicles, the velocity, and the relative velocity. Such a relationship is characteristic not just of a few vehicles but of the traffic system as a whole. Several interesting features of the detailed vehicle-to-vehicle interactions are revealed, including the stochastic distribution of the human responses, the relative importance of the non-linear terms in different density regimes, the symmetric response to the relative velocity, and the insensitivity of the acceleration to the velocity within a certain gap and velocity range. The latter leads to a multitude of steady states without a fundamental diagram. The empirically constructed functional dependence of the acceleration on the important dynamical quantities not only gives the detailed collective driving behaviour of the traffic system, it also serves as a fundamental reference for the validation of the deterministic and stochastic microscopic traffic models in the literature.
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification of the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correcting T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict the type I errors of T_ML reported in the literature, and they perform well.
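A minimal sketch of the correction principle described above (not the paper's formulas): the observed T_ML is rescaled so that, over replicate statistics obtained under the correct model, the mean of the corrected statistic matches the nominal degrees of freedom. The chi-square replicates inflated by a factor of 1.4 merely stand in for simulated T_ML values.

```python
import numpy as np
from scipy import stats

def empirically_corrected_stat(t_obs, t_ml_replicates, df):
    """Bartlett-style empirical correction: rescale the observed T_ML so that the
    mean of the corrected statistic matches the degrees of freedom of the nominal
    chi-square. 't_ml_replicates' are T_ML values obtained under the correct model;
    this sketch only illustrates the rescaling principle."""
    c = df / np.mean(t_ml_replicates)            # empirical correction factor
    t_corr = c * t_obs
    return t_corr, stats.chi2.sf(t_corr, df)     # corrected statistic and p value

# toy illustration with inflated statistics whose mean is about 1.4 * df
df = 20
replicates = stats.chi2.rvs(df, size=2000, random_state=0) * 1.4
print(empirically_corrected_stat(t_obs=35.0, t_ml_replicates=replicates, df=df))
```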
2013-01-01
Background: Currently, there is very limited knowledge about the genes involved in normal pigmentation variation in East Asian populations. We carried out a genome-wide scan of signatures of positive selection using the 1000 Genomes Phase I dataset, in order to identify pigmentation genes showing putative signatures of selective sweeps in East Asia. We applied a broad range of methods to detect signatures of selection including: 1) tests designed to identify deviations of the Site Frequency Spectrum (SFS) from neutral expectations (Tajima's D, Fay and Wu's H, and Fu and Li's D* and F*), 2) tests focused on the identification of high-frequency haplotypes with extended linkage disequilibrium (iHS and Rsb), and 3) tests based on genetic differentiation between populations (LSBL). Based on the results obtained from a genome-wide analysis of 25 kb windows, we constructed an empirical distribution for each statistic across all windows, and identified pigmentation genes that are outliers in the distribution. Results: Our tests identified twenty genes that are relevant for pigmentation biology. Of these, eight genes (ATRN, EDAR, KLHL7, MITF, OCA2, TH, TMEM33 and TRPM1) were extreme outliers (top 0.1% of the empirical distribution) for at least one statistic, and twelve genes (ADAM17, BNC2, CTSD, DCT, EGFR, LYST, MC1R, MLPH, OPRM1, PDIA6, PMEL (SILV) and TYRP1) were in the top 1% of the empirical distribution for at least one statistic. Additionally, eight of these genes (BNC2, EGFR, LYST, MC1R, OCA2, OPRM1, PMEL (SILV) and TYRP1) have been associated with pigmentary traits in association studies. Conclusions: We identified a number of putative pigmentation genes showing extremely unusual patterns of genetic variation in East Asia. Most of these genes are outliers for different tests and/or different populations, and have already been described in previous scans for positive selection, providing strong support to the hypothesis that recent selective sweeps left a signature in these regions. However, it will be necessary to carry out association and functional studies to demonstrate the implication of these genes in normal pigmentation variation. PMID:23848512
Hider, Jessica L; Gittelman, Rachel M; Shah, Tapan; Edwards, Melissa; Rosenbloom, Arnold; Akey, Joshua M; Parra, Esteban J
2013-07-12
Currently, there is very limited knowledge about the genes involved in normal pigmentation variation in East Asian populations. We carried out a genome-wide scan of signatures of positive selection using the 1000 Genomes Phase I dataset, in order to identify pigmentation genes showing putative signatures of selective sweeps in East Asia. We applied a broad range of methods to detect signatures of selection including: 1) tests designed to identify deviations of the Site Frequency Spectrum (SFS) from neutral expectations (Tajima's D, Fay and Wu's H, and Fu and Li's D* and F*), 2) tests focused on the identification of high-frequency haplotypes with extended linkage disequilibrium (iHS and Rsb), and 3) tests based on genetic differentiation between populations (LSBL). Based on the results obtained from a genome-wide analysis of 25 kb windows, we constructed an empirical distribution for each statistic across all windows, and identified pigmentation genes that are outliers in the distribution. Our tests identified twenty genes that are relevant for pigmentation biology. Of these, eight genes (ATRN, EDAR, KLHL7, MITF, OCA2, TH, TMEM33 and TRPM1) were extreme outliers (top 0.1% of the empirical distribution) for at least one statistic, and twelve genes (ADAM17, BNC2, CTSD, DCT, EGFR, LYST, MC1R, MLPH, OPRM1, PDIA6, PMEL (SILV) and TYRP1) were in the top 1% of the empirical distribution for at least one statistic. Additionally, eight of these genes (BNC2, EGFR, LYST, MC1R, OCA2, OPRM1, PMEL (SILV) and TYRP1) have been associated with pigmentary traits in association studies. We identified a number of putative pigmentation genes showing extremely unusual patterns of genetic variation in East Asia. Most of these genes are outliers for different tests and/or different populations, and have already been described in previous scans for positive selection, providing strong support to the hypothesis that recent selective sweeps left a signature in these regions. However, it will be necessary to carry out association and functional studies to demonstrate the implication of these genes in normal pigmentation variation.
2016-09-01
... a Ridit analysis on the often sparse data sets in many Flying Qualities applications. The method of this paper is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between distributions. One such measure of the difference is the discrete-probability-distribution version of the (squared) Hellinger distance (Yang & Le Cam, 2000), $H^2(P, Q) = 1 - \sum_i \sqrt{p_i q_i}$.
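A tiny sketch of the squared Hellinger distance for discrete distributions, as used above to measure the difference between distributions. The rating counts are hypothetical.

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance between two discrete probability distributions:
    H^2(p, q) = 1 - sum_i sqrt(p_i * q_i)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return 1.0 - np.sum(np.sqrt(p * q))

# hypothetical ordinal rating counts from two sparse data sets
baseline = [4, 9, 5, 2, 0]
variant  = [2, 6, 7, 4, 1]
print(round(hellinger_sq(baseline, variant), 4))
```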
NASA Astrophysics Data System (ADS)
Podladchikova, O.; Lefebvre, B.; Krasnoselskikh, V.; Podladchikov, V.
An important task for the problem of coronal heating is to produce reliable evaluations of the statistical properties of energy release and eruptive events, such as micro- and nanoflares, in the solar corona. Different types of distributions for the peak flux, peak count rate, pixel intensities, total energy flux, emission measure increases, or waiting times have appeared in the literature. This raises the question of a precise evaluation and classification of such distributions. For this purpose, we use the method proposed by K. Pearson at the beginning of the last century, based on the relationship between the first four moments of the distribution. Pearson's technique encompasses and classifies a broad range of distributions, including some of those which have appeared in the literature on coronal heating. This technique is successfully applied to simulated data from the model of Krasnoselskikh et al. (2002). It provides successful fits to the empirical distributions of the dissipated energy, and classifies them as a function of model parameters such as dissipation mechanisms and thresholds.
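A brief sketch of Pearson's moment-based classification named above: compute beta1 (squared skewness), beta2 (non-excess kurtosis), and a commonly quoted form of Pearson's criterion kappa, whose value indicates the Pearson type. The lognormal sample merely stands in for the dissipated-energy data.

```python
import numpy as np
from scipy import stats

def pearson_criterion(x):
    """Place an empirical distribution in the Pearson system using its first four
    moments: beta1 = skewness^2, beta2 = (non-excess) kurtosis, and the criterion
    kappa = beta1*(beta2+3)^2 / (4*(4*beta2 - 3*beta1)*(2*beta2 - 3*beta1 - 6))."""
    b1 = stats.skew(x) ** 2
    b2 = stats.kurtosis(x, fisher=False)
    kappa = b1 * (b2 + 3.0) ** 2 / (4.0 * (4.0 * b2 - 3.0 * b1) * (2.0 * b2 - 3.0 * b1 - 6.0))
    return b1, b2, kappa

# toy "dissipated energy" sample; a real analysis would use the model output
energies = stats.lognorm(s=0.9).rvs(size=5000, random_state=4)
print(np.round(pearson_criterion(energies), 3))
```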
Inland empire logistics GIS mapping project.
DOT National Transportation Integrated Search
2009-01-01
The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...
Knott, Brandon C.; Nimlos, Claire T.; Robichaud, David J.; ...
2017-12-11
Research efforts in zeolite catalysis have become increasingly cognizant of the diversity in structure and function resulting from the distribution of framework aluminum atoms, through emerging reports of catalytic phenomena that fall outside those recognizable as the shape-selective ones emblematic of its earlier history. Molecular-level descriptions of how active-site distributions affect catalysis are an aspirational goal articulated frequently in experimental and theoretical research, yet they are limited by imprecise knowledge of the structure and behavior of the zeolite materials under interrogation. In experimental research, higher precision can result from more reliable control of structure during synthesis and from more robust and quantitative structural and kinetic characterization probes. In theoretical research, construction of models with specific aluminum locations and distributions seldom captures the heterogeneity inherent to the materials studied by experiment. In this Perspective, we discuss research findings that appropriately frame the challenges in developing more predictive synthesis-structure-function relations for zeolites, highlighting studies on ZSM-5 zeolites that are among the most structurally complex molecular sieve frameworks and the most widely studied because of their versatility in commercial applications. We discuss research directions to address these challenges and forge stronger connections between zeolite structure, composition, and active sites to catalytic function. Such connections promise to aid in bridging the findings of theoretical and experimental catalysis research, and transforming zeolite active site design from an empirical endeavor into a more predictable science founded on validated models.
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
A Monte Carlo model of hot electron trapping and detrapping in SiO2
NASA Astrophysics Data System (ADS)
Kamocsai, R. L.; Porod, W.
1991-02-01
High-field stressing and oxide degradation of SiO2 are studied using a microscopic model of electron heating and charge trapping and detrapping. Hot electrons lead to a charge buildup in the oxide according to the dynamic trapping-detrapping model by Nissan-Cohen and co-workers [Y. Nissan-Cohen, J. Shappir, D. Frohman-Bentchkowsky, J. Appl. Phys. 58, 2252 (1985)]. Detrapping events are modeled as trap-to-band impact ionization processes initiated by high energy conduction electrons. The detailed electronic distribution function obtained from Monte Carlo transport simulations is utilized for the determination of the detrapping rates. We apply our microscopic model to the calculation of the flat-band voltage shift in silicon dioxide as a function of the electric field, and we show that our model is able to reproduce the experimental results. We also compare these results to the predictions of the empirical trapping-detrapping model which assumes a heuristic detrapping cross section. Our microscopic theory accounts for the nonlocal nature of impact ionization which leads to a dark space close to the injecting cathode, which is unaccounted for in the empirical model.
Innovation flow through social networks: productivity distribution in France and Italy
NASA Astrophysics Data System (ADS)
di Matteo, T.; Aste, T.; Gallegati, M.
2005-10-01
From a detailed empirical analysis of the productivity of non-financial firms across several countries and years, we show that productivity follows a non-Gaussian distribution with 'fat tails' in the large-productivity region, which are well mimicked by power-law behaviors. We discuss how these empirical findings can be linked to a mechanism of exchanges in a social network where firms improve their productivity by direct innovation and/or by imitation of other firms' technological and organizational solutions. The type of network connectivity determines how fast and how efficiently information can diffuse and how quickly innovation will permeate or behaviors will be imitated. From a model for innovation flow through a complex network we show that the expectation values of the productivity of each firm are proportional to its connectivity in the network of links between firms. The comparison with the empirical distributions in France and Italy reveals that in this model, such a network must be of a scale-free type with a power-law degree distribution in the large-connectivity range.
NASA Technical Reports Server (NTRS)
Bergrun, N. R.
1951-01-01
An empirical method for the determination of the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The procedure represents an initial step toward the development of a method which is generally applicable in the design of thermal ice-prevention equipment for airplane wing and tail surfaces. Results given by the proposed empirical method are expected to be sufficiently accurate for the purpose of heated-wing design, and can be obtained from a few numerical computations once the velocity distribution over the airfoil has been determined. The empirical method presented for incompressible flow is based on results of extensive water-drop trajectory computations for five airfoil cases, consisting of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer. The method developed for incompressible flow is extended to the calculation of area and rate of impingement on straight wings in subsonic compressible flow to indicate the probable effects of compressibility for airfoils at low subsonic Mach numbers.
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about the read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved.
Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach
ERIC Educational Resources Information Center
Frisby, Craig L.; Parkin, Jason R.
2007-01-01
In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…
Effects of plant sex on range distributions and allocation to reproduction.
Johnson, Marc T J; Smith, Stacey D; Rausher, Mark D
2010-05-01
Despite an abundance of theory, few empirical studies have explored the ecological and evolutionary consequences of sex. We used a comparative phylogenetic approach to examine whether transitions between sexual and asexual reproduction are associated with changes in the size and distribution of species' geographical ranges, and their investment in reproduction. Here, we reconstructed the phylogeny of the genus Oenothera sections Oenothera and Calylophus (Onagraceae), which contain 35 sexual and 30 functionally asexual species. From each species, we collected data on the geographical distribution and variation in plant traits related to reproduction. Functionally asexual species occurred at higher latitudes, but did not differ in range size, compared with sexual species. Transitions to asexuality were associated with decreased investment in floral structures, including the length of petals, floral tubes and styles. Decreased anther size and increased seed size within asexual species also suggest altered allocation to male and female fitness. The observed range shifts are consistent with superior colonization of environments by asexual species following glaciation, and the observed changes in reproductive allocation support predictions made by models relating to the evolution of selfing. Our results suggest that the evolutionary consequences of asexual reproduction might be less restrictive than previously thought.
NASA Astrophysics Data System (ADS)
Quiroga, Sonia; Suárez, Cristina
2016-06-01
This paper examines the effects of climate change and drought on agricultural incomes in Spanish rural areas. The present research focuses on the effects of these extreme climatological events through response functions, considering effects on crop productivity and average incomes. Among the impacts of droughts, we focused on potential effects on income distribution. The study of the effects of abnormally dry periods is therefore needed in order to analyse diverse social aspects over the long term. We estimate crop production functions for a range of Mediterranean crops in Spain and use a measure of the decomposition of inequality to estimate the impact of climate change and drought on yield disparities. Certain adaptation measures may require a better understanding of risks by the public to achieve general acceptance. We provide empirical estimations of the marginal effects for the two impacts considered: farms' average income and income distribution. Our estimates consider crop production response to both biophysical and socio-economic aspects to analyse long-term implications for competitiveness and disparities. As for the results, we find disparities in the adaptation priorities depending on the crop and the region analysed.
The Momentum Distribution of Liquid ⁴He
Prisk, T. R.; Bryan, M. S.; Sokol, P. E.; ...
2017-07-24
We report a high-resolution neutron Compton scattering study of liquid ⁴He under milli-Kelvin temperature control. To interpret the scattering data, we performed Quantum Monte Carlo calculations of the atomic momentum distribution and final state effects for the conditions of temperature and density considered in the experiment. There is excellent agreement between the observed scattering and ab initio calculations of its lineshape at all temperatures. We also used model fit functions to obtain from the scattering data empirical estimates of the average atomic kinetic energy and Bose condensate fraction. These quantities are also in excellent agreement with ab initio calculations. We conclude that contemporary Quantum Monte Carlo methods can furnish accurate predictions for the properties of Bose liquids, including the condensate fraction, close to the superfluid transition temperature.
Shareholding Networks in Japan
NASA Astrophysics Data System (ADS)
Souma, Wataru; Fujiwara, Yoshi; Aoyama, Hideaki
2005-06-01
The Japanese shareholding network existing at the end of March 2002 is studied empirically. The network is constructed from 2,303 listed companies and 53 non-listed financial institutions. We consider this network as a directed graph by drawing edges from shareholders to stock corporations. The lengths of the shareholder lists vary with the companies, and the most comprehensive lists contain the top 30 shareholders. Consequently, the distribution of incoming edges has an upper bound, while that of outgoing edges has no bound. The distribution of outgoing degrees is well explained by a power-law function with an exponential tail. The exponent in the power-law range is γ = 1.7. To understand these features from the viewpoint of a company's growth, we consider the correlations between the outgoing degree and the company's age, profit, and total assets.
Is Directivity Still Effective in a PSHA Framework?
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2008-12-01
Source rupture parameters, like directivity, modulate the energy release, causing variations in the radiated signal amplitude. Thus they affect the empirical predictive equations and, as a consequence, the seismic hazard assessment. Classical probabilistic hazard evaluations, e.g. Cornell (1968), use very simple predictive equations based only on magnitude and distance, which do not account for variables describing the rupture process. Nowadays, however, a few predictive equations (e.g. Somerville 1997, Spudich and Chiou 2008) take rupture directivity into account, and a few implementations have been made in a PSHA framework (e.g. Convertito et al. 2006, Rowshandel 2006). In practice, these new empirical predictive models incorporate the rupture propagation effects quantitatively through the introduction of variables like rake, azimuth, rupture velocity and laterality. The contribution of all these variables is summarized in corrective factors derived from measuring differences between the real data and the predicted ones. It is therefore possible to keep the older computation, making use of a simple predictive model, and to incorporate the directivity effect through the corrective factors. Each supplementary variable implies a new integral in the parametric space; the difficulty lies in constraining the parameter distribution functions. We present preliminary results for ad hoc distributions (Gaussian, uniform) in order to test the impact of incorporating directivity into PSHA models. We demonstrate that incorporating directivity in PSHA by means of the new predictive equations may lead to strong percentage variations in the hazard assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear response-frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair potentials, and the parameterization protocol involves only calculations on properly chosen subsystems. The method is applied to a series of alkali halides in different physical phases and reproduces experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
NASA Astrophysics Data System (ADS)
Cao, M.-H.; Jiang, H.-K.; Chin, J.-S.
1982-04-01
An improved flat-fan spray model is used for the semi-empirical analysis of liquid fuel distribution downstream of a plain orifice injector under cross-stream air flow. The model assumes that, due to the aerodynamic force of the high-velocity cross air flow, the injected fuel immediately forms a flat-fan liquid sheet perpendicular to the cross flow. Once the droplets have been formed, the trajectories of individual droplets determine fuel distribution downstream. Comparison with test data shows that the proposed model accurately predicts liquid fuel distribution at any point downstream of a plain orifice injector under high-velocity, low-temperature uniform cross-stream air flow over a wide range of conditions.
NASA Astrophysics Data System (ADS)
Wang, M.; Peng, Y.; Xie, X.; Liu, Y.
2017-12-01
Aerosol-cloud interaction continues to constitute one of the most significant uncertainties for anthropogenic climate perturbations. The parameterization of the cloud droplet size distribution and of the autoconversion process from cloud to rain can influence the estimation of the first and second aerosol indirect effects in global climate models. We design a series of experiments focusing on the microphysical cloud scheme of NCAR CAM5 (Community Atmospheric Model Version 5) in a transient historical run with realistic sea surface temperature and sea ice. We investigate the effect of three empirical, two semi-empirical and one analytical expression for the droplet size distribution on cloud properties, and explore the statistical relationships between aerosol optical thickness (AOT) and simulated cloud variables, including cloud-top droplet effective radius (CDER), cloud optical depth (COD), and cloud water path (CWP). We also introduce the droplet spectral shape parameter into the autoconversion process to incorporate the effect of droplet size distribution on the second aerosol indirect effect. Three satellite datasets (MODIS Terra, MODIS Aqua, AVHRR) are used to evaluate the simulated aerosol indirect effect from the model. A clear decrease of CDER with increasing AOT is found from the east coast of China to the North Pacific Ocean and from the east coast of the USA to the North Atlantic Ocean. Analytical and semi-empirical expressions for the spectral shape parameterization show a stronger first aerosol indirect effect but a weaker second aerosol indirect effect than the empirical expressions, because of the narrower droplet size distribution.
Evolution of the human immunodeficiency virus envelope gene is dominated by purifying selection.
Edwards, C T T; Holmes, E C; Pybus, O G; Wilson, D J; Viscidi, R P; Abrams, E J; Phillips, R E; Drummond, A J
2006-11-01
The evolution of the human immunodeficiency virus (HIV-1) during chronic infection involves the rapid, continuous turnover of genetic diversity. However, the role of natural selection, relative to random genetic drift, in governing this process is unclear. We tested a stochastic model of genetic drift using partial envelope sequences sampled longitudinally in 28 infected children. In each case the Bayesian posterior (empirical) distribution of coalescent genealogies was estimated using Markov chain Monte Carlo methods. Posterior predictive simulation was then used to generate a null distribution of genealogies assuming neutrality, with the null and empirical distributions compared using four genealogy-based summary statistics sensitive to nonneutral evolution. Because both null and empirical distributions were generated within a coalescent framework, we were able to explicitly account for the confounding influence of demography. From the distribution of corrected P-values across patients, we conclude that empirical genealogies are more asymmetric than expected if evolution is driven by mutation and genetic drift only, with an excess of low-frequency polymorphisms in the population. This indicates that although drift may still play an important role, natural selection has a strong influence on the evolution of HIV-1 envelope. A negative relationship between effective population size and substitution rate indicates that as the efficacy of selection increases, a smaller proportion of mutations approach fixation in the population. This suggests the presence of deleterious mutations. We therefore conclude that intrahost HIV-1 evolution in envelope is dominated by purifying selection against low-frequency deleterious mutations that do not reach fixation.
Castillo, Jessica A; Epps, Clinton W; Jeffress, Mackenzie R; Ray, Chris; Rodhouse, Thomas J; Schwalm, Donelle
2016-09-01
Landscape connectivity is essential for maintaining viable populations, particularly for species restricted to fragmented habitats or naturally arrayed in metapopulations and facing rapid climate change. The importance of assessing both structural connectivity (physical distribution of favorable habitat patches) and functional connectivity (how species move among habitat patches) for managing such species is well understood. However, the degree to which functional connectivity for a species varies among landscapes, and the resulting implications for conservation, have rarely been assessed. We used a landscape genetics approach to evaluate resistance to gene flow and, thus, to determine how landscape and climate-related variables influence gene flow for American pikas (Ochotona princeps) in eight federally managed sites in the western United States. We used empirically derived, individual-based landscape resistance models in conjunction with predictive occupancy models to generate patch-based network models describing functional landscape connectivity. Metareplication across landscapes enabled identification of limiting factors for dispersal that would not otherwise have been apparent. Despite the cool microclimates characteristic of pika habitat, south-facing aspects consistently represented higher resistance to movement, supporting the previous hypothesis that exposure to relatively high temperatures may limit dispersal in American pikas. We found that other barriers to dispersal included areas with a high degree of topographic relief, such as cliffs and ravines, as well as streams and distances greater than 1-4 km depending on the site. Using the empirically derived network models of habitat patch connectivity, we identified habitat patches that were likely disproportionately important for maintaining functional connectivity, areas in which habitat appeared fragmented, and locations that could be targeted for management actions to improve functional connectivity. We concluded that climate change, besides influencing patch occupancy as predicted by other studies, may alter landscape resistance for pikas, thereby influencing functional connectivity through multiple pathways simultaneously. Spatial autocorrelation among genotypes varied across study sites and was largest where habitat was most dispersed, suggesting that dispersal distances increased with habitat fragmentation, up to a point. This study demonstrates how landscape features linked to climate can affect functional connectivity for species with naturally fragmented distributions, and reinforces the importance of replicating studies across landscapes. © 2016 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Wu, J.; van der Linden, L.; Lasslop, G.; Carvalhais, N.; Pilegaard, K.; Beier, C.; Ibrom, A.
2012-04-01
The ecosystem carbon balance is affected by both external climatic forcing (e.g. solar radiation, air temperature and humidity) and internal dynamics in ecosystem functional properties (e.g. canopy structure, leaf photosynthetic capacity and carbohydrate reserves). To understand to what extent, and at which temporal scales, climatic variability and functional changes regulate the interannual variation (IAV) in the net ecosystem exchange of CO2 (NEE), data-driven analysis and semi-empirical modelling (Lasslop et al. 2010) were performed on a 13-year NEE record from a temperate deciduous forest (Pilegaard et al. 2011, Wu et al. 2012). We found that the sensitivity of carbon fluxes to climatic variability was significantly higher at shorter than at longer time scales and changed seasonally. This implies that a changing distribution of climate anomalies during the vegetation period could have stronger impacts on future ecosystem carbon balances than changes in average climate. At the annual time scale, approximately 80% of the interannual variability in NEE was attributed to variation in the model parameters, indicating that the observed IAV in carbon dynamics at the investigated site was dominated by changes in ecosystem functioning. In general, this study shows the need for understanding the mechanisms of ecosystem functional change. The method can be applied at other sites to explore ecosystem behavior across different plant functional types and climate gradients. Incorporating ecosystem functional change into process-based models will reduce the uncertainties in long-term predictions of ecosystem carbon balances in global climate change projections. Acknowledgements. This work was supported by the EU FP7 project CARBO-Extreme, the DTU Climate Centre and the Danish national project ECOCLIM (Danish Council for Strategic Research).
Comparison of Traditional and Trial-Based Methodologies for Conducting Functional Analyses
ERIC Educational Resources Information Center
LaRue, Robert H.; Lenard, Karen; Weiss, Mary Jane; Bamond, Meredith; Palmieri, Mark; Kelley, Michael E.
2010-01-01
Functional analysis represents a sophisticated and empirically supported functional assessment procedure. While these procedures have garnered considerable empirical support, they are often underused in clinical practice. Safety risks resulting from the evocation of maladaptive behavior and the length of time required to conduct functional…
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
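As a point of reference, a minimal (univariate) Inverse Transform Sampling routine over an empirical distribution can be written as below; the report's extended multi-variate versions are not reproduced here, and the gamma-distributed stand-in data are purely illustrative.

    # Minimal sketch of univariate Inverse Transform Sampling from an empirical CDF.
    import numpy as np

    def inverse_transform_sample(observations, n_samples, rng=None):
        """Draw samples whose distribution matches the empirical CDF of `observations`."""
        rng = np.random.default_rng() if rng is None else rng
        sorted_obs = np.sort(observations)
        # Empirical CDF levels at each sorted observation
        cdf = np.arange(1, len(sorted_obs) + 1) / len(sorted_obs)
        u = rng.uniform(0.0, 1.0, n_samples)
        # Invert the CDF by interpolation
        return np.interp(u, cdf, sorted_obs)

    observed = np.random.default_rng(0).gamma(2.0, 1.5, size=500)  # stand-in disturbance data
    samples = inverse_transform_sample(observed, 10_000)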
The growth of business firms: theoretical framework and empirical evidence.
Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S V; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H Eugene
2005-12-27
We introduce a model of proportional growth to explain the distribution P_g(g) of business-firm growth rates. The model predicts that P_g(g) is exponential in the central part and exhibits an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of its type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CVe), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CVe (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CVe. In contrast, a relatively large CVe (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
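A short numerical illustration of the point about the coefficient of variation, not taken from the paper, is sketched below: a log-normal with a small CV is nearly symmetric, while a larger CV produces clearly visible skewness. The mean value and the two CV levels are arbitrary placeholders.

    # Illustration: log-normal with small CV looks Gaussian; large CV is visibly skewed.
    import numpy as np

    def lognormal_params(mean, cv):
        # Convert a target mean and coefficient of variation into the
        # underlying normal parameters (mu, sigma) of a log-normal.
        sigma2 = np.log(1.0 + cv**2)
        mu = np.log(mean) - 0.5 * sigma2
        return mu, np.sqrt(sigma2)

    rng = np.random.default_rng(1)
    for cv in (0.01, 0.40):           # small vs relatively large biological variation
        mu, sigma = lognormal_params(140.0, cv)
        x = rng.lognormal(mu, sigma, size=100_000)
        skew = np.mean(((x - x.mean()) / x.std())**3)
        print(f"CV={cv:.2f}: skewness of simulated values = {skew:.3f}")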
An Empirical Study of Synchrophasor Communication Delay in a Utility TCP/IP Network
NASA Astrophysics Data System (ADS)
Zhu, Kun; Chenine, Moustafa; Nordström, Lars; Holmström, Sture; Ericsson, Göran
2013-07-01
Although there is a plethora of literature dealing with Phasor Measurement Unit (PMU) communication delay, there has not been any effort to generalize empirical delay results by identifying the distribution with the best fit. The existing studies typically assume a distribution or simply build on analogies to communication network routing delay. Specifically, this study provides insight into the characterization of the communication delay of both unprocessed PMU data and synchrophasors sorted by a Phasor Data Concentrator (PDC). The results suggest that a bi-modal distribution containing two normal distributions offers the best fit for the delay of the unprocessed data, whereas the delay profile of the sorted synchrophasors resembles a normal distribution. Based on these results, the possibility of evaluating the reliability of a synchrophasor application with respect to a particular choice of PDC timeout is discussed.
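A sketch of the kind of two-component fit described above, using scikit-learn's Gaussian mixture implementation on synthetic delay data; the delay values below are placeholders, not the utility measurements.

    # Sketch: fit a two-component Gaussian mixture to one-way delay measurements.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Synthetic delays in milliseconds: two hypothetical routing/queueing regimes
    delays = np.concatenate([rng.normal(18.0, 1.5, 700),
                             rng.normal(25.0, 2.0, 300)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(delays)
    print("weights:", gmm.weights_)
    print("means:  ", gmm.means_.ravel())
    print("stddevs:", np.sqrt(gmm.covariances_).ravel())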
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data more closely.
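For reference, the benchmark Black-Scholes call price used in such comparisons can be computed as below; the input values are hypothetical index-point figures, and the paper's exponential-return pricing formula is not reproduced here.

    # Standard Black-Scholes European call price (no dividends), used as the benchmark model.
    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Hypothetical inputs: spot, strike (index points), 10 trading days to expiry,
    # annual risk-free rate and volatility
    print(black_scholes_call(S=60_000, K=62_000, T=10 / 252, r=0.12, sigma=0.30))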
Empirical Specification of Utility Functions.
ERIC Educational Resources Information Center
Mellenbergh, Gideon J.
Decision theory can be applied to four types of decision situations in education and psychology: (1) selection; (2) placement; (3) classification; and (4) mastery. For the application of the theory, a utility function must be specified. Usually the utility function is chosen on a priori grounds. In this paper methods for the empirical assessment…
NASA Astrophysics Data System (ADS)
Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.
2002-02-01
The problem of cloud field recognition from the NOAA satellite data is urgent for solving not only meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used the AVHRR/NOAA video data that regularly image the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For a class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve efficiently the problem of identifying cloud field types, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of distributions from the learning samples arises. To this end, we used nonparametric estimates of distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was substituted by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, the cloudiness type was identified using the Bayes decision rule.
NASA Astrophysics Data System (ADS)
Wang, Lan; De Lucia, Gabriella; Weinmann, Simone M.
2013-05-01
The empirical traditional halo occupation distribution (HOD) model of Wang et al. fits, by construction, both the stellar mass function and correlation function of galaxies in the local Universe. In contrast, the semi-analytical models of De Lucia & Blaizot (hereafter DLB07) and Guo et al. (hereafter Guo11), built on the same dark matter halo merger trees as the empirical model, still have difficulties in reproducing these observational data simultaneously. We compare the relations between the stellar mass of galaxies and their host halo mass in the three models, and find that they are different. When the relations are rescaled to have the same median values and the same scatter as in Wang et al., the rescaled DLB07 model can fit both the measured galaxy stellar mass function and the correlation function measured in different galaxy stellar mass bins. In contrast, the rescaled Guo11 model still overpredicts the clustering of low-mass galaxies. This indicates that the details of how galaxies populate the scatter in the stellar mass-halo mass relation do play an important role in determining the correlation functions of galaxies. While the stellar mass of galaxies in the Wang et al. model depends only on halo mass and is randomly distributed within the scatter, galaxy stellar mass also depends on the halo formation time in semi-analytical models. At a fixed value of infall mass, galaxies that lie above the median stellar mass-halo mass relation reside in haloes that formed earlier, while galaxies that lie below the median relation reside in haloes that formed later. This effect is much stronger in Guo11 than in DLB07, which explains the overclustering of low-mass galaxies in Guo11. The assembly bias in the Guo11 model might be overly strong. Nevertheless, if a significant assembly bias indeed exists in the real Universe, one needs to use caution when applying current HOD and abundance matching models that assume random scatter in the relation between stellar and halo mass.
Tracking Expected Improvements of Decadal Prediction in Climate Services
NASA Astrophysics Data System (ADS)
Suckling, E.; Thompson, E.; Smith, L. A.
2013-12-01
Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth System across a range of situations, situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth System Models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology, is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits their physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models, which are not available from (inter)comparisons restricted to only complex models. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. In weather forecasting this role is filled by the climatological distribution, and can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do suggest a clear quantification of limits, as a function of lead time, for spatial and temporal scales on which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strength and their foundation, risks the credibility of science in support of policy in the long term.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
Modeling Fractal Structure of City-Size Distributions Using Correlation Functions
Chen, Yanguang
2011-01-01
Zipf's law is one of the most conspicuous empirical facts for cities; however, there is no convincing explanation for the scaling relation between rank and size and its scaling exponent. Using ideas from general fractals and scaling, I propose a dual competition hypothesis of city development to explain the value intervals and the special value, 1, of the power exponent. Zipf's law and Pareto's law can be mathematically transformed into one another, but represent different processes of urban evolution. Based on the Pareto distribution, a frequency correlation function can be constructed; by scaling analysis and the multifractal spectrum, the parameter interval of the Pareto exponent is derived as (0.5, 1]. Based on the Zipf distribution, a size correlation function can be built, and it is the opposite of the first one; by the second correlation function and the multifractal notion, the Pareto exponent interval is derived as [1, 2). Thus the process of urban evolution falls into two effects: one is the Pareto effect, indicating city number increase (external complexity), and the other the Zipf effect, indicating city size growth (internal complexity). Because of the struggle between the two effects, the scaling exponent varies from 0.5 to 2; but if the two effects reach equilibrium with each other, the scaling exponent approaches 1. A series of mathematical experiments on hierarchical correlation are employed to verify the models, and a conclusion can be drawn that if cities in a given region follow Zipf's law, the frequency and size correlations will follow the scaling law. This theory can be generalized to interpret the inverse power-law distributions in various fields of the physical and social sciences. PMID:21949753
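A minimal illustration, not from the paper, of estimating the rank-size (Zipf) exponent by a log-log regression on synthetic city sizes; the synthetic sample and the simple least-squares fit are assumptions made here for demonstration only.

    # Estimate the Zipf rank-size exponent from synthetic, heavy-tailed city sizes.
    import numpy as np

    rng = np.random.default_rng(3)
    sizes = np.sort(rng.pareto(1.0, size=500) + 1.0)[::-1]  # synthetic Zipf-like sizes
    ranks = np.arange(1, len(sizes) + 1)

    slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
    print("estimated rank-size exponent:", -slope)  # near 1 for classical Zipf's law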
Dynamical Stochastic Processes of Returns in Financial Markets
NASA Astrophysics Data System (ADS)
Kim, Kyungsik; Kim, Soo Yong; Lim, Gyuchang; Zhou, Junyuan; Yoon, Seung-Min
2006-03-01
We show how the evolution of probability distribution functions of the returns from the tick data of the Korean treasury bond futures (KTB) and the S&P 500 stock index can be described by means of the Fokker-Planck equation. We derive the Fokker-Planck equation from the Kramers-Moyal coefficients estimated directly from the empirical data. By analyzing the statistics of the returns, we present the quantitative deterministic and random influences on both financial time series, for which we can give a simple physical interpretation. Finally, we remark that the diffusion coefficient should be taken into account when constructing a portfolio.
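The following sketch shows one common way to estimate the first two Kramers-Moyal coefficients (drift and diffusion) by conditional averaging of increments; the binning choices and the synthetic random-walk data are illustrative assumptions, not the authors' procedure for the KTB or S&P 500 series.

    # Estimate D1(x) (drift) and D2(x) (diffusion) from conditional moments of increments.
    import numpy as np

    def kramers_moyal(x, dt, n_bins=30):
        dx = np.diff(x)
        bins = np.linspace(x.min(), x.max(), n_bins + 1)
        idx = np.digitize(x[:-1], bins) - 1
        centers, d1, d2 = [], [], []
        for b in range(n_bins):
            mask = idx == b
            if mask.sum() < 50:        # skip sparsely populated bins
                continue
            centers.append(0.5 * (bins[b] + bins[b + 1]))
            d1.append(dx[mask].mean() / dt)
            d2.append((dx[mask]**2).mean() / (2.0 * dt))
        return np.array(centers), np.array(d1), np.array(d2)

    rng = np.random.default_rng(4)
    x = np.cumsum(rng.normal(0, 0.01, 200_000))   # stand-in for a return/price series
    centers, drift, diffusion = kramers_moyal(x, dt=1.0)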
Energy partitioning in an inductively driven rail gun
NASA Technical Reports Server (NTRS)
Sen, K. K.; Ray, P. K.
1984-01-01
The equations describing the performance of an inductively driven rail gun are analyzed numerically. Friction between the projectile and rails is included through an empirical formulation. The equations are applied to the experiment of Rashleigh and Marshall to obtain an estimate of the energy distribution in rail guns as a function of time. It is found that only 15 percent of the energy delivered by the inductor to the gun is transformed into the kinetic energy of the projectile. This study provides insight into the nature of the nonlinear coupling involved in the electromechanical interactions in a rail gun.
The economic impact of NASA R and D spending Appendices
NASA Technical Reports Server (NTRS)
Evans, M. K.
1976-01-01
Seven appendices related to a previous report on the economic impact of NASA R and D spending were presented. They dealt with: (1) theoretical and empirical development of aggregate production functions, (2) the calculation of the time series for the rate of technological progress, (3) the calculation of the industry mix variable, (4) the estimation of distributed lags, (5) the estimation of the equations for gamma, (6) a ten-year forecast of the U.S. economy, and (7) simulations of the macroeconomic model for increases in NASA R and D spending of $1.0, $0.5, and $0.1 billion.
Interference in astronomical speckle patterns
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.
1976-01-01
Astronomical speckle patterns are examined in an atmospheric-optics context in order to determine what kind of image quality is to be expected from several different imaging techniques. The model used to describe the instantaneous complex field distribution across the pupil of a large telescope regards the pupil as a deep phase grating with a periodicity given by the size of the cell of uniform phase or the refractive index structure function. This model is used along with an empirical formula derived purely from the physical appearance of the speckle patterns to discuss the orders of interference in astronomical speckle patterns.
Improving Photometric Redshifts for Hyper Suprime-Cam
NASA Astrophysics Data System (ADS)
Speagle, Josh S.; Leauthaud, Alexie; Eisenstein, Daniel; Bundy, Kevin; Capak, Peter L.; Leistedt, Boris; Masters, Daniel C.; Mortlock, Daniel; Peiris, Hiranya; HSC Photo-z Team; HSC Weak Lensing Team
2017-01-01
Deriving accurate photometric redshift (photo-z) probability distribution functions (PDFs) is a crucial science component for current and upcoming large-scale surveys. We outline how rigorous Bayesian inference and machine learning can be combined to quickly derive joint photo-z PDFs for individual galaxies and their parent populations. Using the first 170 deg^2 of data from the ongoing Hyper Suprime-Cam survey, we demonstrate that our method is able to generate accurate predictions and reliable credible intervals over ~370k high-quality redshifts. We then use galaxy-galaxy lensing to empirically validate our predicted photo-z's over ~14M objects, finding a robust signal.
NASA Astrophysics Data System (ADS)
Karpushin, P. A.; Popov, Yu B.; Popova, A. I.; Popova, K. Yu; Krasnenko, N. P.; Lavrinenko, A. V.
2017-11-01
In this paper, the probabilities of faultless operation of aerologic stations are analyzed, the hypothesis of normality of the empirical data required for using the Kalman filter algorithms is tested, and the spatial correlation functions of distributions of meteorological parameters are determined. The results of a statistical analysis of two-term (0, 12 GMT) radiosonde observations of the temperature and wind velocity components at some preset altitude ranges in the troposphere in 2001-2016 are presented. These data can be used in mathematical modeling of physical processes in the atmosphere.
DOT National Transportation Integrated Search
2009-12-01
The purpose of this study is to characterize traffic inputs in support of the new Mechanistic- : Empirical Pavement Design Guide (M-E PDG) for the state of Michigan. These traffic : characteristics include monthly distribution factors (MDF), hourly d...
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit the model to real data. Second, we present its application in risk analysis, where we use the model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
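A hedged sketch of how VaR and CVaR can be evaluated by Monte Carlo from a fitted two-component normal mixture; the mixture weights, means, and standard deviations below are placeholders rather than the values estimated for the FBMKLCI.

    # Monte Carlo VaR/CVaR from a two-component normal mixture of returns.
    import numpy as np

    def mixture_var_cvar(weights, means, stds, alpha=0.95, n=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        comp = rng.choice(len(weights), size=n, p=weights)
        returns = rng.normal(np.asarray(means)[comp], np.asarray(stds)[comp])
        var = -np.quantile(returns, 1.0 - alpha)          # loss threshold at level alpha
        cvar = -returns[returns <= -var].mean()           # mean loss beyond VaR
        return var, cvar

    var, cvar = mixture_var_cvar(weights=[0.8, 0.2], means=[0.01, -0.02],
                                 stds=[0.03, 0.08], alpha=0.95)
    print(f"VaR(95%) = {var:.4f}, CVaR(95%) = {cvar:.4f}")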
Using Loss Functions for DIF Detection: An Empirical Bayes Approach.
ERIC Educational Resources Information Center
Zwick, Rebecca; Thayer, Dorothy; Lewis, Charles
2000-01-01
Studied a method for flagging differential item functioning (DIF) based on loss functions. Builds on earlier research that led to the development of an empirical Bayes enhancement to the Mantel-Haenszel DIF analysis. Tested the method through simulation and found its performance better than some commonly used DIF classification systems. (SLD)
NASA Astrophysics Data System (ADS)
Zhao, H.; Friedel, R. H. W.; Chen, Y.; Reeves, G. D.; Baker, D. N.; Li, X.; Jaynes, A. N.; Kanekal, S. G.; Claudepierre, S. G.; Fennell, J. F.; Blake, J. B.; Spence, H. E.
2018-05-01
Based on over 4 years of Van Allen Probes measurements, an empirical model of the radiation belt electron equatorial pitch angle distribution (PAD) is constructed. The model, developed by fitting electron PADs with Legendre polynomials, provides the statistical PADs as a function of L-shell (L = 1-6), magnetic local time, electron energy (30 keV to 5.2 MeV), and geomagnetic activity (represented by the Dst index), and is also the first empirical PAD model in the inner belt and slot region. For megaelectron volt electrons, model results show more significant day-night PAD asymmetry for electrons with higher energies and during disturbed times, which is caused by changes in the geomagnetic field configuration and the flux radial gradient. Steeper PADs, with higher fluxes around 90° pitch angle and lower fluxes at lower pitch angles, are also present for higher-energy electrons and during active times, which could be due to electromagnetic ion cyclotron wave scattering. For hundreds of kiloelectron volt electrons, cap PADs are generally present in the slot region during quiet times, and their energy-dependent features are consistent with hiss wave scattering, while during active times cap PADs are less significant, especially in the outer part of the slot region, which could be due to complex energization and transport processes. The 90°-minimum PADs are persistently present in the inner belt and appear in the slot region during active times, and the minima at 90° pitch angle are more significant for electrons with higher energies, which could be critical evidence for identifying the underlying physical processes responsible for the formation of 90°-minimum PADs.
Empirical and model study on Travel-entering China
NASA Astrophysics Data System (ADS)
Han, Xue-Fang; Chen, Qi-Juan; Chang, Hui; He, Da-Ren
2006-03-01
We have carried out an empirical investigation of travel entering China from abroad to 31 regions of the Chinese Mainland over the past ten years, including the development of traveler numbers, the distribution of traveler numbers over the travelers' home regions, the distribution of traveler numbers over the destination regions in the Chinese Mainland, and so on. We also propose a dynamic model for simulating the competition between the 31 regions in the travel market by considering two main influence factors: the attractiveness of the travel destinations and the distance between the destinations and the travelers' home regions. The simulation results show good agreement with the empirical data. We expect the model could offer guidance to tourism management departments in China, and perhaps in other countries as well.
A Proposed Change to ITU-R Recommendation 681
NASA Technical Reports Server (NTRS)
Davarian, F.
1996-01-01
Recommendation 681 of the International Telecommunications Union (ITU) provides five models for the prediction of propagation effects on land mobile satellite links: empirical roadside shadowing (ERS), attenuation frequency scaling, fade duration distribution, non-fade duration distribution, and fading due to multipath. Because the above prediction models have been empirically derived using a limited amount of data, these schemes work only for restricted ranges of link parameters. With the first two models, for example, the frequency and elevation angle parameters are restricted to 0.8 to 2.7 GHz and 20 to 60 degrees, respectively. Recently measured data have enabled us to enhance the range of the first two schemes. Moreover, for convenience, they have been combined into a single scheme named the extended empirical roadside shadowing (EERS) model.
Empirical study on a directed and weighted bus transport network in China
NASA Astrophysics Data System (ADS)
Feng, Shumin; Hu, Baoyu; Nie, Cen; Shen, Xianghao
2016-01-01
Bus transport networks are directed complex networks that consist of routes, stations, and passenger flow. In this study, the concept of a duplication factor is introduced to analyze the differences between uplinks and downlinks for the bus transport network of Harbin (BTN-H). Further, a new representation model for BTNs is proposed, named directed space P. Two empirical characteristics of BTN-H are reported in this paper. First, the cumulative distributions of weighted degree, degree, number of routes that connect to each station, and node weight (peak-hour trips at a station) uniformly follow an exponential law. Meanwhile, the node weight shows positive correlations with the corresponding weighted degree, degree, and number of routes that connect to a station. Second, a new richness parameter of a node is defined through its node weight, and the connectivity, weighted connectivity, average shortest path length and efficiency between rich nodes can be fitted by composite exponential functions, demonstrating the rich-club phenomenon.
Klotz, Dino; Grave, Daniel A; Dotan, Hen; Rothschild, Avner
2018-03-15
Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H2O2 in alkaline solution.
Phase correlation of foreign exchange time series
NASA Astrophysics Data System (ADS)
Wu, Ming-Chya
2007-03-01
Correlation of foreign exchange rates in currency markets is investigated based on empirical data of the USD/DEM and USD/JPY exchange rates for the period from February 1, 1986 to December 31, 1996. The return series of the exchange rates are first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs, calculated by the Hilbert transform, are then used to characterize the behavior of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs of the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger at the daily time scale than at longer time scales. The comparison of the periods 1986-1989 and 1990-1993 indicates that the two exchange rates were more correlated in the former period than in the latter. The result is consistent with the observations from the cross-correlation calculation.
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several soil loss models and empirical equations, including physically-based models, have been developed to estimate the size distribution of the sediment leaving the field. Usually, physically-based models require a large amount of data, sometimes exceeding the amount of available data in the modeled area. Conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils were used. These data were compiled from previous studies. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD. These variables were the particle size distribution in the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r² = 0.98, 0.97, and 0.86 for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r² = 0.74, 0.85, and 0.75 for sand, silt, and clay in the sediment, respectively). The artificial neural network was compared with three empirical models. The network presented better performance in predicting sediment PSD and in differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, this model requires a small number of easily obtained variables, providing a convenient routine for predicting the PSD of eroded sediment in other pollutant transport models. Copyright © 2017 Elsevier B.V. All rights reserved.
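A rough sketch of the general approach (a small feed-forward network mapping soil and event descriptors to sediment texture fractions), using scikit-learn; the feature set and data below are hypothetical stand-ins, not the study's 41 erosion events or its network architecture.

    # Small multi-output neural network regressor as a stand-in for the study's ANN.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(8)
    # Hypothetical inputs: soil sand/silt/clay fractions, antecedent moisture,
    # erodibility (K), slope length and steepness
    X = rng.uniform(0, 1, size=(41, 7))
    y = rng.dirichlet(np.ones(3), size=41)     # stand-in sand/silt/clay fractions of sediment

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X[:29], y[:29])                  # calibrate on 29 events
    print(model.predict(X[29:])[:1])           # predicted texture fractions for one held-out event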
Stable distribution and long-range correlation of Brent crude oil market
NASA Astrophysics Data System (ADS)
Yuan, Ying; Zhuang, Xin-tian; Jin, Xiu; Huang, Wei-qiang
2014-11-01
An empirical study of the stable distribution and long-range correlation in the Brent crude oil market is presented. First, it is found that the empirical distribution of Brent crude oil returns can be fitted well by a stable distribution, which is significantly different from a normal distribution. Second, detrended fluctuation analysis of the Brent crude oil returns shows that there are long-range correlations in the returns, implying that there are patterns or trends in returns that persist over time. Third, the detrended fluctuation analysis also shows that after the 2008 financial crisis, the Brent crude oil market became more persistent, implying that the crisis could increase the frequency and strength of the interdependence and correlations between financial time series. All of these findings may be used to improve current fractal theories.
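For readers unfamiliar with the method, a minimal detrended fluctuation analysis (DFA) routine is sketched below on synthetic, uncorrelated returns; an exponent near 0.5 indicates no long-range correlation, while values above 0.5 indicate persistence. The scale choices and stand-in data are assumptions for illustration only.

    # Minimal DFA: slope of log F(s) versus log s gives the scaling exponent.
    import numpy as np

    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))                 # integrated profile
        flucts = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            rms = []
            for seg in segs:
                coeffs = np.polyfit(t, seg, 1)        # local linear detrending
                rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t))**2)))
            flucts.append(np.mean(rms))
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    rng = np.random.default_rng(5)
    returns = rng.normal(size=20_000)                 # i.i.d. stand-in data
    print("DFA exponent:", dfa(returns, scales=[16, 32, 64, 128, 256, 512]))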
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, in which a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
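One plausible form of such an objective procedure, sketched here under the assumption that the discrepancy measure is the Kolmogorov-Smirnov distance (the paper's actual statistic may differ): for each candidate cutoff, fit the tail exponent by maximum likelihood and keep the cutoff that minimizes the distance between the empirical tail and the fitted power law. The synthetic data stand in for index returns.

    # Threshold selection by minimizing the KS distance between the empirical tail
    # and a maximum-likelihood (Hill) power-law fit.
    import numpy as np

    def best_threshold(data, candidates):
        best = (np.inf, None, None)
        for xmin in candidates:
            tail = np.sort(data[data >= xmin])
            if len(tail) < 50:
                continue
            alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))   # Hill estimate of the density exponent
            ecdf = np.arange(1, len(tail) + 1) / len(tail)
            model_cdf = 1.0 - (xmin / tail) ** (alpha - 1.0)
            ks = np.max(np.abs(ecdf - model_cdf))
            if ks < best[0]:
                best = (ks, xmin, alpha)
        return best   # (KS distance, threshold, exponent)

    rng = np.random.default_rng(6)
    data = rng.pareto(2.0, 5000) + 1.0
    print(best_threshold(data, candidates=np.quantile(data, np.linspace(0.5, 0.95, 20))))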
A generalized preferential attachment model for business firms growth rates. I. Empirical evidence
NASA Astrophysics Data System (ADS)
Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.
2007-05-01
We introduce a model of proportional growth to explain the distribution P(g) of business-firm growth rates. The model predicts that P(g) is Laplace in the central part and exhibits an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular way to quantify functional connectivity. In practice, however, the correlation coefficient assumes the data to be temporally independent, whereas brain time series can show significant temporal auto-correlation. We propose a widely applicable method for correcting this auto-correlation. We considered the two types of time series models most commonly used in neuroscience studies, (1) the auto-regressive-moving-average model and (2) a nonlinear dynamical system model with noisy fluctuations, and derived the asymptotic distribution of the correlation coefficient under each; the two asymptotic distributions share a unified expression. In numerical experiments, the method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlation, where existing methods measuring association (linear and nonlinear) fail. Employing the method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Empirical results therefore favor the use of this method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
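One standard way to correct a correlation test for temporal auto-correlation, sketched below, is to deflate the sample size with a Bartlett-type factor built from the two series' autocorrelation functions. This is a generic effective-sample-size correction in the same spirit as the abstract, not the authors' asymptotic derivation; the lag cutoff is an assumption.

```python
import numpy as np
from scipy import stats

def autocorr(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * c0) for k in range(1, max_lag + 1)])

def corrected_corr_test(x, y, max_lag=50):
    """Pearson r with its variance inflated by the autocorrelation of both series."""
    r = np.corrcoef(x, y)[0, 1]
    rx, ry = autocorr(x, max_lag), autocorr(y, max_lag)
    n_eff = len(x) / (1.0 + 2.0 * np.sum(rx * ry))     # Bartlett-style effective sample size
    z = np.arctanh(r) * np.sqrt(max(n_eff, 3) - 3)     # Fisher z using n_eff instead of n
    return r, 2 * stats.norm.sf(abs(z))

# Example: two smoothed (hence auto-correlated) noise series
t = np.cumsum(np.random.randn(2, 2000), axis=1)
print(corrected_corr_test(t[0], t[1]))
```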
Dinosaur peptides suggest mechanisms of protein survival.
San Antonio, James D; Schweitzer, Mary H; Jensen, Shane T; Kalluri, Raghu; Buckley, Michael; Orgel, Joseph P R O
2011-01-01
Eleven collagen peptide sequences recovered from chemical extracts of dinosaur bones were mapped onto molecular models of the vertebrate collagen fibril derived from extant taxa. The dinosaur peptides localized to fibril regions protected by the close packing of collagen molecules, and contained few acidic amino acids. Four peptides mapped to collagen regions crucial for cell-collagen interactions and tissue development. Dinosaur peptides were not represented in more exposed parts of the collagen fibril or regions mediating intermolecular cross-linking. Thus functionally significant regions of collagen fibrils that are physically shielded within the fibril may be preferentially preserved in fossils. These results show empirically that structure-function relationships at the molecular level could contribute to selective preservation in fossilized vertebrate remains across geological time, suggest a 'preservation motif', and bolster current concepts linking collagen structure to biological function. This non-random distribution supports the hypothesis that the peptides are produced by the extinct organisms and suggests a chemical mechanism for survival.
A Powerful Test for Comparing Multiple Regression Functions.
Maity, Arnab
2012-09-01
In this article, we address the important problem of comparing two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models: Y(ij) = θ(j)(Z(ij)) + σ(j)(Z(ij))∊(ij), based on empirical distributions of the errors in each population j = 1, … , J. In this paper, we propose a test for equality of the θ(j)(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test to other nonparametric regression setups, e.g., nonparametric logistic regression, where the loglikelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).
Permutation methods for the structured exploratory data analysis (SEDA) of familial trait values.
Karlin, S; Williams, P T
1984-07-01
A collection of functions that contrast familial trait values between and across generations is proposed for studying transmission effects and other collateral influences in nuclear families. Two classes of structured exploratory data analysis (SEDA) statistics are derived from ratios of these functions. SEDA-functionals are the empirical cumulative distributions of the ratio of the two contrasts computed within each family. SEDA-indices are formed by first averaging the numerator and denominator contrasts separately over the population and then forming their ratio. The significance of SEDA results is determined by a spectrum of permutation techniques that selectively shuffle the trait values across families. The process systematically alters certain family-structure relationships while keeping other familial relationships intact. The methodology is applied to five data examples: plasma total cholesterol concentrations, reported height values, dermatoglyphic pattern intensity index scores, measurements of dopamine-beta-hydroxylase activity, and psychometric cognitive test results.
Reference-dependent risk sensitivity as rational inference.
Denrell, Jerker C
2015-07-01
Existing explanations of reference-dependent risk sensitivity attribute it to cognitive imperfections and heuristic choice processes. This article shows that behavior consistent with an S-shaped value function could be an implication of rational inferences about the expected values of alternatives. Theoretically, I demonstrate that even a risk-neutral Bayesian decision maker, who is uncertain about the reliability of observations, should use variability in observed outcomes as a predictor of low expected value for outcomes above a reference level, and as a predictor of high expected value for outcomes below a reference level. Empirically, I show that combining past outcomes using an S-shaped value function leads to accurate predictions about future values. The theory also offers a rationale for why risk sensitivity consistent with an inverse S-shaped value function should occur in experiments on decisions from experience with binary payoff distributions. (c) 2015 APA, all rights reserved.
An Exospheric Temperature Model Based On CHAMP Observations and TIEGCM Simulations
NASA Astrophysics Data System (ADS)
Ruan, Haibing; Lei, Jiuhou; Dou, Xiankang; Liu, Siqing; Aa, Ercha
2018-02-01
In this work, thermospheric densities from the accelerometer measurements on board the CHAMP satellite during 2002-2009 and simulations from the National Center for Atmospheric Research Thermosphere Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM) are employed to develop an empirical exospheric temperature model (ETM). The two-dimensional basis functions of the ETM are first obtained from a principal component analysis of the TIEGCM simulations. Based on the exospheric temperatures derived from CHAMP thermospheric densities, a global distribution of exospheric temperature is reconstructed. Each basis function amplitude is then parameterized as a function of solar-geophysical and seasonal conditions, so that the ETM can be used to model the thermospheric temperature and mass density under a specified condition. Our results showed that the averaged standard deviation of the ETM is generally less than 10%, compared with approximately 30% for the MSIS model. In addition, the ETM reproduces global thermospheric features including the equatorial thermosphere anomaly.
Critical thresholds in species' responses to landscape structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
With, K.A.; Crist, T.O.
1995-12-01
Critical thresholds are transition ranges across which small changes in spatial pattern produce abrupt shifts in ecological responses. Habitat fragmentation provides a familiar example of a critical threshold. As the landscape becomes dissected into smaller parcels of habitat, landscape connectivity - the functional linkage among habitat patches - may suddenly become disrupted, which may have important consequences for the distribution and persistence of populations. Landscape connectivity depends not only on the abundance and spatial patterning of habitat, but also on the habitat specificity and dispersal abilities of species. Habitat specialists with limited dispersal capabilities presumably have a much lower threshold to habitat fragmentation than highly vagile species, which may perceive the landscape as functionally connected across a greater range of fragmentation severity. To determine where threshold effects in species' responses to landscape structure are likely to occur, a simulation model modified from percolation theory was developed. Our simulations predicted the distributional patterns of populations in different landscape mosaics, which we tested empirically using two grasshopper species (Orthoptera: Acrididae) that occur in the shortgrass prairie of north-central Colorado. The distribution of these two species in this grassland mosaic matched the predictions from our simulations. By providing quantitative predictions of threshold effects, this modelling approach may prove useful in the formulation of conservation strategies and in assessing the effects of land-use changes on species' distributional patterns and persistence.
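A toy neutral-landscape version of the percolation idea described above: generate random maps with a given habitat fraction and check whether a connected habitat cluster spans the map. Connectivity switches abruptly near a critical habitat fraction (about 0.59 for simple random maps). This is a generic percolation sketch, not the authors' simulation model.

```python
import numpy as np
from scipy import ndimage

def spans(p, size=128, seed=0):
    """True if a random landscape with habitat fraction p has a cluster spanning top to bottom."""
    rng = np.random.default_rng(seed)
    habitat = rng.random((size, size)) < p
    labels, _ = ndimage.label(habitat)                    # 4-neighbour connected clusters
    common = np.intersect1d(labels[0, :], labels[-1, :])  # cluster ids touching both edges
    return bool(np.any(common > 0))

for p in (0.40, 0.55, 0.60, 0.70):
    frac = np.mean([spans(p, seed=s) for s in range(25)])
    print(f"habitat fraction {p:.2f}: spanning probability {frac:.2f}")
```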
NASA Technical Reports Server (NTRS)
Genge, Gary G.
1991-01-01
The probabilistic design approach currently receiving attention for structural failure modes has been adapted for obtaining measured bearing wear limits in the Space Shuttle Main Engine high-pressure oxidizer turbopump. With the development of shaft microtravel measurements to determine bearing health, an acceptance limit was needed that protects against all known failure modes yet is not overly conservative. This acceptance limit has been successfully determined using probabilistic descriptions of preflight hardware geometry, empirical bearing wear data, mission requirements, and measurement tool precision as inputs to a Monte Carlo simulation. The result of the simulation is a frequency distribution of failures as a function of preflight acceptance limits. When the distribution is converted into a reliability curve, a conscious risk management decision is made concerning the acceptance limit.
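A heavily simplified sketch of the Monte Carlo logic described above: draw hypothetical distributions for geometry, wear, and measurement error, and tabulate how often a candidate acceptance limit would pass a unit whose wear is actually excessive. All distributions, units, and thresholds are invented placeholders, not the actual turbopump data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Hypothetical stand-ins for the probabilistic inputs named above (arbitrary units)
geometry   = rng.normal(0.0, 1.0, n)       # preflight hardware geometry scatter
wear       = rng.lognormal(1.0, 0.6, n)    # empirical bearing wear over the mission
tool_error = rng.normal(0.0, 0.5, n)       # measurement tool precision
travel = geometry + wear + tool_error      # simulated shaft microtravel measurement

excessive = wear > 8.0                     # hypothetical definition of a failed bearing
limits = np.linspace(2.0, 12.0, 21)
# frequency of failures that each candidate acceptance limit would let through
missed = [((travel < lim) & excessive).mean() for lim in limits]
for lim, p in zip(limits, missed):
    print(f"acceptance limit {lim:4.1f}: missed-failure frequency {p:.4f}")
```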
Cultural capital and social inequality in health.
Abel, T
2008-07-01
Economic and social resources are known to contribute to the unequal distribution of health outcomes. Culture-related factors such as normative beliefs, knowledge and behaviours have also been shown to be associated with health status. The role and function of cultural resources in the unequal distribution of health is addressed. Drawing on the work of French Sociologist Pierre Bourdieu, the concept of cultural capital for its contribution to the current understanding of social inequalities in health is explored. It is suggested that class related cultural resources interact with economic and social capital in the social structuring of people's health chances and choices. It is concluded that cultural capital is a key element in the behavioural transformation of social inequality into health inequality. New directions for empirical research on the interplay between economic, social and cultural capital are outlined.
Comparison of probability statistics for automated ship detection in SAR imagery
NASA Astrophysics Data System (ADS)
Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.
1998-12-01
This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a constant false alarm rate (CFAR) filter applied to synthetic aperture radar data. The choice of probability distribution and the methodologies for calculating scene-specific statistics are discussed in some detail, and an empirical basis for the choice of probability distribution is given. We compare results using a 1-look K-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a χ²-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998, and to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
The dynamics of the rise and fall of empires
NASA Astrophysics Data System (ADS)
Gündüz, Güngör
2016-05-01
The rise of empires can be elucidated by treating them as living organisms, and the celebrated Verhulst or Lotka-Volterra dynamics can be used to understand their growth mechanisms. Fast growth can be expressed by an exponential function, as in the case of the Macedonian empire of Alexander the Great, whereas sigmoidal growth can be expressed by a power-law equation, as in the case of the Roman and Ottoman empires. The superpowers Russia and the USA follow somewhat different mechanisms: Russia displays two different exponential growth behaviors, whereas the USA follows two different power-law behaviors. They did not disturb or mobilize their social capacity much during the course of their rise. The decline and collapse of an empire occur through a kind of fragmentation process, and the small states formed as a consequence become rather free in their behavior. The lands of the newly formed states exhibit a hierarchical pattern, and the number of states having an area smaller than the largest one can be described by either an exponential or a power-law function. The exponential distribution pattern occurs when the states are quite free in their pursuits, while the power-law behavior occurs when they are under the pressure of an empire or a strong state in the region. Geological and geographical conditions also affect whether exponential or power-law behavior occurs. Newly formed unions such as the European Union and the Shanghai Cooperation increase the power-law exponent, implying that they increase the stress in international affairs. The viscoelastic behavior of empires can be found from scattering diagrams, and the storage (G′) and loss (G″) moduli, and the associated work-like and heat-like terms, can be determined in the thermodynamic sense. The G′ of the Ottomans was larger than that of the Romans, implying that they confronted severe resistance during their expansion. The G′ of Russia is also larger than that of the USA; in fact the USA did not face severe resistance, as they had an overwhelming superiority over native Americans. G′ > G″ indicates solidity in the social structure, and the Romans, Ottomans, and Russians all have G′ larger than G″. For the USA, G′ is only slightly larger than G″, indicating a very flexible social structure. By the same token, the ratio of the work-like term to the internal energy is larger for the Ottomans than for the Romans, and larger for the USA than for Russia. That means the fraction of the total energy allocated to improving the social capacity is larger for the Romans than for the Ottomans, and larger for the Russians than for the USA.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
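As a small illustration of the first class (Poisson marginals with dependence), the sketch below uses the classical common-shock construction, in which two Poisson counts share an additional Poisson component. This is a textbook example, not one of the specific models reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bivariate_poisson(lam1, lam2, lam0, size):
    """Common-shock construction: X1 = Y1 + Y0, X2 = Y2 + Y0, so both marginals remain Poisson."""
    y0 = rng.poisson(lam0, size)
    x1 = rng.poisson(lam1, size) + y0
    x2 = rng.poisson(lam2, size) + y0
    return x1, x2

x1, x2 = bivariate_poisson(3.0, 5.0, 2.0, 100_000)
print(x1.mean(), x2.mean())            # close to lam_i + lam0, i.e. ~5.0 and ~7.0
print(np.corrcoef(x1, x2)[0, 1])       # positive dependence induced by the shared shock
```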
Stochastic processes in the social sciences: Markets, prices and wealth distributions
NASA Astrophysics Data System (ADS)
Romero, Natalia E.
The present work uses statistical mechanics tools to investigate the dynamics of markets, prices, trades and wealth distribution. We studied the evolution of market dynamics in different stages of historical development by analyzing commodity prices from two distinct periods: ancient Babylon, and medieval and early modern England. We find that the first-digit distributions of both Babylon and England commodity prices follow Benford's law, indicating that the data represent empirical observations typically arising from a free market. Further, we find that the normalized prices of both Babylon and England agricultural commodities are characterized by stretched exponential distributions and exhibit persistent correlations of a power-law type over long periods of up to several centuries, in contrast to contemporary markets. Our findings suggest that similar market interactions may underlie the dynamics of ancient agricultural commodity prices, and that these interactions may remain stable across centuries. To further investigate the dynamics of markets, we present the analogy between transfers of money between individuals and the transfer of energy through particle collisions, by means of the kinetic theory of gases. We introduce a theoretical framework for how the micro rules of trading lead to the emergence of income and wealth distributions. In particular, we study the effects of different distributions of savings/investments among individuals in a society and of different welfare/subsidy redistribution policies. Results show that, when savings propensities are considered, the models approach empirical distributions of wealth quite well; the effect of redistribution, moreover, better captures specific features of the distributions that earlier models failed to reproduce. The models still preserve the exponential decay observed in empirical income distributions reported by tax data and surveys.
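A minimal sketch of a kinetic wealth-exchange simulation with heterogeneous savings propensities, in the spirit of the models described above; the parameters and update rule follow the standard Chakraborti-Chakrabarti-type formulation, which may differ in detail from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 1000, 100_000
wealth = np.ones(N)                 # everyone starts with the same wealth
lam = rng.uniform(0, 1, N)          # heterogeneous savings propensities

for _ in range(steps):
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    eps = rng.random()
    pot = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]   # wealth put into the trade
    wealth[i] = lam[i] * wealth[i] + eps * pot
    wealth[j] = lam[j] * wealth[j] + (1 - eps) * pot

# heterogeneous savings are known to produce a Pareto-like tail in the wealth distribution
print(np.sort(wealth)[-10:])
```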
Multiagent model and mean field theory of complex auction dynamics
NASA Astrophysics Data System (ADS)
Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng
2015-09-01
Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
NASA Astrophysics Data System (ADS)
Voronin, Alexander; Vasilchenko, Ann; Khoperskov, Alexander
2018-03-01
The project of restoring small watercourses in the northern part of the Volga-Akhtuba floodplain is considered, with the aim of increasing the watering of the territory during small and medium floods. The irregular topography, the complex structure of the floodplain valley consisting of a large number of small watercourses, and the presence of urbanized and agricultural areas require careful preliminary analysis of the hydrological safety and efficiency of geographically distributed project activities. Using digital terrain and watercourse structure models of the floodplain and a hydrodynamic flood model, the hydrological safety and efficiency of several project implementation strategies have been analyzed. The objective function values have been obtained from hydrodynamic calculations of the flooding of the floodplain territory for virtual digital terrain models simulating alternatives for the geographically distributed project activities. The comparative efficiency of several empirical strategies for the geographically distributed project activities, as well as a two-stage exact solution method for the optimization problem, has been studied.
Using the Quantile Mapping to improve a weather generator
NASA Astrophysics Data System (ADS)
Chen, Y.; Themessl, M.; Gobiet, A.
2012-04-01
We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Evaluation experiments with WGQM have been carried out in the area of Austria and the evaluation results will be presented.
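A minimal sketch of the quantile-mapping step described above: map a skewed variable to standard-normal scores through its empirical CDF, do the stochastic work in normal space, and map the results back onto the empirical quantiles. Variable names and the plotting-position convention are assumptions, not WGQM internals.

```python
import numpy as np
from scipy import stats

def to_normal(x):
    """Map a sample of any distribution to standard-normal scores via its empirical CDF."""
    ranks = stats.rankdata(x)                  # 1..n
    p = (ranks - 0.5) / len(x)                 # plotting positions in (0, 1)
    return stats.norm.ppf(p)

def from_normal(z, reference):
    """Inverse step: map normal scores back onto the empirical quantiles of `reference`."""
    p = stats.norm.cdf(z)
    return np.quantile(reference, p)

precip = np.random.gamma(0.7, 4.0, 10_000)     # skewed placeholder for a daily variable
z = to_normal(precip)                          # work in normal space (auto-regression, EOFs, Monte Carlo)
back = from_normal(z, precip)                  # transform generated values back to the original scale
```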
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
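A bare-bones sketch of generating simulated functional connectivity with a Kuramoto model on a structural connectivity matrix, as described above. The connectivity matrix, natural frequencies, coupling strength, and the use of sin(theta) as a signal proxy are placeholder assumptions, not the study's calibrated settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps, K = 66, 0.01, 5000, 0.5
C = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # placeholder structural connectivity
C = (C + C.T) / 2
omega = rng.normal(1.0, 0.1, n) * 2 * np.pi           # placeholder natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)

series = np.empty((steps, n))
for t in range(steps):
    # dtheta_i/dt = omega_i + K * sum_j C_ij * sin(theta_j - theta_i)
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)
    series[t] = np.sin(theta)                          # crude signal proxy per region

fc_sim = np.corrcoef(series.T)                         # simulated functional connectivity matrix
```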
Annual Cycle of Surface Longwave Radiation
NASA Technical Reports Server (NTRS)
Mlynczak, Pamela E.; Smith, G. Louis; Wilber, Anne C.; Stackhouse, Paul W.
2011-01-01
The annual cycles of upward and downward longwave fluxes at the Earth's surface are investigated by use of the NASA/GEWEX Surface Radiation Budget Data Set. Because of the immense difference between the heat capacities of land and ocean, the surface of Earth is partitioned into these two categories. Principal component analysis is used to quantify the annual cycles. Over land, the first principal component describes over 95% of the variance of the annual cycle of the upward and downward longwave fluxes. Over ocean the first term describes more than 87% of these annual cycles. Empirical orthogonal functions show the corresponding geographical distributions of these cycles. Phase plane diagrams of the annual cycles of upward longwave flux as a function of net shortwave flux show the thermal inertia of land and ocean.
A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of the MLCCs' reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed in this paper. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
Resonance-induced multimodal body-size distributions in ecosystems
Lampert, Adam; Tlusty, Tsvi
2013-01-01
The size of an organism reflects its metabolic rate, growth rate, mortality, and other important characteristics; therefore, the distribution of body size is a major determinant of ecosystem structure and function. Body-size distributions often are multimodal, with several peaks of abundant sizes, and previous studies suggest that this is the outcome of niche separation: species from distinct peaks avoid competition by consuming different resources, which results in selection of different sizes in each niche. However, this cannot explain many ecosystems with several peaks competing over the same niche. Here, we suggest an alternative, generic mechanism underlying multimodal size distributions, by showing that the size-dependent tradeoff between reproduction and resource utilization entails an inherent resonance that may induce multiple peaks, all competing over the same niche. Our theory is well fitted to empirical data in various ecosystems, in which both model and measurements show a multimodal, periodically peaked distribution at larger sizes, followed by a smooth tail at smaller sizes. Moreover, we show a universal pattern of size distributions, manifested in the collapse of data from ecosystems of different scales: phytoplankton in a lake, metazoans in a stream, and arthropods in forests. The demonstrated resonance mechanism is generic, suggesting that multimodal distributions of numerous ecological characters emerge from the interplay between local competition and global migration. PMID:23248320
An Organizational Perspective of Distributed Leadership: A Portrait of a Middle School
ERIC Educational Resources Information Center
Angelle, Pamela S.
2010-01-01
Interest in the concept of distributed leadership has increased as evidenced in the recent literature. While there has been much discussion, there has been scant empirical evidence of distributed leadership in practice. This research study examines one middle school where educators practiced distributed leadership daily. Approached from an…
Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS
NASA Astrophysics Data System (ADS)
Willison, A.; Bedard, D.
This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogeneous surface materials. It represents the overall optical reflectance of an object as an sBRDF, a spectrometric quantity obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured from the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. The package is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate the simulated quantities. The sBRDF was chosen to represent material reflectance because it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of an object, where the contribution of each facet is integrated proportionally.
Distributed Leadership: According to the Evidence
ERIC Educational Resources Information Center
Harris, Alma
2008-01-01
Purpose: This paper aims to provide an overview of the literature concerning distributed leadership and organisational change. The main purpose of the paper is to consider the empirical evidence that highlights a relationship between distributed leadership and organisational outcomes. Design/methodological approach: The paper draws on several…
NASA Astrophysics Data System (ADS)
Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei
2017-03-01
Earthquake engineering parameters are very important in the engineering field, especially in anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two directions. Ground motion is simulated over a broadband frequency range from 0.1 to 20 Hz. By comparing observed and synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. The simulated waveforms show high similarity with the observed waveforms. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with strip-like radiation patterns in the fault-parallel component, while the fault-normal radiation pattern is circular; PGV shows good agreement between observed and synthetic records, but its distribution characteristics differ between components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias intensity attenuates with increasing epicentral distance, and observed values agree closely with synthetic values. (4) The predominant period differs markedly across parts of Kyushu in the fault-normal component; it is strongly affected by site conditions. (5) Most parameters have good reference values where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which indicates a good result according to Olsen's classification criterion, although not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis and earthquake disaster analysis can be conducted in future urban planning.
A generalized theory of preferential linking
NASA Astrophysics Data System (ADS)
Hu, Haibo; Guo, Jinli; Liu, Xuan; Wang, Xiaofan
2014-12-01
There are diverse mechanisms driving the evolution of social networks. A key open question in understanding their evolution is: how do various preferential linking mechanisms produce networks with different features? In this paper we first empirically study preferential linking phenomena in an evolving online social network and find and validate linear preference. We propose an analyzable model which captures the real growth process of the network and reveals the underlying mechanism dominating its evolution. Furthermore, based on preferential linking we propose a generalized model reproducing the evolution of online social networks, and present unified analytical results describing network characteristics for 27 preference scenarios. We study the mathematical structure of degree distributions and find that, within the framework of preferential linking, analytical degree distributions can only be combinations of a finite set of functions related to rational, logarithmic and inverse tangent functions, and that extremely complex network structure can emerge even for very simple sublinear preferential linking. This work not only provides a verifiable origin for the emergence of various network characteristics in social networks, but also bridges micro-level individual behaviors and the global organization of social networks.
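A minimal growth sketch with linear preferential linking, illustrating the kind of mechanism studied above; the seed network, the number of links per new node, and the +1 offset that lets isolated nodes attract links are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5_000, 2
degree = np.zeros(n, dtype=int)
degree[:m + 1] = m                                   # small fully connected seed (a triangle)

for new in range(m + 1, n):
    w = degree[:new].astype(float) + 1.0             # linear preference with an additive offset
    targets = rng.choice(new, size=m, replace=False, p=w / w.sum())
    degree[targets] += 1
    degree[new] = m

# linear preference is expected to give a heavy, power-law-like tail in the degree distribution
vals, counts = np.unique(degree, return_counts=True)
print(list(zip(vals[-5:], counts[-5:])))
```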
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
The volatility function technique, which uses an interpolation approach, plays an important role in extracting the risk-neutral density (RND) from options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike price/delta is interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND with a fourth-order polynomial is more appropriate than with a smoothing spline, as the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of the future development of the underlying asset.
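The sketch below shows the usual route from an interpolated volatility function to an RND: fit a fourth-order polynomial to the implied-volatility smile, convert to call prices with Black-Scholes, and take the discounted second derivative in strike (Breeden-Litzenberger). The quotes, rates, and maturity are invented placeholders, not DJIA data, and the paper's exact procedure may differ.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Hypothetical market smile: a handful of (strike, implied vol) quotes
S0, r, T = 100.0, 0.01, 1.0 / 12.0
strikes = np.array([80, 90, 95, 100, 105, 110, 120.0])
ivs     = np.array([0.28, 0.24, 0.22, 0.21, 0.20, 0.20, 0.22])

coef = np.polyfit(strikes, ivs, 4)               # fourth-order polynomial in strike
K = np.linspace(75, 125, 501)
calls = bs_call(S0, K, T, r, np.polyval(coef, K))

# Breeden-Litzenberger: the RND is the discounted second derivative of the call price in strike
rnd = np.exp(r * T) * np.gradient(np.gradient(calls, K), K)
mean_forecast = np.trapz(K * rnd, K)             # first moment, to compare with the realized price
print(mean_forecast)
```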
Atmospheric Teleconnections From Cumulants
NASA Astrophysics Data System (ADS)
Sabou, F.; Kaspi, Y.; Marston, B.; Schneider, T.
2011-12-01
Multi-point cumulants of fields such as vorticity provide a way to visualize atmospheric teleconnections, complementing other approaches such as the method of empirical orthogonal functions (EOFs). We calculate equal-time two-point cumulants of the vorticity from NCEP reanalysis data during the period 1980 -- 2010 and from direct numerical simulation (DNS) using an idealized dry general circulation model (GCM) (Schneider and Walker, 2006). Extratropical correlations seen in the NCEP data are qualitatively reproduced by the model. Three- and four-point cumulants accumulated from DNS quantify departures of the probability distribution function from a normal distribution, shedding light on the efficacy of direct statistical simulation (DSS) of atmosphere dynamics by cumulant expansions (Marston, Conover, and Schneider, 2008; Marston 2011). Lagged-time two-point cumulants between temperature gradients and eddy kinetic energy (EKE), accumulated by DNS of an idealized moist aquaplanet GCM (O'Gorman and Schneider, 2008), reveal dynamics of storm tracks. Regions of enhanced baroclinicity (as found along the eastern boundary of continents) lead to a local enhancement of EKE and a suppression of EKE further downstream as the storm track self-destructs (Kaspi and Schneider, 2011).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, B.; Akasofu, S.; Kamide, Y.
1983-08-01
As a part of the joint effort of operating six meridian chains of magnetometers during the IMS, magnetic records from 71 stations are used to deduce the distribution of electric fields and currents in the polar ionosphere for March 17, 18, and 19, 1978. As a continuation of this project, we have constructed hourly distribution maps of the Joule heat production rate and their sum over the entire polar region on the three days. For this purpose the conductivity distribution is inferred at each instant partially on the basis of an empirical method devised by Ahn et al. (1982). The particle energy injection rate is estimated similarly by using an empirical method. The data set thus obtained allows us to estimate also the global Joule heat production rate U_J, the global particle energy injection rate U_A, and the sum U_Γ of the two quantities. It is found that the three global quantities (in watts) are related almost linearly to the AE (nT) and AL (nT) indices. Our present estimates give the following relationships: U_J = 2.3 x 10^8 x AE, U_A = 0.6 x 10^8 x AE, and U_Γ = 2.9 x 10^8 x AE; U_J = 3.0 x 10^8 x AL, U_A = 0.8 x 10^8 x AL, and U_Γ = 3.8 x 10^8 x AL.
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values of the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Variety and volatility in financial markets
NASA Astrophysics Data System (ADS)
Lillo, Fabrizio; Mantegna, Rosario N.
2000-11-01
We study the price dynamics of stocks traded in a financial market by considering the statistical properties of both a single time series and an ensemble of stocks traded simultaneously. We use the n stocks traded on the New York Stock Exchange to form a statistical ensemble of daily stock returns. For each trading day of our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists in most of the trading days with the exception of crash and rally days and of the days following these extreme events. We analyze each ensemble return distribution by extracting its first two central moments. We observe that these moments fluctuate in time and are stochastic processes, themselves. We characterize the statistical properties of ensemble return distribution central moments by investigating their probability density functions and temporal correlation properties. In general, time-averaged and portfolio-averaged price returns have different statistical properties. We infer from these differences information about the relative strength of correlation between stocks and between different trading days. Last, we compare our empirical results with those predicted by the single-index model and we conclude that this simple model cannot explain the statistical properties of the second moment of the ensemble return distribution.
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ˜ 2°, than those from the three empirical models with averaged errors > ˜ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Kolmogorov-Smirnov test for spatially correlated data
Olea, R.A.; Pawlowsky-Glahn, V.
2009-01-01
The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here to model the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of the bootstrap is done by drawing from the empirical sample with replacement, presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the values of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 °C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary-stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of the model's empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 °C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary-stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of the model's empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Power laws in citation distributions: evidence from Scopus.
Brzezinski, Michal
Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
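Analyses of this kind can be reproduced with the third-party Python package powerlaw, which implements the Clauset-Shalizi-Newman estimation and likelihood-ratio comparisons; the sketch below uses synthetic counts as a stand-in for the Scopus citation data and is not the author's code.

```python
import numpy as np
import powerlaw   # third-party package implementing power-law fitting and model comparison

citations = np.random.zipf(3.5, 5000)            # placeholder for citation counts in one field
fit = powerlaw.Fit(citations, discrete=True)     # estimates xmin and the scaling exponent
print(fit.power_law.alpha, fit.power_law.xmin)

# Likelihood-ratio comparison against alternatives (positive R favors the first model)
for alt in ("lognormal", "exponential", "truncated_power_law"):
    R, p = fit.distribution_compare("power_law", alt)
    print(alt, round(R, 2), round(p, 3))
```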
Time prediction of failure of a type of lamps by using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival model estimation to obtain the average predicted lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the average failure time for the type of lamp. The data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is calculated on that basis; the p-value obtained from the test result is 0.3296.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1975-01-01
Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.
Pinter-Wollman, Noa; Wollman, Roy; Guetz, Adam; Holmes, Susan; Gordon, Deborah M.
2011-01-01
Social insects exhibit coordinated behaviour without central control. Local interactions among individuals determine their behaviour and regulate the activity of the colony. Harvester ants are recruited for outside work, using networks of brief antennal contacts, in the nest chamber closest to the nest exit: the entrance chamber. Here, we combine empirical observations, image analysis and computer simulations to investigate the structure and function of the interaction network in the entrance chamber. Ant interactions were distributed heterogeneously in the chamber, with an interaction hot-spot at the entrance leading further into the nest. The distribution of the total interactions per ant followed a right-skewed distribution, indicating the presence of highly connected individuals. Numbers of ant encounters observed positively correlated with the duration of observation. Individuals varied in interaction frequency, even after accounting for the duration of observation. An ant's interaction frequency was explained by its path shape and location within the entrance chamber. Computer simulations demonstrate that variation among individuals in connectivity accelerates information flow to an extent equivalent to an increase in the total number of interactions. Individual variation in connectivity, arising from variation among ants in location and spatial behaviour, creates interaction centres, which may expedite information flow. PMID:21490001
Analytical Model for Mars Crater-Size Frequency Distribution
NASA Astrophysics Data System (ADS)
Bruckman, W.; Ruiz, A.; Ramos, E.
2009-05-01
We present a theoretical and analytical curve that reproduces essential features of the frequency distribution vs. diameter of the 42,000 impact craters contained in Barlow's Mars Catalog. The model is derived using reasonably simple assumptions that allow us to relate the present crater population to the crater population at each particular epoch. The model takes into consideration the reduction of the number of craters as a function of time caused by their erosion and obliteration, and this provides a simple and natural explanation for the presence of different slopes in the empirical log-log plot of number of craters (N) vs. diameter (D). A mean life for Martian craters as a function of diameter is deduced, and it is shown that this result is consistent with the corresponding determination of crater mean life based on Earth data. Arguments are given to suggest that this consistency follows from the fact that a crater's mean life is proportional to its volume. It also follows that in the absence of erosion and obliteration, when craters are preserved, we would have N ∝ 1/D^{4.3}, which is a striking conclusion, since the exponent 4.3 is larger than previously thought. Such an exponent implies a similar slope in the extrapolated impactor size-frequency distribution.
Bridging stylized facts in finance and data non-stationarities
NASA Astrophysics Data System (ADS)
Camargo, Sabrina; Duarte Queirós, Sílvio M.; Anteneodo, Celia
2013-04-01
Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary patches of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of the statistical features of these two quantities, which are often named stylized facts, namely the tails of the distributions of trading volume and price fluctuations, and a dynamics compatible with the U-shaped profile of the volume in a trading session and the slow decay of the autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to the mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions to provide the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation results for the magnitude of the price fluctuations are quite different from those for the trading volume, indicating that changes in the statistics of price fluctuations occur on a faster scale than in the case of trading volume.
Yoshida, Tatsusada; Hirozumi, Koji; Harada, Masataka; Hitaoka, Seiji; Chuman, Hiroshi
2011-06-03
The rate of hydrogen atom abstraction from phenolic compounds by a radical is known to be often linear with the Hammett substitution constant σ(+), defined using the S(N)1 solvolysis rates of substituted cumyl chlorides. Nevertheless, a physicochemical reason for the above "empirical fact" has not been fully revealed. The transition states of complexes between the 2,2-diphenyl-1-picrylhydrazyl radical (dpph·) and a series of para-substituted phenols were determined by DFT (Density Functional Theory) calculations, and then the activation energy as well as the homolytic bond dissociation energy of the O-H bond and charge distribution in the transition state were calculated. The heterolytic bond dissociation energy of the C-Cl bond and charge distribution in the corresponding para-substituted cumyl chlorides were calculated in parallel. Excellent correlations among σ(+), charge distribution, and activation and bond dissociation energies revealed quantitatively that there is a strong similarity between the two reactions, showing that the electron-deficiency of the π-electron system conjugated with a substituent plays a crucial role in determining rates of the two reactions. The results provide a new insight into and physicochemical understanding of σ(+) in the hydrogen abstraction from substituted phenols by a radical.
Content Representation in the Human Medial Temporal Lobe
Liang, Jackson C.; Wagner, Anthony D.
2013-01-01
Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words). Univariate analyses revealed that responses to novel faces and scenes were distributed across the anterior–posterior axis of MTL cortex, with face responses distributed more anteriorly than scene responses. Moreover, multivariate pattern analyses of perirhinal and parahippocampal data revealed spatially organized representational codes for multiple content classes, including nonpreferred visual and auditory stimuli. In contrast, anterior hippocampal responses were content general, with less accurate overall pattern classification relative to MTL cortex. Finally, posterior hippocampal activation patterns consistently discriminated scenes more accurately than other forms of content. Collectively, our findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior–posterior axis of MTL that depends on the nature of event content. PMID:22275474
The Future of Distributed Leadership
ERIC Educational Resources Information Center
Gronn, Peter
2008-01-01
Purpose: This paper aims to assess the empirical utility and conceptual significance of distributed leadership. Design/methodology/approach: Three main sources of evidence are drawn on. The paper reviews some neglected commentary of an early generation of distributed leadership theorists. It also discusses a strand of social science writings on…
Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.
2013-01-01
The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.
Chybalski, Filip
The existing literature on the efficiency of pension systems usually addresses the choice between different theoretical models, or concerns one or a few empirical pension systems. In this paper a quite different approach to the measurement of pension system efficiency is proposed. It is intended mainly for cross-country studies of empirical pension systems, although it may also be employed in the analysis of a single pension system on the basis of time series. I identify four dimensions of pension system efficiency, referring to: GDP distribution, adequacy of pensions, influence on the labour market, and administrative costs. Consequently, I propose four sets of static and one set of dynamic efficiency indicators. In the empirical part of the paper, I use Spearman's rank correlation coefficient and cluster analysis to verify the proposed method on statistical data covering 28 European countries in the years 2007-2011. I show that the method works and enables comparisons as well as clustering of the analyzed pension systems. The study also delivers some interesting empirical findings. The main goal of pension systems seems to have become poverty alleviation, since the efficiency of ensuring protection against poverty, as well as the efficiency of reducing poverty, is largely insensitive to the efficiency of GDP distribution. The opposite holds for the efficiency of consumption smoothing: it is generally sensitive to the efficiency of GDP distribution, and its dynamics are sensitive to the dynamics of GDP-distribution efficiency. The results of the study indicate the Norwegian and the Icelandic pension systems to be the most efficient in the analyzed group.
3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss
NASA Astrophysics Data System (ADS)
Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.
2012-12-01
3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial & temporal distribution of waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss and lower- and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*. The spatio-temporal correlations cannot be inferred from the wave databases. In this study we use a temporal correlation of ~1 hour for the sampled wave intensities that is informed by the observed autocorrelation in the AE index, a spatial correlation length of ~100 km in the two directions perpendicular to the magnetic field, and a spatial correlation length of 5000 km in the direction parallel to the magnetic field, according to the work of Santolik et al. (2003), who used multi-spacecraft measurements from Cluster to quantify the correlation length scales for equatorial chorus. We find that, despite the small correlation length scale for chorus, there remains significant variability in the model outcomes driven by variability in the chorus wave intensities.
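One way to realize stochastic sampling of binned wave-intensity distributions with an approximately one-hour correlation time is to drive the empirical quantile function with a latent AR(1) Gaussian process. This is only a sketch of the idea, not the DREAM implementation; the binned intensities, time step, and correlation time below are placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Placeholder: empirical chorus wave intensities observed in one (AE, MLT, MLAT, L*) bin.
intensities = rng.lognormal(mean=1.0, sigma=1.2, size=5000)

dt_min = 5.0                       # model time step in minutes (assumed)
tau_min = 60.0                     # assumed ~1 hour correlation time of the wave intensities
phi = np.exp(-dt_min / tau_min)    # AR(1) coefficient of the latent Gaussian driver

n_steps = 288                      # one day of 5-minute steps
z = np.empty(n_steps)
z[0] = rng.standard_normal()
for k in range(1, n_steps):
    z[k] = phi * z[k - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

# Map the correlated Gaussian through its CDF to uniforms, then through the
# empirical quantile function, so every sample follows the observed distribution
# while retaining the prescribed temporal correlation.
u = norm.cdf(z)
sampled = np.quantile(intensities, u)
```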
Drake, Birger; Nádai, Béla
1970-03-01
An empirical measure of viscosity, which is often far from being a linear function of composition, was used together with refractive index to build up a function which bears a linear relationship to the composition of tomato paste-water-sucrose mixtures. The new function can be used directly for rapid composition control by linear vector-vector transformation.
ERIC Educational Resources Information Center
Tian, Meng; Risku, Mika; Collin, Kaija
2016-01-01
This article provides a meta-analysis of research conducted on distributed leadership from 2002 to 2013. It continues the review of distributed leadership commissioned by the English National College for School Leadership (NCSL) ("Distributed Leadership: A Desk Study," Bennett et al., 2003), which identified two gaps in the research…
ERIC Educational Resources Information Center
Sheppard, Bruce; Hurley, Noel; Dibbon, David
2010-01-01
The study reported in this paper advances the understanding of distributed leadership in schools, the role of the school principal in the facilitation of distributed leadership and its impact upon teachers' morale and enthusiasm for their work. While both the empirical base and practical application of distributed leadership has grown phenomenally…
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
NASA Astrophysics Data System (ADS)
Izacard, Olivier
2016-08-01
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. The latter demystifies the Maxwell's demon by statistically describing non-isolated systems.
Annealed Scaling for a Charged Polymer
NASA Astrophysics Data System (ADS)
Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.
2016-03-01
This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.
Bomphrey, Richard J; Taylor, Graham K; Lawson, Nicholas J; Thomas, Adrian L.R
2005-01-01
Actuator disc models of insect flight are concerned solely with the rate of momentum transfer to the air that passes through the disc. These simple models assume that an even pressure is applied across the disc, resulting in a uniform downwash distribution. However, a correction factor, k, is often included to correct for the difference in efficiency between the assumed even downwash distribution, and the real downwash distribution. In the absence of any empirical measurements of the downwash distribution behind a real insect, the values of k used in the literature have been necessarily speculative. Direct measurement of this efficiency factor is now possible, and could be used to compare the relative efficiencies of insect flight across the Class. Here, we use Digital Particle Image Velocimetry to measure the instantaneous downwash distribution, mid-downstroke, of a tethered desert locust (Schistocerca gregaria). By integrating the downwash distribution, we are thereby able to provide the first direct empirical measurement of k for an insect. The measured value of k=1.12 corresponds reasonably well with that predicted by previous theoretical studies. PMID:16849240
Fully probabilistic earthquake source inversion on teleseismic scales
NASA Astrophysics Data System (ADS)
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
Connallon, Tim; Clark, Andrew G
2010-12-01
Sex-biased genes--genes that are differentially expressed within males and females--are nonrandomly distributed across animal genomes, with sex chromosomes and autosomes often carrying markedly different concentrations of male- and female-biased genes. These linkage patterns are often gene- and lineage-dependent, differing between functional genetic categories and between species. Although sex-specific selection is often hypothesized to shape the evolution of sex-linked and autosomal gene content, population genetics theory has yet to account for many of the gene- and lineage-specific idiosyncrasies emerging from the empirical literature. With the goal of improving the connection between evolutionary theory and a rapidly growing body of genome-wide empirical studies, we extend previous population genetics theory of sex-specific selection by developing and analyzing a biologically informed model that incorporates sex linkage, pleiotropy, recombination, and epistasis, factors that are likely to vary between genes and between species. Our results demonstrate that sex-specific selection and sex-specific recombination rates can generate, and are compatible with, the gene- and species-specific linkage patterns reported in the genomics literature. The theory suggests that sexual selection may strongly influence the architectures of animal genomes, as well as the chromosomal distribution of fixed substitutions underlying sexually dimorphic traits. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.
Identification of AR(I)MA processes for modelling temporal correlations of GPS observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In many geodetic applications observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products consists in neglecting temporal correlations of GPS observations. In practice the knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing requires first determining the order parameters of the AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes could be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large amount of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the modelling results of temporal correlations using high-order AR(I)MA processes are compared with those obtained by means of first-order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
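A minimal sketch of the order-identification step, using a synthetic residual series in place of the SAPOS® residuals and a pure AR model selected by AIC (the study above also considers integrated and moving-average terms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic residual series with AR(2)-like temporal correlation (placeholder data).
n = 2000
x = np.zeros(n)
for k in range(2, n):
    x[k] = 0.6 * x[k - 1] - 0.3 * x[k - 2] + rng.standard_normal()

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the residual variance."""
    X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return resid.var()

aic = {}
for p in range(1, 9):
    sigma2 = fit_ar(x, p)
    n_eff = len(x) - p
    aic[p] = n_eff * np.log(sigma2) + 2 * p   # Gaussian AIC up to a constant

best_p = min(aic, key=aic.get)
print("selected AR order:", best_p)
```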
Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions
ERIC Educational Resources Information Center
Noll, Jennifer; Shaughnessy, J. Michael
2012-01-01
Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…
Distributed Leadership as Work Redesign: Retrofitting the Job Characteristics Model
ERIC Educational Resources Information Center
Mayrowetz, David; Murphy, Joseph; Louis, Karen Seashore; Smylie, Mark A.
2007-01-01
In this article, we revive work redesign theory, specifically Hackman and Oldham's Job Characteristics Model (JCM), to examine distributed leadership initiatives. Based on our early observations of six schools engaged in distributed leadership reform and a broad review of literature, including empirical tests of work redesign theory, we retrofit…
Beyond multi-fractals: surrogate time series and fields
NASA Astrophysics Data System (ADS)
Venema, V.; Simmer, C.
2007-12-01
Most natural complex systems are characterised by variability on a large range of temporal and spatial scales. The two main methodologies to generate such structures are Fourier/FARIMA based algorithms and multifractal methods. The former is restricted to Gaussian data, whereas the latter requires the structure to be self-similar. This work will present so-called surrogate data as an alternative that works with any (empirical) distribution and power spectrum. The best-known surrogate algorithm is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm. We have studied six different geophysical time series (two clouds, runoff of a small and a large river, temperature and rain) and their surrogates. The power spectra and consequently the 2nd order structure functions were replicated accurately. Even the fourth-order structure function was reproduced more accurately by the surrogates than would be possible with a fractal method, because the measured structure deviates too strongly from fractal scaling. Only in the case of the daily rain sums could a fractal method have been more accurate. Like Fourier and multifractal methods, the current surrogates are not able to model the asymmetric increment distributions observed for runoff, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found differences for the structure functions on small scales. Surrogate methods are especially valuable for empirical studies, because the time series and fields that are generated are able to mimic measured variables accurately. Our main application is radiative transfer through structured clouds. Like many geophysical fields, clouds can only be sampled sparsely, e.g. with in-situ airborne instruments. However, for radiative transfer calculations we need full 3-dimensional cloud fields. A first study relating the measured properties of the cloud droplets and the radiative properties of the cloud field by generating surrogate cloud fields yielded good results within the measurement error. The suitability of the surrogate clouds for radiative transfer is further tested by comparing the radiative properties of model cloud fields of sparse cumulus and stratocumulus with their surrogate fields. The bias and root mean square error in various radiative properties are small, and the deviations in the radiances and irradiances are not statistically significant, i.e. these deviations can be attributed to the Monte Carlo noise of the radiative transfer calculations. We compared these results with optical properties of synthetic clouds that have either the correct distribution (but no spatial correlations) or the correct power spectrum (but a Gaussian distribution). These clouds did show statistically significant deviations. For more information see: http://www.meteo.uni-bonn.de/venema/themes/surrogates/
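The IAAFT algorithm named above can be sketched compactly. This version follows the standard formulation (alternately imposing the original amplitude spectrum and the original value distribution) and uses a fixed number of iterations rather than a convergence test:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate of a 1-D series."""
    rng = np.random.default_rng(seed)
    sorted_x = np.sort(x)
    target_amp = np.abs(np.fft.rfft(x))   # amplitude spectrum to preserve
    y = rng.permutation(x)                # start from a random shuffle of the data
    for _ in range(n_iter):
        # Impose the target power spectrum, keeping the current phases.
        spec = np.fft.rfft(y)
        phases = np.angle(spec)
        y = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # Impose the original value distribution by rank-order remapping.
        ranks = np.argsort(np.argsort(y))
        y = sorted_x[ranks]
    return y

surrogate = iaaft(np.random.default_rng(2).normal(size=1024))
```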
NASA Astrophysics Data System (ADS)
Hill, S. C.; Edmond, J. A.; Xu, H.; Perez, J. D.; Fok, M. C. H.; Goldstein, J.; McComas, D. J.; Valek, P. W.
2017-12-01
The characteristics of a four-day 7-10 September 2015 co-rotating interaction region (CIR) storm (min. SYM/H ≤ -110 nT) are categorized by storm phase. Ion distributions of trapped particles in the ring current as measured by the Two Wide-Angle Imaging Neutral Atom Spectrometers (TWINS) are compared with the simulated ion distributions of the Comprehensive Inner Magnetosphere-Ionosphere Model (CIMI). The energetic neutral atom (ENA) images obtained by TWINS are deconvolved to extract equatorial pitch angle, energy spectra, ion pressure intensity, and ion pressure anisotropy distributions in the inner magnetosphere. CIMI, using either a self-consistent electric field or a semi-empirical electric field, simulates comparable distributions. There is good agreement between the data measured by TWINS and the different distributions produced by the self-consistent electric field and the semi-empirical electric field of CIMI. Throughout the storm the pitch angle distribution (PAD) is mostly perpendicular in both CIMI and TWINS and there is agreement between the anisotropy distributions. The locations of the ion pressure peaks seen by TWINS and by the self-consistent and semi-empirical electric field parameters in CIMI are usually between dusk and midnight. On average, the self-consistent electric field in CIMI reveals ion pressure peaks closer to Earth than its semi-empirical counterpart, while TWINS reports somewhat larger radial values for the ion pressure peak locations. There are also notable events throughout the storm during which the simulated observations show some characteristics that differ from those measured by TWINS. At times, there are ion pressure peaks with magnetic local time on the dayside and in the midnight to dawn region. We discuss these events in light of substorm injections indicated by fluctuating peaks in the AE index and a positive By component in the solar wind. There are also times in which there are multiple ion pressure peaks. This may imply that there are time-dependent and spatially dependent injection events that are influenced by local reconnection regions in the tail of the magnetosphere. Using CIMI simulations, we present paths of particles with various energies to assist in interpreting these notable events.
Empirical Histograms in Item Response Theory with Ordinal Data
ERIC Educational Resources Information Center
Woods, Carol M.
2007-01-01
The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…
Equity, Inclusion, and Beyond: Today's Urban Chief Diversity Officer
ERIC Educational Resources Information Center
Hancock, Merodie A.
2018-01-01
This paper, based primarily on the author's perspective as president of SUNY Empire State College, will explore the need for, and means of leveraging, the chief diversity officer's role in creating an equitable and inclusive environment within the distributed world that is Empire State College's "campus" and, specifically, within SUNY…
Effects of particle size distribution in thick film conductors
NASA Technical Reports Server (NTRS)
Vest, R. W.
1983-01-01
Studies of particle size distribution in thick film conductors are discussed. The distribution of particle sizes does have an effect on fired film density but the effect is not always positive. A proper distribution of sizes is necessary, and while the theoretical models can serve as guides to selecting this proper distribution, improved densities can be achieved by empirical variations from the predictions of the models.
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
On species persistence-time distributions.
Suweis, S; Bertuzzo, E; Mari, L; Rodriguez-Iturbe, I; Maritan, A; Rinaldo, A
2012-06-21
We present new theoretical and empirical results on the probability distributions of species persistence times in natural ecosystems. Persistence times, defined as the timespans occurring between species' colonization and local extinction in a given geographic region, are empirically estimated from local observations of species' presence/absence. A connected sampling problem is presented, generalized and solved analytically. Species persistence is shown to provide a direct connection with key spatial macroecological patterns like species-area and endemics-area relationships. Our empirical analysis pertains to two different ecosystems and taxa: a herbaceous plant community and an estuarine fish database. Despite the substantial differences in ecological interactions and spatial scales, we confirm earlier evidence on the general properties of the scaling of persistence times, including the predicted effects of the structure of the spatial interaction network. The framework tested here allows one to investigate directly the nature and extent of spatial effects in the context of ecosystem dynamics. The notable coherence between spatial and temporal macroecological patterns, theoretically derived and empirically verified, is suggested to underlie general features of the dynamic evolution of ecosystems. Copyright © 2012 Elsevier Ltd. All rights reserved.
Dynamical influences on thermospheric composition: implications for semi-empirical models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Solomon, S. C.
2014-12-01
The TIE-GCM was recently augmented to include helium and argon, two approximately inert species that can be used as tracers of dynamics in the thermosphere. The former species is treated as a major species due to its large abundance near the upper boundary. The effects of exospheric transport are also included in order to simulate realistic seasonal and latitudinal helium distributions. The latter species is treated as a classical minor species, imparting absolutely no forces on the background atmosphere. In this study, we examine the interplay of the various dynamical terms - i.e. background circulation, molecular and Eddy diffusion - as they drive departures from the distributions that would be expected under the assumption of diffusive equilibrium. As this has implications on the formulation of all empirical thermospheric models, we use this understanding to address the following questions: (1) how do errors caused by the assumption of diffusive equilibrium manifest within empirical models of the thermosphere? and (2) where and when does an empirical model's output disagree with its underlying datasets due to the inherent limitations of said model's formulation?
Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J
2015-12-10
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat of a minor inflation of the type I error rate when the sample size or the number of observed events is small. The survival data from a recent comparative cancer study are utilized to illustrate the implementation of the procedure. Copyright © 2015 John Wiley & Sons, Ltd.
Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao
2015-08-01
Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.
ARE SOME MILKY WAY GLOBULAR CLUSTERS HOSTED BY UNDISCOVERED GALAXIES?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaritsky, Dennis; Crnojević, Denija; Sand, David J., E-mail: dennis.zaritsky@gmail.com
2016-07-20
The confirmation of a globular cluster (GC) in the recently discovered ultrafaint galaxy Eridanus II (Eri II) motivated us to examine the question posed in the title. After estimating the halo mass of Eri II using a published stellar mass–halo mass relation, the one GC in this galaxy supports extending the relationship between the number of GCs hosted by a galaxy and the galaxy's total mass about two orders of magnitude in stellar mass below the previous limit. For this empirically determined specific frequency of between 0.06 and 0.39 GCs per 10^9 M_⊙ of total mass, the surviving Milky Way (MW) subhalos with masses smaller than 10^10 M_⊙ could host as many as 5–31 GCs, broadly consistent with the actual population of outer halo MW GCs, although matching the radial distribution in detail remains a challenge. Using a subhalo mass function from published high-resolution numerical simulations and a Poissonian model for populating those halos with the aforementioned empirically constrained frequency, we find that about 90% of these GCs lie in lower-mass subhalos than that of Eri II. From what we know about the stellar mass–halo mass function, the subhalo mass function, and the mass-normalized GC specific frequency, we conclude that some of the MW's outer halo GCs are likely to be hosted by undetected subhalos with extremely modest stellar populations.
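A few lines suffice to sketch the Poissonian population step described above. The subhalo masses here are placeholders rather than the published simulation catalogue, and the two specific-frequency values bracket the range quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder subhalo total masses in solar masses (stand-in for a simulated subhalo mass function).
subhalo_mass = 10 ** rng.uniform(8.0, 10.0, size=300)

# Empirically constrained specific frequency: 0.06-0.39 GCs per 1e9 M_sun of total mass.
for s_n in (0.06, 0.39):
    expected = s_n * subhalo_mass / 1e9    # mean number of GCs per subhalo
    n_gc = rng.poisson(expected)           # one Poisson realization per subhalo
    print(f"S_N = {s_n}: total GCs hosted = {n_gc.sum()}, "
          f"subhalos with >= 1 GC = {(n_gc > 0).sum()}")
```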
Bayesian modelling of lung function data from multiple-breath washout tests.
Mahar, Robert K; Carlin, John B; Ranganathan, Sarath; Ponsonby, Anne-Louise; Vuillermin, Peter; Vukcevic, Damjan
2018-05-30
Paediatric respiratory researchers have widely adopted the multiple-breath washout (MBW) test because it allows assessment of lung function in unsedated infants and is well suited to longitudinal studies of lung development and disease. However, a substantial proportion of MBW tests in infants fail current acceptability criteria. We hypothesised that a model-based approach to analysing the data, in place of traditional simple empirical summaries, would enable more efficient use of these tests. We therefore developed a novel statistical model for infant MBW data and applied it to 1197 tests from 432 individuals from a large birth cohort study. We focus on Bayesian estimation of the lung clearance index, the most commonly used summary of lung function from MBW tests. Our results show that the model provides an excellent fit to the data and shed further light on statistical properties of the standard empirical approach. Furthermore, the modelling approach enables the lung clearance index to be estimated by using tests with different degrees of completeness, something not possible with the standard approach. Our model therefore allows previously unused data to be used rather than discarded, as well as routine use of shorter tests without significant loss of precision. Beyond our specific application, our work illustrates a number of important aspects of Bayesian modelling in practice, such as the importance of hierarchical specifications to account for repeated measurements and the value of model checking via posterior predictive distributions. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull, and Pareto laws. Mixtures of such distributions allow one to improve the flexibility of the parametric approach and seem to be quite well adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of the quantiles with simulated nonnegative data and the quantiles of the individual claims distribution in a non-life insurance application.
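A minimal sketch of the transform-and-smooth procedure, assuming for illustration a two-component lognormal mixture with known (made-up) parameters and a fixed beta-kernel bandwidth instead of the paper's adapted bandwidth rule:

```python
import numpy as np
from scipy.stats import lognorm, beta

rng = np.random.default_rng(4)

# Hypothetical claim amounts drawn from a two-component lognormal mixture.
w = 0.7
claims = np.where(rng.uniform(size=3000) < w,
                  rng.lognormal(7.0, 0.5, 3000),
                  rng.lognormal(9.0, 1.0, 3000))

# Parsimonious mixture CDF and PDF (parameters assumed known here for brevity).
def mix_cdf(x):
    return (w * lognorm.cdf(x, 0.5, scale=np.exp(7.0))
            + (1 - w) * lognorm.cdf(x, 1.0, scale=np.exp(9.0)))

def mix_pdf(x):
    return (w * lognorm.pdf(x, 0.5, scale=np.exp(7.0))
            + (1 - w) * lognorm.pdf(x, 1.0, scale=np.exp(9.0)))

u = mix_cdf(claims)   # transform the data into the unit interval
b = 0.02              # fixed beta-kernel bandwidth (placeholder)

def beta_kernel_density(t, u, b):
    """Chen-type beta-kernel density estimate on [0, 1], evaluated at points t."""
    return beta.pdf(u[None, :], t[:, None] / b + 1, (1 - t[:, None]) / b + 1).mean(axis=1)

# Back-transformed density of the original claims: g(x) = f_hat(F(x)) * f(x).
x_grid = np.linspace(claims.min(), claims.max(), 200)
g = beta_kernel_density(mix_cdf(x_grid), u, b) * mix_pdf(x_grid)
```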
Toward a quantitative account of pitch distribution in spontaneous narrative: Method and validation
Matteson, Samuel E.; Streit Olness, Gloria; Caplow, Nancy J.
2013-01-01
Pitch is well-known both to animate human discourse and to convey meaning in communication. The study of the statistical population distributions of pitch in discourse will undoubtedly benefit from methodological improvements. The current investigation examines a method that parameterizes pitch in discourse as musical pitch interval H measured in units of cents and that disaggregates the sequence of peak word-pitches using tools employed in time-series analysis and digital signal processing. The investigators test the proposed methodology by its application to distributions in pitch interval of the peak word-pitch (collectively called the discourse gamut) that occur in simulated and actual spontaneous emotive narratives obtained from 17 middle-aged African-American adults. The analysis, in rigorous tests, not only faithfully reproduced simulated distributions imbedded in realistic time series that drift and include pitch breaks, but the protocol also reveals that the empirical distributions exhibit a common hidden structure when normalized to a slowly varying mode (called the gamut root) of their respective probability density functions. Quantitative differences between narratives reveal the speakers' relative propensity for the use of pitch levels corresponding to elevated degrees of a discourse gamut (the “e-la”) superimposed upon a continuum that conforms systematically to an asymmetric Laplace distribution. PMID:23654400
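For concreteness, the pitch-interval parameterization H in cents is the usual musical interval measure; a short sketch with hypothetical peak word-pitches and an assumed gamut-root frequency:

```python
import numpy as np

def cents(f_hz, f_ref_hz):
    """Musical pitch interval in cents between a pitch and a reference pitch."""
    return 1200.0 * np.log2(f_hz / f_ref_hz)

# Hypothetical peak word-pitches (Hz) and an assumed slowly varying gamut root of 110 Hz.
peaks = np.array([118.0, 132.0, 147.0, 165.0, 196.0])
print(cents(peaks, 110.0))
```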
Empirical scaling of the length of the longest increasing subsequences of random walks
NASA Astrophysics Data System (ADS)
Mendonça, J. Ricardo G.
2017-02-01
We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ∼ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and also propose the form for the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ. Accordingly, when the step lengths are of finite variance they seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
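The quantity L_n studied above can be computed in O(n log n) per walk with patience sorting; the sketch below uses standard normal steps, one of the finite-variance cases, and is not the authors' simulation code:

```python
import bisect
import numpy as np

def lis_length(seq):
    """Length of the longest (strictly) increasing subsequence via patience sorting."""
    tails = []
    for v in seq:
        i = bisect.bisect_left(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(tails)

rng = np.random.default_rng(5)
n = 10_000
walk = np.cumsum(rng.standard_normal(n))   # finite-variance random walk
print("L_n =", lis_length(walk))
```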
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
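The mimicry analysis described above can be sketched as follows, with synthetic bout durations drawn from a two-component exponential mixture, maximum-likelihood fits of an exponential and of a continuous power law above a threshold, and the Kolmogorov-Smirnov statistic as the goodness-of-fit measure (the threshold choice here is deliberately simplistic):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic wake-bout durations (minutes): a two-component exponential mixture,
# the kind of distribution that can mimic a power law on a log-log plot.
bouts = np.concatenate([rng.exponential(2.0, 4000), rng.exponential(30.0, 1000)])

def ks_stat(x, cdf):
    x = np.sort(x)
    emp = np.arange(1, len(x) + 1) / len(x)
    return np.max(np.abs(emp - cdf(x)))

# Exponential MLE.
lam = 1.0 / bouts.mean()
ks_exp = ks_stat(bouts, lambda t: 1.0 - np.exp(-lam * t))

# Continuous power-law MLE above x_min (Clauset-style estimator for the exponent).
x_min = bouts.min()
alpha = 1.0 + len(bouts) / np.sum(np.log(bouts / x_min))
ks_pl = ks_stat(bouts, lambda t: 1.0 - (t / x_min) ** (1.0 - alpha))

print("KS exponential:", ks_exp, " KS power law:", ks_pl)
```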
Becker, Carolyn Black; Stice, Eric; Shaw, Heather; Woda, Susan
2009-04-01
Dissemination, or distribution, of empirically supported interventions (ESIs) for psychopathology remains a significant challenge. This paper reviews the principles of community-partnership research (CPR) and explores why CPR might improve distribution of psychological ESIs. Benefits of CPR include building trust, pooling resources and knowledge, and better serving a community by directly involving its members in the design and implementation of research. In addition, after establishing a community's trust using CPR, researchers are likely to be better positioned to partner with communities in the further distribution of ESIs via community networks. This paper reviews the case of dissonance-based eating disorder prevention interventions to provide an example of how CPR can facilitate the adoption and distribution of an ESI by a community, in this case, sororities. CPR also presents a number of challenges, however, because it is time consuming and does not always align with funding mechanisms and research designs used in randomized controlled trials. Further, CPR does not necessarily solve the challenge of training providers, though it may help with problem solving. Ultimately, we suggest that the benefits of CPR far outweigh the challenges, and hope that more researchers will adopt these practices so that more individuals can benefit from empirically supported psychological interventions.
A thermodynamic and theoretical view for enzyme regulation.
Zhao, Qinyi
2015-01-01
Precise regulation is fundamental to the proper functioning of enzymes in a cell. Current accounts of this regulation, such as allosteric regulation and the dynamic contribution to enzyme regulation, are experimental models and largely empirical. Here we propose a theoretical and thermodynamic model of enzyme regulation. The main idea is that enzyme regulation proceeds via regulation of the abundance of the active conformation in the reaction buffer. The theoretical foundation, experimental evidence, and experimental criteria to test our model are discussed and reviewed. We conclude that the basic principles of enzyme regulation are laws of protein thermodynamics and that regulation can be analyzed using the concept of the distribution curve of active conformations of enzymes.
Gonioreflectometric properties of metal surfaces
NASA Astrophysics Data System (ADS)
Jaanson, P.; Manoocheri, F.; Mäntynen, H.; Gergely, M.; Widlowski, J.-L.; Ikonen, E.
2014-12-01
Angularly resolved measurements of scattered light from surfaces can provide useful information in various fields of research and industry, such as computer graphics and satellite-based Earth observation. In practice, empirical or physics-based models are needed to interpolate the measurement results, because a thorough characterization of the surfaces under all relevant conditions may not be feasible. In this work, plain and anodized metal samples were prepared and measured optically for the bidirectional reflectance distribution function (BRDF) and mechanically for surface roughness. Two models for the BRDF (the Torrance-Sparrow model and a polarimetric BRDF model) were fitted to the measured values. A better fit was obtained for plain metal surfaces than for anodized surfaces.
Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
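As one concrete instance of simulating correlated variates for a given correlation measure and dependence model, the sketch below uses a Gaussian copula: a target Spearman rank correlation is converted to the copula's Pearson parameter, correlated uniforms are generated, and arbitrary marginals are imposed by inverse-CDF transforms. The marginals and the target correlation are illustrative choices, not taken from the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Target Spearman rank correlation and two illustrative marginals.
rho_s = 0.7
rho = 2.0 * np.sin(np.pi * rho_s / 6.0)   # Gaussian-copula Pearson parameter

n = 10_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                      # correlated uniforms (the copula sample)

x = stats.lognorm(s=0.5).ppf(u[:, 0])      # marginal 1: lognormal
y = stats.gamma(a=2.0).ppf(u[:, 1])        # marginal 2: gamma

print("achieved Spearman rho:", round(stats.spearmanr(x, y).correlation, 3))
```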
Track structure: time evolution from physics to chemistry.
Dingfelder, M
2006-01-01
This review discusses interaction cross sections of charged particles (electrons, protons, light ions) with atoms and molecules. The focus is on biological relevant targets like liquid water which serves as a substitute of soft tissue in most Monte Carlo codes. The spatial distribution of energy deposition patterns by different radiation qualities and their importance to the time evolution from the physical to the chemical stage or radiation response is discussed. The determination of inelastic interaction cross sections for charged particles in condensed matter is discussed within the relativistic plane-wave Born approximation and semi-empirical models. The dielectric-response-function of liquid water is discussed.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
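A minimal example of the first class (Poisson marginals with dependence) is the classic common-shock, or trivariate-reduction, construction, in which a shared Poisson component induces positive covariance. The rates below are illustrative; note that this construction only produces non-negative dependence.

```python
import numpy as np

rng = np.random.default_rng(2)

# Common-shock construction: X1 = Y1 + Y0, X2 = Y2 + Y0 with independent
# Poisson components, so both marginals stay Poisson and Cov(X1, X2) = lam0.
lam0, lam1, lam2 = 2.0, 3.0, 5.0    # illustrative rates
n = 100_000
y0 = rng.poisson(lam0, n)
x1 = rng.poisson(lam1, n) + y0
x2 = rng.poisson(lam2, n) + y0

print("sample means:", x1.mean().round(2), x2.mean().round(2))   # ~5.0 and ~7.0
print("sample covariance:", np.cov(x1, x2)[0, 1].round(2))       # ~lam0 = 2.0
```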
Agricultural Fragility Estimates Subjected to Volcanic Ash Fall Hazards
NASA Astrophysics Data System (ADS)
Ham, H. J.; Lee, S.; Choi, S. H.; Yun, W. S.
2015-12-01
In this study, fragility functions are developed to estimate expected volcanic ash damage to the agricultural sector in Korea. The fragility functions are derived from two approaches: 1) an empirical approach based on field observations of impacts to agriculture from the 2006 eruption of Merapi volcano in Indonesia, and 2) the FOSM (first-order second-moment) analytical approach based on the distribution and thickness of volcanic ash observed from the 1980 eruption of Mount St. Helens and on agricultural facility specifications in Korea. The fragility function for each agricultural commodity class is expressed as a cumulative distribution function of the generalized extreme value distribution. Different functions are developed to estimate production losses from outdoor and greenhouse farming. Seasonal climate influences the vulnerability of each agricultural crop and is found to be a crucial component in determining the fragility of agricultural commodities to an ash fall. In this study, a seasonality coefficient is introduced as a multiplier of the fragility function to account for this seasonal vulnerability. Yields of the different agricultural commodities are obtained from the Korean Statistical Information Service to create a baseline for future agricultural volcanic loss estimation. Numerically simulated examples of scenario ash fall events at Mt. Baekdu volcano are used to illustrate the application of the developed fragility functions. Acknowledgements: This research was supported by the grant 'Development of Advanced Volcanic Disaster Response System considering Potential Volcanic Risk around Korea' [MPSS-NH-2015-81] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea. References: Nimlos, T. J. and Hans, Z., The Distribution and Thickness of Volcanic Ash in Montana, Northwest Science, Vol. 56, No. 3, 1982. Wilson, T., Kaye, G., Stewart, C., and Cole, J., Impacts of the 2006 Eruption of Merapi Volcano, Indonesia, on Agriculture and Infrastructure, GNS Science Report, 2007.
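A fragility curve of the form described above, a generalized extreme value CDF scaled by a seasonality coefficient, can be evaluated directly with standard libraries. The sketch below uses hypothetical shape, location, scale, and seasonal values purely for illustration; note that SciPy's shape parameter has the opposite sign of the usual GEV shape ξ.

```python
import numpy as np
from scipy.stats import genextreme

# GEV fragility curve scaled by a seasonal vulnerability coefficient.
# Shape/location/scale and the seasonal factor are hypothetical values;
# SciPy uses c = -xi for the GEV shape parameter.
shape_c, loc, scale = -0.1, 20.0, 10.0
season_coeff = 0.8          # e.g. reduced vulnerability outside the growing season

ash_mm = np.linspace(0.0, 100.0, 11)   # ash-fall thickness (mm)
damage_ratio = season_coeff * genextreme.cdf(ash_mm, shape_c, loc=loc, scale=scale)

for thickness, damage in zip(ash_mm, damage_ratio):
    print(f"{thickness:6.1f} mm -> expected damage ratio {damage:.2f}")
```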
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
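To illustrate the qualitative behaviour the abstract describes, the sketch below combines a gradual empirical fade term with a sigmoidal "electrolyte depletion" multiplier that produces a sudden capacity drop late in life. The functional forms and constants are placeholders chosen for illustration, not the equations or parameters of the cited semi-empirical model.

```python
import numpy as np

# Gradual empirical fade (power law in cycle number) multiplied by a sigmoidal
# electrolyte-depletion factor that forces a sudden capacity drop near end of life.
# All functional forms and constants are illustrative placeholders.
def capacity_retention(cycle, b=0.002, z=0.55, n_depletion=1500.0, k=0.01):
    gradual = 1.0 - b * cycle ** z                               # slow empirical fade
    depletion = 1.0 / (1.0 + np.exp(k * (cycle - n_depletion)))  # sudden drop
    return np.clip(gradual * depletion, 0.0, 1.0)

for n in range(0, 2001, 250):
    print(f"cycle {n:4d}: retention = {float(capacity_retention(n)):.3f}")
```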
Empirical study of the tails of mutual fund size
NASA Astrophysics Data System (ADS)
Schwarzkopf, Yonathan; Farmer, J. Doyne
2010-06-01
The mutual fund industry manages about a quarter of the assets in the U.S. stock market and thus plays an important role in the U.S. economy. The question of how much control is concentrated in the hands of the largest players is best quantitatively discussed in terms of the tail behavior of the mutual fund size distribution. We study the distribution empirically and show that the tail is much better described by a log-normal than a power law, indicating less concentration than, for example, personal income. The results are highly statistically significant and are consistent across fifteen years. This contradicts a recent theory concerning the origin of the power law tails of the trading volume distribution. Based on the analysis in a companion paper, the log-normality is to be expected, and indicates that the distribution of mutual funds remains perpetually out of equilibrium.
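The kind of tail comparison described above can be sketched as follows: fit a power law to the upper tail by maximum likelihood and compare its Kolmogorov-Smirnov distance with that of a log-normal fit, with the log-normal CDF truncated at the same tail threshold. The synthetic "fund sizes" and the 95% tail threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative "fund sizes" drawn from a log-normal (a stand-in for real AUM data).
sizes = rng.lognormal(mean=5.0, sigma=2.0, size=20_000)

# Power-law tail fit (MLE) above a fixed tail threshold.
xmin = np.quantile(sizes, 0.95)
tail = sizes[sizes >= xmin]
alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
ks_pl = stats.kstest(tail, lambda x: 1.0 - (x / xmin) ** (1.0 - alpha)).statistic

# Log-normal fit, evaluated on the same tail with truncation at xmin.
mu, sig = np.mean(np.log(sizes)), np.std(np.log(sizes))
def lognormal_tail_cdf(x):
    top = stats.norm.cdf(np.log(x), mu, sig) - stats.norm.cdf(np.log(xmin), mu, sig)
    return top / stats.norm.sf(np.log(xmin), mu, sig)
ks_ln = stats.kstest(tail, lognormal_tail_cdf).statistic

print(f"power law: alpha = {alpha:.2f}, KS = {ks_pl:.3f}; log-normal: KS = {ks_ln:.3f}")
```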
Power Laws and Market Crashes ---Empirical Laws on Bursting Bubbles---
NASA Astrophysics Data System (ADS)
Kaizoji, T.
In this paper, we quantitatively investigate the statistical properties of a statistical ensemble of stock prices. We selected 1200 stocks traded on the Tokyo Stock Exchange and formed a statistical ensemble of daily stock prices for each trading day in the 3-year period from January 4, 1999 to December 28, 2001, corresponding to the formation of the internet bubble in Japan and its subsequent bursting in the Japanese stock market. We found that the high-price tail of the complementary cumulative distribution function of the ensemble of stock prices is well described by a power-law distribution, P(S > x) ~ x^{-α}, with an exponent that moves in the range 1.09 < α < 1.27. Furthermore, we found that as the power-law exponent α approached unity, the bubble collapsed. This suggests that Zipf's law for stock prices is a sign that a bubble is about to burst.
Liu, Jian; Pedroza, Luana S; Misch, Carissa; Fernández-Serra, Maria V; Allen, Philip B
2014-07-09
We present total energy and force calculations for the (GaN)1-x(ZnO)x alloy. Site-occupancy configurations are generated from Monte Carlo (MC) simulations, on the basis of a cluster expansion model proposed in a previous study. Local atomic coordinate relaxations of surprisingly large magnitude are found via density-functional calculations using a 432-atom periodic supercell, for three representative configurations at x = 0.5. These are used to generate bond-length distributions. The configurationally averaged composition- and temperature-dependent short-range order (SRO) parameters of the alloys are discussed. The entropy is approximated in terms of pair distribution statistics and thus related to SRO parameters. This approximate entropy is compared with accurate numerical values from MC simulations. An empirical model for the dependence of the bond length on the local chemical environments is proposed.
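The Warren-Cowley short-range order parameter mentioned above can be computed directly from a site-occupancy configuration and a neighbor list: alpha = 1 - P(B | neighbor of A) / c_B, which is zero for a random alloy and negative when unlike pairs are favoured. The sketch below uses a toy one-dimensional ring lattice rather than the wurtzite cation sublattice of the actual alloy.

```python
import numpy as np

rng = np.random.default_rng(4)

# Warren-Cowley SRO parameter alpha = 1 - P(Zn | neighbor of Ga) / c_Zn on a toy
# 1D ring lattice (illustrative geometry, not the wurtzite cation sublattice).
n_sites, x_zn = 1000, 0.5
is_zn = rng.random(n_sites) < x_zn
neighbors = np.stack([np.roll(np.arange(n_sites), 1),
                      np.roll(np.arange(n_sites), -1)], axis=1)

ga_sites = np.where(~is_zn)[0]
p_zn_around_ga = is_zn[neighbors[ga_sites]].mean()
alpha = 1.0 - p_zn_around_ga / is_zn.mean()
print(f"alpha(Ga-Zn) = {alpha:+.3f}   (0 = random, negative = Ga-Zn ordering)")
```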
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
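A bare-bones version of the spectrum-based surrogate step, the iterative amplitude-adjusted Fourier transform, can be written in a few lines: alternate between imposing the target amplitude spectrum and rank-order remapping to the target marginal. The sketch below then draws a binary sequence by Bernoulli sampling of the surrogate probabilities; this is a simplification of the paper's embedding of IAAFT in the full generation algorithm, and the beta parameters and smoothing are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def iaaft(x, n_iter=100):
    """Iterative amplitude-adjusted Fourier transform surrogate of series x:
    preserves the amplitude spectrum (hence the autocorrelation) and the
    marginal distribution. Minimal sketch, no convergence check."""
    x = np.asarray(x, dtype=float)
    sorted_x = np.sort(x)
    target_amp = np.abs(np.fft.rfft(x))
    s = rng.permutation(x)
    for _ in range(n_iter):
        phases = np.angle(np.fft.rfft(s))                      # keep current phases
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=x.size)
        s = sorted_x[np.argsort(np.argsort(s))]                # restore marginal
    return s

# Correlated beta-distributed "transition probabilities" and a binary sequence
# obtained by Bernoulli sampling of the surrogate (a simplification of the paper).
p = np.clip(np.convolve(rng.beta(2, 2, 2048), np.ones(10) / 10, mode="same"), 0, 1)
p_surrogate = iaaft(p)
binary = (rng.random(p_surrogate.size) < p_surrogate).astype(int)
print(binary[:20], "mean =", binary.mean().round(3))
```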
NASA Astrophysics Data System (ADS)
Ji, Chenxu; Zhang, Yuanzhi; Cheng, Qiuming; Tsou, JinYeu; Jiang, Tingchen; Liang, X. San
2018-06-01
In this study, we analyze spatial and temporal sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration in the East China Sea (ECS) during the period 2003-2016. Level 3 (4 km) monthly SST and Chl-a data from the Moderate Resolution Imaging Spectroradiometer (MODIS-Aqua) were reconstructed using the data interpolating empirical orthogonal function (DINEOF) method and used to evaluate the relationship between the two variables. The approaches employed included correlation and regression analysis. Our results show that strong oceanic SSTs affect Chl-a concentration, with particularly high correlation seen in the coastal areas of Jiangsu and Zhejiang provinces. The mean temperature of the highly correlated region was 18.67 °C. This finding suggests that SST has an important impact on the spatial distribution of Chl-a concentration in the ECS.
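DINEOF-style gap filling can be sketched as an iteration between a truncated singular value decomposition and replacement of the missing entries with the reconstruction. The toy field, the noise level, the missing-data fraction, and the fixed number of modes below are illustrative; the full DINEOF procedure selects the number of modes by cross-validation.

```python
import numpy as np

rng = np.random.default_rng(6)

def dineof_like_fill(field, n_modes=3, n_iter=50):
    """EOF-based gap filling in the spirit of DINEOF: initialize gaps with the
    mean of the observed values, then iterate a truncated-SVD reconstruction.
    The full method selects n_modes by cross-validation; here it is fixed."""
    data = field.copy()
    gaps = np.isnan(data)
    data[gaps] = np.nanmean(field)
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(data, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        data[gaps] = recon[gaps]
    return data

# Synthetic space-time field (e.g. monthly anomalies) with 30% of pixels missing.
t = np.linspace(0, 4 * np.pi, 120)
truth = np.outer(np.sin(t), np.linspace(-1, 1, 50)) + 0.1 * rng.standard_normal((120, 50))
holes = truth.copy()
holes[rng.random(truth.shape) < 0.3] = np.nan

filled = dineof_like_fill(holes)
gaps = np.isnan(holes)
print("RMSE at gaps:", round(float(np.sqrt(np.mean((filled[gaps] - truth[gaps]) ** 2))), 3))
```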
NASA Astrophysics Data System (ADS)
Benettin, Paolo; Soulsby, Chris; Birkel, Christian; Tetzlaff, Doerthe; Botter, Gianluca; Rinaldo, Andrea
2017-03-01
We use high-resolution tracer data from an experimental site to test theoretical approaches that integrate catchment-scale flow and transport processes in a unified framework centered on selective age sampling by streamflow and evapotranspiration fluxes. Transport processes operating at the catchment scale are reflected in the evolving residence time distribution of the catchment water storage and in the age selection operated by out-fluxes. Such processes are described here through StorAge Selection (SAS) functions parameterized as power laws of the normalized rank storage. Such functions are computed through appropriate solution of the master equation that formally defines the evolution of residence and travel times. By representing the way in which catchment storage generates outflows composed of water of different ages, the main mechanism regulating the tracer composition of runoff is clearly identified and detailed comparisons with empirical data sets become possible. Properly calibrated numerical tools provide simulations that convincingly reproduce complex measured signals of daily deuterium content in stream waters during wet and dry periods. Results for the catchment under consideration are consistent with other recent studies indicating a tendency for natural catchments to preferentially release younger available water. The study shows that power-law SAS functions are a powerful tool for explaining catchment-scale transport processes, one that also has potential in less intensively monitored sites.
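A power-law SAS function omega(P) = k P^(k-1) on the normalized rank storage P can be sampled with a one-line inverse-CDF draw, which makes the effect of the exponent on the age of released water easy to see. The storage age distribution and the values of k below are illustrative, not calibrated to the study catchment.

```python
import numpy as np

rng = np.random.default_rng(7)

# Power-law StorAge Selection function omega(P) = k * P**(k-1) on the normalized
# rank storage P in [0, 1]: k < 1 prefers young water, k = 1 samples uniformly.
ages_in_storage = np.sort(rng.exponential(200.0, 50_000))   # illustrative ages (days)

def sample_outflow_ages(k, n=10_000):
    p_draws = rng.random(n) ** (1.0 / k)          # inverse CDF of k * P**(k-1)
    idx = np.minimum((p_draws * ages_in_storage.size).astype(int),
                     ages_in_storage.size - 1)
    return ages_in_storage[idx]                   # map rank back to an age

for k in (0.5, 1.0, 2.0):
    print(f"k = {k}: mean age of released water = {sample_outflow_ages(k).mean():7.1f} days")
```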
Absorption line indices in the UV. I. Empirical and theoretical stellar population models
NASA Astrophysics Data System (ADS)
Maraston, C.; Nieves Colmenárez, L.; Bender, R.; Thomas, D.
2009-01-01
Aims: Stellar absorption lines in the optical (e.g. the Lick system) have been extensively studied and constitute an important stellar population diagnostic for galaxies in the local universe and up to moderate redshifts. Proceeding towards higher look-back times, galaxies are younger and the ultraviolet becomes the relevant spectral region where the dominant stellar populations shine. A comprehensive study of ultraviolet absorption lines of stellar population models is however still lacking. With this in mind, we study absorption line indices in the far and mid-ultraviolet in order to determine age and metallicity indicators for UV-bright stellar populations in the local universe as well as at high redshift. Methods: We explore empirical and theoretical spectral libraries and use evolutionary population synthesis to compute synthetic line indices of stellar population models. From the empirical side, we exploit the IUE low-resolution library of stellar spectra and its system of absorption lines, from which we derive analytical functions (fitting functions) describing the strength of stellar line indices as a function of gravity, temperature and metallicity. The fitting functions are entered into an evolutionary population synthesis code in order to compute the integrated line indices of stellar population models. The same line indices are also directly evaluated on theoretical spectral energy distributions of stellar population models based on Kurucz high-resolution synthetic spectra. In order to select indices that can be used as age and/or metallicity indicators for distant galaxies and globular clusters, we compare the models to data of template globular clusters from the Magellanic Clouds with independently known ages and metallicities. Results: We provide synthetic line indices in the wavelength range ~1200 Å to ~3000 Å for stellar populations of various ages and metallicities. This adds several new indices to the already well-studied CIV and SiIV absorptions. Based on the comparison with globular cluster data, we select a set of 11 indices blueward of the 2000 Å rest-frame that allows us to recover well the ages and the metallicities of the clusters. These indices are ideal to study ages and metallicities of young galaxies at high redshift. We also provide the synthetic high-resolution stellar population SEDs.
Adjoint tomography of Empirical Green's functions from ambient noise in Southern California
NASA Astrophysics Data System (ADS)
Wang, K.; Liu, Q.; Yang, Y.; Basini, P.; Tape, C.
2017-12-01
We construct a new shear-wave velocity (Vsv) model in Southern California by adjoint tomography of Rayleigh-wave Empirical Green's functions at 5-50 s period from Z-Z component ambient noise cross-correlations. The initial model of our adjoint tomography is the isotropic Vs model M16 from Tape et al. [2010], which is generated by three-component body and surface waves at 2-30 s period from local earthquake data. Synthetic Green's functions (SGFs) from M16 show a good agreement with the Empirical Green's functions (EGFs) from ambient noise at the 5-50 s and 10-50 s period bands, but have an average 1.75 s advance in time at 20-50 s. By minimizing the traveltime differences between the EGFs and SGFs using a gradient-based algorithm, the initial model is refined and improved and the total misfit is reduced from the initial 1.75 s to a convergent value of 0.33 s after five iterations. The final Vsv model fits EGF waveforms better than the initial model at all three frequency bands with smaller misfit distributions. Our new Vsv model reveals some new features in the mid- and lower crust, mainly including: (1) the mean speed of the lower crust is reduced by about 5%; (2) in the Los Angeles Basin and the area to its north, the speed is higher than in the initial model throughout the crust; (3) beneath the westernmost Peninsular Range Batholith (PRB) and Sierra Nevada Batholith (SNB), we observe high shear velocities in the lower crust; (4) a shallow high-velocity zone in the mid-crust is observed beneath the Salton Trough Basin. Our model also shows refined lateral velocity gradients across the PRB, the SNB, and the San Andreas Fault (SAF), which help constrain the west-east compositional boundary in the PRB and SNB and the dip angle and depth extent of the SAF. Our study demonstrates the feasibility of adjoint tomography of ambient noise data in southern California, which is an important complement to earthquake data. The numerical solver used in adjoint tomography can provide more accurate structure sensitivity kernels than analytical methods used in traditional ambient noise tomography.
Money-center structures in dynamic banking systems
NASA Astrophysics Data System (ADS)
Li, Shouwei; Zhang, Minghui
2016-10-01
In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.
Kaufman, Scott Barry; Benedek, Mathias; Jung, Rex E.; Kenett, Yoed N.; Jauk, Emanuel; Neubauer, Aljoscha C.; Silvia, Paul J.
2015-01-01
The brain's default network (DN) has been a topic of considerable empirical interest. In fMRI research, DN activity is associated with spontaneous and self‐generated cognition, such as mind‐wandering, episodic memory retrieval, future thinking, mental simulation, theory of mind reasoning, and creative cognition. Despite large literatures on developmental and disease‐related influences on the DN, surprisingly little is known about the factors that impact normal variation in DN functioning. Using structural equation modeling and graph theoretical analysis of resting‐state fMRI data, we provide evidence that Openness to Experience—a normally distributed personality trait reflecting a tendency to engage in imaginative, creative, and abstract cognitive processes—underlies efficiency of information processing within the DN. Across two studies, Openness predicted the global efficiency of a functional network comprised of DN nodes and corresponding edges. In Study 2, Openness remained a robust predictor—even after controlling for intelligence, age, gender, and other personality variables—explaining 18% of the variance in DN functioning. These findings point to a biological basis of Openness to Experience, and suggest that normally distributed personality traits affect the intrinsic architecture of large‐scale brain systems. PMID:26610181
Hartzell, S.; Harmsen, S.; Frankel, A.; Larsen, S.
1999-01-01
This article compares techniques for calculating broadband time histories of ground motion in the near field of a finite fault by comparing synthetics with the strong-motion data set for the 1994 Northridge earthquake. Based on this comparison, a preferred methodology is presented. Ground-motion-simulation techniques are divided into two general methods: kinematic- and composite-fault models. Green's functions of three types are evaluated: stochastic, empirical, and theoretical. A hybrid scheme is found to give the best fit to the Northridge data. Low frequencies (< 1 Hz) are calculated using a kinematic-fault model with theoretical Green's functions, and high frequencies (> 1 Hz) are calculated using a composite-fault model with a fractal subevent size distribution and stochastic, bandlimited, white-noise Green's functions. At frequencies below 1 Hz, theoretical elastic-wave-propagation synthetics introduce proper seismic-phase arrivals of body waves and surface waves. The 3D velocity structure more accurately reproduces record durations for the deep sedimentary basin structures found in the Los Angeles region. At frequencies above 1 Hz, scattering effects become important and wave propagation is more accurately represented by stochastic Green's functions. A fractal subevent size distribution for the composite-fault model ensures an omega-squared (ω^-2) spectral shape over the entire frequency band considered (0.1-20 Hz).
Water in a Soft Confinement: Structure of Water in Amorphous Sorbitol.
Shalaev, Evgenyi; Soper, Alan K
2016-07-28
The structure of water in a 70 wt % sorbitol-30 wt % water mixture is investigated by wide-angle neutron scattering (WANS) as a function of temperature. WANS data are analyzed using empirical potential structure refinement to obtain the site-site radial distribution functions (RDFs). The orientational structure of water is represented using OW-OW-OW triangle distributions and a tetrahedrality parameter, q, while the water-water correlation function is used to estimate the size of water clusters. Water structure in the sorbitol matrix is compared with that of water confined in nanopores of MCM41. The results indicate the existence of voids in the sorbitol matrix with a length scale of approximately 5 Å, which are filled by water. At 298 K, the positional water structure in these voids is similar to that of water in MCM41, whereas there is a difference in the tetrahedral (orientational) arrangement. Cooling to 213 K strengthens tetrahedrality, with the orientational order of water in sorbitol becoming similar to that of confined water in MCM41 at 210 K, whereas further cooling to 100 K does not introduce any additional changes in the tetrahedrality. The results obtained allow us to propose, for the first time, that such confinement of water in a sorbitol matrix is the main reason for the lack of ice formation in this system.
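The tetrahedrality parameter q used above (the Errington-Debenedetti order parameter) is computed from the angles among a molecule's four nearest neighbors. The sketch below evaluates it for an ideal tetrahedron and for randomly placed neighbors; the coordinates are synthetic, not taken from the WANS-refined configurations.

```python
import numpy as np

rng = np.random.default_rng(8)

def tetrahedrality(center, neighbors4):
    """Errington-Debenedetti tetrahedral order parameter q for one molecule,
    given the positions of its four nearest (oxygen) neighbors."""
    vecs = neighbors4 - center
    vecs /= np.linalg.norm(vecs, axis=1)[:, None]
    q = 1.0
    for j in range(3):
        for k in range(j + 1, 4):
            cos_psi = float(np.dot(vecs[j], vecs[k]))
            q -= (3.0 / 8.0) * (cos_psi + 1.0 / 3.0) ** 2
    return q

ideal = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
print("ideal tetrahedron: q =", round(tetrahedrality(np.zeros(3), ideal), 3))   # 1.0
print("random neighbors:  q =", round(tetrahedrality(np.zeros(3), rng.standard_normal((4, 3))), 3))
```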
Localization in random bipartite graphs: Numerical and empirical study
NASA Astrophysics Data System (ADS)
Slanina, František
2017-05-01
We investigate adjacency matrices of bipartite graphs with a power-law degree distribution. Motivation for this study is twofold: first, vibrational states in granular matter and jammed sphere packings; second, graphs encoding social interaction, especially electronic commerce. We establish the position of the mobility edge and show that it strongly depends on the power in the degree distribution and on the ratio of the sizes of the two parts of the bipartite graph. At the jamming threshold, where the two parts have the same size, localization vanishes. We found that the multifractal spectrum is nontrivial in the delocalized phase, but still near the mobility edge. We also study an empirical bipartite graph, namely, the Amazon reviewer-item network. We found that in this specific graph the mobility edge disappears, and we draw a conclusion from this fact regarding earlier empirical studies of the Amazon network.
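Localization in such spectra is commonly diagnosed with the inverse participation ratio of the adjacency-matrix eigenvectors, which stays of order 1/N for extended states and of order one for localized states. The sketch below builds a small random bipartite graph with power-law-like degrees on one side (a simplified attachment rule, not a proper configuration model) and prints the IPR of a few top eigenstates; the sizes and exponent are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Random bipartite graph: n_items "items" with power-law-like degrees attach to
# n_rev "reviewers" (simplified random attachment, not a configuration model).
n_rev, n_items, gamma = 600, 300, 2.5
degrees = np.clip(np.round(rng.pareto(gamma - 1.0, n_items) + 1).astype(int), 1, n_rev)
B = np.zeros((n_rev, n_items))
for j, d in enumerate(degrees):
    B[rng.choice(n_rev, size=d, replace=False), j] = 1.0

# Full adjacency matrix of the bipartite graph and its spectrum.
A = np.block([[np.zeros((n_rev, n_rev)), B], [B.T, np.zeros((n_items, n_items))]])
eigvals, eigvecs = np.linalg.eigh(A)

# Inverse participation ratio: ~1/N for extended states, O(1) for localized ones.
ipr = np.sum(eigvecs ** 4, axis=0)
for lam, p in zip(eigvals[-5:], ipr[-5:]):
    print(f"lambda = {lam:6.2f}   IPR = {p:.3f}")
```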
Hyeon-Deuk, Kim; Ando, Koji
2014-05-07
Liquid para-hydrogen (p-H2) is a typical quantum liquid which exhibits strong nuclear quantum effects (NQEs) and thus anomalous static and dynamic properties. We propose a real-time simulation method of wave packet (WP) molecular dynamics (MD) based on non-empirical intra- and inter-molecular interactions of non-spherical hydrogen molecules, and apply it to condensed-phase p-H2. The NQEs, such as WP delocalization and zero-point energy, are taken into account without perturbative expansion of prepared model potential functions but with explicit interactions between nuclear and electron WPs. The developed MD simulation for 100 ps with 1200 hydrogen molecules is realized at feasible computational cost, by which basic experimental properties of p-H2 liquid such as radial distribution functions, self-diffusion coefficients, and shear viscosities are all well reproduced.
NASA Astrophysics Data System (ADS)
de Silva, Piotr; Corminboeuf, Clémence
2015-09-01
We construct an orbital-free non-empirical meta-generalized gradient approximation (GGA) functional, which depends explicitly on density through the density overlap regions indicator [P. de Silva and C. Corminboeuf, J. Chem. Theory Comput. 10, 3745 (2014)]. The functional does not depend on either the kinetic energy density or the density Laplacian; therefore, it opens a new class of meta-GGA functionals. By construction, our meta-GGA yields exact exchange and correlation energy for the hydrogen atom and recovers the second order gradient expansion for exchange in the slowly varying limit. We show that for molecular systems, overall performance is better than non-empirical GGAs. For atomization energies, performance is on par with revTPSS, without any dependence on Kohn-Sham orbitals.
Koopmeiners, Joseph S.; Feng, Ziding
2013-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313
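Empirical versions of these three curves follow directly from thresholding the biomarker in the case and control samples; under case-control sampling, PPV and NPV additionally require an assumed population prevalence. The sketch below uses synthetic normal biomarker values and an assumed prevalence of 10% purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

# Case-control biomarker sample (illustrative normal shift alternative) and an
# assumed population prevalence, needed for PPV/NPV under case-control sampling.
cases = rng.normal(1.0, 1.0, 400)
controls = rng.normal(0.0, 1.0, 600)
prevalence = 0.10

thresholds = np.quantile(np.concatenate([cases, controls]), np.linspace(0.01, 0.99, 99))
tpr = np.array([(cases >= c).mean() for c in thresholds])      # sensitivity
fpr = np.array([(controls >= c).mean() for c in thresholds])   # 1 - specificity

ppv = prevalence * tpr / (prevalence * tpr + (1 - prevalence) * fpr)
npv = ((1 - prevalence) * (1 - fpr)
       / ((1 - prevalence) * (1 - fpr) + prevalence * (1 - tpr)))

# Empirical AUC from the (fpr, tpr) points, sorted to ascending fpr.
fpr_asc = np.r_[1.0, fpr, 0.0][::-1]
tpr_asc = np.r_[1.0, tpr, 0.0][::-1]
auc = np.sum(np.diff(fpr_asc) * (tpr_asc[:-1] + tpr_asc[1:]) / 2.0)
print(f"empirical AUC = {auc:.3f}, PPV at the median threshold = {ppv[49]:.3f}")
```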
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Distributed Leadership in Practice: Evidence, Misconceptions and Possibilities
ERIC Educational Resources Information Center
Harris, Alma; DeFlaminis, John
2016-01-01
This article takes a contemporary look at distributed leadership in practice by drawing upon empirical evidence from a large-scale project in the USA. Initially, it considers the existing knowledge base on distributed leadership and questions some of the assertions and assumptions in recent accounts of the literature. The article also addresses…
An Investigation of the Sampling Distribution of the Congruence Coefficient.
ERIC Educational Resources Information Center
Broadbooks, Wendy J.; Elmore, Patricia B.
This study developed and investigated an empirical sampling distribution of the congruence coefficient. The effects of sample size, number of variables, and population value of the congruence coefficient on the sampling distribution of the congruence coefficient were examined. Sample data were generated on the basis of the common factor model and…
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Examined the sampling distributions of equating coefficients produced by the characteristic curve method for tests using graded and nominal response scoring using simulated data. For both models and across all three equating situations, the sampling distributions were generally bell-shaped and peaked, and occasionally had a small degree of…
Some statistical investigations on the nature and dynamics of electricity prices
NASA Astrophysics Data System (ADS)
Bottazzi, G.; Sapio, S.; Secchi, A.
2005-09-01
This work analyzes the log-returns of daily electricity prices from the NordPool day-ahead market. We study both the unconditional growth rates distribution and the distribution of residual shocks obtained with a non-parametric filtering procedure based on the Cholesky factor algorithm. We show that, although the Subbotin family of distributions is able to describe the empirical observations in both cases, the Subbotin fits obtained for the unconditional growth rates and for the residual shocks reveal significant differences. Indeed, the sequence of log-returns can be described as the outcome of an aggregation of Laplace-distributed shocks with time-dependent volatility. We find that the standard deviation of shocks scales as a power law of the initial price level, with scaling exponent around -1. Moreover, the analysis of the empirical density of shocks, conditional on the price level, shows a strong dependence of the Subbotin fit on the price level. We conclude that the unconditional growth rates distribution is the superposition of shock distributions whose volatility and fat-tailedness decrease with the price level.
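The Subbotin (generalized normal, or exponential power) family is available in SciPy as gennorm, so the kind of fit described above reduces to a one-line maximum-likelihood call. The synthetic returns below, Laplace shocks whose scale decays with a synthetic price level, are only meant to mimic the aggregation mechanism qualitatively.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Synthetic "log-returns": Laplace shocks whose scale decays roughly as a power
# of a synthetic price level, mimicking the aggregation mechanism qualitatively.
prices = rng.lognormal(3.0, 0.5, 5000)
log_returns = rng.laplace(0.0, 1.0, prices.size) / prices

# Maximum-likelihood Subbotin (generalized normal) fit to the pooled returns.
beta, loc, scale = stats.gennorm.fit(log_returns)
print(f"Subbotin shape b = {beta:.2f}   (b = 2: Gaussian, b = 1: Laplace, b < 1: fatter tails)")
```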
A Systematic Analysis and Synthesis of the Empirical MOOC Literature Published in 2013-2015
ERIC Educational Resources Information Center
Veletsianos, George; Shepherdson, Peter
2016-01-01
A deluge of empirical research became available on MOOCs in 2013-2015 and this research is available in disparate sources. This paper addresses a number of gaps in the scholarly understanding of MOOCs and presents a comprehensive picture of the literature by examining the geographic distribution, publication outlets, citations, data collection and…
ERIC Educational Resources Information Center
Brunk-Chavez, Beth; Pigg, Stacey; Moore, Jessie; Rosinski, Paula; Grabill, Jeffrey T.
2018-01-01
To speak to diverse audiences about how people learn to write and how writing works inside and outside the academy, we must conduct research across geographical, institutional, and cultural contexts as well as research that enables comparison when appropriate. Large-scale empirical research is useful for both of these moves; however, we must…
Psychosocial functioning in the context of diagnosis: assessment and theoretical issues.
Ro, Eunyoe; Clark, Lee Anna
2009-09-01
Psychosocial functioning is an important focus of attention in the revision of the Diagnostic and Statistical Manual of Mental Disorders. Researchers and clinicians are converging upon the opinion that psychometrically strong, comprehensive assessment of individuals' functioning is needed to characterize disorder fully. Also shared is the realization that existing theory and research in this domain have critical shortcomings. The authors urge that the field reexamine the empirical evidence and address theoretical issues to guide future development of the construct and its measurement. The authors first discuss several theoretical issues relevant to the conceptualization and assessment of functioning: (a) definitions of functioning, (b) the role of functioning in defining disorder, and (c) understanding functioning within environmental contexts. The authors then present data regarding empirical domains of psychosocial functioning and their interrelations. Self-reported data on multiple domains of psychosocial functioning were collected from 429 participants. Factor-analytic results (promax rotation) suggest a 4-factor structure of psychosocial functioning: Well-Being, Basic Functioning, Self-Mastery, and Interpersonal and Social Relationships. Finally, the authors propose an integration of theory and empirical findings, which they believe will better incorporate psychosocial functioning into future diagnostic systems.
Functional Measurement in the Field of Empirical Bioethics
ERIC Educational Resources Information Center
Mullet, Etienne; Sorum, Paul C.; Teysseire, Nathalie; Nann, Stephanie; Martinez, Guadalupe Elizabeth Morales; Ahmed, Ramadan; Kamble, Shanmukh; Olivari, Cecilia; Sastre, Maria Teresa Munoz
2012-01-01
We present, in a synthetic way, some of the main findings from five studies that were conducted in the field of empirical bioethics, using the Functional Measurement framework. These studies were about (a) the rationing of rare treatments, (b) adolescents' abortions, (c) end-of-life decision-making regarding damaged neonates, (d) end-of-life…
Implicit and Explicit Preference Structures in Models of Labor Supply.
ERIC Educational Resources Information Center
Dickinson, Jonathan
The study of labor supply is directed to a theoretical methodology under which the choice of the general functional form of the income-leisure preference structure may be regarded as an empirical question. The author has reviewed the common functional forms employed in empirical labor supply models and has characterized the inherent preference…
Empirical Studies of Interactions of Semantic Roles: The Agent and Patient in Mandarin Chinese
ERIC Educational Resources Information Center
Yue, Kun
2010-01-01
This dissertation investigates the interaction between form and function in Mandarin Chinese by empirically examining the interactions of core semantic roles (Agent and Patient) and the syntactic representation of those interactions in semantically transitive events. First, I demonstrate that syntactic structures and functions are intertwined with…
The evolution of cooperation on geographical networks
NASA Astrophysics Data System (ADS)
Li, Yixiao; Wang, Yi; Sheng, Jichuan
2017-11-01
We study the evolutionary public goods game on geographical networks, i.e., complex networks located on a geographical plane. The geographical setting enters in two ways: first, the geographically induced network structure influences the overall evolutionary dynamics; second, the geographical length of an edge influences the cost incurred when the two players at its ends interact. For the latter effect, we design a new cost function for cooperators, which simply assumes that the longer the distance between two players, the higher the cost the cooperator(s) among them must pay. In this study, network substrates are generated by a previous spatial network model with a cost-benefit parameter controlling the network topology. Our simulations show that the greatest promotion of cooperation is achieved in the intermediate regime of the parameter, in which empirical estimates for various railway networks fall. Further, we investigate how the distribution of edges' geographical costs influences the evolutionary dynamics and consider three patterns of the distribution: an approximately equal distribution, a diverse distribution, and a polarized distribution. For normal geographical networks generated using intermediate values of the cost-benefit parameter, a diverse distribution hinders the evolution of cooperation, whereas a polarized distribution lowers the threshold value of the amplification factor for cooperation in the public goods game. These results are helpful for understanding the evolution of cooperation on real-world geographical networks.
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It comprises the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied by a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
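The counting interpretation of the multivariate Empirical Distribution Function makes the Probability of Success straightforward to estimate from Monte Carlo samples: it is the fraction of outcomes that satisfy every criterion target simultaneously. The criteria, distributions, and target values in the sketch below are hypothetical illustrations, not those of the thesis examples.

```python
import numpy as np

rng = np.random.default_rng(12)

# Probability of Success as the multivariate empirical distribution function
# evaluated on the goal region: the fraction of Monte Carlo outcomes meeting
# every criterion target at once. Criteria and targets are hypothetical.
n = 50_000
range_nm = rng.normal(5200.0, 300.0, n)            # goal: at least 5000 nm
cost_musd = rng.lognormal(np.log(120.0), 0.15, n)  # goal: at most 130 M$
noise_db = rng.normal(102.0, 3.0, n)               # goal: at most 105 dB

success = (range_nm >= 5000.0) & (cost_musd <= 130.0) & (noise_db <= 105.0)
print(f"Probability of Success = {success.mean():.3f}")
```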
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R.; Taylor, Jeremy F.; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal. PMID:27583971
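The locus-averaged Shannon entropy term of the objective function can be illustrated with simulated allele frequencies: each locus contributes the binary entropy of its allele frequency, and a chip score is the average over selected loci. The sketch below only shows this entropy term with a greedy pick of the most informative loci; the actual MOLO objective also rewards non-gap map length, spacing uniformity, and obligatory SNPs.

```python
import numpy as np

rng = np.random.default_rng(13)

# Locus-averaged Shannon entropy: each locus contributes the binary entropy of
# its allele frequency; a chip's score is the mean over its selected loci.
# The frequencies are simulated placeholders, and only the entropy term of the
# MOLO objective is shown here.
freq = rng.uniform(0.01, 0.99, 50_000)
p = np.clip(freq, 1e-12, 1.0 - 1e-12)
locus_entropy = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

chip = np.argsort(locus_entropy)[-6000:]           # greedy pick of 6,000 loci
print("mean locus entropy on the chip:", locus_entropy[chip].mean().round(3), "bits")
```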
Inferring the photometric and size evolution of galaxies from image simulations. I. Method
NASA Astrophysics Data System (ADS)
Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien
2017-09-01
Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.
Empirical ethics, context-sensitivity, and contextualism.
Musschenga, Albert W
2005-10-01
In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.
Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin
2017-01-01
In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF2) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulation and real signals experiment demonstrate the validity and effectiveness of the new approach. PMID:28926953
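A stripped-down version of the feature-plus-classifier stage can be sketched with standard tools: extract the fundamental component of each phase voltage, sum the squared samples over one post-fault period as the local energy, and feed the three-element feature vector to an SVM. In the sketch below a Butterworth band-pass filter stands in for the EWT mode extraction, and the sampling rate, power frequency, and two toy fault classes are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.svm import SVC

rng = np.random.default_rng(14)
fs, f0 = 3200, 50                    # assumed sampling rate (Hz) and power frequency

def local_energy_features(three_phase_voltages):
    """Local energy of the fundamental component over one post-fault period,
    per phase. A Butterworth band-pass stands in for the EWT mode extraction."""
    sos = butter(4, [0.6 * f0 / (fs / 2), 1.4 * f0 / (fs / 2)], btype="band", output="sos")
    one_period = fs // f0
    return np.array([np.sum(sosfiltfilt(sos, v)[:one_period] ** 2)
                     for v in three_phase_voltages])

def make_sample(sags):
    """Toy three-phase voltages with per-phase sags and a little noise."""
    t = np.arange(4 * fs // f0) / fs
    return [s * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size) for s in sags]

# Two toy fault classes (which phases sag) with 30 noisy samples each.
classes = ([0.2, 1.0, 1.0], [1.0, 0.2, 0.2])
X = np.array([local_energy_features(make_sample(sags)) for sags in classes for _ in range(30)])
y = np.repeat([0, 1], 30)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```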
ERIC Educational Resources Information Center
Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan
2008-01-01
Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…
Rafal Podlaski; Francis .A. Roesch
2013-01-01
The goals of this study are (1) to analyse the accuracy of the approximation of empirical distributions of diameter at breast height (dbh) using two-component mixtures of either the Weibull distribution or the gamma distribution in two-cohort stands, and (2) to discuss the procedure of choosing goodness-of-fit tests. The study plots were...
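Fitting a two-component Weibull mixture of the kind discussed above can be done by direct numerical maximization of the mixture log-likelihood. The dbh sample, starting values, and optimizer settings in the sketch below are illustrative; a goodness-of-fit test (e.g. Kolmogorov-Smirnov on the fitted mixture CDF) would follow the same pattern as in the cited procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Synthetic dbh sample (cm) from a two-cohort stand: a mixture of two Weibulls.
dbh = np.concatenate([weibull_min.rvs(2.0, scale=12.0, size=300, random_state=1),
                      weibull_min.rvs(4.0, scale=35.0, size=200, random_state=2)])

def neg_loglik(params):
    w, k1, s1, k2, s2 = params
    if not (0.0 < w < 1.0) or min(k1, s1, k2, s2) <= 0.0:
        return 1e12                                   # keep the search in the valid region
    pdf = (w * weibull_min.pdf(dbh, k1, scale=s1)
           + (1.0 - w) * weibull_min.pdf(dbh, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(neg_loglik, x0=[0.5, 1.5, 10.0, 3.0, 30.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
w, k1, s1, k2, s2 = res.x
print(f"weight = {w:.2f}, cohort 1: (k={k1:.2f}, scale={s1:.1f}), cohort 2: (k={k2:.2f}, scale={s2:.1f})")
```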
Steve P. Verrill; Frank C. Owens; David E. Kretschmann; Rubin Shmulsky
2017-01-01
It is common practice to assume that a two-parameter Weibull probability distribution is suitable for modeling lumber properties. Verrill and co-workers demonstrated theoretically and empirically that the modulus of rupture (MOR) distribution of visually graded or machine stress rated (MSR) lumber is not distributed as a Weibull. Instead, the tails of the MOR...
A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.
Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique
2018-01-01
This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highways is a topic of interest in transportation safety because of the higher driving speeds and the resulting severity levels. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can closely mimic a large family of distributions, including normal distributions, log-gamma distributions, and bimodal and trimodal distributions. Empirical estimation results show that the flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When the estimated coefficients of the empirical models are compared, the SNP and NB models are found to have substantially different coefficients for the dummy variable indicating lane width. The SNP model, with its better statistical performance, suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.
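One common way to parameterize such a flexible heterogeneity distribution is the Gallant-Nychka form, a squared polynomial multiplying a standard normal density. Whether the paper uses exactly this form is not stated in the abstract, so the sketch below is only an assumed illustration of how an SNP-style density can become skewed or bimodal, unlike the log-gamma heterogeneity implicit in the NB model.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def snp_density(eps, coefs):
    """Gallant-Nychka-style SNP density: a squared polynomial times the
    standard normal density, renormalized to integrate to one."""
    unnorm = lambda e: np.polynomial.polynomial.polyval(e, coefs) ** 2 * norm.pdf(e)
    Z, _ = quad(unnorm, -np.inf, np.inf)
    return np.polynomial.polynomial.polyval(eps, coefs) ** 2 * norm.pdf(eps) / Z

# Illustrative coefficients (assumed, not estimated): the cubic term lets the
# density become skewed or even bimodal.
coefs = [1.0, 0.0, -0.8, 0.4]
grid = np.linspace(-4, 4, 201)
dens = snp_density(grid, coefs)
print(np.trapz(dens, grid))  # ~1.0, i.e. a proper density
```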
NASA Astrophysics Data System (ADS)
Wen, Yi-Ying
2018-02-01
The 2014 ML 5.9 Fanglin earthquake occurred at the northern end of the aftershock distribution of the 2013 ML 6.4 Ruisui event and caused strong ground shaking and some damage in the northern part of the Longitudinal Valley. We carried out strong-motion simulation of the 2014 Fanglin event in the broadband frequency range (0.4-10 Hz) using the empirical Green's function method and then integrated the source models to investigate the source characteristics of the 2013 Ruisui and 2014 Fanglin events. The results show that the strong motion generation area of the 2013 Ruisui event is smaller, whereas that of the 2014 Fanglin event is comparable with the empirical estimation for inland crustal earthquakes, which indicates different faulting behaviors. Furthermore, the localized high-PGV patch might be caused by radiated energy amplified by the local low-velocity structure in the northern Longitudinal Valley. Further study is required to build up knowledge of the potential seismic hazard related to moderate-to-large events in the various seismogenic areas of Taiwan.
Law, Y K; Hassanali, A A
2018-03-14
In this work, we examine the importance of nuclear quantum effects in capturing the line broadening and vibronic structure of optical spectra. We determine the absorption spectra of three aromatic molecules, indole, pyridine, and benzene, using time-dependent density functional theory with several molecular dynamics sampling protocols: force-field-based empirical potentials, ab initio simulations, and finally path integrals for the inclusion of nuclear quantum effects. We show that the absorption spectra of all these chromophores are similarly broadened in the presence of nuclear quantum effects, regardless of the presence of hydrogen bond donor or acceptor groups. We also show that simulations incorporating nuclear quantum effects are able to reproduce the heterogeneous broadening of the absorption spectra even with empirical force fields. The spectral broadening associated with nuclear quantum effects can be accounted for by the broadened distribution of chromophore size, as revealed by a particle-in-a-box model. We also highlight the role that nuclear quantum effects play in the underlying electronic structure of aromatic molecules, as probed by various electrostatic properties.
NASA Astrophysics Data System (ADS)
Law, Y. K.; Hassanali, A. A.
2018-03-01
In this work, we examine the importance of nuclear quantum effects in capturing the line broadening and vibronic structure of optical spectra. We determine the absorption spectra of three aromatic molecules, indole, pyridine, and benzene, using time-dependent density functional theory with several molecular dynamics sampling protocols: force-field-based empirical potentials, ab initio simulations, and finally path integrals for the inclusion of nuclear quantum effects. We show that the absorption spectra of all these chromophores are similarly broadened in the presence of nuclear quantum effects, regardless of the presence of hydrogen bond donor or acceptor groups. We also show that simulations incorporating nuclear quantum effects are able to reproduce the heterogeneous broadening of the absorption spectra even with empirical force fields. The spectral broadening associated with nuclear quantum effects can be accounted for by the broadened distribution of chromophore size, as revealed by a particle-in-a-box model. We also highlight the role that nuclear quantum effects play in the underlying electronic structure of aromatic molecules, as probed by various electrostatic properties.
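The particle-in-a-box argument can be made concrete with a toy calculation: the n to n+1 transition energy of a one-dimensional box scales as 1/L², so a broader distribution of effective chromophore length L broadens the transition energies. The box length, its spread, and the single-electron picture below are illustrative assumptions, not values taken from the simulations.

```python
import numpy as np

# Constants (SI)
h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # J per eV

def box_transition_energy(L, n=1):
    """n -> n+1 gap for a particle in a 1D box of length L:
    E_n = n^2 h^2 / (8 m L^2), so the gap is (2n + 1) h^2 / (8 m L^2)."""
    return (2 * n + 1) * h**2 / (8 * m_e * L**2) / eV

rng = np.random.default_rng(0)
# Assumed numbers: a ~0.5 nm "box" whose effective length fluctuates by a few
# percent, mimicking the broader chromophore-size distribution produced by
# nuclear quantum effects.
L_narrow = rng.normal(5.0e-10, 0.05e-10, 100000)
L_broad  = rng.normal(5.0e-10, 0.15e-10, 100000)

for label, L in [("narrow (classical-like)", L_narrow),
                 ("broad (NQE-like)", L_broad)]:
    E = box_transition_energy(L)
    print(f"{label:24s} mean gap = {E.mean():.2f} eV, spread = {E.std():.2f} eV")
```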
NASA Astrophysics Data System (ADS)
Vanini, Seyed Ali Sadough; Abolghasemzadeh, Mohammad; Assadi, Abbas
2013-07-01
Functionally graded steels with graded ferritic and austenitic regions, including bainite and martensite intermediate layers, produced by electroslag remelting have attracted much attention in recent years. In this article, an empirical model based on the Zener-Hollomon (Z-H) constitutive equation with generalized material constants is presented to investigate the effects of temperature and strain rate on the hot working behavior of functionally graded steels. Next, a theoretical model, generalized by strain compensation, is developed to estimate the flow stress of functionally graded steels under hot compression, based on the phase mixture rule and boundary layer characteristics. The model is used for different strains and grading configurations. Specifically, the results for αβγMγ steels from the empirical and theoretical models agreed well with experimental results reported in other references, within acceptable error.
Vehicular headways on signalized intersections: theory, models, and reality
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Šleis, Jiří
2015-01-01
We discuss statistical properties of vehicular headways measured at signalized crossroads. On the basis of mathematical approaches, we formulate theoretical and empirically inspired criteria for the acceptability of theoretical headway distributions. Subsequently, the various families of statistical distributions commonly used to fit real-road headway statistics are confronted with these criteria and with original empirical time clearances measured between neighboring vehicles leaving signal-controlled crossroads after a green signal appears. Using three different numerical schemes, we demonstrate that the arrangement of vehicles at an intersection is a consequence of the general stochastic nature of queueing systems, rather than a consequence of traffic rules, driver estimation processes, or decision-making procedures.
Exact nonparametric confidence bands for the survivor function.
Matthews, David
2013-10-12
A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
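For readers unfamiliar with the starting point of the construction, a minimal Kaplan-Meier estimator for right-censored data is sketched below; the exact bands themselves additionally require the Noe recursions and root-finding described above, which are not reproduced here. The example data are synthetic.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function S(t) from right-censored
    data: `times` are observed times, `events` is 1 for an observed failure
    and 0 for a censored observation."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]

    distinct = np.unique(times[events == 1])   # distinct failure times
    surv, s = [], 1.0
    for t in distinct:
        at_risk = np.sum(times >= t)           # number still at risk just before t
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

# Small synthetic example (values assumed for illustration).
times  = [2, 3, 3, 5, 7, 8, 11, 11, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0, 1]
for t, s in kaplan_meier(times, events):
    print(f"S({t:g}) = {s:.3f}")
```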
New approach to the retrieval of AOD and its uncertainty from MISR observations over dark water
NASA Astrophysics Data System (ADS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Bull, Michael A.; Seidel, Felix C.
2018-01-01
A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture and then used a combination of these values to compute the final, best-estimate AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of (a) the absolute values of the cost functions for each aerosol mixture, (b) the widths of the cost function distributions as a function of AOD, and (c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on empirical thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new aerosol retrieval confidence index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI ≥ 0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
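The following toy sketch illustrates the general idea of using the full ensemble of per-mixture cost-function curves, rather than only their minima, to obtain a best-estimate AOD, an uncertainty, and a crude confidence index. The exponential weighting, the confidence measure, and the synthetic cost curves are assumptions for illustration and are not the operational MISR V23 algorithm.

```python
import numpy as np

def aod_from_cost_curves(aod_grid, cost_curves):
    """Toy retrieval: turn per-mixture cost-function curves chi2(AOD) into a
    single AOD estimate and uncertainty by weighting every (mixture, AOD)
    point by exp(-chi2/2) and taking weighted moments over the ensemble.
    This mirrors the idea of using the whole cost-function distribution
    rather than only per-mixture minima; it is not the operational code."""
    cost = np.asarray(cost_curves)              # shape (n_mixtures, n_aod)
    w = np.exp(-0.5 * cost)
    w_sum = w.sum()
    aod_best = np.sum(w * aod_grid) / w_sum     # weighted mean over all curves
    aod_var = np.sum(w * (aod_grid - aod_best) ** 2) / w_sum
    confidence = w_sum / w.size                 # crude stand-in for an ARCI-like index
    return aod_best, np.sqrt(aod_var), confidence

# Illustrative synthetic cost curves for three aerosol mixtures.
aod_grid = np.linspace(0.0, 3.0, 61)
cost_curves = [20 * (aod_grid - 0.35) ** 2 + 1.0,
               25 * (aod_grid - 0.40) ** 2 + 2.0,
               15 * (aod_grid - 0.30) ** 2 + 0.5]
print(aod_from_cost_curves(aod_grid, cost_curves))
```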
Volume and Mass Estimation of Three-Phase High Power Transformers for Space Applications
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.
2004-01-01
Spacecraft historically have had sub-1 kWe electrical requirements for GN&C, science, and communications: Galileo at 600 We and Cassini at 900 We, for example. Because most missions have had power requirements of the same order of magnitude, the Power Distribution Systems (PDS) use existing, space-qualified technology and are DC. As science payload and mission duration requirements increase, however, the required electrical power increases. This requires a change from passive energy conversion (solar arrays and batteries) to dynamic conversion (alternator, solar dynamic, etc.), because dynamic conversion has higher thermal and conversion efficiencies, has higher power densities, and scales more readily to higher power levels. Furthermore, increased power requirements and physical distribution lengths are best served with high-voltage, multi-phase AC to maintain distribution efficiency and minimize voltage drops. The generated AC voltage must be stepped up (or down) to interface with various subsystems or electrical hardware. Part of the trade-space design for AC distribution systems is volume and mass estimation of high-power transformers. The volume and mass are functions of the power rating, operating frequency, the ambient and allowable temperature rise, the types and amount of heat transfer available, the core material and shape, the required flux density in the core, the maximum current density, etc. McLyman has tabulated the performance of a number of transformer cores and derived a "cookbook" methodology to determine the volume of transformers, whereas Schwarze derived an empirical method to estimate the mass of single-phase transformers. Based on the work of McLyman and Schwarze, the intent herein is to derive an empirical solution for the volume and mass estimation of three-phase, laminated EI-core power transformers with radiated and conducted heat transfer mechanisms available. Estimation of the mounting hardware, connectors, etc. is not included.
Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.
2018-01-01
The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over‐harvesting and habitat loss. Little information exists on current marten distribution and how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and the lack of historical distribution records are also problematic for region‐wide conservation planning. Expert opinion can provide a source of information for estimating species–landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts, including wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We then fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model-averaged based on AIC weights. The final model included effects of five covariates at the 5‐km scale: percent canopy cover (positive), percent spruce‐fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. The results demonstrate the effectiveness of expert‐opinion data for modeling occupancy of rare species and provide tools for planning marten recovery in the northeastern United States.
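Model averaging by AIC weights, as applied to the three top-supported models above, follows the standard Akaike-weight formula; the sketch below shows the computation with assumed AIC values.

```python
import numpy as np

def aic_weights(aic_values):
    """Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = AIC_i - min(AIC). The weights can then be used to average
    coefficients across the top-supported models."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Illustrative AIC values for three competing occupancy models (assumed numbers).
print(aic_weights([210.3, 211.1, 212.0]))  # weights sum to 1; the best model gets the most weight
```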
Juang, K W; Lee, D Y; Ellsworth, T R
2001-01-01
The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and geostatistical interpolation is hindered. In this study, rank-order geostatistics with a standardized rank transformation was used for spatial interpolation of pollutants with highly skewed distributions in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-score transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness from a contaminated site in Taiwan was used for illustration. The spatial dependence of the ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle-point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of the Cd concentration being higher than a given cutoff value can also be estimated using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations exceeding the cutoff value can be used together to delineate hazardous areas of contaminated soils.
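A minimal sketch of the rank transformation and the middle-point back-transformation is given below; the kriging step in standardized-rank space is omitted, and the exact rank standardization and the synthetic concentrations are assumptions rather than the paper's specification.

```python
import numpy as np

def standardized_ranks(x):
    """Forward transform: replace each observation by its standardized rank.
    Ranks are scaled by (n + 1) so they fall strictly inside (0, 1); the exact
    standardization used in the paper may differ (assumption)."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort().argsort() + 1          # 1..n (ties ignored for simplicity)
    return ranks / (len(x) + 1.0)

def back_transform(r_est, x):
    """Back-transform an estimated standardized rank to the data scale with a
    middle-point rule on the empirical distribution function: a rank falling
    between two adjacent order statistics maps to the midpoint of that
    concentration interval."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    grid = np.arange(1, n + 1) / (n + 1.0)     # standardized ranks of the order statistics
    r = np.clip(r_est, grid[0], grid[-1])
    i = np.clip(np.searchsorted(grid, r, side="right") - 1, 0, n - 2)
    return 0.5 * (xs[i] + xs[i + 1])

# Illustrative, highly skewed "Cd concentrations" (synthetic values).
cd = np.array([0.3, 0.4, 0.5, 0.6, 0.9, 1.2, 2.5, 6.0, 18.0, 55.0])
print(np.round(standardized_ranks(cd), 3))
print(back_transform(0.72, cd))   # a kriged rank of 0.72 mapped back to concentration
```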
Asquith, William H.; Roussel, Meghan C.; Thompson, David B.; Cleveland, Theodore G.; Fang, Xing
2005-01-01
Hyetographs and storm-depth distributions are important elements of hydraulic design by Texas Department of Transportation engineers. Design hyetographs are used in conjunction with unit hydrographs to obtain peak discharge and hydrograph shape for hydraulic design. Storm-depth distributions can be used to assess the probability of a total rainfall depth for a storm. A research project was conducted from 2000 to 2004 to (1) determine whether existing Natural Resources Conservation Service (NRCS) dimensionless hyetographs are representative of storms in Texas, (2) provide new procedures for dimensionless hyetograph estimation if the NRCS hyetographs are not representative, and (3) provide a procedure to estimate the distribution of storm depth for Texas. This report summarizes the research activities and results of the project. The report documents several functional models of dimensionless hyetographs and provides curves and tabulated ordinates of empirical (nonfunctional) dimensionless hyetographs for a database of runoff-producing storms in Texas. The dimensionless hyetographs are compared to the NRCS dimensionless hyetographs. The distribution of storm depth is documented for seven values of minimum interevent time through dimensionless frequency curves and tables of mean storm depth for each county in Texas. Conclusions regarding application of the research results are included in the report.
Distributional changes in rainfall and river flow in Sarawak, Malaysia
NASA Astrophysics Data System (ADS)
Sa'adi, Zulfaqar; Shahid, Shamsuddin; Ismail, Tarmizi; Chung, Eun-Sung; Wang, Xiao-Jun
2017-11-01
Climate change may not change mean rainfall, but it can change rainfall variability and extremes. It is therefore necessary to explore the possible distributional changes of rainfall characteristics over time. The objective of the present study is to assess the distributional changes in annual and northeast monsoon rainfall (November-January) and river flow in Sarawak, where small changes in rainfall or river flow variability/distribution may have severe implications for ecology and agriculture. A quantile regression-based approach was used to assess changes in the scale and location of the empirical probability density function over the period 1980-2014 at 31 observational stations. The results indicate that diverse variation patterns exist at all stations for annual rainfall, with increasing quantile trends mainly at the lower quantiles, and at the higher quantiles for the months of December and January. The significant increase in annual rainfall is found mostly in the north and central-coastal region, and in monsoon-month rainfall in the interior and north of Sarawak. Trends in river flow data show that changes in the rainfall distribution have affected the higher quantiles of river flow in monsoon months at some basins, implying more flooding. The study reveals that quantile trends can provide more information about rainfall change, which may be useful for climate change mitigation and adaptation planning.
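A quantile-regression trend of the kind described can be estimated, for example, with statsmodels; the sketch below fits lower, median, and upper quantile trends to a synthetic annual rainfall series. The data and parameters are assumed for illustration, not the Sarawak records.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic annual rainfall series in mm (values assumed for illustration only).
rng = np.random.default_rng(1)
years = np.arange(1980, 2015)
rain = 3200 + 2.0 * (years - 1980) + rng.gamma(shape=8, scale=60, size=years.size)

X = sm.add_constant(years - years.min())       # intercept + time index
for q in (0.10, 0.50, 0.90):
    fit = sm.QuantReg(rain, X).fit(q=q)
    slope = fit.params[1]                      # quantile-specific trend
    print(f"q = {q:.2f}: trend = {slope:+.2f} mm per year")
```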
Pressure balance inconsistency exhibited in a statistical model of magnetospheric plasma
NASA Astrophysics Data System (ADS)
Garner, T. W.; Wolf, R. A.; Spiro, R. W.; Thomsen, M. F.; Korth, H.
2003-08-01
While quantitative theories of plasma flow from the magnetotail to the inner magnetosphere typically assume adiabatic convection, it has long been understood that these convection models tend to overestimate the plasma pressure in the inner magnetosphere. This phenomenon is called the pressure crisis or the pressure balance inconsistency. In order to analyze it in a new and more detailed manner we utilize an empirical model of the proton and electron distribution functions in the near-Earth plasma sheet (-50 RE < X < -10 RE), which uses the [1989] magnetic field model and a plasma sheet representation based upon several previously published statistical studies. We compare our results to a statistically derived particle distribution function at geosynchronous orbit. In this analysis the particle distribution function is characterized by the isotropic energy invariant λ = EV^(2/3), where E is the particle's kinetic energy and V is the magnetic flux tube volume. The energy invariant is conserved in guiding center drift under the assumption of strong, elastic pitch angle scattering. If, in addition, loss is negligible, the phase space density f(λ) is also conserved along the same path. The statistical model indicates that f(λ, X) is approximately independent of X for X ≤ -35 RE but decreases with increasing X for X ≥ -35 RE. The tailward gradient of f(λ, X) might be attributed to gradient/curvature drift for large isotropic energy invariants but not for small invariants. The tailward gradient of the distribution function indicates a violation of the adiabatic drift condition in the plasma sheet. It also confirms the existence of a "number crisis" in addition to the pressure crisis. In addition, plasma sheet pressure gradients, when crossed with the gradient of flux tube volume computed from the [1989] magnetic field model, indicate Region 1 currents on the dawn and dusk sides of the outer plasma sheet.
NASA Astrophysics Data System (ADS)
Buddendorf, B.; Fabris, L.; Malcolm, I.; Lazzaro, G.; Tetzlaff, D.; Botter, G.; Soulsby, C.
2016-12-01
Wild Atlantic salmon populations in Scottish rivers constitute an important economic and recreational resource, as well as being a key component of biodiversity. Salmon have specific habitat requirements at different life stages and their distribution is therefore strongly influenced by a complex suite of biological and physical controls. Stream hydrodynamics have a strong influence on habitat quality and affect the distribution and density of juvenile salmon. Because stream hydrodynamics are directly related to stream flow variability and channel morphology, the effects of hydroclimatic drivers on the spatial and temporal variability of habitat suitability can be assessed. Critical Displacement Velocity (CDV), the velocity above which fish can no longer hold station, is one potential approach for characterising habitat suitability. CDV is obtained using an empirical formula that depends on fish size and stream temperature. By characterising the proportion of a reach below CDV it is possible to assess the suitable area. We demonstrate that a generic analytical approach based on field survey and hydraulic modelling can provide insights into the interactions between flow regime and average suitable area (SA) for juvenile salmon that could be extended to other aquatic species. Analytical functions are used to model the pdf of stream flow, p(q), and the relationship between flow and suitable area, SA(q). Theoretically these functions can assume any form. Here we used a gamma distribution to model p(q) and a gamma-type function to model SA(q). Integrating the product of these functions we obtain an analytical expression for the average SA. Since the parameters of p(q) can be estimated from meteorological and flow measurements, they can be used directly to predict the effect of flow regime on SA. We show the utility of the approach with reference to 6 electrofishing sites in a single river system where long-term (50 years) data on spatially distributed juvenile salmon densities are available.
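In symbols, with p(q) a gamma probability density for stream flow (written here in the standard shape-scale form, which may differ from the authors' exact parameterization) and SA(q) the flow-to-suitable-area relationship, the average suitable area is the integral of their product:

```latex
\overline{SA} \;=\; \int_{0}^{\infty} SA(q)\, p(q)\, \mathrm{d}q,
\qquad
p(q) \;=\; \frac{q^{\,k-1}\, e^{-q/\theta}}{\Gamma(k)\,\theta^{\,k}} .
```

Because the shape k and scale θ of p(q) can be estimated from flow records, the integral links the flow regime directly to the expected suitable area.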
A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis
Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.
2015-01-01
The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
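For comparison, Harrell's c-index referred to above counts concordance only over usable (informative) pairs; a minimal sketch is below, assuming higher marker values indicate higher risk. The data are synthetic, and the Bayesian estimator itself is not reproduced.

```python
import numpy as np

def harrell_c_index(time, event, marker):
    """Harrell's c-index: among usable (informative) pairs, the fraction in
    which the subject with the shorter observed survival time has the higher
    marker value (marker ties counted as 0.5). Assumes higher marker = higher
    risk. The Bayesian estimator in the paper instead uses all pairs via the
    modeled conditional distribution of survival time given the marker."""
    time, event, marker = map(np.asarray, (time, event, marker))
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair usable if subject i is observed to fail before subject j's time
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if marker[i] > marker[j]:
                    concordant += 1.0
                elif marker[i] == marker[j]:
                    concordant += 0.5
    return concordant / usable

# Tiny synthetic example (assumed values).
time   = [5, 8, 3, 12, 7]
event  = [1, 0, 1, 1, 0]
marker = [2.1, 1.0, 3.3, 0.4, 1.8]
print(round(harrell_c_index(time, event, marker), 3))
```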
Self-referenced processing, neurodevelopment and joint attention in autism.
Mundy, Peter; Gwaltney, Mary; Henderson, Heather
2010-09-01
This article describes a parallel and distributed processing model (PDPM) of joint attention, self-referenced processing and autism. According to this model, autism involves early impairments in the capacity for rapid, integrated processing of self-referenced (proprioceptive and interoceptive) and other-referenced (exteroceptive) information. Measures of joint attention have proven useful in research on autism because they are sensitive to the early development of the 'parallel' and integrated processing of self- and other-referenced stimuli. Moreover, joint attention behaviors are a consequence of, but also an organizer of, the functional development of a distal distributed cortical system involving anterior networks, including the prefrontal and insula cortices, as well as posterior neural networks, including the temporal and parietal cortices. Measures of joint attention provide early behavioral indicators of atypical development of this parallel and distributed processing system in autism. In addition, it is proposed that an early, chronic disturbance in the capacity for integrating self- and other-referenced information may have cascading effects on the development of self-awareness in autism. The assumptions, empirical support and future research implications of this model are discussed.
The stationary non-equilibrium plasma of cosmic-ray electrons and positrons
NASA Astrophysics Data System (ADS)
Tomaschitz, Roman
2016-06-01
The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density, high-temperature) plasma owing to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is tested by comparison with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.
NASA Astrophysics Data System (ADS)
Sinaga, A. T.; Wangsaputra, R.
2018-03-01
The development of technology causes the needs for products and services to become increasingly complex, diverse, and fluctuating. This increases the level of inter-company dependency within production chains. To be able to compete, efficiency improvements need to be made collaboratively across the production chain network. One of the efforts to increase efficiency is to harmonize production and distribution activities in the production chain network. This paper describes the harmonization of production and distribution activities by applying a push-pull system and a supply hub in the production chain between two companies. The research methodology begins with conducting empirical and literature studies, formulating research questions, developing mathematical models, conducting trials and analyses, and drawing conclusions. The relationship between the two companies is described in an MINLP mathematical model with the total cost of the production chain as the objective function. The decisions generated by the mathematical model are the production lot size, the delivery lot size, the number of kanbans, the delivery frequency, and the number of understocked and overstocked lots.
Research on Intelligent Synthesis Environment
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Dryer, David; Major, Debra; Fletcher, Tom
2002-01-01
The ultimate goal of this research project is to develop a methodology for the assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments. This review provides the theoretical foundation upon which subsequent empirical work will be based. Our review of the team performance literature has identified the following 12 conceptually distinct team interaction processes as characteristic of effective teams. 1) Mission Analysis; 2) Resource Distribution; 3) Leadership; 4) Timing; 5) Intra-team Feedback; 6) Motivational Functions; 7) Team Orientation; 8) Communication; 9) Coordination; 10) Mutual Performance Monitoring; 11) Back-up Behaviors; and 12) Cooperation. In addition, this review summarizes how team task characteristics (i.e., task type, task complexity, motivation, and temporal changes), team characteristics (i.e., team structure and team knowledge), and individual team member characteristics (i.e., dispositions and teamwork knowledge, skills, and abilities) affect team interaction processes, determine the relevance of these processes, and influence team performance. The costs and benefits of distributed team collaboration are also considered. The review concludes with a brief discussion of the nature of collaborative team engineering tasks.