Study on length distribution of ramie fibers
USDA-ARS?s Scientific Manuscript database
The extra-long length of ramie fibers and the high variation in fiber length has a negative impact on the spinning processes. In order to better study the feature of ramie fiber length, in this research, the probability density function of the mixture model applied in the characterization of cotton...
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
Three statistical models for estimating length of stay.
Selvin, S
1977-01-01
The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program. PMID:914532
Three statistical models for estimating length of stay.
Selvin, S
1977-01-01
The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a ' truncated Gaussian ' or a ' triangular ' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
NASA Technical Reports Server (NTRS)
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. ?? Copyright by the American Fisheries Society 2007.
Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities
NASA Astrophysics Data System (ADS)
Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii., IV.
A great deal of attention has been paid to the study of probability characteristics of electro-magnetic waves scattered by one-dimensional fluctuations of medium dielectric permittivity. However, the problem of a determination of a density of a probability and average intensity of the field inside the stochastically inhomogeneous medium with arbitrary extension of fluc-tuations has not been considered yet. It is the purpose of the present report to find and to analyze the indicated functions for the plane electromagnetic wave scattered by the layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of individual fluctuations as well the interval between them are random quantities. All of indi-cated fluctuation parameters are supposed as independent random values possessing Gaussian distribution. We considered the stationary time cases both small-scale and large-scale rarefied inhomogeneities. Mathematically such problem can be reduced to the solution of integral Fred-holm equation of second kind for Hertz potential (U). Using the decomposition of the field into the series of multiply scattered waves we obtained the expression for a probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U-¡U¿) are equal to zero and the even moments depend on the intensity. It was obtained that the probability density of the field possesses the Gaussian distribution. The average field is small compared with the standard fluctuation of scattered field for all considered cases of inhomogeneities. The value of average intensity of the field is an order of a standard of fluctuations of field intensity and drops with increases the inhomogeneities length in the case of small-scale inhomogeneities. The behavior of average intensity is more complicated in the case of large-scale medium inhomogeneities. The value of average intensity is the oscillating function versus the average fluctuations length if the standard of fluctuations of inhomogeneities length is greater then the wave length. When the standard of fluctuations of medium inhomogeneities extension is smaller then the wave length, the av-erage intensity value weakly depends from the average fluctuations extension. The obtained results may be used for analysis of the electromagnetic wave propagation into the media with the fluctuating parameters caused by such factors as leafs of trees, cumulus, internal gravity waves with a chaotic phase and etc. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).
Systematic Onset of Periodic Patterns in Random Disk Packings
NASA Astrophysics Data System (ADS)
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.
2018-04-01
We report evidence of a surprising systematic onset of periodic patterns in very tall piles of disks deposited randomly between rigid walls. Independently of the pile width, periodic structures are always observed in monodisperse deposits containing up to 1 07 disks. The probability density function of the lengths of disordered transient phases that precede the onset of periodicity displays an approximately exponential tail. These disordered transients may become very large when the channel width grows without bound. For narrow channels, the probability density of finding periodic patterns of a given period displays a series of discrete peaks, which, however, are washed out completely when the channel width grows.
ERIC Educational Resources Information Center
Gratzer, William; Carpenter, James E.
2008-01-01
This article demonstrates an alternative approach to the construction of histograms--one based on the notion of using area to represent relative density in intervals of unequal length. The resulting histograms illustrate the connection between the area of the rectangles associated with particular outcomes and the relative frequency (probability)…
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to $-1$ as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2), on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
Anomalous transport in fluid field with random waiting time depending on the preceding jump length
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Guo-Hua
2016-11-01
Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations for Gaussian distribution and lévy flight with the probability density function of waiting time being quadratic dependent on the preceding jump length are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and probability of a ship rolling under the random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by the numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of parametric and forced excitations. The stochastic energy envelope averaging method was used to solve the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and the heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in the oblique seas.
A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals
Mohamed, Mamdouh S.; Larson, Bennett C.; Tischler, Jonathan Z.; ...
2015-05-18
The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoreticalmore » analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kr ner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.« less
Pan, Jianjun; Cheng, Xiaolin; Sharp, Melissa; ...
2014-10-29
We report that the detailed structural and mechanical properties of a tetraoleoyl cardiolipin (TOCL) bilayer were determined using neutron spin echo (NSE) spectroscopy, small angle neutron and X-ray scattering (SANS and SAXS, respectively), and molecular dynamics (MD) simulations. We used MD simulations to develop a scattering density profile (SDP) model, which was then utilized to jointly refine SANS and SAXS data. In addition to commonly reported lipid bilayer structural parameters, component distributions were obtained, including the volume probability, electron density and neutron scattering length density.
Magnetoreresistance of carbon nanotube-polypyrrole composite yarns
NASA Astrophysics Data System (ADS)
Ghanbari, R.; Ghorbani, S. R.; Arabi, H.; Foroughi, J.
2018-05-01
Three types of samples, carbon nanotube yarn and carbon nanotube-polypyrrole composite yarns had been investigated by measurement of the electrical conductivity as a function of temperature and magnetic field. The conductivity was well explained by 3D Mott variable range hopping (VRH) law at T < 100 K. Both positive and negative magnetoresistance (MR) were observed by increasing magnetic field. The MR data were analyzed based a theoretical model. A quadratic positive and negative MR was observed for three samples. It was found that the localization length decreases with applied magnetic field while the density of states increases. The increasing of the density of states induces increasing the number of available energy states for hopping. Thus the electron hopping probability increases in between sites with the shorter distance that results to small the average hopping length.
Pérez-Pérez, A; Bermúdez De Castro, J M; Arsuaga, J L
1999-04-01
Casts of nonocclusal enamel surfaces of 190 teeth from the Middle Pleistocene site of Sima de los Huesos have been micrographed by scanning electron microscopy. Microscopic analyses of striation density and length by orientation show distinct patterns of intrapopulation variability. Significant differences in the number and length of the striations by orientation are found between maxillary and mandibular teeth. This probably reflects differences in the mechanical forces involved in the process of chewing food. Significant differences are present between isolated and in situ teeth that could be caused by postdepositional processes differentially affecting the isolated teeth. In addition, a distinct and very unusual striation pattern is observed in a sample of teeth that can be explained only by a strong nondietary, most probably postmortem abrasion of the enamel surfaces. These teeth have a very high density of scratches, shorter in length than those found on other teeth, that are not indicative of dietary habits. No known depositional process may account for the presence of such postmortem wear since heavy transportation of materials within the clayish sediments has been discarded for the site. Despite this, a characteristic dietary striation pattern can be observed in most of the teeth analyzed. Most likely the diet of the Homo heidelbergensis hominids from Sima de los Huesos was highly abrasive, probably with a large dependence on hard, poorly processed plant foods, such as roots, stems, and seeds. A highly significant sex-related difference in the striation pattern can also be observed in the teeth analyzed, suggesting a differential consistency in the foods eaten by females and males.
Annular wave packets at Dirac points in graphene and their probability-density oscillation.
Luo, Ji; Valencia, Daniel; Lu, Junqiang
2011-12-14
Wave packets in graphene whose central wave vector is at Dirac points are investigated by numerical calculations. Starting from an initial Gaussian function, these wave packets form into annular peaks that propagate to all directions like ripple-rings on water surface. At the beginning, electronic probability alternates between the central peak and the ripple-rings and transient oscillation occurs at the center. As time increases, the ripple-rings propagate at the fixed Fermi speed, and their widths remain unchanged. The axial symmetry of the energy dispersion leads to the circular symmetry of the wave packets. The fixed speed and widths, however, are attributed to the linearity of the energy dispersion. Interference between states that, respectively, belong to two branches of the energy dispersion leads to multiple ripple-rings and the probability-density oscillation. In a magnetic field, annular wave packets become confined and no longer propagate to infinity. If the initial Gaussian width differs greatly from the magnetic length, expanding and shrinking ripple-rings form and disappear alternatively in a limited spread, and the wave packet resumes the Gaussian form frequently. The probability thus oscillates persistently between the central peak and the ripple-rings. If the initial Gaussian width is close to the magnetic length, the wave packet retains the Gaussian form and its height and width oscillate with a period determined by the first Landau energy. The wave-packet evolution is determined jointly by the initial state and the magnetic field, through the electronic structure of graphene in a magnetic field. © 2011 American Institute of Physics
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture Trigonoceps occipitalis ) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that low detection probabilities of some species means that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
Acoustic trapping of active matter
NASA Astrophysics Data System (ADS)
Takatori, Sho C.; de Dier, Raf; Vermant, Jan; Brady, John F.
2016-03-01
Confinement of living microorganisms and self-propelled particles by an external trap provides a means of analysing the motion and behaviour of active systems. Developing a tweezer with a trapping radius large compared with the swimmers' size and run length has been an experimental challenge, as standard optical traps are too weak. Here we report the novel use of an acoustic tweezer to confine self-propelled particles in two dimensions over distances large compared with the swimmers' run length. We develop a near-harmonic trap to demonstrate the crossover from weak confinement, where the probability density is Boltzmann-like, to strong confinement, where the density is peaked along the perimeter. At high concentrations the swimmers crystallize into a close-packed structure, which subsequently `explodes' as a travelling wave when the tweezer is turned off. The swimmers' confined motion provides a measurement of the swim pressure, a unique mechanical pressure exerted by self-propelled bodies.
Acoustic trapping of active matter
Takatori, Sho C.; De Dier, Raf; Vermant, Jan; Brady, John F.
2016-01-01
Confinement of living microorganisms and self-propelled particles by an external trap provides a means of analysing the motion and behaviour of active systems. Developing a tweezer with a trapping radius large compared with the swimmers' size and run length has been an experimental challenge, as standard optical traps are too weak. Here we report the novel use of an acoustic tweezer to confine self-propelled particles in two dimensions over distances large compared with the swimmers' run length. We develop a near-harmonic trap to demonstrate the crossover from weak confinement, where the probability density is Boltzmann-like, to strong confinement, where the density is peaked along the perimeter. At high concentrations the swimmers crystallize into a close-packed structure, which subsequently ‘explodes' as a travelling wave when the tweezer is turned off. The swimmers' confined motion provides a measurement of the swim pressure, a unique mechanical pressure exerted by self-propelled bodies. PMID:26961816
Sedgwick, James A.; Knopf, Fritz L.
1990-01-01
We examined habitat relationships and nest site characteristics for 6 species of cavity-nesting birds--American kestrel (Falco sparverius), northern flicker (Colaptes auratus), red-headed woodpecker (Melanerpes erythrocephalus), black-capped chickadee (Parus atricapillus), house wren (Troglodytes aedon), and European starling (Sturnus vulgaris)--in a mature plains cottonwood (Populus sargentii) bottomland along the South Platte River in northeastern Colorado in 1985 and 1986. We examined characteristics of cavities, nest trees, and the habitat surrounding nest trees. Density of large trees (>69 cm dbh), total length of dead limbs ≥10 cm diameter (TDLL), and cavity density were the most important habitat variables; dead limb length (DLL), dbh, and species were the most important tree variables; and cavity height, cavity entrance diameter, and substrate condition at the cavity (live vs. dead) were the most important cavity variables in segregating cavity nesters along habitat, tree, and cavity dimensions, respectively. Random sites differed most from cavity-nesting bird sites on the basis of dbh, DLL, limb tree density (trees with ≥1 m dead limbs ≥10 cm diameter), and cavity density. Habitats of red-headed woodpeckers and American kestrels were the most unique, differing most from random sites. Based on current trends in cottonwood demography, densities of cavity-nesting birds will probably decline gradually along the South Platte River, paralleling a decline in DLL, limb tree density, snag density, and the concurrent lack of cottonwood regeneration.
Birds and insects as radar targets - A review
NASA Technical Reports Server (NTRS)
Vaughn, C. R.
1985-01-01
A review of radar cross-section measurements of birds and insects is presented. A brief discussion of some possible theoretical models is also given and comparisons made with the measurements. The comparisons suggest that most targets are, at present, better modeled by a prolate spheroid having a length-to-width ratio between 3 and 10 than by the often used equivalent weight water sphere. In addition, many targets observed with linear horizontal polarization have maximum cross sections much better estimated by a resonant half-wave dipole than by a water sphere. Also considered are birds and insects in the aggregate as a local radar 'clutter' source. Order-of-magnitude estimates are given for many reasonable target number densities. These estimates are then used to predict X-band volume reflectivities. Other topics that are of interest to the radar engineer are discussed, including the doppler bandwidth due to the internal motions of a single bird, the radar cross-section probability densities of single birds and insects, the variability of the functional form of the probability density functions, and the Fourier spectra of single birds and insects.
DENSITY VARIATIONS IN THE NW STAR STREAM OF M31
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, R. G.; Richer, Harvey B.; McConnachie, Alan W., E-mail: carlberg@astro.utoronto.ca, E-mail: richer@astro.ubc.ca, E-mail: alan.mcconnachie@nrc-cnrc.gc.ca
2011-04-20
The Pan Andromeda Archeological Survey (PAndAS) CFHT Megaprime survey of the M31-M33 system has found a star stream which extends about 120 kpc NW from the center of M31. The great length of the stream, and the likelihood that it does not significantly intersect the disk of M31, means that it is unusually well suited for a measurement of stream gaps and clumps along its length as a test for the predicted thousands of dark matter sub-halos. The main result of this paper is that the density of the stream varies between zero and about three times the mean alongmore » its length on scales of 2-20 kpc. The probability that the variations are random fluctuations in the star density is less than 10{sup -5}. As a control sample, we search for density variations at precisely the same location in stars with metallicity higher than the stream [Fe/H] = [0, -0.5] and find no variations above the expected shot noise. The lumpiness of the stream is not compatible with a low mass star stream in a smooth galactic potential, nor is it readily compatible with the disturbance caused by the visible M31 satellite galaxies. The stream's density variations appear to be consistent with the effects of a large population of steep mass function dark matter sub-halos, such as found in LCDM simulations, acting on an approximately 10 Gyr old star stream. The effects of a single set of halo substructure realizations are shown for illustration, reserving a statistical comparison for another study.« less
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.
2013-04-01
During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test area region in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m.2 The number of landslide areas selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and length to width ratio. Landslide intersections with roads were then assessed and indices such as the location, number and size of road blockage recorded. The GRASS-GIS model was performed 1000 times in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages ranges from 6 to 17, resulting in one road blockage every 24-67 km of roads. The average length of road blocked was 33 m. As we progress with model development and more sophisticated network analysis, we believe this semi-stochastic modelling approach will aid civil protection agencies to get a rough idea for the probability of road network potential damage (road block number and extent) as the result of different magnitude landslide triggering event scenarios.
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
CFD modeling using PDF approach for investigating the flame length in rotary kilns
NASA Astrophysics Data System (ADS)
Elattar, H. F.; Specht, E.; Fouda, A.; Bin-Mahfouz, Abdullah S.
2016-12-01
Numerical simulations using computational fluid dynamics (CFD) are performed to investigate the flame length characteristics in rotary kilns using probability density function (PDF) approach. A commercial CFD package (ANSYS-Fluent) is employed for this objective. A 2-D axisymmetric model is applied to study the effect of both operating and geometric parameters of rotary kiln on the characteristics of the flame length. Three types of gaseous fuel are used in the present work; methane (CH4), carbon monoxide (CO) and biogas (50 % CH4 + 50 % CO2). Preliminary comparison study of 2-D modeling outputs of free jet flames with available experimental data is carried out to choose and validate the proper turbulence model for the present numerical simulations. The results showed that the excess air number, diameter of kiln air entrance, radiation modeling consideration and fuel type have remarkable effects on the flame length characteristics. Numerical correlations for the rotary kiln flame length are presented in terms of the studied kiln operating and geometric parameters within acceptable error.
Proffitt, C.E.; Travis, S.E.; Edwards, K.R.
2003-01-01
Colonization, growth, and clonal morphology differ with genotype and are influenced by elevation. Local adaptation of Spartina alterniflora to environmental conditions may lead to dominance by different suites of genotypes in different locations within a marsh. In a constructed marsh, we found reduced colonization in terms of density of clones with increasing distance from edge in a 200-ha mudflat created in 1996; however, growth in diameter was not different among three 100-m-long zones that differed in distance from site edge. Distance from edge was confounded by elevation in this comparison of natural colonization. The rate of clonal expansion in diameter was 3.1 m/yr, and clonal growth was linear over the 28 mo of the study. The area dominated by S. alterniflora in the three distance zones increased concomitantly with clonal growth. However, the lower initial clonal densities and colonization by other plant species resulted in reduced overall dominance by S. alterniflora in the two more-interior locations. Seedling recruitment was an important component of S. alterniflora colonization at all elevations and distances from edge two years after site creation. Seedlings were spatially very patchy and tended to occur near clones that probably produced them. A field experiment revealed that S. alterniflora height and total stem length varied with genotype, while stem density and flowering stem density did not. Differences between edge and center of clonal patches also occurred for some response variables, and there were also significant interactions with genotype. Differences between edge and center are interpreted as differences in clone morphology. Elevation differences over distances of a few meters influenced total stem length and flowering stem density but not other response variables. Clones that were larger in diameter also tended to have greater stem heights and total stem lengths. A number of plant morphological measures were found to vary significantly among the five genotypes and had broad-sense heritabilities ranging up to 0.71. These results indicate that S. alterniflora populations developing on new substrata colonize broadly, but growth and reproduction vary with genotype and are influenced by changes in elevation (range: 11.8 cm), and probably other environmental factors, over relatively small distances. Differences in growth and clone morphology of different genets, and the frequent occurrence of seedlings throughout the site, underscore the importance of genetic variability in natural and created populations.
Chord-length and free-path distribution functions for many-body systems
NASA Astrophysics Data System (ADS)
Lu, Binglin; Torquato, S.
1993-04-01
We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or ``phases.'' The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the ``mean intercept length'' or ``mean chord length.'' The chord-length distribution function is of importance in transport phenomena and problems involving ``discrete free paths'' of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the ``pore space'' is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the ``phases'' when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.
Distribution of distances between DNA barcode labels in nanochannels close to the persistence length
NASA Astrophysics Data System (ADS)
Reinhart, Wesley F.; Reifenberger, Jeff G.; Gupta, Damini; Muralidhar, Abhiram; Sheats, Julian; Cao, Han; Dorfman, Kevin D.
2015-02-01
We obtained experimental extension data for barcoded E. coli genomic DNA molecules confined in nanochannels from 40 nm to 51 nm in width. The resulting data set consists of 1 627 779 measurements of the distance between fluorescent probes on 25 407 individual molecules. The probability density for the extension between labels is negatively skewed, and the magnitude of the skewness is relatively insensitive to the distance between labels. The two Odijk theories for DNA confinement bracket the mean extension and its variance, consistent with the scaling arguments underlying the theories. We also find that a harmonic approximation to the free energy, obtained directly from the probability density for the distance between barcode labels, leads to substantial quantitative error in the variance of the extension data. These results suggest that a theory for DNA confinement in such channels must account for the anharmonic nature of the free energy as a function of chain extension.
Large Fluctuations for Spatial Diffusion of Cold Atoms
NASA Astrophysics Data System (ADS)
Aghion, Erez; Kessler, David A.; Barkai, Eli
2017-06-01
We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x ) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.
Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino
2005-11-30
A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding as concerns computation time.
NASA Astrophysics Data System (ADS)
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2018-05-01
Based on the minimal length concept, inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are all studied as a function of the free parameters. The fact of introducing the minimal length concept with a QPM makes the model very flexible and a powerful approach to describe nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to get a physical minimum of this latter in comparison with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.
Gravitational lensing effects of vacuum strings - Exact solutions
NASA Technical Reports Server (NTRS)
Gott, J. R., III
1985-01-01
Exact interior and exterior solutions to Einstein's field equations are derived for vacuum strings. The exterior solution for a uniform density vacuum string corresponds to a conical space while the interior solution is that of a spherical cap. For Mu equals 0-1/4 the external metric is ds-squared = -dt-squared + dr-squared + (1-4 Mu)-squared r-squared dphi-squared + dz-squared, where Mu is the mass per unit length in the string in Planck masses per Planck length. A maximum mass per unit length for a string is 6.73 x 10 to the 27th g/cm. It is shown that strings cause temperature fluctuations in the cosmic microwave background and produce equal brightness double QSO images separated by up to several minutes of arc. Formulae for lensing probabilities, image splittings, and time delays are derived for strings in a realistic cosmological setting. String searches using ST, the VLA, and the COBE satellite are discussed.
Electron emission produced by photointeractions in a slab target
NASA Technical Reports Server (NTRS)
Thinger, B. E.; Dayton, J. A., Jr.
1973-01-01
The current density and energy spectrum of escaping electrons generated in a uniform plane slab target which is being irradiated by the gamma flux field of a nuclear reactor are calculated by using experimental gamma energy transfer coefficients, electron range and energy relations, and escape probability computations. The probability of escape and the average path length of escaping electrons are derived for an isotropic distribution of monoenergetic photons. The method of estimating the flux and energy distribution of electrons emerging from the surface is outlined, and a sample calculation is made for a 0.33-cm-thick tungsten target located next to the core of a nuclear reactor. The results are to be used as a guide in electron beam synthesis of reactor experiments.
Phonotactic Probability Effects in Children Who Stutter
Anderson, Julie D.; Byrd, Courtney T.
2008-01-01
Purpose The purpose of this study was to examine the influence of phonotactic probability, the frequency of different sound segments and segment sequences, on the overall fluency with which words are produced by preschool children who stutter (CWS), as well as to determine whether it has an effect on the type of stuttered disfluency produced. Method A 500+ word language sample was obtained from 19 CWS. Each stuttered word was randomly paired with a fluently produced word that closely matched it in grammatical class, word length, familiarity, word and neighborhood frequency, and neighborhood density. Phonotactic probability values were obtained for the stuttered and fluent words from an online database. Results Phonotactic probability did not have a significant influence on the overall susceptibility of words to stuttering, but it did impact the type of stuttered disfluency produced. In specific, single-syllable word repetitions were significantly lower in phonotactic probability than fluently produced words, as well as part-word repetitions and sound prolongations. Conclusions In general, the differential impact of phonotactic probability on the type of stuttering-like disfluency produced by young CWS provides some support for the notion that different disfluency types may originate in the disruption of different levels of processing. PMID:18658056
Modeling wildland fire containment with uncertain flame length and fireline width
Romain Mees; David Strauss; Richard Chase
1993-01-01
We describe a mathematical model for the probability that a fireline succeeds in containing a fire. The probability increases as the fireline width increases, and also as the fire's flame length decreases. More interestingly, uncertainties in width and flame length affect the computed containment probabilities, and can thus indirectly affect the optimum allocation...
Savage, Natasha; Yang, Thomas J W; Chen, Chung Ying; Lin, Kai-Lan; Monk, Nicholas A M; Schmidt, Wolfgang
2013-01-01
Phosphate (Pi) deficiency induces a multitude of responses aimed at improving the acquisition of Pi, including an increased density of root hairs. To understand the mechanisms involved in Pi deficiency-induced alterations of the root hair phenotype in Arabidopsis (Arabidopsis thaliana), we analyzed the patterning and length of root epidermal cells under control and Pi-deficient conditions in wild-type plants and in four mutants defective in the expression of master regulators of cell fate, CAPRICE (CPC), ENHANCER OF TRY AND CPC 1 (ETC1), WEREWOLF (WER) and SCRAMBLED (SCM). From this analysis we deduced that the longitudinal cell length of root epidermal cells is dependent on the correct perception of a positional signal ('cortical bias') in both control and Pi-deficient plants; mutants defective in the receptor of the signal, SCM, produced short cells characteristic of root hair-forming cells (trichoblasts). Simulating the effect of cortical bias on the time-evolving probability of cell fate supports a scenario in which a compromised positional signal delays the time point at which non-hair cells opt out the default trichoblast pathway, resulting in short, trichoblast-like non-hair cells. Collectively, our data show that Pi-deficient plants increase root hair density by the formation of shorter cells, resulting in a higher frequency of hairs per unit root length, and additional trichoblast cell fate assignment via increased expression of ETC1.
Savage, Natasha; Yang, Thomas J. W.; Chen, Chung Ying; Lin, Kai-Lan; Monk, Nicholas A. M.; Schmidt, Wolfgang
2013-01-01
Phosphate (Pi) deficiency induces a multitude of responses aimed at improving the acquisition of Pi, including an increased density of root hairs. To understand the mechanisms involved in Pi deficiency-induced alterations of the root hair phenotype in Arabidopsis (Arabidopsis thaliana), we analyzed the patterning and length of root epidermal cells under control and Pi-deficient conditions in wild-type plants and in four mutants defective in the expression of master regulators of cell fate, CAPRICE (CPC), ENHANCER OF TRY AND CPC 1 (ETC1), WEREWOLF (WER) and SCRAMBLED (SCM). From this analysis we deduced that the longitudinal cell length of root epidermal cells is dependent on the correct perception of a positional signal (‘cortical bias’) in both control and Pi-deficient plants; mutants defective in the receptor of the signal, SCM, produced short cells characteristic of root hair-forming cells (trichoblasts). Simulating the effect of cortical bias on the time-evolving probability of cell fate supports a scenario in which a compromised positional signal delays the time point at which non-hair cells opt out the default trichoblast pathway, resulting in short, trichoblast-like non-hair cells. Collectively, our data show that Pi-deficient plants increase root hair density by the formation of shorter cells, resulting in a higher frequency of hairs per unit root length, and additional trichoblast cell fate assignment via increased expression of ETC1. PMID:24130712
Characteristic length of the knotting probability revisited
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2015-09-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/NK), where the estimates of parameter NK are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius rex, i.e. the screening length of double-stranded DNA.
Knotting probability of a shaken ball-chain.
Hickford, J; Jones, R; du Pont, S Courrech; Eggers, J
2006-11-01
We study the formation of knots on a macroscopic ball chain, which is shaken on a horizontal plate at 12 times the acceleration of gravity. We find that above a certain critical length, the knotting probability is independent of chain length, while the time to shake out a knot increases rapidly with chain length. The probability of finding a knot after a certain time is the result of the balance of these two processes. In particular, the knotting probability tends to a constant for long chains.
SEMICONDUCTOR PHYSICS: Properties of the two- and three-dimensional quantum dot qubit
NASA Astrophysics Data System (ADS)
Shihua, Chen
2010-05-01
On the condition of electric-longitudinal-optical (LO) phonon strong coupling in both two- and three-dimensional parabolic quantum dots (QDs), we obtain the eigenenergies of the ground state (GS) and the first excited state (ES), the eigenfunctions of the GS and the first ES by using a variational method of Pekar type. This system in QD may be employed as a quantum system-quantum bit (qubit). When the electron is in the superposition state of the GS and the first ES, we obtain the time evolution of the electron density. The relations of both the electron probability density and the period of oscillation with the electric-LO phonon coupling strength and confinement length are discussed.
Universal Low-energy Behavior in a Quantum Lorentz Gas with Gross-Pitaevskii Potentials
NASA Astrophysics Data System (ADS)
Basti, Giulia; Cenatiempo, Serena; Teta, Alessandro
2018-06-01
We consider a quantum particle interacting with N obstacles, whose positions are independently chosen according to a given probability density, through a two-body potential of the form N 2 V ( N x) (Gross-Pitaevskii potential). We show convergence of the N dependent one-particle Hamiltonian to a limiting Hamiltonian where the quantum particle experiences an effective potential depending only on the scattering length of the unscaled potential and the density of the obstacles. In this sense our Lorentz gas model exhibits a universal behavior for N large. Moreover we explicitely characterize the fluctuations around the limit operator. Our model can be considered as a simplified model for scattering of slow neutrons from condensed matter.
On the Appearance of Thresholds in the Dynamical Model of Star Formation
NASA Astrophysics Data System (ADS)
Elmegreen, Bruce G.
2018-02-01
The Kennicutt–Schmidt (KS) relationship between the surface density of the star formation rate (SFR) and the gas surface density has three distinct power laws that may result from one model in which gas collapses at a fixed fraction of the dynamical rate. The power-law slope is 1 when the observed gas has a characteristic density for detection, 1.5 for total gas when the thickness is about constant as in the main disks of galaxies, and 2 for total gas when the thickness is regulated by self-gravity and the velocity dispersion is about constant, as in the outer parts of spirals, dwarf irregulars, and giant molecular clouds. The observed scaling of the star formation efficiency (SFR per unit CO) with the dense gas fraction (HCN/CO) is derived from the KS relationship when one tracer (HCN) is on the linear part and the other (CO) is on the 1.5 part. Observations of a threshold density or column density with a constant SFR per unit gas mass above the threshold are proposed to be selection effects, as are observations of star formation in only the dense parts of clouds. The model allows a derivation of all three KS relations using the probability distribution function of density with no thresholds for star formation. Failed galaxies and systems with sub-KS SFRs are predicted to have gas that is dominated by an equilibrium warm phase where the thermal Jeans length exceeds the Toomre length. A squared relation is predicted for molecular gas-dominated young galaxies.
NASA Technical Reports Server (NTRS)
Schwartz, H.-J.
1976-01-01
The modeling process of a complex system, based on the calculation and optimization of the system parameters, is complicated in that some parameters can be expressed only as probability distributions. In the present paper, a Monte Carlo technique was used to determine the daily range requirements of an electric road vehicle in the United States from probability distributions of trip lengths, frequencies, and average annual mileage data. The analysis shows that a daily range of 82 miles meets to 95% of the car-owner requirements at all times with the exception of long vacation trips. Further, it is shown that the requirement of a daily range of 82 miles can be met by a (intermediate-level) battery technology characterized by an energy density of 30 to 50 Watt-hours per pound. Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. These results imply that long-term research goals for battery systems should be focused on lower cost and longer service life, rather than on higher energy densities
NASA Astrophysics Data System (ADS)
Selim, M. M.; Bezák, V.
2003-06-01
The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function (x). Then the optical length X = 0 Ldx(x) is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also when the series representing T 2(X), T 2(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As it was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a `modified' Gaussian probability density function is used, equal to zero for X > 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the m.s.r. of X is sufficiently small in comparison with & $bar X$ ; , the standard Gaussian averaging is well functional provided that the summation in the series representing the variable T m-j (X)R j (X) (m = 1,2,..., j = 1,...,m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.
Analysis of data from NASA B-57B gust gradient program
NASA Technical Reports Server (NTRS)
Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.
1985-01-01
Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982 included the calculations of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross sectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects spanwise gust distributions, using two point spatial turbulence correlations.
Effect of segmented electrode length on the performances of Hall thruster
NASA Astrophysics Data System (ADS)
Duan, Ping; Chen, Long; Liu, Guangrui; Bian, Xingyu; Yin, Yan
2016-09-01
The influences of the low-emissive graphite segmented electrode placed near the channel exit on the discharge characteristics of Hall thruster are studied using the particle-in-cell method. A two-dimensional physical model is established according to the Hall thruster discharge channel configuration. The effects of electrode length on potential, ion density, electron temperature, ionization rate and discharge current are investigated. It is found that, with the increasing of segmented electrode length, the equipotential lines bend towards the channel exit, and approximately parallel to the wall at the channel surface, radial velocity and radial flow of ions are increased, and the electron temperature is also enhanced. Due to the conductive characteristic of electrodes, the radial electric field and the axial electron conductivity near the wall are enhanced, and the probability of the electron-atom ionization is reduced, which leads to the degradation of ionization rate in discharge channel. However, the interaction between electrons and the wall enhances the near wall conductivity, therefore the discharge current grows along with the segmented electrode length, and the performance of the thruster is also affected.
Ecology of a Maryland population of black rat snakes (Elaphe o. obsoleta)
Stickel, L.F.; Stickel, W.H.; Schmid, F.C.
1980-01-01
Behavior, growth and age of black rat snakes under natural conditions were investigated by mark-recapture methods at the Patuxent Wildlife Research Center for 22 years (1942-1963), with limited observations for 13 more years (1964-1976). Over the 35-year period, 330 snakes were recorded a total of 704 times. Individual home ranges remained stable for many years; male ranges averaged at least 600 m in diam and female ranges at least 500 m, each including a diversity of habitats, evidenced also in records of foods. Population density was low, probably less than 0.5 snake/ha. Peak activity of both sexes was in May and June, with a secondary peak in September. Large trees in the midst of open areas appeared to serve a significant functional role in the behavioral life pattern of the snake population. Male combat was observed three times in the field. Male snakes grew more rapidly than females, attained larger sizes and lived longer. Some individuals of both sexes probably lived 20 years or more. Weight-length relationships changed as the snakes grew and developed heavier bodies in proportion to length. Growth apparently continued throughout life. Some individuals, however, both male and female, stopped growing for periods of I or 2 years and then resumed, a condition probably related to poor health, suggested by skin ailments.
The influence of microstructure on the probability of early failure in aluminum-based interconnects
NASA Astrophysics Data System (ADS)
Dwyer, V. M.
2004-09-01
For electromigration in short aluminum interconnects terminated by tungsten vias, the well known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4 - 0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
Density Variations in the NW Star Stream of M31
NASA Astrophysics Data System (ADS)
Carlberg, R. G.; Richer, Harvey B.; McConnachie, Alan W.; Irwin, Mike; Ibata, Rodrigo A.; Dotter, Aaron L.; Chapman, Scott; Fardal, Mark; Ferguson, A. M. N.; Lewis, G. F.; Navarro, Julio F.; Puzia, Thomas H.; Valls-Gabaud, David
2011-04-01
The Pan Andromeda Archeological Survey (PAndAS) CFHT Megaprime survey of the M31-M33 system has found a star stream which extends about 120 kpc NW from the center of M31. The great length of the stream, and the likelihood that it does not significantly intersect the disk of M31, means that it is unusually well suited for a measurement of stream gaps and clumps along its length as a test for the predicted thousands of dark matter sub-halos. The main result of this paper is that the density of the stream varies between zero and about three times the mean along its length on scales of 2-20 kpc. The probability that the variations are random fluctuations in the star density is less than 10-5. As a control sample, we search for density variations at precisely the same location in stars with metallicity higher than the stream [Fe/H] = [0, -0.5] and find no variations above the expected shot noise. The lumpiness of the stream is not compatible with a low mass star stream in a smooth galactic potential, nor is it readily compatible with the disturbance caused by the visible M31 satellite galaxies. The stream's density variations appear to be consistent with the effects of a large population of steep mass function dark matter sub-halos, such as found in LCDM simulations, acting on an approximately 10 Gyr old star stream. The effects of a single set of halo substructure realizations are shown for illustration, reserving a statistical comparison for another study. Based on observations obtained with MegaPrime / MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institute National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.
NASA Astrophysics Data System (ADS)
Theiler, C.; Diallo, A.; Fasoli, A.; Furno, I.; Labit, B.; Podestà, M.; Poli, F. M.; Ricci, P.
2008-04-01
Intermittent cross-field particle transport events (ITEs) are studied in the basic toroidal device TORPEX [TORoidal Plasma EXperiment, A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], with focus on the role of the density gradient. ITEs are due to the intermittent radial elongation of an interchange mode. The elongating positive wave crests can break apart and form blobs. This is not necessary, however, for plasma particles to be convected a considerable distance across the magnetic field lines. Conditionally sampled data reveal two different scenarios leading to ITEs. In the first case, the interchange mode grows radially from a slab-like density profile and leads to the ITE. A novel analysis technique reveals a monotonic dependence between the vertically averaged inverse radial density scale length and the probability for a subsequent ITE. In the second case, the mode is already observed before the start of the ITE. It does not elongate radially in a first stage, but at a later time. It is shown that this elongation is preceded by a steepening of the density profile as well.
Global Distribution of Density Irregularities in the Equatorial Ionosphere
NASA Technical Reports Server (NTRS)
Kil, Hyosub; Heelis, R. A.
1998-01-01
We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions of bubble-like structures which are confined to narrow local time longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions which are broadly distributed,in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation delta N/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in die Atlantic-African regions and is always low in die Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0 deg longitude is produced by large depictions with bubble structures which are superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0 deg longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose Phonotactic probability or neighborhood density have predominately been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1-week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically word learning improved as probability increased in the lowest quartile, worsened as probability increased in the midlow quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Mechanistic modelling of Middle Eocene atmospheric carbon dioxide using fossil plant material
NASA Astrophysics Data System (ADS)
Grein, Michaela; Roth-Nebelsick, Anita; Wilde, Volker; Konrad, Wilfried; Utescher, Torsten
2010-05-01
Various proxies (such as pedogenic carbonates, boron isotopes or phytoplankton) and geochemical models were applied in order to reconstruct palaeoatmospheric carbon dioxide, partially providing conflicting results. Another promising proxy is the frequency of stomata (pores on the leaf surface used for gaseous exchange). In this project, fossil plant material from the Messel Pit (Hesse, Germany) is used to reconstruct atmospheric carbon dioxide concentration in the Middle Eocene by analyzing stomatal density. We applied the novel mechanistic-theoretical approach of Konrad et al. (2008) which provides a quantitative derivation of the stomatal density response (number of stomata per leaf area) to varying atmospheric carbon dioxide concentration. The model couples 1) C3-photosynthesis, 2) the process of diffusion and 3) an optimisation principle providing maximum photosynthesis (via carbon dioxide uptake) and minimum water loss (via stomatal transpiration). These three sub-models also include data of the palaeoenvironment (temperature, water availability, wind velocity, atmospheric humidity, precipitation) and anatomy of leaf and stoma (depth, length and width of stomatal porus, thickness of assimilation tissue, leaf length). In order to calculate curves of stomatal density as a function of atmospheric carbon dioxide concentration, various biochemical parameters have to be borrowed from extant representatives. The necessary palaeoclimate data are reconstructed from the whole Messel flora using Leaf Margin Analysis (LMA) and the Coexistence Approach (CA). In order to obtain a significant result, we selected three species from which a large number of well-preserved leaves is available (at least 20 leaves per species). Palaeoclimate calculations for the Middle Eocene Messel Pit indicate a warm and humid climate with mean annual temperature of approximately 22°C, up to 2540 mm mean annual precipitation and the absence of extended periods of drought. Mean relative air humidity was probably rather high, up to 77%. The combined results of the three selected plant taxa indicate values for atmospheric carbon dioxide concentration between 700 and 1100 ppm (probably about 900 ppm). Reference: Konrad, W., Roth-Nebelsick, A., Grein, M. (2008). Modelling of stomatal density response to atmospheric CO2. Journal of Theoretical Biology 253(4): 638-658.
A k-omega multivariate beta PDF for supersonic turbulent combustion
NASA Technical Reports Server (NTRS)
Alexopoulos, G. A.; Baurle, R. A.; Hassan, H. A.
1993-01-01
In a recent attempt by the authors at predicting measurements in coaxial supersonic turbulent reacting mixing layers involving H2 and air, a number of discrepancies involving the concentrations and their variances were noted. The turbulence model employed was a one-equation model based on the turbulent kinetic energy. This required the specification of a length scale. In an attempt at detecting the cause of the discrepancy, a coupled k-omega joint probability density function (PDF) is employed in conjunction with a Navier-Stokes solver. The results show that improvements resulting from a k-omega model are quite modest.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
Characteristic Structure of Star-forming Clouds
NASA Astrophysics Data System (ADS)
Myers, Philip C.
2015-06-01
This paper presents a new method to diagnose the star-forming potential of a molecular cloud region from the probability density function of its column density (N-pdf). This method provides expressions for the column density and mass profiles of a symmetric filament having the same N-pdf as a filamentary region. The central concentration of this characteristic filament can distinguish regions and can quantify their fertility for star formation. Profiles are calculated for N-pdfs which are pure lognormal, pure power law, or a combination. In relation to models of singular polytropic cylinders, characteristic filaments can be unbound, bound, or collapsing depending on their central concentration. Such filamentary models of the dynamical state of N-pdf gas are more relevant to star-forming regions than are spherical collapse models. The star formation fertility of a bound or collapsing filament is quantified by its mean mass accretion rate when in radial free fall. For a given mass per length, the fertility increases with the filament mean column density and with its initial concentration. In selected regions the fertility of their characteristic filaments increases with the level of star formation.
The self-preservation of dissipation elements in homogeneous isotropic decaying turbulence
NASA Astrophysics Data System (ADS)
Gauding, Michael; Danaila, Luminita; Varea, Emilien
2017-11-01
The concept of self-preservation has played an important role in shaping the understanding of turbulent flows. The assumption of complete self-preservation imposes certain constrains on the dynamics of the flow, allowing to express statistics by choosing an appropriate unique length scale. Another approach in turbulence research is to study the dynamics of geometrical objects, like dissipation elements (DE). DE appear as coherent space-filling structures in turbulent scalar fields and can be parameterized by the linear length between their ending points. This distance is a natural length scale that provides information about the local structure of turbulence. In this work, the evolution of DE in decaying turbulence is investigated from a self-preservation perspective. The analysis is based on data obtained from direct numerical simulations (DNS). The temporal evolution of DE is governed by a complex process, involving cutting and reconnection events, which change the number and consequently also the length of DE. An analysis of the evolution equation for the probability density function of the length of DE is carried out and leads to specific constraints for the self-preservation of DE, which are justified from DNS. Financial support was provided by Labex EMC3 (under the Grant VAVIDEN), Normandy Region and FEDER.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odier, Philippe; Ecke, Robert E.
Stratified shear flows occur in many geophysical contexts, from oceanic overflows and river estuaries to wind-driven thermocline layers. In this study, we explore a turbulent wall-bounded shear flow of lighter miscible fluid into a quiescent fluid of higher density with a range of Richardson numbersmore » $$0.05\\lesssim Ri\\lesssim 1$$. In order to find a stability parameter that allows close comparison with linear theory and with idealized experiments and numerics, we investigate different definitions of$Ri$$. We find that a gradient Richardson number defined on fluid interface sections where there is no overturning at or adjacent to the maximum density gradient position provides an excellent stability parameter, which captures the Miles–Howard linear stability criterion. For small $$Ri$$ the flow exhibits robust Kelvin–Helmholtz instability, whereas for larger $$Ri$$ interfacial overturning is more intermittent with less frequent Kelvin–Helmholtz events and emerging Holmboe wave instability consistent with a thicker velocity layer compared with the density layer. We compute the perturbed fraction of interface as a quantitative measure of the flow intermittency, which is approximately 1 for the smallest $$Ri$$ but decreases rapidly as $$Ri$ increases, consistent with linear theory. For the perturbed regions, we use the Thorpe scale to characterize the overturning properties of these flows. The probability distribution of the non-zero Thorpe length yields a universal exponential form, suggesting that much of the overturning results from increasingly intermittent Kelvin–Helmholtz instability events. Finally, the distribution of turbulent kinetic energy, conditioned on the intermittency fraction, has a similar form, suggesting an explanation for the universal scaling collapse of the Thorpe length distribution.« less
Odier, Philippe; Ecke, Robert E.
2017-02-21
Stratified shear flows occur in many geophysical contexts, from oceanic overflows and river estuaries to wind-driven thermocline layers. In this study, we explore a turbulent wall-bounded shear flow of lighter miscible fluid into a quiescent fluid of higher density with a range of Richardson numbersmore » $$0.05\\lesssim Ri\\lesssim 1$$. In order to find a stability parameter that allows close comparison with linear theory and with idealized experiments and numerics, we investigate different definitions of$Ri$$. We find that a gradient Richardson number defined on fluid interface sections where there is no overturning at or adjacent to the maximum density gradient position provides an excellent stability parameter, which captures the Miles–Howard linear stability criterion. For small $$Ri$$ the flow exhibits robust Kelvin–Helmholtz instability, whereas for larger $$Ri$$ interfacial overturning is more intermittent with less frequent Kelvin–Helmholtz events and emerging Holmboe wave instability consistent with a thicker velocity layer compared with the density layer. We compute the perturbed fraction of interface as a quantitative measure of the flow intermittency, which is approximately 1 for the smallest $$Ri$$ but decreases rapidly as $$Ri$ increases, consistent with linear theory. For the perturbed regions, we use the Thorpe scale to characterize the overturning properties of these flows. The probability distribution of the non-zero Thorpe length yields a universal exponential form, suggesting that much of the overturning results from increasingly intermittent Kelvin–Helmholtz instability events. Finally, the distribution of turbulent kinetic energy, conditioned on the intermittency fraction, has a similar form, suggesting an explanation for the universal scaling collapse of the Thorpe length distribution.« less
NASA Astrophysics Data System (ADS)
Pishdast, Masoud; Ghasemi, Seyed Abolfazl; Yazdanpanah, Jamal Aldin
2017-10-01
The role of plasma density scale length on two short and long laser pulse propagation and scattering in under dense plasma have been investigated in relativistic regime using 1 D PIC simulation. In our simulation, different density scale lengths and also two short and long pulse lengths with temporal pulse duration τL = 60 fs and τL = 300 fs , respectively have been used. It is found that laser pulse length and density scale length have considerable effects on the energetic electron generation. The analysis of total radiation spectrum reveals that, for short laser pulses and with reducing density scale length, more unstable electromagnetic modes grow and strong longitudinal electric field generates which leads to the generation of more energetic plasma particles. Meanwhile, the dominant scattering mechanism is Raman scattering and tends to Thomson scattering for longer laser pulse.
Characterizing pixel and point patterns with a hyperuniformity disorder length
NASA Astrophysics Data System (ADS)
Chieco, A. T.; Dreyfus, R.; Durian, D. J.
2017-09-01
We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns—where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h =L /2 . Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h =(L /2 )(f /d ) for small f , and h =L /2 for f →1 . And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L ,h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h (L ) versus L . We call this approach "hyperuniformity disorder length spectroscopy".
Characterizing pixel and point patterns with a hyperuniformity disorder length.
Chieco, A T; Dreyfus, R; Durian, D J
2017-09-01
We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns-where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h=L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h=(L/2)(f/d) for small f, and h=L/2 for f→1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L,h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".
Solution to certain problems in the failure of composite structures
NASA Astrophysics Data System (ADS)
Goodsell, Johnathan
The present work contains the solution of two problems in composite structures. In the first, an approximate elasticity solution for prediction of the displacement, stress and strain fields within the m-layer, symmetric and balanced angle-ply composite laminate of finite-width subjected anticlastic bending deformation is developed. The solution is shown to recover classical laminated plate theory predictions at interior regions of the laminate and thereby illustrates the boundary layer character of this interlaminar phenomenon. The results exhibit the anticipated response in congruence with the solutions for uniform axial extension and uniform temperature change, where divergence of the interlaminar shearing stress is seen to occur at the intersection of the free-edge and planes between lamina of +theta and -theta orientation. The analytical results show excellent agreement with the finite-element predictions for the same boundary-value problem and thereby provide an efficient and compact solution available for parametric studies of the influence of geometry and material properties. The solution is combined with previously developed solutions for uniform axial extension and uniform temperature change of the identical laminate and the combined solution is exercised to compare the relative magnitudes of free-edge phenomenon arising from the different loading conditions, to study very thick laminates and laminates where the laminate width is less than the laminate thickness. Significantly, it was demonstrated that the solution is valid for arbitrary stacking sequence and the solution was exercised to examine antisymmetric and non-symmetric laminates. Finally, the solution was exercised to determine the dimensions of the boundary layer for very large numbers of layers. It was found that the dimension of the boundary layer width in bending is approximately twice that in uniform axial extension and uniform temperature change. In the second, the intrinsic flaw concept is extended to the determination of the intrinsic flaw length and the prediction of performance variability in the 10-degree off-axis specimen. The intrinsic flaw is defined as a fracture mechanics-type, through-thickness planar crack extending in the fiber direction from the failure initiation site of length, a. The distribution of intrinsic flaw lengths is postulated from multiple tests of 10-degree off-axis specimens by calculating the length of flaw that would cause fracture at each measured failure site and failure load given the fracture toughness of the material. The intrinsic flaw lengths on the homogeneous and micromechanical scales for unnotched (no hole) and specimens containing a centrally-located, through-thickness circular hole are compared. 8 hole-diameters ranging from 1.00--12.7 mm are considered. On the micromechanical scale, the intrinsic flaw ranges between approximately 10 and 100 microns in length, on the order of the relevant microstructural dimensions. The intrinsic flaw lengths on the homogeneous scale are determined to be an order of magnitude greater than that on the micromechanical scale. The effect of variation in the fiber volume fraction on the intrinsic flaw length is also considered. In the strength predictions for the specimens, the intrinsic flaw crack geometry and probability density function of intrinsic flaw lengths calculated from the unnotched specimens allow fracture mechanics predictions of strength variability. The strength prediction is dependent on the flaw density, the number of flaws per unit length along the free-edge. The flaw density is established by matching the predicted strength with the experimental strength. The distribution of intrinsic flaw lengths is used with the strength variability of the unnotched and of open-hole specimens to determine the flaw density at each hole-size. The flaw density is shown to be related to the fabrication machining speed suggesting machining damage as a mechanism for the hole-size dependence of the flaw density. (Abstract shortened by UMI.)
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA) based on LabVIEW 2010 was developed allowing rapid automatic and subjective error-free determination of the pits number on large sized corroded specimens. The VI LCIA controls synchronously the digital microscope image taking and its analysis, finally resulting in a map file containing the coordinates of the detected probable pits containing zones on the investigated specimen. The pits area, traverse length, and density are also determined by the VI using binary large objects (blobs) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for rapid (one pass) "true/false" SVET check of the probable zones only passing through the pit's centers avoiding thus the entire specimen scan. A complete SVET scan over the already proved "true" zones could determine the corrosion rate in any of the zones.
On entropic uncertainty relations in the presence of a minimal length
NASA Astrophysics Data System (ADS)
Rastegin, Alexey E.
2017-07-01
Entropic uncertainty relations for the position and momentum within the generalized uncertainty principle are examined. Studies of this principle are motivated by the existence of a minimal observable length. Then the position and momentum operators satisfy the modified commutation relation, for which more than one algebraic representation is known. One of them is described by auxiliary momentum so that the momentum and coordinate wave functions are connected by the Fourier transform. However, the probability density functions of the physically true and auxiliary momenta are different. As the corresponding entropies differ, known entropic uncertainty relations are changed. Using differential Shannon entropies, we give a state-dependent formulation with correction term. State-independent uncertainty relations are obtained in terms of the Rényi entropies and the Tsallis entropies with binning. Such relations allow one to take into account a finiteness of measurement resolution.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Thomas A.
2015-01-01
Prospective pregnancy studies are a valuable source of longitudinal data on menstrual cycle length. However, care is needed when making inferences of such renewal processes. For example, accounting for the sampling plan is necessary for unbiased estimation of the menstrual cycle length distribution for the study population. If couples can enroll when they learn of the study as opposed to waiting for the start of a new menstrual cycle, then due to length-bias, the enrollment cycle will be stochastically larger than the general run of cycles, a typical property of prevalent cohort studies. Furthermore, the probability of enrollment can depend on the length of time since a woman’s last menstrual period (a backward recurrence time), resulting in selection effects. We focus on accounting for length-bias and selection effects in the likelihood for enrollment menstrual cycle length, using a recursive two-stage approach wherein we first estimate the probability of enrollment as a function of the backward recurrence time and then use it in a likelihood with sampling weights that account for length-bias and selection effects. To broaden the applicability of our methods, we augment our model to incorporate a couple-specific random effect and time-independent covariate. A simulation study quantifies performance for two scenarios of enrollment probability when proper account is taken of sampling plan features. In addition, we estimate the probability of enrollment and the distribution of menstrual cycle length for the study population of the Longitudinal Investigation of Fertility and the Environment Study. PMID:25027273
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
Solvable continuous-time random walk model of the motion of tracer particles through porous media.
Fouxon, Itzhak; Holzner, Markus
2016-08-01
We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015)PLEEE81539-375510.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random and the walker's velocity at consecutive steps of the walk is conserved with finite probability, mimicking that at the turning point there could be no abrupt change of velocity. We provide the Laplace transform of the characteristic function of the walker's position and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, which was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
DNA of a Human Hepatitis B Virus Candidate
Robinson, William S.; Clayton, David A.; Greenman, Richard L.
1974-01-01
Particles containing DNA polymerase (Dane particles) were purified from the plasma of chronic carriers of hepatitis B antigen. After a DNA polymerase reaction with purified Dane particle preparations treated with Nonidet P-40 detergent, Dane particle core structures containing radioactive DNA product were isolated by sedimentation in a sucrose density gradient. The radioactive DNA was extracted with sodium dodecyl sulfate and isolated by band sedimentation in a preformed CsCl gradient. Examination of the radioactive DNA band by electron microscopy revealed exclusively circular double-stranded DNA molecules approximately 0.78 μm in length. Identical circular molecules were observed when DNA was isolated by a similar procedure from particles that had not undergone a DNA polymerase reaction. The molecules were completely degraded by DNase 1. When Dane particle core structures were treated with DNase 1 before DNA extraction, only 0.78-μm circular DNA molecules were detected. Without DNase treatment of core structures, linear molecules with lengths between 0.5 and 12 μm, in addition to the 0.78-μm circles were found. These results suggest that the 0.78-μm circular molecules were in a protected position within Dane particle cores and the linear molecules were not within core structures. Length measurements on 225 circular molecules revealed a mean length of 0.78 ± 0.09 μm which would correspond to a molecular weight of around 1.6 × 106. The circular molecules probably serve as primer-template for the DNA polymerase reaction carried out by Dane particle cores. Thermal denaturation and buoyant density measurements on the Dane particle DNA polymerase reaction product revealed a guanosine plus cytosine content of 48 to 49%. Images PMID:4847328
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph
In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies to the positive direction. Scattering through a locally-periodic potentialmore » composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.« less
The nuclear size and mass effects on muonic hydrogen-like atoms embedded in Debye plasma
NASA Astrophysics Data System (ADS)
Poszwa, A.; Bahar, M. K.; Soylu, A.
2016-10-01
Effects of finite nuclear size and finite nuclear mass are investigated for muonic atoms and muonic ions embedded in the Debye plasma. Both nuclear charge radii and nuclear masses are taken into account with experimentally determined values. In particular, isotope shifts of bound state energies, radial probability densities, transition energies, and binding energies for several atoms are studied as functions of Debye length. The theoretical model based on semianalytical calculations, the Sturmian expansion method, and the perturbative approach has been constructed, in the nonrelativistic frame. For some limiting cases, the comparison with previous most accurate literature results has been made.
Constructiveness and destructiveness of temperature in asymmetric quantum pseudo dot qubit system
NASA Astrophysics Data System (ADS)
Chen, Ying-Jie; Song, Hai-Tao; Xiao, Jing-Lin
2018-06-01
By using the variational method of the Pekar type, we theoretically study the temperature effects on the asymmetric quantum pseudo dot qubit with a pseudoharmonic potential under an electromagnetic field. The numerical results are analyzed and discussed in detail and show that the relationships of the ground and first excited state energies, the electron oscillation period and the electron probability density in the superposition state of the ground state and the first-excited state with the temperature, the chemical potential, the pseudoharmonic potential, the electric field strength, the cyclotron frequency, the electron phonon coupling constant, the transverse and longitudinal effective confinement length, respectively.
Study of the enhancement-mode AlGaN/GaN high electron mobility transistor with split floating gates
NASA Astrophysics Data System (ADS)
Wang, Hui; Wang, Ning; Jiang, Ling-Li; Zhao, Hai-Yue; Lin, Xin-Peng; Yu, Hong-Yu
2017-11-01
In this work, the charge storage based split floating gates (FGs) enhancement mode (E-mode) AlGaN/GaN high electron mobility transistors (HEMTs) are studied. The simulation results reveal that under certain density of two dimensional electron gas, the variation tendency of the threshold voltage (Vth) with the variation of the blocking dielectric thickness depends on the FG charge density. It is found that when the length sum and isolating spacing sum of the FGs both remain unchanged, the Vth shall decrease with the increasing FGs number but maintaining the device as E-mode. It is also reported that for the FGs HEMT, the failure of a FG will lead to the decrease of Vth as well as the increase of drain current, and the failure probability can be improved significantly with the increase of FGs number.
Lum, Kirsten J; Sundaram, Rajeshwari; Louis, Thomas A
2015-01-01
Prospective pregnancy studies are a valuable source of longitudinal data on menstrual cycle length. However, care is needed when making inferences of such renewal processes. For example, accounting for the sampling plan is necessary for unbiased estimation of the menstrual cycle length distribution for the study population. If couples can enroll when they learn of the study as opposed to waiting for the start of a new menstrual cycle, then due to length-bias, the enrollment cycle will be stochastically larger than the general run of cycles, a typical property of prevalent cohort studies. Furthermore, the probability of enrollment can depend on the length of time since a woman's last menstrual period (a backward recurrence time), resulting in selection effects. We focus on accounting for length-bias and selection effects in the likelihood for enrollment menstrual cycle length, using a recursive two-stage approach wherein we first estimate the probability of enrollment as a function of the backward recurrence time and then use it in a likelihood with sampling weights that account for length-bias and selection effects. To broaden the applicability of our methods, we augment our model to incorporate a couple-specific random effect and time-independent covariate. A simulation study quantifies performance for two scenarios of enrollment probability when proper account is taken of sampling plan features. In addition, we estimate the probability of enrollment and the distribution of menstrual cycle length for the study population of the Longitudinal Investigation of Fertility and the Environment Study. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Emara, Mohamed H; Elhawari, Soha A; Yousef, Salem; Radwan, Mohamed I; Abdel-Aziz, Hesham R
2016-02-01
There is growing evidence from preclinical and clinical studies that emphasizes the efficacy of probiotics in the management of Helicobacter (H) pylori infection; it increased the eradication rate, improved patient clinical manifestations and lowered treatment associated side effects. In this review we documented the potential ability of probiotics to ameliorate H. pylori induced histological features. We searched the available literature for full length articles focusing the role of probiotics on H. pylori induced gastritis from histologic perspectives. Probiotics lowered H. pylori density at the luminal side of epithelium, improved histological inflammatory and activity scores both in the gastric corpus and antrum. This effect persists for long period of time after discontinuation of probiotic supplementation and this is probably through an immune mechanism. The current evidence support the promising role of probiotics in improving H. pylori induced histopathological features both in gastric antrum and corpus and for long periods of time. Because increased density of H. pylori on the gastric mucosa is linked to more severe gastritis and increased incidence of peptic ulcers, we can infer that a reduction of the density might help to decrease the risk of developing pathologies, probably the progression toward atrophic gastritis and gastric adenocarcinoma. These effects together with improving the H. pylori eradication rates and amelioration of treatment related side effects might open the door for probiotics to be added to H. pylori eradication regimens. © 2015 John Wiley & Sons Ltd.
Principles of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Landé, Alfred
2013-10-01
Preface; Introduction: 1. Observation and interpretation; 2. Difficulties of the classical theories; 3. The purpose of quantum theory; Part I. Elementary Theory of Observation (Principle of Complementarity): 4. Refraction in inhomogeneous media (force fields); 5. Scattering of charged rays; 6. Refraction and reflection at a plane; 7. Absolute values of momentum and wave length; 8. Double ray of matter diffracting light waves; 9. Double ray of matter diffracting photons; 10. Microscopic observation of ρ (x) and σ (p); 11. Complementarity; 12. Mathematical relation between ρ (x) and σ (p) for free particles; 13. General relation between ρ (q) and σ (p); 14. Crystals; 15. Transition density and transition probability; 16. Resultant values of physical functions; matrix elements; 17. Pulsating density; 18. General relation between ρ (t) and σ (є); 19. Transition density; matrix elements; Part II. The Principle of Uncertainty: 20. Optical observation of density in matter packets; 21. Distribution of momenta in matter packets; 22. Mathematical relation between ρ and σ; 23. Causality; 24. Uncertainty; 25. Uncertainty due to optical observation; 26. Dissipation of matter packets; rays in Wilson Chamber; 27. Density maximum in time; 28. Uncertainty of energy and time; 29. Compton effect; 30. Bothe-Geiger and Compton-Simon experiments; 31. Doppler effect; Raman effect; 32. Elementary bundles of rays; 33. Jeans' number of degrees of freedom; 34. Uncertainty of electromagnetic field components; Part III. The Principle of Interference and Schrödinger's equation: 35. Physical functions; 36. Interference of probabilities for p and q; 37. General interference of probabilities; 38. Differential equations for Ψp (q) and Xq (p); 39. Differential equation for фβ (q); 40. The general probability amplitude Φβ' (Q); 41. Point transformations; 42. General theorem of interference; 43. Conjugate variables; 44. Schrödinger's equation for conservative systems; 45. Schrödinger's equation for non-conservative systems; 46. Pertubation theory; 47. Orthogonality, normalization and Hermitian conjugacy; 48. General matrix elements; Part IV. The Principle of Correspondence: 49. Contact transformations in classical mechanics; 50. Point transformations; 51. Contact transformations in quantum mechanics; 52. Constants of motion and angular co-ordinates; 53. Periodic orbits; 54. De Broglie and Schrödinger function; correspondence to classical mechanics; 55. Packets of probability; 56. Correspondence to hydrodynamics; 57. Motion and scattering of wave packets; 58. Formal correspondence between classical and quantum mechanics; Part V. Mathematical Appendix: Principle of Invariance: 59. The general theorem of transformation; 60. Operator calculus; 61. Exchange relations; three criteria for conjugacy; 62. First method of canonical transformation; 63. Second method of canonical transformation; 64. Proof of the transformation theorem; 65. Invariance of the matrix elements against unitary transformations; 66. Matrix mechanics; Index of literature; Index of names and subjects.
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Inherent length-scales of periodic solar wind number density structures
NASA Astrophysics Data System (ADS)
Viall, N. M.; Kepko, L.; Spence, H. E.
2008-07-01
We present an analysis of the radial length-scales of periodic solar wind number density structures. We converted 11 years (1995-2005) of solar wind number density data into radial length series segments and Fourier analyzed them to identify all spectral peaks with radial wavelengths between 72 (116) and 900 (900) Mm for slow (fast) wind intervals. Our window length for the spectral analysis was 9072 Mm, approximately equivalent to 7 (4) h of data for the slow (fast) solar wind. We required that spectral peaks pass both an amplitude test and a harmonic F-test at the 95% confidence level simultaneously. From the occurrence distributions of these spectral peaks for slow and fast wind, we find that periodic number density structures occur more often at certain radial length-scales than at others, and are consistently observed within each speed range over most of the 11-year interval. For the slow wind, those length-scales are L ˜ 73, 120, 136, and 180 Mm. For the fast wind, those length-scales are L ˜ 187, 270 and 400 Mm. The results argue for the existence of inherent radial length-scales in the solar wind number density.
Link, Jason; Hoff, Michael H.
1998-01-01
We measured morphometric and meristic parameters of gill rakers from the first gill arch of 36 adult lake herring (Coregonus artedi) from Lake Superior that ranged in length from 283–504 mm. These data, coupled with the mean of the smallest two body dimensions (length, width, or breadth) of various zooplankton prey, allowed us to calculate retention probabilities for zooplankton taxa that are common in Lake Superior. The mean of the smallest two body dimensions was positively correlated with body length for cladocerans and copepods. The large cladoceran, Daphnia g. mendotae, is estimated to be retained at a greater probability (74%) than smaller cladocerans (18%-38%). The same is true for the large copepod, Limnocalanus macrurus (60%), when compared to smaller copepods (6–38%). Copepods have a lower probability of being retained than cladocerans of similar length. Lake herring gill rakers and total filtering area are also positively correlated with fish total length. These data provide further evidence that lake herring are primarily planktivores in Lake Superior, and our data show that lake herring can retain a broad range of prey sizes.
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Licatta, Angelo; Griffin, Devon
2007-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Griffin, Devon
2008-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
NASA Astrophysics Data System (ADS)
Matsukura, Keiichiro; Matsumura, Masaya; Tokuda, Makoto
2009-09-01
The evolution of the gall-inducing ability in insects and the adaptive significance of the galling habit have been addressed by many studies. Cicadulina bipunctata, the maize orange leafhopper, is an ideal study organism for evaluating these topics because it can be mass-reared and it feeds on model plants such as rice ( Oryza sativa) and maize ( Zea mays). To reveal differences between gall inductions by C. bipunctata and other gall inducers, we conducted four experiments concerning (a) the relationship between the feeding site and gall-induction sites of C. bipunctata on maize, (b) the effects of leafhopper sex and density, (c) the effects of length of infestation on gall induction, and (d) the effects of continuous infestation. C. bipunctata did not induce galls on the leaves where it fed but induced galls on other leaves situated at more distal positions. The degree of gall induction was significantly correlated with infestation density and length. These results indicate that C. bipunctata induces galls in a dose-dependent manner on leaves distant from feeding sites, probably by injecting chemical(s) to the plant during feeding. We suggest that insect galls are induced by a chemical stimulus injected by gall inducers during feeding into the hosts.
Random covering of the circle: the configuration-space of the free deposition process
NASA Astrophysics Data System (ADS)
Huillet, Thierry
2003-12-01
Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = rgr, for some finite density rgr of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
Mangen, M-J J; Nielen, M; Burrell, A M
2002-12-18
We examined the importance of pig-population density in the area of an outbreak of classical swine fever (CSF) for the spread of the infection and the choice of control measures. A spatial, stochastic, dynamic epidemiological simulation model linked to a sector-level market-and-trade model for The Netherlands were used. Outbreaks in sparsely and densely populated areas were compared under four different control strategies and with two alternative trade assumptions. The obligatory control strategy required by current EU legislation was predicted to be enough to eradicate an epidemic starting in an area with sparse pig population. By contrast, additional control measures would be necessary if the outbreak began in an area with high pig density. The economic consequences of using preventive slaughter rather than emergency vaccination as an additional control measure depended strongly on the reactions of trading partners. Reducing the number of animal movements significantly reduced the size and length of epidemics in areas with high pig density. The phenomenon of carrier piglets was included in the model with realistic probabilities of infection by this route, but it made a negligible contribution to the spread of the infection.
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
Owens, Randall W.; Noguchi, George E.
1998-01-01
Knowledge of the spawning cycle and factors affecting fecundity of slimy sculpins (Cottus cognatus) are important in understanding the population dynamics of this species in large lake systems, like Lake Ontario. Fecundity and the spawning cycle of slimy sculpins were described from samples of slimy sculpins and their egg masses collected with bottom trawls during four annual surveys, April to October, 1988 to 1994. Incidence of gravid females and collections of their egg masses indicated that spawning by slimy sculpins likely occurred from late April to mid October in Lake Ontario. Protracted spawning by slimy sculpins in Lake Ontario is probably a function of the annual water temperature cycle at various depths. Mean length of gravid females was inversely related to density of slimy sculpins. Fecundity ranged from 55 to 1,157 eggs among fish 55 to 127 mm long, and for similar-sized fish, fecundity was inversely related to density of slimy sculpins. Fecundity was about 50% higher at Olcott, where population indices of slimy sculpins were low, compared with Nine Mile Point where indices were much higher. Somatic weight or total length were both good predictors of fecundity. Lipid content of slimy sculpins was lower in an area of high sculpin abundance than in an area of low sculpin abundance, suggesting that fecundity was a function of density-dependent food availability. In large aquatic ecosystems, samples from more than one area may be necessary to describe fecundity of a sedentary species like slimy sculpin, especially if fish densities vary considerably among geographic areas. Large geographic variations in fecundity may be an indicator of spatial imbalance of a species with its prey. Low fecundity may be a compensatory response to slimy sculpins to low food supplies, thereby limiting population growth.
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
Cao, Xiu; Xia, Ren-Xue; Zhang, De-Jian; Shu, Bo
2013-06-01
Ahydroponics experiment was conducted to study the effects of nutrients (N, P, K, Ca, Mg, Fe, and Mn) deficiency on the length of primary root, the number of lateral roots, and the root hair density, length, and diameter on the primary root and lateral roots of Poncirus trifoliata seedlings. Under the deficiency of each test nutrient, root hair could generate, but was mainly concentrated on the root base and fewer on the root tip. The root hair density on lateral roots was significantly larger than that on primary root, but the root hair length was in adverse. The deficiency of each test nutrient had greater effects on the growth and development of root hairs, with the root hair density on primary root varied from 55.0 to 174.3 mm(-2). As compared with the control, Ca deficiency induced the significant increase of root hair density and length on primary root, P deficiency promoted the root hair density and length on the base and middle part of primary root and on the lateral roots significantly, Fe deficiency increased the root hair density but decreased the root hair length on the tip of primary root significantly, K deficiency significantly decreased the root hair density, length, and diameter on primary root and lateral roots, whereas Mg deficiency increased the root hair length of primary root significantly. In all treatments of nutrient deficiency, the primary root had the similar growth rate, but, with the exceptions of N and Mg deficiency, the lateral roots exhibited shedding and regeneration.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
Free-Free Absorption on Parsec Scales in Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Roy, A. L.; Ulvestad, J. S.; Wilson, A. S.; Colbert, E. J. M.; Mundell, C. G.; Wrobel, J. M.; Norris, R. P.; Falcke, H.; Krichbaum, T.
Seyfert galaxies come in two main types (types 1 and 2) and the difference is probably due to obscuration of the nucleus by a torus of dense molecular material. The inner edge of the torus is expected to be ionized by optical and ultraviolet emission from the active nucleus, and will radiate direct thermal emission (e.g. NGC 1068) and will cause free-free absorption of nuclear radio components viewed through the torus (e.g. Mrk 231, Mrk 348, NGC 2639). However, the nuclear radio sources in Seyfert galaxies are weak compared to radio galaxies and quasars, demanding high sensitivity to study these effects. We have been making sensitive phase referenced VLBI observations at wavelengths between 21 and 2 cm where the free-free turnover is expected, looking for parsec-scale absorption and emission. We find that free-free absorption is common (e.g. in Mrk 348, Mrk 231, NGC 2639, NGC 1068) although compact jets are still visible, and the inferred density of the absorber agrees with the absorption columns inferred from X-ray spectra (Mrk 231, Mrk 348, NGC 2639). We find one-sided parsec-scale jets in Mrk 348 and Mrk 231, and we measure low jet speeds (typically £ 0.1 c). The one-sidedness probably is not due to Doppler boosting, but rather is probably free-free absorption. Plasma density required to produce the absorption is Ne 3 2 105 cm-3 assuming a path length of 0.1 pc, typical of that expected at the inner edge of the obscuring torus.
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
Linking pedestrian flow characteristics with stepping locomotion
NASA Astrophysics Data System (ADS)
Wang, Jiayue; Boltes, Maik; Seyfried, Armin; Zhang, Jun; Ziemer, Verena; Weng, Wenguo
2018-06-01
While properties of human traffic flow are described by speed, density and flow, the locomotion of pedestrian is based on steps. To relate characteristics of human locomotor system with properties of human traffic flow, this paper aims to connect gait characteristics like step length, step frequency, swaying amplitude and synchronization with speed and density and thus to build a ground for advanced pedestrian models. For this aim, observational and experimental study on the single-file movement of pedestrians at different densities is conducted. Methods to measure step length, step frequency, swaying amplitude and step synchronization are proposed by means of trajectories of the head. Mathematical models for the relations of step length or frequency and speed are evaluated. The problem how step length and step duration are influenced by factors like body height and density is investigated. It is shown that the effect of body height on step length and step duration changes with density. Furthermore, two different types of step in-phase synchronization between two successive pedestrians are observed and the influence of step synchronization on step length is examined.
Cache-enabled small cell networks: modeling and tradeoffs.
Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane
We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA) based on LabVIEW 2010 was developed allowing rapid automatic and subjective error-free determination of the pits number on large sized corroded specimens. The VI LCIA controls synchronously the digital microscope image taking and its analysis, finally resulting in a map file containing the coordinates of the detected probable pits containing zones on the investigated specimen. The pits area, traverse length, and density are also determined by the VI using binary large objects (blobs) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for rapid (one pass) “true/false” SVET check of the probable zones only passing through the pit's centers avoiding thus the entire specimen scan. A complete SVET scan over the already proved “true” zones could determine the corrosion rate in any of the zones. PMID:23691434
The trading time risks of stock investment in stock price drop
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Tang, Nian-Sheng; Mei, Dong-Cheng; Li, Yun-Xian; Zhang, Wan
2016-11-01
This article investigates the trading time risk (TTR) of stock investment in the case of stock price drop of Dow Jones Industrial Average (ˆDJI) and Hushen300 data (CSI300), respectively. The escape time of stock price from the maximum to minimum in a data window length (DWL) is employed to measure the absolute TTR, the ratio of the escape time to data window length is defined as the relative TTR. Empirical probability density functions of the absolute and relative TTRs for the ˆDJI and CSI300 data evidence that (i) whenever the DWL increases, the absolute TTR increases, the relative TTR decreases otherwise; (ii) there is the monotonicity (or non-monotonicity) for the stability of the absolute (or relative) TTR; (iii) there is a peak distribution for shorter trading days and a two-peak distribution for longer trading days for the PDF of ratio; (iv) the trading days play an opposite role on the absolute (or relative) TTR and its stability between ˆDJI and CSI300 data.
NASA Astrophysics Data System (ADS)
Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.
2018-04-01
We consider the practical realization of a new optical probe method of the random media which is defined as the reference-free path length interferometry with the intensity moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and the third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using the statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. Water solution of Rhodamine 6G was applied as the doping fluorescent agent for the ensembles of the densely packed silica grains, which were pumped by the 532 nm radiation of a solid state laser. The spectrum of the mean path length for a random medium was reconstructed.
Level-crossing statistics of the horizontal wind speed in the planetary surface boundary layer
NASA Astrophysics Data System (ADS)
Edwards, Paul J.; Hurst, Robert B.
2001-09-01
The probability density of the times for which the horizontal wind remains above or below a given threshold speed is of some interest in the fields of renewable energy generation and pollutant dispersal. However there appear to be no analytic or conceptual models which account for the observed power law form of the distribution of these episode lengths over a range of over three decades, from a few tens of seconds to a day or more. We reanalyze high resolution wind data and demonstrate the fractal character of the point process generated by the wind speed level crossings. We simulate the fluctuating wind speed by a Markov process which approximates the characteristics of the real (non-Markovian) wind and successfully generates a power law distribution of episode lengths. However, fundamental questions concerning the physical basis for this behavior and the connection between the properties of a continuous-time stochastic process and the fractal statistics of the point process generated by its level crossings remain unanswered.
Exact Solution of Mutator Model with Linear Fitness and Finite Genome Length
NASA Astrophysics Data System (ADS)
Saakian, David B.
2017-08-01
We considered the infinite population version of the mutator phenomenon in evolutionary dynamics, looking at the uni-directional mutations in the mutator-specific genes and linear selection. We solved exactly the model for the finite genome length case, looking at the quasispecies version of the phenomenon. We calculated the mutator probability both in the statics and dynamics. The exact solution is important for us because the mutator probability depends on the genome length in a highly non-trivial way.
Tracking of plus-ends reveals microtubule functional diversity in different cell types
NASA Astrophysics Data System (ADS)
Shaebani, M. Reza; Pasula, Aravind; Ott, Albrecht; Santen, Ludger
2016-07-01
Many cellular processes are tightly connected to the dynamics of microtubules (MTs). While in neuronal axons MTs mainly regulate intracellular trafficking, they participate in cytoskeleton reorganization in many other eukaryotic cells, enabling the cell to efficiently adapt to changes in the environment. We show that the functional differences of MTs in different cell types and regions is reflected in the dynamic properties of MT tips. Using plus-end tracking proteins EB1 to monitor growing MT plus-ends, we show that MT dynamics and life cycle in axons of human neurons significantly differ from that of fibroblast cells. The density of plus-ends, as well as the rescue and catastrophe frequencies increase while the growth rate decreases toward the fibroblast cell margin. This results in a rather stable filamentous network structure and maintains the connection between nucleus and membrane. In contrast, plus-ends are uniformly distributed along the axons and exhibit diverse polymerization run times and spatially homogeneous rescue and catastrophe frequencies, leading to MT segments of various lengths. The probability distributions of the excursion length of polymerization and the MT length both follow nearly exponential tails, in agreement with the analytical predictions of a two-state model of MT dynamics.
Self-imposed length limits in recreational fisheries
Chizinski, Christopher J.; Martin, Dustin R.; Hurley, Keith L.; Pope, Kevin L.
2014-01-01
A primary motivating factor on the decision to harvest a fish among consumptive-orientated anglers is the size of the fish. There is likely a cost-benefit trade-off for harvest of individual fish that is size and species dependent, which should produce a logistic-type response of fish fate (release or harvest) as a function of fish size and species. We define the self-imposed length limit as the length at which a captured fish had a 50% probability of being harvested, which was selected because it marks the length of the fish where the probability of harvest becomes greater than the probability of release. We assessed the influences of fish size, catch per unit effort, size distribution of caught fish, and creel limit on the self-imposed length limits for bluegill Lepomis macrochirus, channel catfish Ictalurus punctatus, black crappie Pomoxis nigromaculatus and white crappie Pomoxis annularis combined, white bass Morone chrysops, and yellow perch Perca flavescens at six lakes in Nebraska, USA. As we predicted, the probability of harvest increased with increasing size for all species harvested, which supported the concept of a size-dependent trade-off in costs and benefits of harvesting individual fish. It was also clear that probability of harvest was not simply defined by fish length, but rather was likely influenced to various degrees by interactions between species, catch rate, size distribution, creel-limit regulation and fish size. A greater understanding of harvest decisions within the context of perceived likelihood that a creel limit will be realized by a given angler party, which is a function of fish availability, harvest regulation and angler skill and orientation, is needed to predict the influence that anglers have on fish communities and to allow managers to sustainable manage exploited fish populations in recreational fisheries.
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
Possible Origin of Efficient Navigation in Small Worlds
NASA Astrophysics Data System (ADS)
Hu, Yanqing; Wang, Yougui; Li, Daqing; Havlin, Shlomo; di, Zengru
2011-03-01
The small-world phenomenon is one of the most important properties found in social networks. It includes both short path lengths and efficient navigation between two individuals. It is found by Kleinberg that navigation is efficient only if the probability density distribution of an individual to have a friend at distance r scales as P(r)˜r-1. Although this spatial scaling is found in many empirical studies, the origin of how this scaling emerges is still missing. In this Letter, we propose the origin of this scaling law using the concept of entropy from statistical physics and show that this scaling is the result of optimization of collecting information in social networks.
Reactions and Transport: Diffusion, Inertia, and Subdiffusion
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Fedotov, Sergei; Horsthemke, Werner
Particles, such as molecules, atoms, or ions, and individuals, such as cells or animals, move in space driven by various forces or cues. In particular, particles or individuals can move randomly, undergo velocity jump processes or spatial jump processes [333]. The steps of the random walk can be independent or correlated, unbiased or biased. The probability density function (PDF) for the jump length can decay rapidly or exhibit a heavy tail. Similarly, the PDF for the waiting time between successive jumps can decay rapidly or exhibit a heavy tail. We will discuss these various possibilities in detail in Chap. 3. Below we provide an introduction to three transport processes: standard diffusion, transport with inertia, and anomalous diffusion.
NASA Astrophysics Data System (ADS)
Issaadi, N.; Hamami, A. A.; Belarbi, R.; Aït-Mokhtar, A.
2017-10-01
In this paper, spatial variabilities of some transfer and storage properties of a concrete wall were assessed. The studied parameters deal with water porosity, water vapor permeability, intrinsic permeability and water vapor sorption isotherms. For this purpose, a concrete wall was built in the laboratory and specimens were periodically taken and tested. The obtained results allow highlighting a statistical estimation of the mean value, the standard deviation and the spatial correlation length of the studied fields for each parameter. These results were discussed and a statistical analysis was performed in order to assess for each of these parameters the appropriate probability density function.
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and of how the wave function presented by it leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
Effects of footwear and stride length on metatarsal strains and failure in running.
Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent
2017-11-01
The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal - finite element modeling based on motion analysis and computed tomography data were used to quantify metatarsal strains and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes increased strains for all metatarsals by 28.7% (SD 6.4%; p<0.001) and probability of failure for metatarsals 2-4 by 17.3% (SD 14.3%; p≤0.005). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p≤0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture, however the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
From Fractal Trees to Deltaic Networks
NASA Astrophysics Data System (ADS)
Cazanacli, D.; Wolinsky, M. A.; Sylvester, Z.; Cantelli, A.; Paola, C.
2013-12-01
Geometric networks that capture many aspects of natural deltas can be constructed from simple concepts from graph theory and normal probability distributions. Fractal trees with symmetrical geometries are the result of replicating two simple geometric elements, line segments whose lengths decrease and bifurcation angles that are commonly held constant. Branches could also have a thickness, which in the case of natural distributary systems is the equivalent of channel width. In river- or wave-dominated natural deltas, the channel width is a function of discharge. When normal variations around the mean values for length, bifurcating angles, and discharge are applied, along with either pruning of 'clashing' branches or merging (equivalent to channel confluence), fractal trees start resembling natural deltaic networks, except that the resulting channels are unnaturally straight. Introducing a bifurcation probability fewer, naturally curved channels are obtained. If there is no bifurcation, the direction of each new segment depends on the direction the previous segment upstream (correlated random walk) and, to a lesser extent, on a general direction of growth (directional bias). When bifurcation occurs, the resulting two directions also depend on the bifurcation angle and the discharge split proportions, with the dominant branch following the direction of the upstream parent channel closely. The bifurcation probability controls the channel density and, in conjunction with the variability of the directional angles, the overall curvature of the channels. The growth of the network in effect is associated with net delta progradation. The overall shape and shape evolution of the delta depend mainly on the bifurcation angle average size and angle variability coupled with the degree of dominant direction dependency (bias). The proposed algorithm demonstrates how, based on only a few simple rules, a wide variety of channel networks resembling natural deltas, can be replicated. Network Example
NASA Astrophysics Data System (ADS)
Khalaf, E.; Skvortsov, M. A.; Ostrovsky, P. M.
2016-03-01
We study electron transport at the edge of a generic disordered two-dimensional topological insulator, where some channels are topologically protected from backscattering. Assuming the total number of channels is large, we consider the edge as a quasi-one-dimensional quantum wire and describe it in terms of a nonlinear sigma model with a topological term. Neglecting localization effects, we calculate the average distribution function of transmission probabilities as a function of the sample length. We mainly focus on the two experimentally relevant cases: a junction between two quantum Hall (QH) states with different filling factors (unitary class) and a relatively thick quantum well exhibiting quantum spin Hall (QSH) effect (symplectic class). In a QH sample, the presence of topologically protected modes leads to a strong suppression of diffusion in the other channels already at scales much shorter than the localization length. On the semiclassical level, this is accompanied by the formation of a gap in the spectrum of transmission probabilities close to unit transmission, thereby suppressing shot noise and conductance fluctuations. In the case of a QSH system, there is at most one topologically protected edge channel leading to weaker transport effects. In order to describe `topological' suppression of nearly perfect transparencies, we develop an exact mapping of the semiclassical limit of the one-dimensional sigma model onto a zero-dimensional sigma model of a different symmetry class, allowing us to identify the distribution of transmission probabilities with the average spectral density of a certain random-matrix ensemble. We extend our results to other symmetry classes with topologically protected edges in two dimensions.
Faster computation of exact RNA shape probabilities.
Janssen, Stefan; Giegerich, Robert
2010-03-01
Abstract shape analysis allows efficient computation of a representative sample of low-energy foldings of an RNA molecule. More comprehensive information is obtained by computing shape probabilities, accumulating the Boltzmann probabilities of all structures within each abstract shape. Such information is superior to free energies because it is independent of sequence length and base composition. However, up to this point, computation of shape probabilities evaluates all shapes simultaneously and comes with a computation cost which is exponential in the length of the sequence. We device an approach called RapidShapes that computes the shapes above a specified probability threshold T by generating a list of promising shapes and constructing specialized folding programs for each shape to compute its share of Boltzmann probability. This aims at a heuristic improvement of runtime, while still computing exact probability values. Evaluating this approach and several substrategies, we find that only a small proportion of shapes have to be actually computed. For an RNA sequence of length 400, this leads, depending on the threshold, to a 10-138 fold speed-up compared with the previous complete method. Thus, probabilistic shape analysis has become feasible in medium-scale applications, such as the screening of RNA transcripts in a bacterial genome. RapidShapes is available via http://bibiserv.cebitec.uni-bielefeld.de/rnashapes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prentice, John K.; Gardner, David Randall
A methodology was developed for computing the probability that the sensor dart for the 'Near Real-Time Site Characterization for Assured HDBT Defeat' Grand-Challenge LDRD project will survive deployment over a forested region. The probability can be decomposed into three approximately independent probabilities that account for forest coverage, branch density and the physics of an impact between the dart and a tree branch. The probability that a dart survives an impact with a tree branch was determined from the deflection induced by the impact. If a dart that was deflected so that it impacted the ground at an angle of attackmore » exceeding a user-specified, threshold value, the dart was assumed to not survive the impact with the branch; otherwise it was assumed to have survived. A computer code was developed for calculating dart angle of attack at impact with the ground and a Monte Carlo scheme was used to calculate the probability distribution of a sensor dart surviving an impact with a branch as a function of branch radius, length, and height from the ground. Both an early prototype design and the current dart design were used in these studies. As a general rule of thumb, it we observed that for reasonably generic trees and for a threshold angle of attack of 5{sup o} (which is conservative for dart survival), the probability of reaching the ground with an angle of attack less than the threshold is on the order of 30% for the prototype dart design and 60% for the current dart design, though these numbers should be treated with some caution.« less
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking zeta sub b(t) is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
High throughput nonparametric probability density estimation
Farmer, Jenny
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
NASA Astrophysics Data System (ADS)
Finkelshtein, D.; Kondratiev, Yu.; Kutoviy, O.; Molchanov, S.; Zhizhina, E.
2014-10-01
We consider birth-and-death stochastic evolution of genotypes with different lengths. The genotypes might mutate, which provides a stochastic changing of lengths by a free diffusion law. The birth and death rates are length dependent, which corresponds to a selection effect. We study an asymptotic behavior of a density for an infinite collection of genotypes. The cases of space homogeneous and space heterogeneous densities are considered.
The KTB apatite fission-track profiles: Building on a firm foundation?
NASA Astrophysics Data System (ADS)
Wauschkuhn, B.; Jonckheere, R.; Ratschbacher, L.
2015-10-01
Deep boreholes serve as natural laboratories for testing thermochronometers under geological conditions. The Kontinentale Tiefbohrung (KTB) is an interesting candidate because the geological evidence suggests that approximate isothermal holding since the last documented exhumation in the Late Cretaceous to Palaeocene is a reasonable assumption for the thermal histories of the KTB samples. We report 30 new apatite fission-track ages and 50 new mean confined track lengths determined on cores from the 4 km deep pilot hole. The ϕ- and ζ-external detector ages are consistent with the population ages from earlier studies and together define a clear age profile. The mean track lengths from this and earlier studies reveal the effects of experimental factors. The measured age and length profiles are compared with the predictions of 24 annealing models for isothermal holding. There are clear discrepancies between the measured and calculated profiles. Down to 1.5 km depth, the measured mean track lengths are shorter than the predicted. The balance of methodological evidence indicates that this is due to seasoning, i.e., a shortening of the fossil confined tracks without attendant age reduction. From 2.5 to 4.0 km depth, the mean track lengths are longer than the predictions. This suggests that the bias model that weights the probabilities of observing tracks of different length and which is based on experiments relating surface track densities to mean track lengths is not appropriate for confined tracks. Experimental and methodological factors are sometimes difficult to disentangle, but present a sufficient margin for there to be no need to go against the independent geological evidence. Unknown geological events cannot be ruled out but their existence cannot be inferred from the fission-track data alone, much less can the nature or magnitude of such events be specified.
Immortality of Cu damascene interconnects
NASA Astrophysics Data System (ADS)
Hau-Riege, Stefan P.
2002-04-01
We have studied short-line effects in fully-integrated Cu damascene interconnects through electromigration experiments on lines of various lengths and embedded in different dielectric materials. We compare these results with results from analogous experiments on subtractively-etched Al-based interconnects. It is known that Al-based interconnects exhibit three different behaviors, depending on the magnitude of the product of current density, j, and line length, L: For small values of (jL), no void nucleation occurs, and the line is immortal. For intermediate values, voids nucleate, but the line does not fail because the current can flow through the higher-resistivity refractory-metal-based shunt layers. Here, the resistance of the line increases but eventually saturates, and the relative resistance increase is proportional to (jL/B), where B is the effective elastic modulus of the metallization system. For large values of (jL/B), voiding leads to an unacceptably high resistance increase, and the line is considered failed. By contrast, we observed only two regimes for Cu-based interconnects: Either the resistance of the line stays constant during the duration of the experiment, and the line is considered immortal, or the line fails due to an abrupt open-circuit failure. The absence of an intermediate regime in which the resistance saturates is due to the absence of a shunt layer that is able to support a large amount of current once voiding occurs. Since voids nucleate much more easily in Cu- than in Al-based interconnects, a small fraction of short Cu lines fails even at low current densities. It is therefore more appropriate to consider the probability of immortality in the case of Cu rather than assuming a sharp boundary between mortality and immortality. The probability of immortality decreases with increasing amount of material depleted from the cathode, which is proportional to (jL2/B) at steady state. By contrast, the immortality of Al-based interconnects is described by (jL) if no voids nucleate, and (jL/B) if voids nucleate.
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
Effect of Segmented Electrode Length on the Performances of an Aton-Type Hall Thruster
NASA Astrophysics Data System (ADS)
Duan, Ping; Bian, Xingyu; Cao, Anning; Liu, Guangrui; Chen, Long; Yin, Yan
2016-05-01
The influences of the low-emissive graphite segmented electrode placed near the channel exit on the discharge characteristics of a Hall thruster are studied using the particle-in-cell method. A two-dimensional physical model is established according to the Hall thruster discharge channel configuration. The effects of electrode length on the potential, ion density, electron temperature, ionization rate and discharge current are investigated. It is found that, with the increasing of the segmented electrode length, the equipotential lines bend towards the channel exit, and approximately parallel to the wall at the channel surface, the radial velocity and radial flow of ions are increased, and the electron temperature is also enhanced. Due to the conductive characteristic of electrodes, the radial electric field and the axial electron conductivity near the wall are enhanced, and the probability of the electron-atom ionization is reduced, which leads to the degradation of the ionization rate in the discharge channel. However, the interaction between electrons and the wall enhances the near wall conductivity, therefore the discharge current grows along with the segmented electrode length, and the performance of the thruster is also affected. supported by National Natural Science Foundation of China (Nos. 11375039 and 11275034) and the Key Project of Science and Technology of Liaoning Province, China (No. 2011224007) and the Fundamental Research Funds for the Central Universities, China (No. 3132014328)
This paper presents measurements of roughness length performed in a wind tunnel for low roughness density. The experiments were performed with both compact and porous obstacles (clusters), in order to simulate the behavior of sparsely vegetated surfaces.
Fracture network evaluation program (FraNEP): A software for analyzing 2D fracture trace-line maps
NASA Astrophysics Data System (ADS)
Zeeb, Conny; Gomez-Rivas, Enrique; Bons, Paul D.; Virgo, Simon; Blum, Philipp
2013-10-01
Fractures, such as joints, faults and veins, strongly influence the transport of fluids through rocks by either enhancing or inhibiting flow. Techniques used for the automatic detection of lineaments from satellite images and aerial photographs, LIDAR technologies and borehole televiewers significantly enhanced data acquisition. The analysis of such data is often performed manually or with different analysis software. Here we present a novel program for the analysis of 2D fracture networks called FraNEP (Fracture Network Evaluation Program). The program was developed using Visual Basic for Applications in Microsoft Excel™ and combines features from different existing software and characterization techniques. The main novelty of FraNEP is the possibility to analyse trace-line maps of fracture networks applying the (1) scanline sampling, (2) window sampling or (3) circular scanline and window method, without the need of switching programs. Additionally, binning problems are avoided by using cumulative distributions, rather than probability density functions. FraNEP is a time-efficient tool for the characterisation of fracture network parameters, such as density, intensity and mean length. Furthermore, fracture strikes can be visualized using rose diagrams and a fitting routine evaluates the distribution of fracture lengths. As an example of its application, we use FraNEP to analyse a case study of lineament data from a satellite image of the Oman Mountains.
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels
NASA Technical Reports Server (NTRS)
Raj, S. V.; Ghosn, L. J.
2007-01-01
A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure and torsional loads. Theoretical failure plastic collapse maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes assuming three failure plastic collapse modes. These maps show that the 17-4PH sandwiched foam panels can fail by either the yielding of the face sheets, yielding of the foam core or wrinkling of the face sheets depending on foam relative density, the magnitude of t/L and the loading mode. The design envelop of a generic fan blade is superimposed on the maps to provide valuable insights on the probable failure modes in a sandwiched foam fan blade.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000-km2(95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
Stovall, W.K.; Houghton, Bruce F.; Gonnermann, H.; Fagents, S.A.; Swanson, D.A.
2011-01-01
Hawaiian eruptions are characterized by fountains of gas and ejecta, sustained for hours to days that reach tens to hundreds of meters in height. Quantitative analysis of the pyroclastic products from the 1959 eruption of K??lauea Iki, K??lauea volcano, Hawai'i, provides insights into the processes occurring during typical Hawaiian fountaining activity. This short-lived but powerful eruption contained 17 fountaining episodes and produced a cone and tephra blanket as well as a lava lake that interacted with the vent and fountain during all but the first episode of the eruption, the focus of this paper. Microtextural analysis of Hawaiian fountaining products from this opening episode is used to infer vesiculation processes within the fountain and shallow conduit. Vesicle number densities for all clasts are high (106-107 cm-3). Post-fragmentation expansion of bubbles within the thermally-insulated fountain overprints the pre-fragmentation bubble populations, leading to a reduction in vesicle number density and increase in mean vesicle size. However, early quenched rims of some clasts, with vesicle number densities approaching 107 cm-3, are probably a valid approximation to magma conditions near fragmentation. The extent of clast evolution from low vesicle-to-melt ratio and corresponding high vesicle number density to higher vesicle-to-melt ratio and lower vesicle-number density corresponds to the length of residence time within the fountain. ?? 2010 Springer-Verlag.
Chen, S. N.; Iwawaki, T.; Morita, K.; Antici, P.; Baton, S. D.; Filippi, F.; Habara, H.; Nakatsutsumi, M.; Nicolaï , P.; Nazarov, W.; Rousseaux, C.; Starodubstev, M.; Tanaka, K. A.; Fuchs, J.
2016-01-01
The ability to produce long-scale length (i.e. millimeter scale-length), homogeneous plasmas is of interest in studying a wide range of fundamental plasma processes. We present here a validated experimental platform to create and diagnose uniform plasmas with a density close or above the critical density. The target consists of a polyimide tube filled with an ultra low-density plastic foam where it was heated by x-rays, produced by a long pulse laser irradiating a copper foil placed at one end of the tube. The density and temperature of the ionized foam was retrieved by using x-ray radiography and proton radiography was used to verify the uniformity of the plasma. Plasma temperatures of 5–10 eV and densities around 1021 cm−3 are measured. This well-characterized platform of uniform density and temperature plasma is of interest for experiments using large-scale laser platforms conducting High Energy Density Physics investigations. PMID:26923471
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (???1 marten/10.2 km2) densities within 8 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
Fritts, Karen R.; Kilb, Debi
2009-01-01
It has been traditionally held that aftershocks occur within one to two fault lengths of the mainshock. Here we demonstrate that this perception has been shaped by the sensitivity of seismic networks. The 31 October 2001 Mw 5.0 and 12 June 2005 Mw 5.2 Anza mainshocks in southern California occurred in the middle of the densely instrumented ANZA seismic network and thus were unusually well recorded. For the June 2005 event, aftershocks as small as M 0.0 could be observed stretching for at least 50 km along the San Jacinto fault even though the mainshock fault was only ∼4.5 km long. It was hypothesized that an observed aseismic slipping patch produced a spatially extended aftershock-triggering source, presumably slowing the decay of aftershock density with distance and leading to a broader aftershock zone. We find, however, the decay of aftershock density with distance for both Anza sequences to be similar to that observed elsewhere in California. This indicates there is no need for an additional triggering mechanism and suggests that given widespread dense instrumentation, aftershock sequences would routinely have footprints much larger than currently expected. Despite the large 2005 aftershock zone, we find that the probability that the 2005 Anza mainshock triggered the M 4.9 Yucaipa mainshock, which occurred 4.2 days later and 72 km away, to be only 14%±1%. This probability is a strong function of the time delay; had the earthquakes been separated by only an hour, the probability of triggering would have been 89%.
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of stationary random envelope proposed by Cramer and Leadbetter, is extended to the envelope of nonstationary random process possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to the earthquake engineering problems are demonstrated in detail.
The large-scale gravitational bias from the quasi-linear regime.
NASA Astrophysics Data System (ADS)
Bernardeau, F.
1996-08-01
It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients C_p,q_ defined by <δ^p^({vec}(x)_1_)δ^q^({vec}(x)_2_)>_c_=C_p,q_ <δ^2^({vec}(x))>^p+q-2^ <δ({vec}(x)_1_)δ({vec}(x)_2_)> where δ({vec}(x)) is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large separation approximation in an explicit numerical Monte Carlo integration of the C_2,1_ parameter as a function of |{vec}(x)_1_-{vec}(x)_2_|. A straightforward application is the calculation of the large-scale ``bias'' properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in details and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula for P(F) ∝ exp(-AF(2)), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry
2013-02-20
This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).
Effect of plantation density on kraft pulp production from red pine (Pinus resinosa Ait.)
J.Y. Zhu; G.C. Myers
2006-01-01
Red pine (Pinus resinosa Ait.) butt logs from 38 year old research plots were used to study the effect of plantation stand density on kraft pulp production. Results indicate that plantation stand density can affect pulp yield, unrefined pulp mean fibre length, and the response of pulp fibre length to pulp refining. However, the effect of plantation stand density on...
MIZUMACHI, ERI; MORI, AKIRA; OSAWA, NAOYA; AKIYAMA, REIKO; TOKUCHI, NAOKO
2006-01-01
• Background and Aims Plants have the ability to compensate for damage caused by herbivores. This is important to plant growth, because a plant cannot always avoid damage, even if it has developed defence mechanisms against herbivores. In previous work, we elucidated the herbivory-induced compensatory response of Quercus (at both the individual shoot and whole sapling levels) in both low- and high-nutrient conditions throughout one growing season. In this study, we determine how the compensatory growth of Quercus serrata saplings is achieved at different nutrient levels. • Methods Quercus serrata saplings were grown under controlled conditions. Length, number of leaves and percentage of leaf area lost on all extension units (EUs) were measured. • Key Results Both the probability of flushing and the length of subsequent EUs significantly increased with an increase in the length of the parent EU. The probability of flushing increased with an increase in leaf damage of the parent EU, but the length of subsequent EUs decreased. This indicates that EU growth is fundamentally regulated at the individual EU level. The probabilities of a second and third flush were significantly higher in plants in high-nutrient soil than those in low-nutrient soil. The subsequent EUs of damaged saplings were also significantly longer at high-nutrient conditions. • Conclusions An increase in the probability of flushes in response to herbivore damage is important for damaged saplings to produce new EUs; further, shortening the length of EUs helps to effectively reproduce foliage lost by herbivory. The probability of flushing also varied according to soil nutrient levels, suggesting that the compensatory growth of individual EUs in response to local damage levels is affected by the nutrients available to the whole sapling. PMID:16709576
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence ofmore » speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.« less
Kallio, Eva R.; Koskela, Esa; Lonn, Eija
2017-01-01
The loci arginine vasopressin receptor 1a (avpr1a) and oxytocin receptor (oxtr) have evolutionarily conserved roles in vertebrate social and sexual behaviour. Allelic variation at a microsatellite locus in the 5′ regulatory region of these genes is associated with fitness in the bank vole Myodes glareolus. Given the low frequency of long and short alleles at these microsatellite loci in wild bank voles, we used breeding trials to determine whether selection acts against long and short alleles. Female bank voles with intermediate length avpr1a alleles had the highest probability of breeding, while male voles whose avpr1a alleles were very different in length had reduced probability of breeding. Moreover, there was a significant interaction between male and female oxtr genotypes, where potential breeding pairs with dissimilar length alleles had reduced probability of breeding. These data show how genetic variation at microsatellite loci associated with avpr1a and oxtr is associated with fitness, and highlight complex patterns of selection at these loci. More widely, these data show how stabilizing selection might act on allele length frequency distributions at gene-associated microsatellite loci. PMID:29237850
NASA Astrophysics Data System (ADS)
Yoon, Min-Seung; Ko, Min-Ku; Kim, Bit-Na; Kim, Byung-Joon; Park, Yong-Bae; Joo, Young-Chang
2008-04-01
The relationship between the threshold current density and the critical line length in eutectic SnPb and SnAgCu electromigrations were examined using solder lines with the various lengths ranging from 100to1000μm. When the electron wind-force was balanced by the back-stress gradient force, the net flux of electromigration is zero, at which the current density and line length are defined as the threshold current density and the critical length, respectively. It was found that in SnAgCu electromigration, the 1/L dependence on the threshold current density showed good agreement, whereas the threshold current densities of the eutectic SnPb deviated from the 1/L dependence. The balance between the electron wind-force and the back-stress gradient force was the main factor determining the threshold product of SnAgCu electromigration. On the other hand, in the case of eutectic SnPb, the chemical driving force is contributed as a back-flux force in addition to the back-stress gradient force. The existence of the chemical driving force was caused by the nonequilibrium Pb concentration inside the Pb-rich phases between the cathode and anode during the electromigration procedure.
The role of skin biopsy in differentiating small-fiber neuropathy from ganglionopathy.
Provitera, V; Gibbons, C H; Wendelschafer-Crabb, G; Donadio, V; Vitale, D F; Loavenbruck, A; Stancanelli, A; Caporaso, G; Liguori, R; Wang, N; Santoro, L; Kennedy, W R; Nolano, M
2018-06-01
We aimed to test the clinical utility of the leg:thigh intraepidermal nerve-fiber (IENF) density ratio as a parameter to discriminate between length-dependent small-fiber neuropathy (SFN) and small-fiber sensory ganglionopathy (SFSG) in subjects with signs and symptoms of small-fiber pathology. We retrospectively evaluated thigh and leg IENF density in 314 subjects with small-fiber pathology (173 with distal symmetrical length-dependent SFN and 141 with non-length-dependent SFSG). A group of 288 healthy subjects was included as a control group. The leg:thigh IENF density ratio was calculated for all subjects. We used receiver operating characteristic curve analyses to assess the ability of this parameter to discriminate between length-dependent SFN and SFSG, and the decision curve analysis to estimate its net clinical benefit. In patients with neuropathy, the mean IENF density was 14.8 ± 6.8/mm at the thigh (14.0 ± 6.9/mm in length-dependent SFN and 15.9 ± 6.7/mm in patients with SFSG) and 7.5 ± 4.5/mm at the distal leg (5.4 ± 3.2/mm in patients with length-dependent SFN and 10.1 ± 4.6/mm in patients with SFSG). The leg:thigh IENF density ratio was significantly (P < 0.01) lower in patients with length-dependent SFN (0.44 ± 0.23) compared with patients with SFSG (0.68 ± 0.28). The area under the curve of the receiver operating characteristic analysis to discriminate between patients with length-dependent SFN and SFSG was 0.79. The decision curve analysis demonstrated the clinical utility of this parameter. The leg:thigh IENF ratio represents a valuable tool in the differential diagnosis between SFSG and length-dependent SFN. © 2018 EAN.
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms (overlapping method and the probability-based method) for design space calculation were compared by using the data collected from extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including simulation number, step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 times of simulation and 0.02 for the calculation step length can lead to a satisfactory design space. In general, the overlapping method is easy to understand, and can be realized by several kinds of commercial software without coding programs, but the reliability of the process evaluation indexes when operating in the design space is not indicated. Probability-based method is complex in calculation, but can provide the reliability to ensure that the process indexes can reach the standard within the acceptable probability threshold. In addition, there is no probability mutation in the edge of design space by probability-based method. Therefore, probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.
Chevallier, Maguelonne; Krauth, Werner
2007-11-01
We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonic partition function, we use elementary considerations to show that in a box of size L3 the sum of the cycle probabilities of length k>L2 equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the pik in the thermodynamic limit. We also determine the function pik for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
NASA Astrophysics Data System (ADS)
Li, Mei; Parrot, Michel
2018-02-01
Results of a statistical variation of total ion density observed in the vicinity of epicenters as well as around magnetically conjugated points of earthquakes are presented in this paper. Two data sets are used: the ion density measured by DEMETER during about 6.5 years and the list of strong earthquakes (MW ≥ 4.8) occurring globally during this period (14,764 earthquakes in total). First of all, ionospheric perturbations with 23-120 s observation time corresponding to spatial scales of 160-840 km are automatically detected by a software (64,287 anomalies in total). Second, it is checked if a perturbation could be associated either with the epicenter of an earthquake or with its magnetically conjugated point (distance < 1500 km and time < 15 days before the earthquake). The index Kp < 3 is also considered in order to reduce the effect of the geomagnetic activity on the ionosphere during this period. The results show that it is possible to detect variations of the ionospheric parameters above the epicenter areas as well as above their conjugated points. About one third of the earthquakes are detected with ionospheric influence on both sides of the Earth. There is a trend showing that the perturbation length increases as the magnitude of the detected EQs but it is more obvious for large magnitude. The probability that a perturbation appears is higher on the day of the earthquake and then gradually decreases when the time before the earthquake increases. The spatial distribution of perturbations shows that the probability of perturbations appearing southeast of the epicenter before an earthquake is a little bit higher and that there is an obvious trend because perturbations appear west of the conjugated point of an earthquake.
Quasineutral plasma expansion into infinite vacuum as a model for parallel ELM transport
NASA Astrophysics Data System (ADS)
Moulton, D.; Ghendrih, Ph; Fundamenski, W.; Manfredi, G.; Tskhakaya, D.
2013-08-01
An analytic solution for the expansion of a plasma into vacuum is assessed for its relevance to the parallel transport of edge localized mode (ELM) filaments along field lines. This solution solves the 1D1V Vlasov-Poisson equations for the adiabatic (instantaneous source), collisionless expansion of a Gaussian plasma bunch into an infinite space in the quasineutral limit. The quasineutral assumption is found to hold as long as λD0/σ0 ≲ 0.01 (where λD0 is the initial Debye length at peak density and σ0 is the parallel length of the Gaussian filament), a condition that is physically realistic. The inclusion of a boundary at x = L and consequent formation of a target sheath is found to have a negligible effect when L/σ0 ≳ 5, a condition that is physically plausible. Under the same condition, the target flux densities predicted by the analytic solution are well approximated by the ‘free-streaming’ equations used in previous experimental studies, strengthening the notion that these simple equations are physically reasonable. Importantly, the analytic solution predicts a zero heat flux density so that a fluid approach to the problem can be used equally well, at least when the source is instantaneous. It is found that, even for JET-like pedestal parameters, collisions can affect the expansion dynamics via electron temperature isotropization, although this is probably a secondary effect. Finally, the effect of a finite duration, τsrc, for the plasma source is investigated. As is found for an instantaneous source, when L/σ0 ≳ 5 the presence of a target sheath has a negligible effect, at least up to the explored range of τsrc = L/cs (where cs is the sound speed at the initial temperature).
Yeganegi, Saeid; Soltanabadi, Azim; Farmanzadeh, Davood
2012-09-20
Structures and dynamics of nine geminal dicationic ionic liquids (DILs) Cn(mim)2X2, where n = 3, 6, and 9 and X = PF6(-), BF4(-), and Br(-), were studied by molecular dynamic simulations (J. Phys. Chem.B2004, 108, 2038-2047). A force field with a minor modification for C3(mim)2 × 2 was adopted for the simulations. Densities, detailed microscopic structures, mean-square displacements (MSD), and self-diffusivities for various ion pairs from MD simulations have been presented. The calculated densities for C9(mim)2X2 (X = Br(-) and BF4(-)) agreed well with the experimental values. The calculated RDFs show that anions are well organized around the imidazolium rings. The calculated RDFs indicate that, unlike the mono cationic ILs, the anions and cations in DILs distribute homogeneously. Enthalpies of vaporization were calculated and correlated with the structural features of DILs. The local structure of C9(mim)2X2 (X = Br, PF6) was examined by the spatial distribution function (SDF). The calculated SDFs show that similar trends were found by other groups for mono cationic ionic liquids (ILs). The highest probability densities are located around the imidazolium ring hydrogens. The calculated diffusion coefficients show that the ion diffusivities are 1 order of magnitude smaller than that of the mono cationic ionic liquids. The effects of alkyl chain length and anion type on the diffusion coefficient were also studied. The dynamics of the imidazolium rings and the alkyl chain in different time scales have also discussed. The calculated transference numbers show that the anions have the major role in carrying the electric current in a DIL.
Probability and Quantum Paradigms: the Interplay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kracklauer, A. F.
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a fewmore » details, this variant is appealing in its reliance on well tested concepts and technology.« less
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Ding, Mingnan; Lu, Bing-Sui; Xing, Xiangjun
2016-10-01
Self-consistent field theory (SCFT) is used to study the mean potential near a charged plate inside a m:-n electrolyte. A perturbation series is developed in terms of g=4πκb, where band1/κ are Bjerrum length and bare Debye length, respectively. To the zeroth order, we obtain the nonlinear Poisson-Boltzmann theory. For asymmetric electrolytes (m≠n), the first order (one-loop) correction to mean potential contains a secular term, which indicates the breakdown of the regular perturbation method. Using a renormalizaton group transformation, we remove the secular term and obtain a globally well-behaved one-loop approximation with a renormalized Debye length and a renormalized surface charge density. Furthermore, we find that if the counterions are multivalent, the surface charge density is renormalized substantially downwards and may undergo a change of sign, if the bare surface charge density is sufficiently large. Our results agrees with large MC simulation even when the density of electrolytes is relatively high.
NASA Technical Reports Server (NTRS)
Generazio, E. R.
1986-01-01
Microstructural images may be tone pulse encoded and subsequently Fourier transformed to determine the two-dimensional density of frequency components. A theory is developed relating the density of frequency components to the density of length components. The density of length components corresponds directly to the actual grain size distribution function from which the mean grain shape, size, and orientation can be obtained.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
NASA Astrophysics Data System (ADS)
Karlsen, P.; Shuba, M. V.; Beckerleg, C.; Yuko, D. I.; Kuzhir, P. P.; Maksimenko, S. A.; Ksenevich, V.; Viet, Ho; Nasibulin, A. G.; Tenne, R.; Hendry, E.
2018-01-01
We measure the conductivity spectra of thin films comprising bundled single-walled carbon nanotubes (CNTs) of different average lengths in the frequency range 0.3-1000 THz and temperature interval 10-530 K. The observed temperature-induced changes in the terahertz conductivity spectra are shown to depend strongly on the average CNT length, with a conductivity around 1 THz that increases/decreases as the temperature increases for short/long tubes. This behaviour originates from the temperature dependence of the electron scattering rate, which we obtain from Drude fits of the measured conductivity in the range 0.3-2 THz for 10 μm length CNTs. This increasing scattering rate with temperature results in a subsequent broadening of the observed THz conductivity peak at higher temperatures and a shift to lower frequencies for increasing CNT length. Finally, we show that the change in conductivity with temperature depends not only on tube length, but also varies with tube density. We record the effective conductivities of composite films comprising mixtures of WS2 nanotubes and CNTs versus CNT density for frequencies in the range 0.3-1 THz, finding that the conductivity increases/decreases for low/high density films as the temperature increases. This effect arises due to the density dependence of the effective length of conducting pathways in the composite films, which again leads to a shift and temperature dependent broadening of the THz conductivity peak.
Step styles of pedestrians at different densities
NASA Astrophysics Data System (ADS)
Wang, Jiayue; Weng, Wenguo; Boltes, Maik; Zhang, Jun; Tordeux, Antoine; Ziemer, Verena
2018-02-01
Stepping locomotion is the basis of human movement. The investigation of stepping locomotion and its affecting factors is necessary for a more realistic knowledge of human movement, which is usually referred to as walking with equal step lengths for the right and left leg. To study pedestrians’ stepping locomotion, a set of single-file movement experiments involving 39 participants of the same age walking on a highly curved oval course is conducted. The microscopic characteristics of the pedestrians including 1D Voronoi density, speed, and step length are calculated based on a projected coordinate. The influence of the projection lines with different radii on the measurement of these quantities is investigated. The step lengths from the straight and curved parts are compared using the Kolmogorov-Smirnov test. During the experiments, six different step styles are observed and the proportions of different step styles change with the density. At low density, the main step style is the stable-large step style and the step lengths of one pedestrian are almost constant. At high density, some pedestrians adjust and decrease their step lengths. Some pedestrians take relatively smaller and larger steps alternately to adapt to limited space.
Hydrologic control on the root growth of Salix cuttings at the laboratory scale
NASA Astrophysics Data System (ADS)
Bau', Valentina; Calliari, Baptiste; Perona, Paolo
2017-04-01
Riparian plant roots contribute to the ecosystem functioning and, to a certain extent, also directly affect fluvial morphodynamics, e.g. by influencing sediment transport via mechanical stabilization and trapping. There is much both scientific and engineering interest in understanding the complex interactions among riparian vegetation and river processes. For example, to investigate plant resilience to uprooting by flow, one should quantify the probability that riparian plants may be uprooted during specific flooding event. Laboratory flume experiments are of some help to this regard, but are often limited to use grass (e.g., Avena and Medicago sativa) as vegetation replicate with a number of limitations due to fundamental scaling problems. Hence, the use of small-scale real plants grown undisturbed in the actual sediment and within a reasonable time frame would be particularly helpful to obtain more realistic flume experiments. The aim of this work is to develop and tune an experimental technique to control the growth of the root vertical density distribution of small-scale Salix cuttings of different sizes and lengths. This is obtained by controlling the position of the saturated water table in the sedimentary bed according to the sediment size distribution and the cutting length. Measurements in the rhizosphere are performed by scanning and analysing the whole below-ground biomass by means of the root analysis software WinRhizo, from which root morphology statistics and the empirical vertical density distribution are obtained. The model of Tron et al. (2015) for the vertical density distribution of the below-ground biomass is used to show that experimental conditions that allow to develop the desired root density distribution can be fairly well predicted. This augments enormously the flexibility and the applicability of the proposed methodology in view of using such plants for novel flow erosion experiments. Tron, S., Perona, P., Gorla, L., Schwarz, M., Laio, F., and L. Ridolfi (2015). The signature of randomness in riparian plant root distributions. Geophys. Res. Letts., 42, 7098-7106
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density {e.g: a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaXEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac
1993-01-01
Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.
Genetic variation in natural honeybee populations, Apis mellifera capensis
NASA Astrophysics Data System (ADS)
Hepburn, Randall; Neumann, Peter; Radloff, Sarah E.
2004-09-01
Genetic variation in honeybee, Apis mellifera, populations can be considerably influenced by breeding and commercial introductions, especially in areas with abundant beekeeping. However, in southern Africa apiculture is based on the capture of wild swarms, and queen rearing is virtually absent. Moreover, the introduction of European subspecies constantly failed in the Cape region. We therefore hypothesize a low human impact on genetic variation in populations of Cape honeybees, Apis mellifera capensis. A novel solution to studying genetic variation in honeybee populations based on thelytokous worker reproduction is applied to test this hypothesis. Environmental effects on metrical morphological characters of the phenotype are separated to obtain a genetic residual component. The genetic residuals are then re-calculated as coefficients of genetic variation. Characters measured included hair length on the abdomen, width and length of wax plate, and three wing angles. The data show for the first time that genetic variation in Cape honeybee populations is independent of beekeeping density and probably reflects naturally occurring processes such as gene flow due to topographic and climatic variation on a microscale.
Extended length microchannels for high density high throughput electrophoresis systems
Davidson, James C.; Balch, Joseph W.
2000-01-01
High throughput electrophoresis systems which provide extended well-to-read distances on smaller substrates, thus compacting the overall systems. The electrophoresis systems utilize a high density array of microchannels for electrophoresis analysis with extended read lengths. The microchannel geometry can be used individually or in conjunction to increase the effective length of a separation channel while minimally impacting the packing density of channels. One embodiment uses sinusoidal microchannels, while another embodiment uses plural microchannels interconnected by a via. The extended channel systems can be applied to virtually any type of channel confined chromatography.
A variable mixing-length ratio for convection theory
NASA Technical Reports Server (NTRS)
Chan, K. L.; Wolff, C. L.; Sofia, S.
1981-01-01
It is argued that a natural choice for the local mixing length in the mixing-length theory of convection has a value proportional to the local density scale height of the convective bubbles. The resultant variable mixing-length ratio (the ratio between the mixing length and the pressure scale height) of this theory is enhanced in the superadiabatic region and approaches a constant in deeper layers. Numerical tests comparing the new mixing length successfully eliminate most of the density inversion that typically plagues conventional results. The new approach also seems to indicate the existence of granular motion at the top of the convection zone.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
He, Nianpeng; Wu, Ling; Zhou, Daowei
2004-12-01
This paper studied the clonal architecture of two divergent Leymus chinensis types (grey-green type and yellow-green type) in Songnen grassland, and compared their internode length, spacer length, interbranching length, interbranching angle, and ramet population density and height under the same habitat. The results showed that there was no significant difference in these clonal characteristics except spacer length and ramet population density between the two types of L. chinensis, and yellow-green type, with less spacer length and more ramet density than grey-green type, should be more adaptable to the resourceful habitat. Moreover, the V-indices of the clonal architecture of two divergent L. chinensis types were all close to 1, and the difference was not significant. Therefore, both of the two types belonged to typical guerilla clonal plant.
Ray, Chris; Saracco, James; Holmgren, Mandy; Wilkerson, Robert; Siegel, Rodney; Jenkins, Kurt J.; Ransom, Jason I.; Happe, Patricia J.; Boetsch, John; Huff, Mark
2017-01-01
Monitoring species in National Parks facilitates inference regarding effects of climate change on population dynamics because parks are relatively unaffected by other forms of anthropogenic disturbance. Even at early points in a monitoring program, identifying climate covariates of population density can suggest vulnerabilities to future change. Monitoring landbird populations in parks during the breeding season brings the added benefit of allowing a comparative approach to inference across a large suite of species with diverse requirements. For example, comparing resident and migratory species that vary in exposure to non-park habitats can reveal the relative importance of park effects, such as those related to local climate. We monitored landbirds using breeding-season point-count data collected during 2005–2014 in three wilderness areas of the Pacific Northwest (Mount Rainier, North Cascades, and Olympic National Parks). For 39 species, we estimated recent trends in population density while accounting for individual detection probability using Bayesian hierarchical N-mixture models. Our analyses integrated several recent developments in N-mixture modeling, incorporating interval and distance sampling to estimate distinct components of detection probability while also accommodating count intervals of varying duration, annual variation in the length and number of point-count transects, spatial autocorrelation, random effects, and covariates of detection and density. As covariates of density, we considered metrics of precipitation and temperature hypothesized to affect breeding success. We also considered effects of park and elevational stratum on trend. Regardless of model structure, we estimated stable or increasing densities during 2005–2014 for most populations. Mean trends across species were positive for migrants in every park and for residents in one park. A recent snowfall deficit in this region might have contributed to the positive trend, because population density varied inversely with precipitation-as-snow for both migrants and residents. Densities varied directly but much more weakly with mean spring temperature. Our approach exemplifies an analytical framework for estimating trends from point-count data, and for assessing the role of climatic and other spatiotemporal variables in driving those trends. Understanding population trends and the factors that drive them is critical for adaptive management and resource stewardship in the context of climate change.
Information retrieval from wide-band meteorological data - An example
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Smith, O. E.
1983-01-01
The methods proposed by Smith and Adelfang (1981) and Smith et al. (1982) are used to calculate probabilities over rectangles and sectors of the gust magnitude-gust length plane; probabilities over the same regions are also calculated from the observed distributions and a comparison is also presented to demonstrate the accuracy of the statistical model. These and other statistical results are calculated from samples of Jimsphere wind profiles at Cape Canaveral. The results are presented for a variety of wavelength bands, altitudes, and seasons. It is shown that wind perturbations observed in Jimsphere wind profiles in various wavelength bands can be analyzed by using digital filters. The relationship between gust magnitude and gust length is modeled with the bivariate gamma distribution. It is pointed out that application of the model to calculate probabilities over specific areas of the gust magnitude-gust length plane can be useful in aerospace design.
Variables Affecting Probability of Detection in Bolt Hole Eddy Current Inspection
NASA Astrophysics Data System (ADS)
Lemire, H.; Krause, T. W.; Bunn, M.; Butcher, D. J.
2009-03-01
Physical variables affecting probability of detection (POD) in a bolt-hole eddy current inspection were examined. The POD study involved simulated bolt holes in 7075-T6 aluminum coupons representative of wing areas on CC-130 and CP-140 aircraft. The data were obtained from 24 inspectors who inspected 468 coupons, containing a subset of coupons with 45 electric discharge machined notches and 72 laboratory grown fatigue cracks located at the inner surface corner of the bi-layer structures. A comparison of physical features of cracks and notches in light of skin depth effects and probe geometry was used to identify length rather than depth as the significant variable producing signal variation. Probability of detection based on length produced similar results for the two discontinuity types, except at lengths less than 0.4 mm, where POD for cracks was found to be higher than that of notches.
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
NASA Astrophysics Data System (ADS)
Suzuki, Yohichi; Seki, Kazuhiko
2018-03-01
We studied ion concentration profiles and the charge density gradient caused by electrode reactions in weak electrolytes by using the Poisson-Nernst-Planck equations without assuming charge neutrality. In weak electrolytes, only a small fraction of molecules is ionized in bulk. Ion concentration profiles depend on not only ion transport but also the ionization of molecules. We considered the ionization of molecules and ion association in weak electrolytes and obtained analytical expressions for ion densities, electrostatic potential profiles, and ion currents. We found the case that the total ion density gradient was given by the Kuramoto length which characterized the distance over which an ion diffuses before association. The charge density gradient is characterized by the Debye length for 1:1 weak electrolytes. We discuss the role of these length scales for efficient water splitting reactions using photo-electrocatalytic electrodes.
Data Encoding using Periodic Nano-Optical Features
NASA Astrophysics Data System (ADS)
Vosoogh-Grayli, Siamack
Successful trials have been made through a designed algorithm to quantize, compress and optically encode unsigned 8 bit integer values in the form of images using Nano optical features. The periodicity of the Nano-scale features (Nano-gratings) have been designed and investigated both theoretically and experimentally to create distinct states of variation (three on states and one off state). The use of easy to manufacture and machine readable encoded data in secured authentication media has been employed previously in bar-codes for bi-state (binary) models and in color barcodes for multiple state models. This work has focused on implementing 4 states of variation for unit information through periodic Nano-optical structures that separate an incident wavelength into distinct colors (variation states) in order to create an encoding system. Compared to barcodes and magnetic stripes in secured finite length storage media the proposed system encodes and stores more data. The benefits of multiple states of variation in an encoding unit are 1) increased numerically representable range 2) increased storage density and 3) decreased number of typical set elements for any ergodic or semi-ergodic source that emits these encoding units. A thorough investigation has targeted the effects of the use of multi-varied state Nano-optical features on data storage density and consequent data transmission rates. The results show that use of Nano-optical features for encoding data yields a data storage density of circa 800 Kbits/in2 via the implementation of commercially available high resolution flatbed scanner systems for readout. Such storage density is far greater than commercial finite length secured storage media such as Barcode family with maximum practical density of 1kbits/in2 and highest density magnetic stripe cards with maximum density circa 3 Kbits/in2. The numerically representable range of the proposed encoding unit for 4 states of variation is [0 255]. The number of typical set elements for an ergodic source emitting the optical encoding units compared to a bi-state encoding unit (bit) shows a 36 orders of magnitude decrease for the error probability interval of [0 0.01]. The algorithms for the proposed encoding system have been implemented in MATLAB and the Nano-optical structures have been fabricated using Electron Beam Lithography on optical medium.
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
Effect of α-damage on fission-track annealing in zircon
Kasuya, Masao; Naeser, Charles W.
1988-01-01
The thermal stability of confined fission-track lengths in four zircon samples having different spontaneous track densities (i.e., different amounts of ??-damage) has been studied by one-hour isochronal annealing experiments. The thermal stability of spontaneous track lengths is independent of initial spontaneous track density. The thermal stability of induced track lengths in pre-annealed zircon, however, is significantly higher than that of spontaneous track lengths. The results indicate that the presence of ??-damage lowers the thermal stability of fission-tracks in zircon.
Point count length and detection of forest neotropical migrant birds
Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.
3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss
NASA Astrophysics Data System (ADS)
Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.
2012-12-01
3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial & temporal distribution of waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss, lower and upper hand chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*. The spatiotemporal correlations cannot be inferred from the wave databases. In this study we use a temporal correlation of ~1 hour for the sampled wave intensities that is informed by the observed autocorrelation in the AE index, a spatial correlation length of ~100 km in the two directions perpendicular to the magnetic field, and a spatial correlation length of 5000 km in the direction parallel to the magnetic field, according to the work of Santolik et al (2003), who used multi-spacecraft measurements from Cluster to quantify the correlation length scales for equatorial chorus . We find that, despite the small correlation length scale for chorus, there remains significant variability in the model outcomes driven by variability in the chorus wave intensities.
Quantitative Risk Mapping of Urban Gas Pipeline Networks Using GIS
NASA Astrophysics Data System (ADS)
Azari, P.; Karimi, M.
2017-09-01
Natural gas is considered an important source of energy in the world. By increasing growth of urbanization, urban gas pipelines which transmit natural gas from transmission pipelines to consumers, will become a dense network. The increase in the density of urban pipelines will influence probability of occurring bad accidents in urban areas. These accidents have a catastrophic effect on people and their property. Within the next few years, risk mapping will become an important component in urban planning and management of large cities in order to decrease the probability of accident and to control them. Therefore, it is important to assess risk values and determine their location on urban map using an appropriate method. In the history of risk analysis of urban natural gas pipeline networks, the pipelines has always been considered one by one and their density in urban area has not been considered. The aim of this study is to determine the effect of several pipelines on the risk value of a specific grid point. This paper outlines a quantitative risk assessment method for analysing the risk of urban natural gas pipeline networks. It consists of two main parts: failure rate calculation where the EGIG historical data are used and fatal length calculation that involves calculation of gas release and fatality rate of consequences. We consider jet fire, fireball and explosion for investigating the consequences of gas pipeline failure. The outcome of this method is an individual risk and is shown as a risk map.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.
Lizana, L; Ambjörnsson, T
2009-11-01
We solve a nonequilibrium statistical-mechanics problem exactly, namely, the single-file dynamics of N hard-core interacting particles (the particles cannot pass each other) of size Delta diffusing in a one-dimensional system of finite length L with reflecting boundaries at the ends. We obtain an exact expression for the conditional probability density function rhoT(yT,t|yT,0) that a tagged particle T (T=1,...,N) is at position yT at time t given that it at time t=0 was at position yT,0. Using a Bethe ansatz we obtain the N -particle probability density function and, by integrating out the coordinates (and averaging over initial positions) of all particles but particle T , we arrive at an exact expression for rhoT(yT,t|yT,0) in terms of Jacobi polynomials or hypergeometric functions. Going beyond previous studies, we consider the asymptotic limit of large N , maintaining L finite, using a nonstandard asymptotic technique. We derive an exact expression for rhoT(yT,t|yT,0) for a tagged particle located roughly in the middle of the system, from which we find that there are three time regimes of interest for finite-sized systems: (A) for times much smaller than the collision time t
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
Characterizing the Recurrence of Hydrologic Droughts
NASA Astrophysics Data System (ADS)
Cancelliere, A.; Salas, J. D.
2002-12-01
Characterizing periods of deficit and drought has been an important aspect in planning and management of water resources systems for many decades. An extreme drought is a complex phenomenon that evolves through time and space in a random fashion. It may be characterized by its initiation, duration, severity (magnitude or intensity), spatial extent, and termination. These characteristics may be determined by comparing the water supply time series versus the corresponding water demand series in the area of consideration. Because the water supply quantities such as rainfall and streamflow are stochastic variables the ensuing drought characteristics are random and must be described in probabilistic terms. Let us consider a periodic stochastic water supply and a variable water demand series. A drought event is defined as a succession of consecutive periods (run) in which the water supply remains below the water demand. Thus, the drought length L (negative run length) is the number of consecutive time intervals (seasons) in which the water supply remains below the water demand, preceded and followed by (at least one season where) the water supply is equal or greater than the demand. Likewise, the difference between the water demand and the supply at time t is the magnitude of the deficit at time t so that the accumulated deficit D (drought magnitude) is the sum of the deficits over the drought duration L. In the study reported herein, the probability density functions (pdf) of drought length and drought magnitude and their low order moments are derived assuming that the underlying water supply series after is clipped by a constant or periodic water demand results in a periodic dependent binary series that is represented by a periodic two-state Markov chain. The derived pdfs allow estimating the occurrence probabilities of droughts of a given length where either the drought begins in a given season or regardless of the initial season. In addition, the return periods of droughts (based on length and magnitude) are determined. The applicability of the drought formulations is illustrated using several series of precipitation and streamflow in Sicily, Italy and Colorado, USA. The results obtained show an excellent agreement between the observed and theoretical results. In conclusion, the proposed methods appear to be a useful addition for drought analysis and characterization using stochastic methods.
The roles of the trading time risks on stock investment return and risks in stock price crashes
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Dong, Zhi-Wei; Yang, Guo-Hui; Long, Chao
2017-03-01
The roles of the trading time risks (TTRs) on stock investment return and risks are investigated in the condition of stock price crashes with Hushen300 data (CSI300) and Dow Jones Industrial Average (ˆDJI), respectively. In order to describe the TTR, we employ the escape time that the stock price drops from the maximum to minimum value in a data window length (DWL). After theoretical and empirical research on probability density function of return, the results in both ˆDJI and CSI300 indicate that: (i) As increasing DWL, the expectation of returns and its stability are weakened. (ii) An optimal TTR is related to a maximum return and minimum risk of stock investment in stock price crashes.
On the origin of Hawking mini black-holes and the cold early universe
NASA Technical Reports Server (NTRS)
Canuto, V.
1978-01-01
A simple argument is outlined leading to the result that the mass of mini black holes exploding today is 10 to the 15th power g. A mathematical model is discussed which indicates that the equation of state is greatly softened in the high-density regime and a phase transition may exist, such that any length (particularly very small sizes) will grow with time irrespective of its relation to the size of the particle horizon. It is shown that the effect of spin-2 mesons with respect to the equation of state is to soften the pressure and make it negative. An analytical expression is given for the probability that any particular region in a hot early universe will evolve into a black hole.
High-efficiency acceleration in the laser wakefield by a linearly increasing plasma density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Kegong; Wu, Yuchi; Zhu, Bin
The acceleration length and the peak energy of the electron beam are limited by the dephasing effect in the laser wakefield acceleration with uniform plasma density. Based on 2D-3V particle in cell simulations, the effects of a linearly increasing plasma density on the electron acceleration are investigated broadly. Comparing with the uniform plasma density, because of the prolongation of the acceleration length and the gradually increasing accelerating field due to the increasing plasma density, the electron beam energy is twice higher in moderate nonlinear wakefield regime. Because of the lower plasma density, the linearly increasing plasma density can also avoidmore » the dark current caused by additional injection. At the optimal acceleration length, the electron energy can be increased from 350 MeV (uniform) to 760 MeV (linearly increasing) with the energy spread of 1.8%, the beam duration is 5 fs and the beam waist is 1.25 μm. This linearly increasing plasma density distribution can be achieved by a capillary with special gas-filled structure, and is much more suitable for experiment.« less
Measurement of operator workload in an information processing task
NASA Technical Reports Server (NTRS)
Jenney, L. L.; Older, H. J.; Cameron, B. J.
1972-01-01
This was an experimental study to develop an improved methodology for measuring workload in an information processing task and to assess the effects of shift length and communication density (rate of information flow) on the ability to process and classify verbal messages. Each of twelve subjects was exposed to combinations of three shift lengths and two communication densities in a counterbalanced, repeated measurements experimental design. Results indicated no systematic variation in task performance measures or in other dependent measures as a function of shift length or communication density. This is attributed to the absence of a secondary loading task, an insufficiently taxing work schedule, and the lack of psychological stress. Subjective magnitude estimates of workload showed fatigue (and to a lesser degree, tension) to be a power function of shift length. Estimates of task difficulty and fatigue were initially lower but increased more sharply over time under low density than under high density conditions. An interpretation of findings and recommedations for furture research are included. This research has major implications to human workload problems in information processing of air traffic control verbal data.
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
A Two-length Scale Turbulence Model for Single-phase Multi-fluid Mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarzkopf, J. D.; Livescu, D.; Baltzer, J. R.
2015-09-08
A two-length scale, second moment turbulence model (Reynolds averaged Navier-Stokes, RANS) is proposed to capture a wide variety of single-phase flows, spanning from incompressible flows with single fluids and mixtures of different density fluids (variable density flows) to flows over shock waves. The two-length scale model was developed to address an inconsistency present in the single-length scale models, e.g. the inability to match both variable density homogeneous Rayleigh-Taylor turbulence and Rayleigh-Taylor induced turbulence, as well as the inability to match both homogeneous shear and free shear flows. The two-length scale model focuses on separating the decay and transport length scales,more » as the two physical processes are generally different in inhomogeneous turbulence. This allows reasonable comparisons with statistics and spreading rates over such a wide range of turbulent flows using a common set of model coefficients. The specific canonical flows considered for calibrating the model include homogeneous shear, single-phase incompressible shear driven turbulence, variable density homogeneous Rayleigh-Taylor turbulence, Rayleigh-Taylor induced turbulence, and shocked isotropic turbulence. The second moment model shows to compare reasonably well with direct numerical simulations (DNS), experiments, and theory in most cases. The model was then applied to variable density shear layer and shock tube data and shows to be in reasonable agreement with DNS and experiments. Additionally, the importance of using DNS to calibrate and assess RANS type turbulence models is highlighted.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and p, is observed. A natural statistical interference experiment using Young diagrams has confirmed the resultsmore » of numerical experiments. (laser applications and other topics in quantum electronics)« less
NASA Technical Reports Server (NTRS)
Schwartz, H. J.
1976-01-01
A Monte Carlo simulation process was used to develop the U.S. daily range requirements for an electric vehicle from probability distributions of trip lengths and frequencies and average annual mileage data. The analysis shows that a car in the U.S. with a practical daily range of 82 miles (132 km) can meet the needs of the owner on 95% of the days of the year, or at all times other than his long vacation trips. Increasing the range of the vehicle beyond this point will not make it more useful to the owner because it will still not provide intercity transportation. A daily range of 82 miles can be provided by an intermediate battery technology level characterized by an energy density of 30 to 50 watt-hours per pound (66 to 110 W-hr/kg). Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. The implication of these results for the research goals of far-term battery systems suggests a shift in emphasis toward lower cost and greater life and away from high energy density.
The effect of α-damage on fission-track annealing in zircon
Kasuya, M.; Naeser, C.W.
1988-01-01
The thermal stability of confined fission-track lengths in four zircon samples having different spontaneous track densities (i.e. different amounts of ??-damage) has been studied by one hour isochronal annealing experiments. The thermal stability of spontaneous track lengths is independent of initial spontaneous track density. The thermal stability of induced track lengths in pre-annealed zircon, however, is significantly higher than that of spontaneous track lengths. The results indicate that the presence of ??-damage lowers the thermal stability of fission-tracks in zircon. ?? 1988.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
Within-Event and Between-Events Ground Motion Variability from Earthquake Rupture Scenarios
NASA Astrophysics Data System (ADS)
Crempien, Jorge G. F.; Archuleta, Ralph J.
2017-09-01
Measurement of ground motion variability is essential to estimate seismic hazard. Over-estimation of variability can lead to extremely high annual hazard estimates of ground motion exceedance. We explore different parameters that affect the variability of ground motion such as the spatial correlations of kinematic rupture parameters on a finite fault and the corner frequency of the moment-rate spectra. To quantify the variability of ground motion, we simulate kinematic rupture scenarios on several vertical strike-slip faults and compute ground motion using the representation theorem. In particular, for the entire suite of rupture scenarios, we quantify the within-event and the between-events ground motion variability of peak ground acceleration (PGA) and response spectra at several periods, at 40 stations—all approximately at an equal distance of 20 and 50 km from the fault. Both within-event and between-events ground motion variability increase when the slip correlation length on the fault increases. The probability density functions of ground motion tend to truncate at a finite value when the correlation length of slip decreases on the fault, therefore, we do not observe any long-tail distribution of peak ground acceleration when performing several rupture simulations for small correlation lengths. Finally, for a correlation length of 6 km, the within-event and between-events PGA log-normal standard deviations are 0.58 and 0.19, respectively, values slightly smaller than those reported by Boore et al. (Earthq Spectra, 30(3):1057-1085, 2014). The between-events standard deviation is consistently smaller than the within-event for all correlations lengths, a feature that agrees with recent ground motion prediction equations.
NASA Astrophysics Data System (ADS)
Ferreira, Rui M. L.; Ferrer-Boix, Carles; Hassan, Marwan
2015-04-01
Initiation of sediment motion is a classic problem of sediment and fluid mechanics that has been studied at wide range of scales. By analysis at channel scale one means the investigation of a reach of a stream, sufficiently large to encompass a large number of sediment grains but sufficiently small not to experience important variations in key hydrodynamic variables. At this scale, and for poorly-sorted hydraulically rough granular beds, existing studies show a wide variation of the value of the critical Shields parameter. Such uncertainty constitutes a problem for engineering studies. To go beyond Shields paradigm for the study of incipient motion at channel scale this problem can be can be cast in probabilistic terms. An empirical probability of entrainment, which will naturally account for size-selective transport, can be calculated at the scale of the bed reach, using a) the probability density functions (PDFs) of the flow velocities {{f}u}(u|{{x}n}) over the bed reach, where u is the flow velocity and xn is the location, b) the PDF of the variability of competent velocities for the entrainment of individual particles, {{f}{{up}}}({{u}p}), where up is the competent velocity, and c) the concept of joint probability of entrainment and grain size. One must first divide the mixture in into several classes M and assign a correspondent frequency p_M. For each class, a conditional PDF of the competent velocity {{f}{{up}}}({{u}p}|M) is obtained, from the PDFs of the parameters that intervene in the model for the entrainment of a single particle: [ {{u}p}/√{g(s-1){{di}}}={{Φ }u}( { {{C}k} },{{{φ}k}},ψ,{{u}p/{di}}{{{ν}(w)}} )) ] where { Ck } is a set of shape parameters that characterize the non-sphericity of the grain, { φk} is a set of angles that describe the orientation of particle axes and its positioning relatively to its neighbours, ψ is the skin friction angle of the particles, {{{u}p}{{d}i}}/{{{ν}(w)}} is a particle Reynolds number, di is the sieving diameter of the particle, g is the acceleration of gravity and {{Φ }u} is a general function. For the same class, the probability density function of the instantaneous turbulent velocities {{f}u}(u|M) can be obtained from judicious laboratory or field work. From these probability densities, the empirical conditional probability of entrainment of class M is [ P(E|M)=int-∞ +∞ {P(u>{{u}p}|M) {{f}{{up}}}({{u}p}|M)d{{u}p}} ] where P(u>{{u}p}|M)=int{{up}}+∞ {{{f}u}(u|M)du}. Employing a frequentist interpretation of probability, in an actual bed reach subjected to a succession of N (turbulent) flows, the above equation states that the fraction N P(E|M) is the number of flows in which the grains of class M are entrained. The joint probability of entrainment and class M is given by the product P(E|M){{p}M}. Hence, the channel scale empirical probability of entrainment is the marginal probability [ P(E)=sumlimitsM{P(E|M){{p}M}} ] since the classes M are mutually exclusive. Fractional bedload transport rates can be obtained from the probability of entrainment through [ {{q}s_M}={{E}M}{{ℓ }s_M} ] where {{q}s_M} is the bedload discharge in volume per unit width of size fraction M, {{E}M} is the entrainment rate per unit bed area of that size fraction, calculated from the probability of entrainment as {{E}M}=P(E|M){{p}M}(1-&lambda )d/(2T) where d is a characteristic diameter of grains on the bed surface, &lambda is the bed porosity, T is the integral length scale of the longitudinal velocity at the elevation of crests of the roughness elements and {{ℓ }s_M} is the mean displacement length of class M. Fractional transport rates were computed and compared with experimental data, determined from bedload samples collected in a 12 m long 40 cm wide channel under uniform flow conditions and sediment recirculation. The median diameter of the bulk bed mixture was 3.2 mm and the geometric standard deviation was 1.7. Shields parameters ranged from 0.027 and 0.067 while the boundary Reynolds number ranged between 220 and 376. Instantaneous velocities were measured with 2-component Laser Doppler Anemometry. The results of the probabilist model exhibit a general good agreement with the laboratory data. However the probability of entrainment of the smallest size fractions is systematically underestimated. This may be caused by phenomena that is absent from the model, for instance the increased magnitude of hydrodynamic actions following the displacement of a larger sheltering grain and the fact that the collective entrainment of smaller grains following one large turbulent event is not accounted for. This work was partially funded by FEDER, program COMPETE, and by national funds through Portuguese Foundation for Science and Technology (FCT) project RECI/ECM-HID/0371/2012.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
LINDENS: A program for lineament length and density analysis*1
NASA Astrophysics Data System (ADS)
Casas, Antonio M.; Cortés, Angel L.; Maestro, Adolfo; Soriano, M. Asunción; Riaguas, Andres; Bernal, Javier
2000-11-01
Analysis of lineaments from satellite images normally includes the determination of their orientation and density. The spatial variation in the orientation and/or number of lineaments must be obtained by means of a network of cells, the lineaments included in each cell being analysed separately. The program presented in this work, LINDENS, allows the density of lineaments (number of lineaments per km 2 and length of lineaments per km 2) to be estimated. It also provides a tool for classifying the lineaments contained in different cells, so that their orientation can be represented in frequency histograms and/or rose diagrams. The input file must contain the planar coordinates of the beginning and end of each lineament. The density analysis is done by creating a network of square cells, and counting the number of lineaments that are contained within each cell, that have one of their ends within the cell or that cross-cut the cell boundary. The lengths of lineaments are then calculated. To obtain a representative density map the cell size must be fixed according to: (1) the average lineament length; (2) the distance between the lineaments; and (3) the boundaries of zones with low densities due to lithology or outcrop features. An example from the Neogene Duero Basin (Northern Spain) is provided to test the reliability of the density maps obtained with different cell sizes.
NASA Astrophysics Data System (ADS)
Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.
2014-12-01
We retrieved the column-averaged dry air mole fraction of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by Greenhouse gases Observing SATellite (GOSAT) for 48 months of the satellite operation from June 2009. Recent version of the Photon path-length Probability Density Function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm accuracy under conditions of strong optical path modification. For the methane abundance retrieved from 1.67-µm-absorption band we applied optical path correction based on PPDF parameters from 1.6-µm carbon dioxide (CO2) absorption band. Similarly to CO2-proxy technique, this correction assumes identical light path modifications in 1.67-µm and 1.6-µm bands. However, proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations.Post-processing data correction for XCO2 and XCH4 over land observations was performed using regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics was applied to the GOSAT retrievals using reference collocated measurements of Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters that were found to correlate with GOSAT-TCCON discrepancies: PPDF parameters α and ρ, that are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; surface albedo in three GOSAT short wave infrared (SWIR) bands. Application of the post-correction generally improves statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data.In addition to the analysis of the observations over 12 TCCON stations we estimated temporal and spatial trends (interannual XCO2 and XCH4 variations, seasonal cycles, latitudinal gradients) and compared them with modeled results as well as with similar estimates from other GOSAT retrievals.
Nazikian, R; Shinohara, K; Kramer, G J; Valeo, E; Hill, K; Hahm, T S; Rewoldt, G; Ide, S; Koide, Y; Oyama, Y; Shirai, H; Tang, W
2005-04-08
A low power polychromatic beam of microwaves is used to diagnose the behavior of turbulent fluctuations in the core of the JT-60U tokamak during the evolution of the internal transport barrier. A continuous reduction in the size of turbulent structures is observed concomitant with the reduction of the density scale length during the evolution of the internal transport barrier. The density correlation length decreases to the order of the ion gyroradius, in contrast with the much longer scale lengths observed earlier in the discharge, while the density fluctuation level remain similar to the level before transport barrier formation.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ocean Surface Wave Optical Roughness - Innovative Measurement and Modeling
2007-09-30
whitecap crest length spectral density (Phillips et al, 2001, Gemmrich, 2005) and microscale breaker crest length spectral density (Jessup and Phadnis ...open sea, Journal of Physical Oceanography, 16, 290-297. Jessup, A.T. & Phadnis , K.R. 2005 Measurement of the geometric and kinematic properties of
Ocean Surface Wave Optical Roughness - Innovative Measurement and Modeling
2006-09-30
length spectral density (eg. Phillips et al, 2001, Gemmrich, 2005) and microscale breaker crest length spectral density (eg. Jessup and Phadnis , 2005...Oceanography, 16, 290-297. Jessup, A.T. & Phadnis , K.R. 2005 Measurement of the geometric and kinematic properties of microsacle breaking waves from
Ocean Surface Wave Optical Roughness - Innovative Measurement and Modeling
2006-09-30
crest length spectral density (eg. Phillips et al, 2001, Gemmrich, 2005) and microscale breaker crest length spectral density (eg. Jessup and Phadnis ...Jessup, A.T. & Phadnis , K.R. 2005 Measurement of the geometric and kinematic properties of microsacle breaking waves from infrared imagery using a
Rioja, Eva; Cernicchiaro, Natalia; Costa, Maria Carolina; Valverde, Alexander
2012-01-01
This study investigated associations between perioperative factors and probability of death and length of hospitalization of mares with dystocia that survived following general anesthesia. Demographics and perioperative characteristics from 65 mares were reviewed retrospectively and used in a risk factor analysis. Mortality rate was 21.5% during the first 24 h post-anesthesia. The mean ± standard deviation number of days of hospitalization of surviving mares was 6.3 ± 5.4 d. Several factors were found in the univariable analysis to be significantly associated (P < 0.1) with increased probability of perianesthetic death, including: low preoperative total protein, high temperature and severe dehydration on presentation, prolonged dystocia, intraoperative hypotension, and drugs used during recovery. Type of delivery and day of the week the surgery was performed were significantly associated with length of hospitalization in the multivariable mixed effects model. The study identified some risk factors that may allow clinicians to better estimate the probability of mortality and morbidity in these mares. PMID:23115362
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
The influence of landscape features on road development in a loess region, China.
Bi, Xiaoli; Wang, Hui; Zhou, Rui
2011-10-01
Many ecologists focus on the effects of roads on landscapes, yet few consider how landscapes affect road systems. In this study, therefore, we quantitatively evaluated how land cover, topography, and building density affected the length density, node density, spatial pattern, and location of roads in Dongzhi Yuan, a typical loess region in China. Landscape factors and roads were mapped using images from SPOT satellite (Système Probatoire d'Observation de la Terre), initiated by the French space agency and a digital elevation model (DEM). Detrended canonical correspondence analysis (DCCA), a useful ordination technique to explain species-environment relations in community ecology, was applied to evaluate the ways in which landscapes may influence roads. The results showed that both farmland area and building density were positively correlated with road variables, whereas gully density and the coefficient of variation (CV of DEM) showed negative correlations. The CV of DEM, farmland area, grassland area, and building density explained variation in node density, length density, and the spatial pattern of roads, whereas gully density and building density explained variation in variables representing road location. In addition, node density, rather than length density, was the primary road variable affected by landscape variables. The results showed that the DCCA was effective in explaining road-landscape relations. Understanding these relations can provide information for landscape managers and transportation planners.
Survival and selection of migrating salmon from capture-recapture models with individual traits
Zabel, R.W.; Wagner, T.; Congleton, J.L.; Smith, S.G.; Williams, J.G.
2005-01-01
Capture-recapture studies are powerful tools for studying animal population dynamics, providing information on population abundance, survival rates, population growth rates, and selection for phenotypic traits. In these studies, the probability of observing a tagged individual reflects both the probability of the individual surviving to the time of recapture and the probability of recapturing an animal, given that it is alive. If both of these probabilities are related to the same phenotypic trait, it can be difficult to distinguish effects on survival probabilities from effects on recapture probabilities. However, when animals are individually tagged and have multiple opportunities for recapture, we can properly partition observed trait-related variability into survival and recapture components. We present an overview of capture-recapture models that incorporate individual variability and develop methods to incorporate results from these models into estimates of population survival and selection for phenotypic traits. We conducted a series of simulations to understand the performance of these estimators and to assess the consequences of ignoring individual variability when it exists. In addition, we analyzed a large data set of > 153 000 juvenile chinook salmon (Oncorhynchus tshawytscha) and steelhead (O. mykiss) of known length that were PIT-tagged during their seaward migration. Both our simulations and the case study indicated that the ability to precisely estimate selection for phenotypic traits was greatly compromised when differential recapture probabilities were ignored. Estimates of population survival, however, were far more robust. In the chinook salmon and steelhead study, we consistently found that smaller fish had a greater probability of recapture. We also uncovered length-related survival relationships in over half of the release group/river segment combinations that we observed, but we found both positive and negative relationships between length and survival probability. These results have important implications for the management of salmonid populations. ?? 2005 by the Ecological Society of America.
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10sp{-6}, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
approach to computing pdfs is the Kernel Density Method (Reference [9] has an intro - duction to the method), which we will apply to compute the pdf of our...The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we... computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a
Lin, Jiaqi; Zhang, Heng; Morovati, Vahid; Dargazany, Roozbeh
2017-10-15
PEGylation on nanoparticles (NPs) is widely used to prevent aggregation and to mask NPs from the fast clearance system in the body. Understanding the molecular details of the PEG layer could facilitate rational design of PEGylated NPs that maximize their solubility and stealth ability without significantly compromising the targeting efficiency and cellular uptake. Here, we use molecular dynamics (MD) simulation to understand the structural and dynamic the PEG coating of mixed monolayer gold NPs. Specifically, we modeled gold NPs with PEG grafting densities ranging from 0-2.76chain/nm 2 , chain length with 0-10 PEG monomers, NP core diameter from 5nm to 500nm. It is found that the area accessed by individual PEG chains gradually transits from a "mushroom" to a "brush" conformation as NP surface curvature become flatter, whereas such a transition is not evident on small NPs when grafting density increases. It is shown that moderate grafting density (∼1.0chain/nm 2 ) and short chain length are sufficient enough to prevent NPs from aggregating in an aqueous medium. The effect of grafting density on solubility is also validated by dynamic light scattering measurements of PEGylated 5nm gold NPs. With respect to the shielding ability, simulations predict that increase either grafting density, chain length, or NP diameter will reduce the accessibility of the protected content to a certain size molecule. Interestingly, reducing NP surface curvature is estimated to be most effective in promoting shielding ability. For shielding against small molecules, increasing PEG grafting density is more effective than increasing chain length. A simple model that includes these three investigated parameters is developed based on the simulations to roughly estimate the shielding ability of the PEG layer with respect to molecules of different sizes. The findings can help expand our current understanding of the PEG layer and guide rational design of PEGylated gold NPs for a particular application by tuning the PEG grafting density, chain length, and particle size. Copyright © 2017 Elsevier Inc. All rights reserved.
Inflation of the screening length induced by Bjerrum pairs.
Zwanikken, Jos; van Roij, René
2009-10-21
Within a modified Poisson-Boltzmann theory we study the effect of Bjerrum pairs on the typical length scale [Formula: see text] over which electric fields are screened in electrolyte solutions, taking into account a simple association-dissociation equilibrium between free ions and Bjerrum pairs. At low densities of Bjerrum pairs, this length scale is well approximated by the Debye length [Formula: see text], with ρ(s) the free-ion density. At high densities of Bjerrum pairs, however, we find [Formula: see text], which is significantly larger than 1/κ due to the enhanced effective permittivity of the electrolyte, caused by the polarization of Bjerrum pairs. We argue that this mechanism may explain the recently observed anomalously large colloid-free zones between an oil-dispersed colloidal crystal and a colloidal monolayer at the oil-water interface.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
NASA Astrophysics Data System (ADS)
Butler, Rhett; Frazer, L. Neil; Templeton, William J.
2016-05-01
We use the global rate of Mw ≥ 9.0 earthquakes, and standard Bayesian procedures, to estimate the probability of such mega events in the Aleutian Islands, where they pose a significant risk to Hawaii. We find that the probability of such an earthquake along the Aleutians island arc is 6.5% to 12% over the next 50 years (50% credibility interval) and that the annualized risk to Hawai'i is about $30 M. Our method (the regionally scaled global rate method or RSGR) is to scale the global rate of Mw 9.0+ events in proportion to the fraction of global subduction (units of area per year) that takes place in the Aleutians. The RSGR method assumes that Mw 9.0+ events are a Poisson process with a rate that is both globally and regionally stationary on the time scale of centuries, and it follows the principle of Burbidge et al. (2008) who used the product of fault length and convergence rate, i.e., the area being subducted per annum, to scale the Poisson rate for the GSS to sections of the Indonesian subduction zone. Before applying RSGR to the Aleutians, we first apply it to five other regions of the global subduction system where its rate predictions can be compared with those from paleotsunami, paleoseismic, and geoarcheology data. To obtain regional rates from paleodata, we give a closed-form solution for the probability density function of the Poisson rate when event count and observation time are both uncertain.
Transition between Two Regimes Describing Internal Fluctuation of DNA in a Nanochannel
Su, Tianxiang; Das, Somes K.; Xiao, Ming; Purohit, Prashant K.
2011-01-01
We measure the thermal fluctuation of the internal segments of a piece of DNA confined in a nanochannel about 50100 nm wide. This local thermodynamic property is key to accurate measurement of distances in genomic analysis. For DNA in 100 nm channels, we observe a critical length scale 10 m for the mean extension of internal segments, below which the de Gennes' theory describes the fluctuations with no fitting parameters, and above which the fluctuation data falls into Odijk's deflection theory regime. By analyzing the probability distributions of the extensions of the internal segments, we infer that folded structures of length 150250 nm, separated by 10 m exist in the confined DNA during the transition between the two regimes. For 50 nm channels we find that the fluctuation is significantly reduced since the Odijk regime appears earlier. This is critical for genomic analysis. We further propose a more detailed theory based on small fluctuations and incorporating the effects of confinement to explicitly calculate the statistical properties of the internal fluctuations. Our theory is applicable to polymers with heterogeneous mechanical properties confined in non-uniform channels. We show that existing theories for the end-to-end extension/fluctuation of polymers can be used to study the internal fluctuations only when the contour length of the polymer is many times larger than its persistence length. Finally, our results suggest that introducing nicks in the DNA will not change its fluctuation behavior when the nick density is below 1 nick per kbp DNA. PMID:21423606
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. ?? 2005 by the Ecological Society of America.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
Scalar decay in two-dimensional chaotic advection and Batchelor-regime turbulence
NASA Astrophysics Data System (ADS)
Fereday, D. R.; Haynes, P. H.
2004-12-01
This paper considers the decay in time of an advected passive scalar in a large-scale flow. The relation between the decay predicted by "Lagrangian stretching theories," which consider evolution of the scalar field within a small fluid element and then average over many such elements, and that observed at large times in numerical simulations, associated with emergence of a "strange eigenmode" is discussed. Qualitative arguments are supported by results from numerical simulations of scalar evolution in two-dimensional spatially periodic, time aperiodic flows, which highlight the differences between the actual behavior and that predicted by the Lagrangian stretching theories. In some cases the decay rate of the scalar variance is different from the theoretical prediction and determined globally and in other cases it apparently matches the theoretical prediction. An updated theory for the wavenumber spectrum of the scalar field and a theory for the probability distribution of the scalar concentration are presented. The wavenumber spectrum and the probability density function both depend on the decay rate of the variance, but can otherwise be calculated from the statistics of the Lagrangian stretching history. In cases where the variance decay rate is not determined by the Lagrangian stretching theory, the wavenumber spectrum for scales that are much smaller than the length scale of the flow but much larger than the diffusive scale is argued to vary as k-1+ρ, where k is wavenumber, and ρ is a positive number which depends on the decay rate of the variance γ2 and on the Lagrangian stretching statistics. The probability density function for the scalar concentration is argued to have algebraic tails, with exponent roughly -3 and with a cutoff that is determined by diffusivity κ and scales roughly as κ-1/2 and these predictions are shown to be in good agreement with numerical simulations.
Optimizations for the EcoPod field identification tool
Manoharan, Aswath; Stamberger, Jeannie; Yu, YuanYuan; Paepcke, Andreas
2008-01-01
Background We sketch our species identification tool for palm sized computers that helps knowledgeable observers with census activities. An algorithm turns an identification matrix into a minimal length series of questions that guide the operator towards identification. Historic observation data from the census geographic area helps minimize question volume. We explore how much historic data is required to boost performance, and whether the use of history negatively impacts identification of rare species. We also explore how characteristics of the matrix interact with the algorithm, and how best to predict the probability of observing a previously unseen species. Results Point counts of birds taken at Stanford University's Jasper Ridge Biological Preserve between 2000 and 2005 were used to examine the algorithm. A computer identified species by correctly answering, and counting the algorithm's questions. We also explored how the character density of the key matrix and the theoretical minimum number of questions for each bird in the matrix influenced the algorithm. Our investigation of the required probability smoothing determined whether Laplace smoothing of observation probabilities was sufficient, or whether the more complex Good-Turing technique is required. Conclusion Historic data improved identification speed, but only impacted the top 25% most frequently observed birds. For rare birds the history based algorithms did not impose a noticeable penalty in the number of questions required for identification. For our dataset neither age of the historic data, nor the number of observation years impacted the algorithm. Density of characters for different taxa in the identification matrix did not impact the algorithms. Intrinsic differences in identifying different birds did affect the algorithm, but the differences affected the baseline method of not using historic data to exactly the same degree. We found that Laplace smoothing performed better for rare species than Simple Good-Turing, and that, contrary to expectation, the technique did not then adversely affect identification performance for frequently observed birds. PMID:18366649
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar deutsche mark future exchange, finding good agreement between theory and the observed data.
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Measurements of heavy solar wind and higher energy solar particles during the Apollo 17 mission
NASA Technical Reports Server (NTRS)
Walker, R. M.; Zinner, E.; Maurette, M.
1973-01-01
The lunar surface cosmic ray experiment, consisting of sets of mica, glass, plastic, and metal foil detectors, was successfully deployed on the Apollo 17 mission. One set of detectors was exposed directly to sunlight and another set was placed in shade. Preliminary scanning of the mica detectors shows the expected registration of heavy solar wind ions in the sample exposed directly to the sun. The initial results indicate a depletion of very-heavy solar wind ions. The effect is probably not real but is caused by scanning inefficiencies. Despite the lack of any pronounced solar activity, energetic heavy particles with energies extending to 1 MeV/nucleon were observed. Equal track densities of approximately 6000 tracks/cm sq 0.5 microns in length were measured in mica samples exposed in both sunlight and shade.
Khan, Adnan; Akhtar, Naveed; Kamran, Saadat; Ponirakis, Georgios; Petropoulos, Ioannis N; Tunio, Nahel A; Dargham, Soha R; Imam, Yahia; Sartaj, Faheem; Parray, Aijaz; Bourke, Paula; Khan, Rabia; Santos, Mark; Joseph, Sujatha; Shuaib, Ashfaq; Malik, Rayaz A
2017-11-01
Corneal confocal microscopy can identify corneal nerve damage in patients with peripheral and central neurodegeneration. However, the use of corneal confocal microscopy in patients presenting with acute ischemic stroke is unknown. One hundred thirty patients (57 without diabetes mellitus [normal glucose tolerance], 32 with impaired glucose tolerance, and 41 with type 2 diabetes mellitus) admitted with acute ischemic stroke, and 28 age-matched healthy control participants underwent corneal confocal microscopy to quantify corneal nerve fiber density, corneal nerve branch density, and corneal nerve fiber length. There was a significant reduction in corneal nerve fiber density, corneal nerve branch density, and corneal nerve fiber length in stroke patients with normal glucose tolerance ( P <0.001, P <0.001, P <0.001), impaired glucose tolerance ( P =0.004, P <0.001, P =0.002), and type 2 diabetes mellitus ( P <0.001, P <0.001, P <0.001) compared with controls. HbA1c and triglycerides correlated with corneal nerve fiber density ( r =-0.187, P =0.03; r =-0.229 P =0.01), corneal nerve fiber length ( r =-0.228, P =0.009; r =-0.285; P =0.001), and corneal nerve branch density ( r =-0.187, P =0.033; r =-0.229, P =0.01). Multiple linear regression showed no independent associations between corneal nerve fiber density, corneal nerve branch density, and corneal nerve fiber length and relevant risk factors for stroke. Corneal confocal microscopy is a rapid noninvasive ophthalmic imaging technique that identifies corneal nerve fiber loss in patients with acute ischemic stroke. © 2017 American Heart Association, Inc.
Turbulent statistics in flow field due to interaction of two plane parallel jets
NASA Astrophysics Data System (ADS)
Bisoi, Mukul; Das, Manab Kumar; Roy, Subhransu; Patel, Devendra Kumar
2017-12-01
Turbulent characteristics of flow fields due to the interaction of two plane parallel jets separated by the jet width distance are studied. Numerical simulation is carried out by large eddy simulation with a dynamic Smagorinsky model for the sub-grid scale stresses. The energy spectra are observed to follow the -5/3 power law for the inertial sub-range. A proper orthogonal decomposition study indicates that the energy carrying large coherent structures is present close to the nozzle exit. It is shown that these coherent structures interact with each other and finally disintegrate into smaller vortices further downstream. The turbulent fluctuations in the longitudinal and lateral directions are shown to follow a similarity. The mean flow at the same time also maintains a close similarity. Prandtl's mixing length, the Taylor microscale, and the Kolmogorov length scales are shown along the lateral direction for different downstream locations. The autocorrelation in the longitudinal and transverse directions is seen to follow a similarity profile. By plotting the probability density function, the skewness and the flatness (kurtosis) are analyzed. The Reynolds stress anisotropy tensor is calculated, and the anisotropy invariant map known as Lumley's triangle is presented and analyzed.
Analysis of traffic congestion induced by the work zone
NASA Astrophysics Data System (ADS)
Fei, L.; Zhu, H. B.; Han, X. L.
2016-05-01
Based on the cellular automata model, a meticulous two-lane cellular automata model is proposed, in which the driving behavior difference and the difference of vehicles' accelerations between the moving state and the starting state are taken into account. Furthermore the vehicles' motion is refined by using the small cell of one meter long. Then accompanied by coming up with a traffic management measure, a two-lane highway traffic model containing a work zone is presented, in which the road is divided into normal area, merging area and work zone. The vehicles in different areas move forward according to different lane changing rules and position updating rules. After simulation it is found that when the density is small the cluster length in front of the work zone increases with the decrease of the merging probability. Then the suitable merging length and the appropriate speed limit value are recommended. The simulation result in the form of the speed-flow diagram is in good agreement with the empirical data. It indicates that the presented model is efficient and can partially reflect the real traffic. The results may be meaningful for traffic optimization and road construction management.
Modeling Shear Induced Von Willebrand Factor Binding to Collagen
NASA Astrophysics Data System (ADS)
Dong, Chuqiao; Wei, Wei; Morabito, Michael; Webb, Edmund; Oztekin, Alparslan; Zhang, Xiaohui; Cheng, Xuanhong
2017-11-01
Von Willebrand factor (vWF) is a blood glycoprotein that binds with platelets and collagen on injured vessel surfaces to form clots. VWF bioactivity is shear flow induced: at low shear, binding between VWF and other biological entities is suppressed; for high shear rate conditions - as are found near arterial injury sites - VWF elongates, activating its binding with platelets and collagen. Based on parameters derived from single molecule force spectroscopy experiments, we developed a coarse-grain molecular model to simulate bond formation probability as a function of shear rate. By introducing a binding criterion that depends on the conformation of a sub-monomer molecular feature of our model, the model predicts shear-induced binding, even for conditions where binding is highly energetically favorable. We further investigate the influence of various model parameters on the ability to predict shear-induced binding (vWF length, collagen site density and distribution, binding energy landscape, and slip/catch bond length) and demonstrate parameter ranges where the model provides good agreement with existing experimental data. Our results may be important for understanding vWF activity and also for achieving targeted drug therapy via biomimetic synthetic molecules. National Science Foundation (NSF),Division of Mathematical Sciences (DMS).
Geometric structure of percolation clusters.
Xu, Xiao; Wang, Junfeng; Zhou, Zongzheng; Garoni, Timothy M; Deng, Youjin
2014-01-01
We investigate the geometric properties of percolation clusters by studying square-lattice bond percolation on the torus. We show that the density of bridges and nonbridges both tend to 1/4 for large system sizes. Using Monte Carlo simulations, we study the probability that a given edge is not a bridge but has both its loop arcs in the same loop and find that it is governed by the two-arm exponent. We then classify bridges into two types: branches and junctions. A bridge is a branch iff at least one of the two clusters produced by its deletion is a tree. Starting from a percolation configuration and deleting the branches results in a leaf-free configuration, whereas, deleting all bridges produces a bridge-free configuration. Although branches account for ≈43% of all occupied bonds, we find that the fractal dimensions of the cluster size and hull length of leaf-free configurations are consistent with those for standard percolation configurations. By contrast, we find that the fractal dimensions of the cluster size and hull length of bridge-free configurations are given by the backbone and external perimeter dimensions, respectively. We estimate the backbone fractal dimension to be 1.643 36(10).
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.
Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed PDF turbulence and cloud scheme called the Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continentalmore » and tropical oceanic deep convection is compared with overlap of PDF of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, - a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slow falling cloud liquid and ice. These results suggest that to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.« less
A probabilistic analysis of electrical equipment vulnerability to carbon fibers
NASA Technical Reports Server (NTRS)
Elber, W.
1980-01-01
The statistical problems of airborne carbon fibers falling onto electrical circuits were idealized and analyzed. The probability of making contact between randomly oriented finite length fibers and sets of parallel conductors with various spacings and lengths was developed theoretically. The probability of multiple fibers joining to bridge a single gap between conductors, or forming continuous networks is included. From these theoretical considerations, practical statistical analyses to assess the likelihood of causing electrical malfunctions was produced. The statistics obtained were confirmed by comparison with results of controlled experiments.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
Q-Space Truncation and Sampling in Diffusion Spectrum Imaging
Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D.; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L.; McNab, Jennifer A.
2015-01-01
Purpose To characterize the q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). Methods DSI data were acquired using the MGH-USC connectome scanner (Gmax=300mT/m) with bmax=30,000s/mm2, 17×17×17, 15×15×15 and 11×11×11 grids in ex vivo human brains and bmax=10,000s/mm2, 11×11×11 grid in vivo. An additional in vivo scan using bmax=7,000s/mm2, 11×11×11 grid was performed with a derated gradient strength of 40mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from down-sampled data by factors of two and three. Results Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with the first replicas or their ringing, leading to obscured orientations in ODFs. Conclusion The minimum required q-space sampling density corresponds to a field-of-view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11×11×11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. PMID:26762670
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system reaction to presence of tumour cells but we only postulate its general properties. We analyse basic mathematical properties of the considered model such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the forms of considered probability densities that is: Erlang, triangular and uniform probability densities separated or not from zero. Particular instability results are obtained for a general type of probability densities. Our results are compared with those for the model with discrete delays know from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma showing mean square errors at the same comparable level. For estimated sets of parameters we discuss possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulation, following the idea of linear chain trick, we derive numerical procedures that allow us to solve systems with considered probability densities using standard algorithm for ordinary differential equations or differential equations with discrete delays.
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at di erent points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D di usion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities weremore » probably due to interactions of seed availability, rodent energetics, and predation.« less
Ab initio calculation of transport properties between PbSe quantum dots facets with iodide ligands
NASA Astrophysics Data System (ADS)
Wang, B.; Patterson, R.; Chen, W.; Zhang, Z.; Yang, J.; Huang, S.; Shrestha, S.; Conibeer, G.
2018-01-01
The transport properties between Lead Selenide (PbSe) quantum dots decorated with iodide ligands has been studied using density functional theory (DFT). Quantum conductance at each selected energy levels has been calculated along with total density of states and projected density of states. The DFT calculation is carried on using a grid-based planar augmented wave (GPAW) code incorporated with the linear combination of atomic orbital (LCAO) mode and Perdew Burke Ernzerhof (PBE) exchange-correlation functional. Three iodide ligand attached low index facets including (001), (011), (111) are investigated in this work. P-orbital of iodide ligand majorly contributes to density of state (DOS) at near top valence band resulting a significant quantum conductance, whereas DOS of Pb p-orbital shows minor influence. Various values of quantum conductance observed along different planes are possibly reasoned from a combined effect electrical field over topmost surface and total distance between adjacent facets. Ligands attached to (001) and (011) planes possess similar bond length whereas it is significantly shortened in (111) plane, whereas transport between (011) has an overall low value due to newly formed electric field. On the other hand, (111) plane with a net surface dipole perpendicular to surface layers leading to stronger electron coupling suggests an apparent increase of transport probability. Apart from previously mentioned, the maximum transport energy levels located several eVs (1 2 eVs) from the edge of valence band top.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
Detection of foreign bodies in foods using continuous wave terahertz imaging.
Lee, Young-Ki; Choi, Sung-Wook; Han, Seong-Tae; Woo, Deog Hyun; Chun, Hyang Sook
2012-01-01
Foreign bodies (FBs) in food are health hazards and quality issues for many food manufacturers and enforcement authorities. In this study, continuous wave (CW) terahertz (THz) imaging at 0.2 THz with an output power of 10 mW was compared with X-ray imaging as techniques for inspection of food for FBs. High-density FBs, i.e., aluminum and granite pieces of various sizes, were embedded in a powdered instant noodle product and detected using THz and X-ray imaging. All aluminum and granite pieces (regular hexahedrons with an edge length of 1 to 5 mm) were visualized by both CW THz and X-ray imaging. THz imaging also detected maggots (length = 8 to 22 mm) and crickets (length = 35 and 50 mm), which were embedded in samples as low density FBs. However, not all sizes of maggot pieces embedded in powdered instant noodle were detected with X-ray imaging, although larger crickets (length = 50 mm and thickness = 10 mm) were detected. These results suggest that CW THz imaging has potential for detecting both high-density and low-density FBs embedded in food.
NASA Astrophysics Data System (ADS)
Foster, Peter J.; Yan, Wen; Fürthauer, Sebastian; Shelley, Michael J.; Needleman, Daniel J.
2017-12-01
The cellular cytoskeleton is an active material, driven out of equilibrium by molecular motor proteins. It is not understood how the collective behaviors of cytoskeletal networks emerge from the properties of the network’s constituent motor proteins and filaments. Here we present experimental results on networks of stabilized microtubules in Xenopus oocyte extracts, which undergo spontaneous bulk contraction driven by the motor protein dynein, and investigate the effects of varying the initial microtubule density and length distribution. We find that networks contract to a similar final density, irrespective of the length of microtubules or their initial density, but that the contraction timescale varies with the average microtubule length. To gain insight into why this microscopic property influences the macroscopic network contraction time, we developed simulations where microtubules and motors are explicitly represented. The simulations qualitatively recapitulate the variation of contraction timescale with microtubule length, and allowed stress contributions from different sources to be estimated and decoupled.
Albuquerque, F S; Peso-Aguiar, M C; Assunção-Albuquerque, M J T; Gálvez, L
2009-08-01
The length-weight relationship and condition factor have been broadly investigated in snails to obtain the index of physical condition of populations and evaluate habitat quality. Herein, our goal was to describe the best predictors that explain Achatina fulica biometrical parameters and well being in a recently introduced population. From November 2001 to November 2002, monthly snail samples were collected in Lauro de Freitas City, Bahia, Brazil. Shell length and total weight were measured in the laboratory and the potential curve and condition factor were calculated. Five environmental variables were considered: temperature range, mean temperature, humidity, precipitation and human density. Multiple regressions were used to generate models including multiple predictors, via model selection approach, and then ranked with AIC criteria. Partial regressions were used to obtain the separated coefficients of determination of climate and human density models. A total of 1.460 individuals were collected, presenting a shell length range between 4.8 to 102.5 mm (mean: 42.18 mm). The relationship between total length and total weight revealed that Achatina fulica presented a negative allometric growth. Simple regression indicated that humidity has a significant influence on A. fulica total length and weight. Temperature range was the main variable that influenced the condition factor. Multiple regressions showed that climatic and human variables explain a small proportion of the variance in shell length and total weight, but may explain up to 55.7% of the condition factor variance. Consequently, we believe that the well being and biometric parameters of A. fulica can be influenced by climatic and human density factors.
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movementmore » had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movementmore » had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.« less
Redundancy and reduction: Speakers manage syntactic information density
Florian Jaeger, T.
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Measurement of Two-Plasmon-Decay Dependence on Plasma Density Scale Length
NASA Astrophysics Data System (ADS)
Haberberger, D.
2013-10-01
An accurate understanding of the plasma scale-length (Lq) conditions near quarter-critical density is important in quantifying the hot electrons generated by the two-plasmon-decay (TPD) instability in long-scale-length plasmas. A novel target platform was developed to vary the density scale length and an innovative diagnostic was implemented to measure the density profiles above 1021 cm-3 where TPD is expected to have the largest growth. A series of experiments was performed using the four UV (351-nm) beams on OMEGA EP that varied the Lq by changing the radius of curvature of the target while maintaining a constant Iq/Tq. The fraction of laser energy converted to hot electrons (fhot) was observed to increase rapidly from 0.005% to 1% by increasing the plasma scale length from 130 μm to 300 μm, corresponding to target diameters of 0.4 mm to 8 mm. A new diagnostic was developed based on refractometry using angular spectral filters to overcome the large phase accumulation in standard interferometric techniques. The angular filter refractometer measures the refraction angles of a 10-ps, 263-nm probe laser after propagating through the plasma. An angular spectral filter is used in the Fourier plane of the probe beam, where the refractive angles of the rays are mapped to space. The edges of the filter are present in the image plane and represent contours of constant refraction angle. These contours are used to infer the phase of the probe beam, which are used to calculate the plasma density profile. In long-scale-length plasmas, the diagnostic currently measures plasma densities from ~1019 cm-3 to ~2 × 1021 cm-3. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944. In collaboration with D. H. Edgell, S. X. Hu, S. Ivancic, R. Boni, C. Dorrer, and D. H. Froula (Laboratory for Laser Energetics, U. of Rochester).
Unified description of the slip phenomena in sheared polymer films: A molecular dynamics study
NASA Astrophysics Data System (ADS)
Priezjev, Nikolai
2010-03-01
The dynamic behavior of the slip length in shear flow of polymer melts past atomically smooth surfaces is investigated using MD simulations. The polymer melt was modeled as a collection of FENE-LJ bead-spring chains. We consider shear flow conditions at low pressures and weak wall-fluid interaction energy so that fluid velocity profiles are linear throughout the channel at all shear rates examined. In agreement with earlier studies we confirm that for shear- thinning fluids the slip length passes through a local minimum at low shear rates and then increases rapidly at higher shear rates. We found that the rate dependence of the slip length depends on the lattice orientation at high shear rates. The MD results show that the ratio of slip length to viscosity follows a master curve when plotted as a function of a single variable that depends on the structure factor, contact density and temperature of the first fluid layer near the solid wall. The universal dependence of the slip length holds for a number of parameters of the interface: fluid density and structure (chain length), wall-fluid interaction energy, wall density, lattice orientation, thermal or solid walls.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likeihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve Schrodinger equation for high dimensional system that can be reduced into one dimensional system and represented in lowering and raising operators. Lowering and raising operators can be obtained using relationship between original Hamiltonian equation and the (super) potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy level of the Modified Poschl Teller potential. The graph of wave function equation and probability density is simulated by using Delphi 7.0 programming language. Finally, the expectation value of quantum mechanics operator could be calculated analytically using integral form or probability density graph resulted by the programming.
Plasma influence on the dispersion properties of finite-length, corrugated waveguides
NASA Astrophysics Data System (ADS)
Shkvarunets, A.; Kobayashi, S.; Weaver, J.; Carmel, Y.; Rodgers, J.; Antonsen, T. M., Jr.; Granatstein, V. L.; Destler, W. W.; Ogura, K.; Minami, K.
1996-03-01
We present an experimental study of the electromagnetic properties of transverse magnetic modes in a corrugated-wall cavity filled with a radially inhomogeneous plasma. The shifts of the resonant frequencies of a finite-length, corrugated cavity were measured as a function of the background plasma density and the dispersion diagram was reconstructed up to a peak plasma density of 1012 cm-3. Good agreement with a calculated dispersion diagram is obtained for plasma densities below 5×1011 cm-3.
Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment
NASA Astrophysics Data System (ADS)
Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods has been developed. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-interim reanalysis for the period 1997 - 2008. This allows to investigate the isolated skill of downscaling methods without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two test validation datasets, one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i.) correlation matrices, ii.) pairwise joint threshold exceedances, and iii.) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related for instance to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
NASA Astrophysics Data System (ADS)
Massoudieh, A.; Dentz, M.; Le Borgne, T.
2017-12-01
In heterogeneous media, the velocity distribution and the spatial correlation structure of velocity for solute particles determine the breakthrough curves and how they evolve as one moves away from the solute source. The ability to predict such evolution can help relating the spatio-statistical hydraulic properties of the media to the transport behavior and travel time distributions. While commonly used non-local transport models such as anomalous dispersion and classical continuous time random walk (CTRW) can reproduce breakthrough curve successfully by adjusting the model parameter values, they lack the ability to relate model parameters to the spatio-statistical properties of the media. This in turns limits the transferability of these models. In the research to be presented, we express concentration or flux of solutes as a distribution over their velocity. We then derive an integrodifferential equation that governs the evolution of the particle distribution over velocity at given times and locations for a particle ensemble, based on a presumed velocity correlation structure and an ergodic cross-sectional velocity distribution. This way, the spatial evolution of breakthrough curves away from the source is predicted based on cross-sectional velocity distribution and the connectivity, which is expressed by the velocity transition probability density. The transition probability is specified via a copula function that can help construct a joint distribution with a given correlation and given marginal velocities. Using this approach, we analyze the breakthrough curves depending on the velocity distribution and correlation properties. The model shows how the solute transport behavior evolves from ballistic transport at small spatial scales to Fickian dispersion at large length scales relative to the velocity correlation length.
How a short double-stranded DNA bends
NASA Astrophysics Data System (ADS)
Shin, Jaeoh; Lee, O.-Chul; Sung, Wokyung
2015-04-01
A recent experiment using fluorescence microscopy showed that double-stranded DNA fragments shorter than 100 base pairs loop with the probabilities higher by the factor of 102-106 than predicted by the worm-like chain (WLC) model [R. Vafabakhsh and T. Ha, Science 337, 1101(2012)]. Furthermore, the looping probabilities were found to be nearly independent of the loop size. The results signify a breakdown of the WLC model for DNA mechanics which works well on long length scales and calls for fundamental understanding for stressed DNA on shorter length scales. We develop an analytical, statistical mechanical model to investigate what emerges to the short DNA under a tight bending. A bending above a critical level initiates nucleation of a thermally induced bubble, which could be trapped for a long time, in contrast to the bubbles in both free and uniformly bent DNAs, which are either transient or unstable. The trapped bubble is none other than the previously hypothesized kink, which releases the bending energy more easily as the contour length decreases. It leads to tremendous enhancement of the cyclization probabilities, in a reasonable agreement with experiment.
The effects of intragrain defects on the local photoresponse of polycrystalline silicon solar cells
NASA Astrophysics Data System (ADS)
Inoue, N.; Wilmsen, C. W.; Jones, K. A.
1981-02-01
Intragrain defects in Wacker cast and Monsanto zone-refined polycrystalline silicon materials were investigated using the electron-beam-induced current (EBIC) technique. The EBIC response maps were compared with etch pit, local diffusion length and local photoresponse measurements. It was determined that the Wacker polycrystalline silicon has a much lower density of defects than does the Monsanto polycrystalline silicon and that most of the defects in the Wacker material are not active recombination sites. A correlation was found between the recombination site density, as determined by EBIC, and the local diffusion length. It is shown that a large density of intragrain recombination sites greatly reduces the minority carrier diffusion length and thus can significantly reduce the photoresponse of solar cells.
Zhou, Han; Li, Fang; Weir, Michael D.; Xu, Hockin H.K.
2013-01-01
Objectives Antibacterial bonding agents are promising to combat bacteria and caries at tooth-restoration margins. The objectives of this study were to incorporate new quaternary ammonium methacrylates (QAMs) to bonding agent and determine the effects of alkyl chain length (CL) and quaternary amine charge density on dental plaque microcosm bacteria response for the first time. Methods Six QAMs were synthesized with CL = 3, 6, 9, 12, 16, 18. Each QAM was incorporated into Scotchbond Multi-purpose (SBMP). To determine the charge density effect, dimethylaminododecyl methacrylate (DMAHDM, CL = 16) was mixed into SBMP at mass fraction = 0%, 2.5%, 5%, 7.5%, 10%. Charge density was measured using a fluorescein dye method. Dental plaque microcosm using saliva from ten donors was tested. Bacteria were inoculated on resins. Early-attachment was tested at 4 hours. Biofilm colony-forming units (CFU) were measured at 2 days. Results Incorporating QAMs into SBMP reduced bacteria early-attachment. Microcosm biofilm CFU for CL = 16 was 4 log lower than SBMP control. Charge density of bonding agent increased with DMAHDM content. Bacteria early-attachment decreased with increasing charge density. Biofilm CFU at 10% DMAHDM was reduced by 4 log. The killing effect was similarly-strong against total microorganisms, total streptococci, and mutans streptococci. Conclusions Increasing alkyl chain length and charge density of bonding agent was shown for the first time to decrease microcosm bacteria attachment and reduce biofilm CFU by 4 orders of magnitude. Novel antibacterial resins with tailored chain length and charge density are promising for wide applications in bonding, cements, sealants and composites to inhibit biofilms and caries. PMID:23948394
Zhou, Han; Li, Fang; Weir, Michael D; Xu, Hockin H K
2013-11-01
Antibacterial bonding agents are promising to combat bacteria and caries at tooth-restoration margins. The objectives of this study were to incorporate new quaternary ammonium methacrylates (QAMs) to bonding agent and determine the effects of alkyl chain length (CL) and quaternary amine charge density on dental plaque microcosm bacteria response for the first time. Six QAMs were synthesized with CL=3, 6, 9, 12, 16, 18. Each QAM was incorporated into Scotchbond multi-purpose (SBMP). To determine the charge density effect, dimethylaminododecyl methacrylate (DMAHDM, CL=16) was mixed into SBMP at mass fraction=0%, 2.5%, 5%, 7.5%, 10%. Charge density was measured using a fluorescein dye method. Dental plaque microcosm using saliva from ten donors was tested. Bacteria were inoculated on resins. Early-attachment was tested at 4h. Biofilm colony-forming units (CFU) were measured at 2 days. Incorporating QAMs into SBMP reduced bacteria early-attachment. Microcosm biofilm CFU for CL=16 was 4 log lower than SBMP control. Charge density of bonding agent increased with DMAHDM content. Bacteria early-attachment decreased with increasing charge density. Biofilm CFU at 10% DMAHDM was reduced by 4 log. The killing effect was similarly-strong against total microorganisms, total streptococci, and mutans streptococci. Increasing alkyl chain length and charge density of bonding agent was shown for the first time to decrease microcosm bacteria attachment and reduce biofilm CFU by 4 orders of magnitude. Novel antibacterial resins with tailored chain length and charge density are promising for wide applications in bonding, cements, sealants and composites to inhibit biofilms and caries. Copyright © 2013 Elsevier Ltd. All rights reserved.
McCormack, Gavin R
2017-06-01
The aim of this study was to estimate the associations between neighbourhood built environment characteristics and transportation walking (TW), recreational walking (RW), and moderate-intensity (MPA) and vigorous-intensity physical activity (VPA) in adults independent of sociodemographic characteristics and residential self-selection (i.e. the reasons related to physical activity associated with a person's choice of neighbourhood). In 2007 and 2008, 4423 Calgary adults completed land-based telephone interviews capturing physical activity, sociodemographic characteristics and reasons for residential self-selection. Using spatial data, we estimated population density, proportion of green space, path/cycleway length, business density, bus stop density, city-managed tree density, sidewalk length, park type mix and recreational destination mix within a 1.6 km street network distance from the participants' geolocated residential postal code. Generalized linear models estimated the associations between neighbourhood built environment characteristics and weekly neighbourhood-based physical activity participation (≥ 10 minutes/week; odds ratios [ORs]) and, among those who reported participation, duration of activity (unstandardized beta coefficients [B]). The sample included more women (59.7%) than men (40.3%) and the mean (standard deviation) age was 47.1 (15.6) years. TW participation was associated with intersection (OR = 1.11; 95% CI: 1.03 to 1.20) and business (OR = 1.52; 1.29 to 1.78) density, and sidewalk length (OR = 1.19; 1.09 to 1.29), while TW minutes was associated with business (B = 19.24 minutes/week; 11.28 to 27.20) and tree (B = 6.51; 2.29 to 10.72 minutes/week) density, and recreational destination mix (B = -8.88 minutes/ week; -12.49 to -5.28). RW participation was associated with path/cycleway length (OR = 1.17; 1.05 to 1.31). MPA participation was associated with recreational destination mix (OR = 1.09; 1.01 to 1.17) and sidewalk length (OR = 1.10; 1.02 to 1.19); however, MPA minutes was negatively associated with population density (B = -8.65 minutes/ week; -15.32 to -1.98). VPA participation was associated with sidewalk length (OR = 1.11; 1.02 to 1.20), path/cycleway length (OR = 1.12; 1.02 to 1.24) and proportion of neighbourhood green space (OR = 0.89; 0.82 to 0.98). VPA minutes was associated with tree density (B = 7.28 minutes/week; 0.39 to 14.17). Some neighbourhood built environment characteristics appear important for supporting physical activity participation while others may be more supportive of increasing physical activity duration. Modifications that increase the density of utilitarian destinations and the quantity of available sidewalks in established neighbourhoods could increase overall levels of neighbourhood-based physical activity.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex wishart probability density function (PDF) for the probabilities. The mixture model enables to consider heterogeneous thematic classes which could not be better fitted by the unimodal wishart distribution. In order to make it fast and robust to calculate, we use the recently proposed generalized gamma distribution (GΓD) for the single polarization intensity data to make the initial partition. Then we use the wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and weights for all class parameter updates. The proposed method is evaluated and compared with the wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often this distributions can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the Non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
The Camassa-Holm equation as an incompressible Euler equation: A geometric point of view
NASA Astrophysics Data System (ADS)
Gallouët, Thomas; Vialard, François-Xavier
2018-04-01
The group of diffeomorphisms of a compact manifold endowed with the L2 metric acting on the space of probability densities gives a unifying framework for the incompressible Euler equation and the theory of optimal mass transport. Recently, several authors have extended optimal transport to the space of positive Radon measures where the Wasserstein-Fisher-Rao distance is a natural extension of the classical L2-Wasserstein distance. In this paper, we show a similar relation between this unbalanced optimal transport problem and the Hdiv right-invariant metric on the group of diffeomorphisms, which corresponds to the Camassa-Holm (CH) equation in one dimension. Geometrically, we present an isometric embedding of the group of diffeomorphisms endowed with this right-invariant metric in the automorphisms group of the fiber bundle of half densities endowed with an L2 type of cone metric. This leads to a new formulation of the (generalized) CH equation as a geodesic equation on an isotropy subgroup of this automorphisms group; On S1, solutions to the standard CH thus give radially 1-homogeneous solutions of the incompressible Euler equation on R2 which preserves a radial density that has a singularity at 0. An other application consists in proving that smooth solutions of the Euler-Arnold equation for the Hdiv right-invariant metric are length minimizing geodesics for sufficiently short times.
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg model (NS) we study the probability of car accidents to occur (Pac) at the entrance of the merging part of two roads (i.e. junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, where it increases the risk of collisions in the intermediate and high densities. Moreover, the impact of speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density, where, the increasing of Vb enhances Pac in the low densities. Meanwhile, it increases the road safety in the high densities. The phase diagram of the system is also constructed.
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination
Sinkkonen, Aki
2006-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform- bulk values for such densities.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A
2013-02-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
A phenomenological pulsar model
NASA Technical Reports Server (NTRS)
Michel, F. C.
1978-01-01
Particle injection energies and rates previously calculated for the stellar wind generation by rotating magnetized neutron stars are adopted. It is assumed that the ambient space-charge density being emitted to form this wind is bunched. These considerations immediately place the coherent radio frequency luminosity from such bunches near 10 to the 28th erg/s for typical pulsar parameters. A comparable amount of incoherent radiation is emitted for typical (1 second) pulsars. For very rapid pulsars, however, the latter component grows more rapidly than the available energy sources. The comparatively low radio luminosity of the Crab and Vela pulsars is attributed to both components being limited in the same ratio. The incoherent radiation essentially has a synchotron spectrum and extends to gamma-ray energies; consequently the small part of the total luminosity that is at optical wavelengths is unobservable. Assuming full coherence at all wavelengths short of a critical length gives a spectral index for the flux density of -8/3 at higher frequencies. The finite energy available from the injected particles would force the spectrum to roll over below about 100 MHz, although intrinsic morphological factors probably enter for any specific pulsar as well.
NASA Astrophysics Data System (ADS)
Saniz, R.; Partoens, B.; Peeters, F. M.
2013-02-01
The Green function approach to the Bardeen-Cooper-Schrieffer theory of superconductivity is used to study nanofilms. We go beyond previous models and include effects of confinement on the strength of the electron-phonon coupling as well as on the electronic spectrum and on the phonon modes. Within our approach, we find that in ultrathin films, confinement effects on the electronic screening become very important. Indeed, contrary to what has been advanced in recent years, the sudden increases of the density of states when new bands start to be occupied as the film thickness increases, tend to suppress the critical temperature rather than to enhance it. On the other hand, the increase of the number of phonon modes with increasing number of monolayers in the film leads to an increase in the critical temperature. As a consequence, the superconducting critical parameters in such nanofilms are determined by these two competing effects. Furthermore, in sufficiently thin films, the condensate consists of well-defined subcondensates associated with the occupied bands, each with a distinct coherence length. The subcondensates can interfere constructively or destructively giving rise to an interference pattern in the Cooper pair probability density.
Structural and thermodynamic properties of WB at high pressure and high temperature
NASA Astrophysics Data System (ADS)
Chen, Hai-Hua; Bi, Yan; Cheng, Yan; Ji, Guangfu; Peng, Fang; Hu, Yan-Fei
2012-12-01
The structure parameters and electronic structures of tungsten boride (WB) have been investigated by using the density functional theory (DFT). Our calculating results display the bulk modulus of WB are 352±2 GPa (K‧0=4.29) and 322±3 GPa (K‧0=4.21) by LDA and GGA methods, respectively. We have analyzed the probable reason of the discrepancy from the bulk modulus between theoretical and experimental results. The compression behavior of the unit cell axes is anisotropic, with the c-axis being more compressible than the a-axis. By analyzing the bond lengths information, it also demonstrated that WB has a lower compressibility at high pressure. From the partial densities of states (PDOS) of WB, we found that the Fermi lever is mostly contributed by the d states of W atom and p states of B atom and that the contributions from the s, p states of W atom and s states of B atom are small. Moreover, using the Gibbs 2 program, the thermodynamic properties of WB are obtained in a wide temperature range at high pressure for the first time in this work.
Managing environmental noise in Hong Kong
NASA Astrophysics Data System (ADS)
Li, Kai Ming
2004-05-01
Hong Kong is well known for its economic vibrancy and its hyper densely population: more than 7 million people living in a total area of slightly over 1000 square kilometers of hilly areas. Most of these people live and work in about 20% of the total land area, resulting in probably the highest densities in the world. The high population density is also matched by a large number of vehicles running in the roads. At present, there are over 400
Preliminary report of results from the plasma science experiment on Mariner 10
NASA Technical Reports Server (NTRS)
Bridge, H. S.; Lazarus, A. J.; Ogilvie, K. W.; Scudder, J. D.; Hartle, R. E.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Siscoe, G. L.; Yeates, C. M.
1974-01-01
Preliminary measurements of electron number density and temperature near Venus and Mercury and some results on flow speeds are presented. It is concluded that the interaction of the solar wind with Venus probably results in a bow shock characterized by H/r = 0.01 (ratio of the ionospheric scale height to the planetocentric distance of the nose of the ionopause); an extended exosphere appears unlikely. This direct interaction is indicated by the behavior of electrons with energies of 100-500 eV. Some unusual downstream effects suggest a comet-like tail several hundred scale lengths long. Near Mercury, a fully developed bow shock and magnetosheath were observed. Inside the magnetosheath there is a region analogous to the magnetosphere of the earth and populated by electrons of lower density and temperature than those found in the solar wind. The solar wind ram pressure corresponds to a stagnation pressure equivalent to a 170 gamma magnetic field. The strong solar wind interaction with Mercury is definitely magnetic, but not ionospheric or atmospheric. Spectra and particle flux varied widely while the spaceship was within the magnetosphere itself; temporal events like substorms may be responsible.
Length of Stay, Conditional Length of Stay, and Prolonged Stay in Pediatric Asthma
Silber, Jeffrey H; Rosenbaum, Paul R; Even-Shoshan, Orit; Shabbout, Mayadah; Zhang, Xuemei; Bradlow, Eric T; Marsh, Roger R
2003-01-01
Objective To understand differences in length of stay for asthma patients between New York State and Pennsylvania across children's and general hospitals in order to better guide policy. Data Sources/Study Setting All pediatric admissions for asthma in the states of Pennsylvania and New York using claims data obtained from each state for the years 1996–1998, n=38,310. Study Design A retrospective cohort design to model length of stay (LOS), the probability of prolonged stay, conditional length of stay (CLOS or the LOS after stay is prolonged), and the probability of readmission, controlling for patient factors, state, location and hospital type. Analytic Methods Logit models were used to estimate the probability of prolonged stay and readmission. The LOS and the CLOS were estimated with Cox regression. Model variables included comorbidities, income, race, distance from hospital, and insurance type. Prolonged stay was based on a Hollander-Proschan “New-Worse-Than-Used” test, corresponding to a three-day stay. Principal Findings The LOS was longer in New York than Pennsylvania, and the probabilities of prolonged stay and readmission were much higher in New York than Pennsylvania. However, once an admission was prolonged, there were no differences in CLOS between states (when readmissions were not added to the LOS calculation). In both states, children's hospitals and general hospitals had similar adjusted LOS. Conclusions Management of asthma appears more efficient in Pennsylvania than New York: Less severe patients are discharged faster in Pennsylvania than New York; once discharged, patients are less likely to be readmitted in Pennsylvania than New York. However, once a stay is prolonged, there is little difference between New York and Pennsylvania, suggesting medical care for severely ill patients is similar across states. Differences between children's and general hospitals were small as compared to differences between states. We conclude that policy initiatives in New York, and other states, should focus their efforts on improving the care provided to less severe patients in order to help reduce overall length of stay. PMID:12822916
Assessing the performance of a covert automatic target recognition algorithm
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2005-05-01
Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework to the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
Influence of the random walk finite step on the first-passage probability
NASA Astrophysics Data System (ADS)
Klimenkova, Olga; Menshutin, Anton; Shchur, Lev
2018-01-01
A well known connection between first-passage probability of random walk and distribution of electrical potential described by Laplace equation is studied. We simulate random walk in the plane numerically as a discrete time process with fixed step length. We measure first-passage probability to touch the absorbing sphere of radius R in 2D. We found a regular deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using point estimate method. The optimization is performed to provide acceptable value for probability of passing demonstration (PPD) and achieving acceptable value for probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. POD Point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses binomial distribution for probability density. Normally, a set of 29 flaws of same size within some tolerance are used in the demonstration. Traditionally largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. The flaw size is denoted as α90/95PE. The paper investigates relationship between range of flaw sizes in relation to α90, i.e. 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. Difference between median or average of the 29 flaws and α90 is also expressed as a proportion of standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, average of 29 flaw sizes would always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29 flaw-set can be optimized to meet requirements of minimum required PPD, maximum allowable POF, requirements on flaw size tolerance about mean flaw size and flaw size detectability requirements. The paper provides procedure for optimizing flaw sizes in the point estimate demonstration flaw-set.
Cosgrove, Gregory P.; Janssen, William J.; Huie, Tristan J.; Burnham, Ellen L.; Heinz, David E.; Curran-Everett, Douglas; Sahin, Hakan; Schwarz, Marvin I.; Cool, Carlyne D.; Groshong, Steve D.; Geraci, Mark W.; Tuder, Rubin M.; Hyde, Dallas M.; Henson, Peter M.
2012-01-01
Background: Lymphangiogenesis responds to tissue injury as a key component of normal wound healing. The development of fibrosis in the idiopathic interstitial pneumonias may result from abnormal wound healing in response to injury. We hypothesize that increased lymphatic vessel (LV) length, a marker of lymphangiogenesis, is associated with parenchymal components of the fibroblast reticulum (organizing collagen, fibrotic collagen, and fibroblast foci), and its extent correlates with disease severity. Methods: We assessed stereologically the parenchymal structure of fibrotic lungs and its associated lymphatic network, which was highlighted immunohistochemically in age-matched samples of usual interstitial pneumonia (UIP), nonspecific interstitial pneumonia (NSIP) with FVC < 80%, COPD with a Global Initiative for Obstructive Lung Disease stage 0, and normal control lungs. Results: LV length density, as opposed to vessel volume density, was found to be associated with organizing and fibrotic collagen density (P < .0001). Length density of LVs and the volume density of organizing and fibrotic collagen were significantly associated with severity of both % FVC (P < .001) and diffusing capacity of the lung for carbon monoxide (P < .001). Conclusions: Severity of disease in UIP and NSIP is associated with increased LV length and is strongly associated with components of the fibroblast reticulum, namely organizing and fibrotic collagen, which supports a pathogenic role of LVs in these two diseases. Furthermore, the absence of definable differences between UIP and NSIP suggests that LVs are a unifying mechanism for the development of fibrosis in these fibrotic lung diseases. PMID:22797508
Ewing, R.D.; Sheahan, J.E.; Lewis, M.A.; Palmisano, Aldo N.
2000-01-01
Four brood years of juvenile spring chinook salmon Oncorhynchus tshawytscha were reared in conventional and baffled raceways at various rearing densities and loads at Willamette Hatchery, Oregon. A period of rapid linear growth occurred from August to November, but there was little or no growth from November to March when the fish were released. Both fall and winter growth rates were inversely related to rearing density. Final weight and length were also inversely related to rearing density. No significant relationship between load and any growth variable was observed. Fish reared at lower densities in conventional raceways tended to develop bimodal length distributions in winter and early spring. Fish reared in conventional raceways showed significantly larger growth rates and final lengths and weights than those reared in baffled raceways. Food conversions and average delivery times for feed were significantly greater in baffled than in conventional raceways. No significant relationships were observed between either rearing density or load and condition factor, food conversion, or mortality. Mortality was not significantly different between the two raceway types. When fish were transported to seawater for further rearing, there were no significant relationships between mortality in seawater and rearing density or load, but fish reared in baffled raceways had significantly higher mortality than those reared in conventional raceways.
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
Fractional Brownian motion with a reflecting wall
NASA Astrophysics Data System (ADS)
Wada, Alexander H. O.; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of the heterogeneity of traffic with speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). SLZ in the heterogeneous traffic has an important effect, typically in the mixture velocities case. In the deterministic case, SLZ leads to the appearance of car accidents even in the low densities, in this region Pac increases with increasing of fraction of fast vehicles (Ff). In the nondeterministic case, SLZ decreases the effect of braking probability Pb in the low densities. Furthermore, the impact of multi-SLZ on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that in the low densities the probability Pac without SLZ (n = 0) is low than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structures factors {F.sub.h } using a local log-likelihood function: (x)+p(.rho.(x).vertline.SOLV)p.sub.SOLV (x)+p(.rho.(x).vertline.H)p.sub.H (x)], where p.sub.PROT (x) is the probability that x is in the protein region, p(.rho.(x).vertline.PROT) is the conditional probability for .rho.(x) given that x is in the protein region, and p.sub.SOLV (x) and p(.rho.(x).vertline.SOLV) are the corresponding quantities for the solvent region, p.sub.H (x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(.rho.(x).vertline.H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Density Gradients in Chemistry Teaching
ERIC Educational Resources Information Center
Miller, P. J.
1972-01-01
Outlines experiments in which a density gradient might be used to advantage. A density gradient consists of a column of liquid, the composition and density of which varies along its length. The procedure can be used in analysis of solutions and mixtures and in density measures of solids. (Author/TS)
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
The correlation function for density perturbations in an expanding universe. I - Linear theory
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1977-01-01
The evolution of the two-point correlation function for adiabatic density perturbations in the early universe is studied. Analytical solutions are obtained for the evolution of linearized spherically symmetric adiabatic density perturbations and the two-point correlation function for these perturbations in the radiation-dominated portion of the early universe. The results are then extended to the regime after decoupling. It is found that: (1) adiabatic spherically symmetric perturbations comparable in scale with the maximum Jeans length would survive the radiation-dominated regime; (2) irregular fluctuations are smoothed out up to the scale of the maximum Jeans length in the radiation era, but regular fluctuations might survive on smaller scales; (3) in general, the only surviving structures for irregularly shaped adiabatic density perturbations of arbitrary but finite scale in the radiation regime are the size of or larger than the maximum Jeans length in that regime; (4) infinite plane waves with a wavelength smaller than the maximum Jeans length but larger than the critical dissipative damping scale could survive the radiation regime; and (5) black holes would also survive the radiation regime and might accrete sufficient mass after decoupling to nucleate the formation of galaxies.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km 2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. ?? 2011 American Society of Mammalogists.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
glass. Pgha Probability of a person being in the glass hazard area Phit Probability of hit Phit (f) Probability of hit for fatality Phit (maji...Probability of hit for major injury Phit (mini) Probability of hit for minor injury Pi Debris probability densities at the ES PMaj (pair) Individual...combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit (f), major injury
A Molecular Dynamics Simulation of the Turbulent Couette Minimal Flow Unit
NASA Astrophysics Data System (ADS)
Smith, Edward
2016-11-01
What happens to turbulent motions below the Kolmogorov length scale? In order to explore this question, a 300 million molecule Molecular Dynamics (MD) simulation is presented for the minimal Couette channel in which turbulence can be sustained. The regeneration cycle and turbulent statistics show excellent agreement to continuum based computational fluid dynamics (CFD) at Re=400. As MD requires only Newton's laws and a form of inter-molecular potential, it captures a much greater range of phenomena without requiring the assumptions of Newton's law of viscosity, thermodynamic equilibrium, fluid isotropy or the limitation of grid resolution. The fundamental nature of MD means it is uniquely placed to explore the nature of turbulent transport. A number of unique insights from MD are presented, including energy budgets, sub-grid turbulent energy spectra, probability density functions, Lagrangian statistics and fluid wall interactions. EPSRC Post Doctoral Prize Fellowship.
A DNS study of turbulent mixing of two passive scalars
NASA Astrophysics Data System (ADS)
Juneja, A.; Pope, S. B.
1996-08-01
We employ direct numerical simulations to study the mixing of two passive scalars in stationary, homogeneous, isotropic turbulence. The present work is a direct extension of that of Eswaran and Pope from one scalar to two scalars and the focus is on examining the evolution states of the scalar joint probability density function (jpdf) and the conditional expectation of the scalar diffusion to motivate better models for multi-scalar mixing. The initial scalar fields are chosen to conform closely to a ``triple-delta function'' jpdf corresponding to blobs of fluid in three distinct states. The effect of the initial length scales and diffusivity of the scalars on the evolution of the jpdf and the conditional diffusion is investigated in detail as the scalars decay from their prescribed initial state. Also examined is the issue of self-similarity of the scalar jpdf at large times and the rate of decay of the scalar variance and dissipation.
Effect of chiral symmetry on chaotic scattering from Majorana zero modes.
Schomerus, H; Marciani, M; Beenakker, C W J
2015-04-24
In many of the experimental systems that may host Majorana zero modes, a so-called chiral symmetry exists that protects overlapping zero modes from splitting up. This symmetry is operative in a superconducting nanowire that is narrower than the spin-orbit scattering length, and at the Dirac point of a superconductor-topological insulator heterostructure. Here we show that chiral symmetry strongly modifies the dynamical and spectral properties of a chaotic scatterer, even if it binds only a single zero mode. These properties are quantified by the Wigner-Smith time-delay matrix Q=-iℏS^{†}dS/dE, the Hermitian energy derivative of the scattering matrix, related to the density of states by ρ=(2πℏ)^{-1}TrQ. We compute the probability distribution of Q and ρ, dependent on the number ν of Majorana zero modes, in the chiral ensembles of random-matrix theory. Chiral symmetry is essential for a significant ν dependence.
Cross over of recurrence networks to random graphs and random geometric graphs
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2017-02-01
Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding sufficient amount of noise to the time series and into the classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.
Statistical properties of edge plasma turbulence in the Large Helical Device
NASA Astrophysics Data System (ADS)
Dewhurst, J. M.; Hnat, B.; Ohno, N.; Dendy, R. O.; Masuzaki, S.; Morisaki, T.; Komori, A.
2008-09-01
Ion saturation current (Isat) measurements made by three tips of a Langmuir probe array in the Large Helical Device are analysed for two plasma discharges. Absolute moment analysis is used to quantify properties on different temporal scales of the measured signals, which are bursty and intermittent. Strong coherent modes in some datasets are found to distort this analysis and are consequently removed from the time series by applying bandstop filters. Absolute moment analysis of the filtered data reveals two regions of power-law scaling, with the temporal scale τ ≈ 40 µs separating the two regimes. A comparison is made with similar results from the Mega-Amp Spherical Tokamak. The probability density function is studied and a monotonic relationship between connection length and skewness is found. Conditional averaging is used to characterize the average temporal shape of the largest intermittent bursts.
Preparation and transport properties of superconducting layers in the Ca-Sr-Bi-Cu-O system
NASA Astrophysics Data System (ADS)
Klee, M.; Stollman, G. M.; Stotz, S.; de Vries, J. W. C.
1988-08-01
Superconducting layers in the CaSrBiCuO system are prepared by thermal decomposition of metal carboxylates using a spin-coating and a dip-coating method onto ceramic MgO substrates. The samples consist of a tetragonal calcium-strontium-bismuth-cuprate and two bismuth-free calcium-strontium-cuprates. A step in the resistance versus temperature curve is observed which, together with the influence of magnetic fields, is interpreted as typical for a granular superconductor. The analysis shows that the critical current density is determined by domains of the order of some unit cells. The strong dependence of the superconducting transition on the orientation of an applied magnetic field is probably caused by the anisotropic layer structure. The coherence length perpendicular to the c-axis of the material is estimated to be ξab(0) = 4.0 nm and parallel to the c-axis ξc(0) = 0.6 nm.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines.The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines. The model can describe them only qualitatively. In our model, both the probability function of the failure unit in single grain segments and polygrain segments are considered instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreements and our model could explain the previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result, for convergence problems. The two tested models (Double Pareto and Inverse Gamma) resulted in very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
Rogers, Lauren A.; Stige, Leif C.; Olsen, Esben M.; Knutsen, Halvor; Chan, Kung-Sik; Stenseth, Nils Chr.
2011-01-01
Understanding how populations respond to changes in climate requires long-term, high-quality datasets, which are rare for marine systems. We estimated the effects of climate warming on cod lengths and length variability using a unique 91-y time series of more than 100,000 individual juvenile cod lengths from surveys that began in 1919 along the Norwegian Skagerrak coast. Using linear mixed-effects models, we accounted for spatial population structure and the nested structure of the survey data to reveal opposite effects of spring and summer warming on juvenile cod lengths. Warm summer temperatures in the coastal Skagerrak have limited juvenile growth. In contrast, warmer springs have resulted in larger juvenile cod, with less variation in lengths within a cohort, possibly because of a temperature-driven contraction in the spring spawning period. A density-dependent reduction in length was evident only at the highest population densities in the time series, which have rarely been observed in the last decade. If temperatures rise because of global warming, nonlinearities in the opposing temperature effects suggest that negative effects of warmer summers will increasingly outweigh positive effects of warmer springs, and the coastal Skagerrak will become ill-suited for Atlantic cod. PMID:21245301
Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials
NASA Astrophysics Data System (ADS)
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But the probability distribution with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transits from the weakest-link chain to the fiber bundle, as the limit cases. The fishnet strength at failure probability 10-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25-times higher than it is for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber reinforced materials. There is also a strong size effect, partly similar to that of Type 1 while the curves of log-strength versus log-size for different sizes could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes and CoVs of link strength. In addition to the weakest-link or fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, and has the former two as limit cases.
Score distributions of gapped multiple sequence alignments down to the low-probability tail
NASA Astrophysics Data System (ADS)
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain result for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignment differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
NASA Astrophysics Data System (ADS)
Sheidaii, Mohammad Reza; TahamouliRoudsari, Mehrzad; Gordini, Mehrdad
2016-06-01
In knee braced frames, the braces are attached to the knee element rather than the intersection of beams and columns. This bracing system is widely used and preferred over the other commonly used systems for reasons such as having lateral stiffness while having adequate ductility, damage concentration on the second degree convenience of repairing and replacing of these elements after Earthquake. The lateral stiffness of this system is supplied by the bracing member and the ductility of the frame attached to the knee length is supplied through the bending or shear yield of the knee member. In this paper, the nonlinear seismic behavior of knee braced frame systems has been investigated using incremental dynamic analysis (IDA) and the effects of the number of stories in a building, length and the moment of inertia of the knee member on the seismic behavior, elastic stiffness, ductility and the probability of failure of these systems has been determined. In the incremental dynamic analysis, after plotting the IDA diagrams of the accelerograms, the collapse diagrams in the limit states are determined. These diagrams yield that for a constant knee length with reduced moment of inertia, the probability of collapse in limit states heightens and also for a constant knee moment of inertia with increasing length, the probability of collapse in limit states increases.
Knotting probability of self-avoiding polygons under a topological constraint.
Uehara, Erica; Deguchi, Tetsuo
2017-09-07
We define the knotting probability of a knot K by the probability for a random polygon or self-avoiding polygon (SAP) of N segments having the knot type K. We show fundamental and generic properties of the knotting probability particularly its dependence on the excluded volume. We investigate them for the SAP consisting of hard cylindrical segments of unit length and radius r ex . For various prime and composite knots, we numerically show that a compact formula describes the knotting probabilities for the cylindrical SAP as a function of segment number N and radius r ex . It connects the small-N to the large-N behavior and even to lattice knots in the case of large values of radius. As the excluded volume increases, the maximum of the knotting probability decreases for prime knots except for the trefoil knot. If it is large, the trefoil knot and its descendants are dominant among the nontrivial knots in the SAP. From the factorization property of the knotting probability, we derive a sum rule among the estimates of a fitting parameter for all prime knots, which suggests the local knot picture and the dominance of the trefoil knot in the case of large excluded volumes. Here we remark that the cylindrical SAP gives a model of circular DNA which is negatively charged and semiflexible, where radius r ex corresponds to the screening length.
Knotting probability of self-avoiding polygons under a topological constraint
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2017-09-01
We define the knotting probability of a knot K by the probability for a random polygon or self-avoiding polygon (SAP) of N segments having the knot type K. We show fundamental and generic properties of the knotting probability particularly its dependence on the excluded volume. We investigate them for the SAP consisting of hard cylindrical segments of unit length and radius rex. For various prime and composite knots, we numerically show that a compact formula describes the knotting probabilities for the cylindrical SAP as a function of segment number N and radius rex. It connects the small-N to the large-N behavior and even to lattice knots in the case of large values of radius. As the excluded volume increases, the maximum of the knotting probability decreases for prime knots except for the trefoil knot. If it is large, the trefoil knot and its descendants are dominant among the nontrivial knots in the SAP. From the factorization property of the knotting probability, we derive a sum rule among the estimates of a fitting parameter for all prime knots, which suggests the local knot picture and the dominance of the trefoil knot in the case of large excluded volumes. Here we remark that the cylindrical SAP gives a model of circular DNA which is negatively charged and semiflexible, where radius rex corresponds to the screening length.
The Probability of Exceedance as a Nonparametric Person-Fit Statistic for Tests of Moderate Length
ERIC Educational Resources Information Center
Tendeiro, Jorge N.; Meijer, Rob R.
2013-01-01
To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
Karin L. Riley; Rachel A. Loehman
2016-01-01
Climate changes are expected to increase fire frequency, fire season length, and cumulative area burned in the western United States. We focus on the potential impact of mid-21st- century climate changes on annual burn probability, fire season length, and large fire characteristics including number and size for a study area in the Northern Rocky Mountains....
Mixing in High Schmidt Number Turbulent Jets
1991-01-01
the higher Sc jet is less well mixed. The difference is less pronounced at higher Re. Flame length estimates imply either an increase in entrainment...72 8.0 Estimation of flame lengths ....................................... 74 8.1 Estim ation m...A.4) Lf flame length N number of trials (Eq. 3.1) p exponent in fits of the variance behavior with Re p probability of a binomial event (Eq. 3.1) p
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central [chi]2 distribution with arbitrary number of degrees of freedom. For this function we prove that can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the degrees of freedom is greater or equal than 2. At the end of this paper we present some Turán-type inequalities for this function and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful'' sites have a higher persistence probability than "unsuccessful'' ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
NASA Astrophysics Data System (ADS)
Morales, V. L.; Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2017-12-01
Biofilms are ubiquitous bacterial communities growing in various porous media including soils, trickling and sand filters and are relevant for applications such as the degradation of pollutants for bioremediation, waste water or drinking water production purposes. By their development, biofilms dynamically change the structure of porous media, increasing the heterogeneity of the pore network and the non-Fickian or anomalous dispersion. In this work, we use an experimental approach to investigate the influence of biofilm growth on pore scale hydrodynamics and transport processes and propose a correlated continuous time random walk model capturing these observations. We perform three-dimensional particle tracking velocimetry at four different time points from 0 to 48 hours of biofilm growth. The biofilm growth notably impacts pore-scale hydrodynamics, as shown by strong increase of the average velocity and in tailing of Lagrangian velocity probability density functions. Additionally, the spatial correlation length of the flow increases substantially. This points at the formation of preferential flow pathways and stagnation zones, which ultimately leads to an increase of anomalous transport in the porous media considered, characterized by non-Fickian scaling of mean-squared displacements and non-Gaussian distributions of the displacement probability density functions. A gamma distribution provides a remarkable approximation of the bulk and the high tail of the Lagrangian pore-scale velocity magnitude, indicating a transition from a parallel pore arrangement towards a more serial one. Finally, a correlated continuous time random walk based on a stochastic relation velocity model accurately reproduces the observations and could be used to predict transport beyond the time scales accessible to the experiment.
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
Process and apparatus for separation of components of a gas stream
Bryan, Charles R.; Torczynski, John R.; Brady, Patrick V.; Gallis, Michail; Brooks, Carlton F.
2014-06-17
A process and apparatus for separating a gas mixture comprising providing a slot in a gas separation channel (conceptualized as a laterally elongated Clusius-Dickel column), having a length through which a net cross-flow of the gas mixture may be established; applying a higher temperature to one side of the channel and a lower temperature on an opposite side of the channel thereby causing thermal-diffusion and buoyant-convection flow to occur in the slot; and establishing a net cross-flow of a gas mixture comprising at least one higher density gas component and at least one lower density gas component along the length of the slot, wherein the cross-flow causes, in combination with the convection flow, a spiraling flow in the slot; and wherein the spiral flow causes an increasing amount of separation of the higher density gas from the lower density gas along the length of the channel. The process may use one or more slots and/or channels.
NASA Astrophysics Data System (ADS)
Kononenko, O.; Lopes, N. C.; Cole, J. M.; Kamperidis, C.; Mangles, S. P. D.; Najmudin, Z.; Osterhoff, J.; Poder, K.; Rusby, D.; Symes, D. R.; Warwick, J.; Wood, J. C.; Palmer, C. A. J.
2016-09-01
In this work, two-dimensional (2D) hydrodynamic simulations of a variable length gas cell were performed using the open source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments: an accelerator and an injector section connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and the exit cones. Further studies showed, that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling, and consequently, is expected to negatively impact the accelerator stability.
Process and apparatus for separation of components of a gas stream
Bryan, Charles R; Torczynski, John R; Brady, Patrick V; Gallis, Michail; Brooks, Carlton F
2013-09-17
A process and apparatus for separating a gas mixture comprising providing a slot in a gas separation channel (conceptualized as a laterally elongated Clusius-Dickel column), having a length through which a net cross-flow of the gas mixture may be established; applying a higher temperature to one side of the channel and a lower temperature on an opposite side of the channel thereby causing thermal-diffusion and buoyant-convection flow to occur in the slot; and establishing a net cross-flow of a gas mixture comprising at least one higher density gas component and at least one lower density gas component along the length of the slot, wherein the cross-flow causes, in combination with the convection flow, a spiraling flow in the slot; and wherein the spiral flow causes an increasing amount of separation of the higher density gas from the lower density gas along the length of the channel. The process may use one or more slots and/or channels.
Process and apparatus for separation of components of a gas stream
Bryan, Charles R; Torczynski, John R; Brady, Patrick V; Gallis, Michail; Brooks, Carlton F
2013-11-19
A process and apparatus for separating a gas mixture comprising providing a slot in a gas separation channel (conceptualized as a laterally elongated Clusius-Dickel column), having a length through which a net cross-flow of the gas mixture may be established; applying a higher temperature to one side of the channel and a lower temperature on an opposite side of the channel thereby causing thermal-diffusion and buoyant-convection flow to occur in the slot; and establishing a net cross-flow of a gas mixture comprising at least one higher density gas component and at least one lower density gas component along the length of the slot, wherein the cross-flow causes, in combination with the convection flow, a spiraling flow in the slot; and wherein the spiral flow causes an increasing amount of separation of the higher density gas from the lower density gas along the length of the channel. The process may use one or more slots and/or channels.
Critical transition in the constrained traveling salesman problem.
Andrecut, M; Ali, M K
2001-04-01
We investigate the finite size scaling of the mean optimal tour length as a function of density of obstacles in a constrained variant of the traveling salesman problem (TSP). The computational experience pointed out a critical transition (at rho(c) approximately 85%) in the dependence between the excess of the mean optimal tour length over the Held-Karp lower bound and the density of obstacles.
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
Bowers, D T; Chhabra, P; Langman, L; Botchwey, E A; Brayman, K L
2011-11-01
Nanofiber scaffolds could improve islet transplant success by physically mimicking the shape of extracellular matrix and by acting as a drug-delivery vehicle. Scaffolds implanted in alternate transplant sites must be prevascularized or very quickly vascularized following transplantation to prevent hypoxia-induced islet necrosis. The local release of the S1P prodrug FTY720 induces diameter enlargement and increases in length density. The objective of this preliminary study was to evaluate length and diameter differences between diabetic and nondiabetic animals implanted with FTY720-containing electrospun scaffolds using intravital imaging of dorsal skinfold window chambers. Electrospun mats of randomly oriented fibers we created from polymer solutions of PLAGA (50:50 LA:GA) with and without FTY720 loaded at a ratio of 1:200 (FTY720:PLAGA by wt). The implanted fiber mats were 4 mm in diameter and ∼0.2 mm thick. Increases in length density and vessel diameter were assessed by automated analysis of images over 7 days in RAVE, a Matlab program. Image analysis of repeated measures of microvessel metrics demonstrated a significant increase in the length density from day 0 to day 7 in the moderately diabetic animals of this preliminary study (P < .05). Furthermore, significant differences in length density at day 0 and day 3 were found between recently STZ-induced moderately diabetic and nondiabetic animals in response to FTY720 local release (P < .05, Student t test). Driving the islet revascularization process using local release of factors, such as FTY720, from biodegradable polymers makes an attractive system for the improvement of islet transplant success. Preliminary study results suggest that a recently induced moderately diabetic state may potentiate the mechanism by which local release of FTY720 from polymer fibers increases length density of microvessels. Therefore, local release of S1P receptor-targeted drugs is under further investigation for improvement of transplanted islet function. Copyright © 2011. Published by Elsevier Inc.
Singh, Niraj Kumar; Jha, Raghav Hira; Gargeshwari, Aditi; Kumar, Prawin
2018-01-01
Alteration in the process of bone remodelling is associated with falls and fractures due to increased bone fragility and altered calcium functioning. The auditory system consists of skeletal structures and is, therefore, prone to getting affected by altered bone remodelling. In addition, the vestibule consists of huge volumes of calcium (CaCO3) in the form of otoconia crystals and alteration in functioning calcium levels could, therefore, result in vestibular symptoms. Thus, the present study aimed at compiling information from various studies on assessment of auditory or vestibular systems in individuals with reduced bone mineral density (BMD). A total of 1977 articles were searched using various databases and 19 full-length articles which reported auditory and vestibular outcomes in persons with low BMD were reviewed. An intricate relationship between altered BMD and audio-vestibular function was evident from the studies; nonetheless, how one aspect of hearing or balance affects the other is not clear. Significant effect of reduced bone mineral density could probably be due to the metabolic changes at the level of cochlea, secondary to alterations in BMD. One could also conclude that sympathetic remodelling is associated with vestibular problems in individual; however, whether vestibular problems lead to altered BMD cannot be ascertained with confidence. The studies reviewed in the article provide an evidence of possible involvement of hearing and vestibular system abnormalities in individuals with reduced bone mineral density. Hence, the assessment protocol for these individuals must include hearing and balance evaluation as mandatory for planning appropriate management.
NASA Astrophysics Data System (ADS)
Jaafarian, Rokhsare; Ganjovi, Alireza; Etaati, Gholamreza
2018-01-01
In this work, a Particle in Cell-Monte Carlo Collision simulation technique is used to study the operating parameters of a typical helicon plasma source. These parameters mainly include the gas pressure, externally applied static magnetic field, the length and radius of the helicon antenna, and the frequency and voltage amplitude of the applied RF power on the helicon antenna. It is shown that, while the strong radial gradient of the formed plasma density in the proximity of the plasma surface is substantially proportional to the energy absorption from the existing Trivelpiece-Gould (TG) modes, the observed high electron temperature in the helicon source at lower static magnetic fields is significant evidence for the energy absorption from the helicon modes. Furthermore, it is found that, at higher gas pressures, both the plasma electron density and temperature are reduced. Besides, it is shown that, at higher static magnetic fields, owing to the enhancement of the energy absorption by the plasma charged species, the plasma electron density is linearly increased. Moreover, it is seen that, at the higher spatial dimensions of the antenna, both the plasma electron density and temperature are reduced. Additionally, while, for the applied frequencies of 13.56 MHz and 27.12 MHz on the helicon antenna, the TG modes appear, for the applied frequency of 18.12 MHz on the helicon antenna, the existence of helicon modes is proved. Moreover, by increasing the applied voltage amplitude on the antenna, the generation of mono-energetic electrons is more probable.
Asymmetric simple exclusion process on chains with a shortcut.
Bunzarova, Nadezhda; Pesheva, Nina; Brankov, Jordan
2014-03-01
We consider the asymmetric simple exclusion process (TASEP) on an open network consisting of three consecutively coupled macroscopic chain segments with a shortcut between the tail of the first segment and the head of the third one. The model was introduced by Y.-M. Yuan et al. [J. Phys. A 40, 12351 (2007)] to describe directed motion of molecular motors along twisted filaments. We report here unexpected results which revise the previous findings in the case of maximum current through the network. Our theoretical analysis, based on the effective rates' approximation, shows that the second (shunted) segment can exist in both low- and high-density phases, as well as in the coexistence (shock) phase. Numerical simulations demonstrate that the last option takes place in finite-size networks with head and tail chains of equal length, provided the injection and ejection rates at their external ends are equal and greater than one-half. Then the local density distribution and the nearest-neighbor correlations in the middle chain correspond to a shock phase with completely delocalized domain wall. Upon moving the shortcut to the head or tail of the network, the density profile takes a shape typical of a high- or low-density phase, respectively. Surprisingly, the main quantitative parameters of that shock phase are governed by a positive root of a cubic equation, the coefficients of which linearly depend on the probability of choosing the shortcut. Alternatively, they can be expressed in a universal way through the shortcut current. The unexpected conclusion is that a shortcut in the bulk of a single lane may create traffic jams.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
Spatial distribution of important geotechnical parameter named compression modulus Es contributes considerably to the understanding of the underlying geological processes and the adequate assessment of the Es mechanics effects for differential settlement of large continuous structure foundation. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration test (CPT) with borehole experiments. To achieve such a task, the Es distribution of stratum of silty clay in region A of China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method integrates rigorously and efficiently multi-precision of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. Spatial prior multivariate probability density function (PDF) and likelihood PDF of the CPT positions were built by borehole experiments and the potential value of the prediction point, then, preceding numerical integration on the CPT probability density curves, the posterior probability density curve of the prediction point would be calculated by the Bayesian reverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences were also discussed between single CPT samplings of normal distribution and simulated probability density curve based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into interpolation process, whereas more informative estimations are generated by considering CPT Uncertainty for the estimation points. Calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. This characterization results will provide a multi-precision information assimilation method of other geotechnical parameters.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
Dynamic probability of reinforcement for cooperation: Random game termination in the centipede game.
Krockow, Eva M; Colman, Andrew M; Pulford, Briony D
2018-03-01
Experimental games have previously been used to study principles of human interaction. Many such games are characterized by iterated or repeated designs that model dynamic relationships, including reciprocal cooperation. To enable the study of infinite game repetitions and to avoid endgame effects of lower cooperation toward the final game round, investigators have introduced random termination rules. This study extends previous research that has focused narrowly on repeated Prisoner's Dilemma games by conducting a controlled experiment of two-player, random termination Centipede games involving probabilistic reinforcement and characterized by the longest decision sequences reported in the empirical literature to date (24 decision nodes). Specifically, we assessed mean exit points and cooperation rates, and compared the effects of four different termination rules: no random game termination, random game termination with constant termination probability, random game termination with increasing termination probability, and random game termination with decreasing termination probability. We found that although mean exit points were lower for games with shorter expected game lengths, the subjects' cooperativeness was significantly reduced only in the most extreme condition with decreasing computer termination probability and an expected game length of two decision nodes. © 2018 Society for the Experimental Analysis of Behavior.
Stanley, T.R.; Newmark, W.D.
2010-01-01
In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the length of the incubation and nestling stage and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. ?? 2010 The American Ornithologists' Union.
Correcting length-frequency distributions for imperfect detection
Breton, André R.; Hawkins, John A.; Winkelman, Dana L.
2013-01-01
Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
NASA Astrophysics Data System (ADS)
Sampson, Danuta M.; Gong, Peijun; An, Di; Menghini, Moreno; Hansen, Alex; Mackey, David A.; Sampson, David D.; Chen, Fred K.
2017-04-01
We examined the impact of axial length on superficial retinal vessel density (SRVD) and foveal avascular zone area (FAZA) measurement using optical coherence tomography angiography. The SRVD and FAZA were quantified before and after correction for magnification error associated with axial length variation. Although SRVD did not differ before and after correction for magnification error in the parafoveal region, change in foveal SRVD and FAZA were significant. This has implications for clinical trials outcome in diseased eyes where significant capillary dropout may occur in the parafovea.
Choi, Kai Yip; Yu, Wing Yan; Lam, Christie Hang I; Li, Zhe Chuang; Chin, Man Pan; Lakshmanan, Yamunadevi; Wong, Francisca Siu Yin; Do, Chi Wai; Lee, Paul Hong; Chan, Henry Ho Lung
2017-09-01
People in Hong Kong generally live in a densely populated area and their homes are smaller compared with most other cities worldwide. Interestingly, East Asian cities with high population densities seem to have higher myopia prevalence, but the association between them has not been established. This study investigated whether the crowded habitat in Hong Kong is associated with refractive error among children. In total, 1075 subjects [Mean age (S.D.): 9.95 years (0.97), 586 boys] were recruited. Information such as demographics, living environment, parental education and ocular status were collected using parental questionnaires. The ocular axial length and refractive status of all subjects were measured by qualified personnel. Ocular axial length was found to be significantly longer among those living in districts with a higher population density (F 2,1072 = 6.15, p = 0.002) and those living in a smaller home (F 2,1072 = 3.16, p = 0.04). Axial lengths were the same among different types of housing (F 3,1071 = 1.24, p = 0.29). Non-cycloplegic autorefraction suggested a more negative refractive error in those living in districts with a higher population density (F 2,1072 = 7.88, p < 0.001) and those living in a smaller home (F 2,1072 = 4.25, p = 0.02). After adjustment for other confounding covariates, the population density and home size also significantly predicted axial length and non-cycloplegic refractive error in the multiple linear regression model, while axial length and refractive error had no relationship with types of housing. Axial length in children and childhood refractive error were associated with high population density and small home size. A constricted living space may be an environmental threat for myopia development in children. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
The global short-period wavefield modelled with a Monte Carlo seismic phonon method
Shearer, Peter M.; Earle, Paul
2004-01-01
At high frequencies (∼1 Hz), much of the seismic energy arriving at teleseismic distances is not found in the main phases (e.g. P, PP, S, etc.) but is contained in the extended coda that follows these arrivals. This coda results from scattering off small-scale velocity and density perturbations within the crust and mantle and contains valuable information regarding the depth dependence and strength of this heterogeneity as well as the relative importance of intrinsic versus scattering attenuation. Most analyses of seismic coda to date have concentrated on S-wave coda generated from lithospheric scattering for events recorded at local and regional distances. Here, we examine the globally averaged vertical-component, 1-Hz wavefield (>10° range) for earthquakes recorded in the IRIS FARM archive from 1990 to 1999. We apply an envelope-function stacking technique to image the average time–distance behavior of the wavefield for both shallow (≤50 km) and deep (≥500 km) earthquakes. Unlike regional records, our images are dominated by P and P coda owing to the large effect of attenuation on PPand S at high frequencies. Modelling our results is complicated by the need to include a variety of ray paths, the likely contributions of multiple scattering and the possible importance of P-to-S and S-to-P scattering. We adopt a stochastic, particle-based approach in which millions of seismic phonons are randomly sprayed from the source and tracked through the Earth. Each phonon represents an energy packet that travels along the appropriate ray path until it is affected by a discontinuity or a scatterer. Discontinuities are modelled by treating the energy normalized reflection and transmission coefficients as probabilities. Scattering probabilities and scattering angles are computed in a similar fashion, assuming random velocity and density perturbations characterized by an exponential autocorrelation function. Intrinsic attenuation is included by reducing the energy contained in each particle as an appropriate function of traveltime. We find that most scattering occurs in the lithosphere and upper mantle, as previous results have indicated, but that some lower-mantle scattering is likely also required. A model with 3 to 4 per cent rms velocity heterogeneity at 4-km scale length in the upper mantle and 0.5 per cent rms velocity heterogeneity at 8-km scale length in the lower mantle (with intrinsic attenuation of Qα= 450 above 200 km depth andQα= 2500 below 200 km) provides a reasonable fit to both the shallow- and deep-earthquake observations, although many trade-offs exist between the scale length, depth extent and strength of the heterogeneity.
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
Fluid overpressure estimates from the aspect ratios of mineral veins
NASA Astrophysics Data System (ADS)
Philipp, Sonja L.
2012-12-01
Several hundred calcite veins and (mostly) normal faults were studied in limestone and shale layers of a Mesozoic sedimentary basin next to the village of Kilve at the Bristol Channel (SW-England). The veins strike mostly E-W (239 measurements), that is, parallel with the associated normal faults. The mean vein dip is 73°N (44 measurements). Field observations indicate that these faults transported the fluids up into the limestone layers. The vein outcrop (trace) length (0.025-10.3 m) and thickness (0.1-28 mm) size distributions are log-normal. Taking the thickness as the dependent variable and the outcrop length as the independent variable, linear regression gives a coefficient of determination (goodness of fit) of R2 = 0.74 (significant with 99% confidence), but natural logarithmic transformation of the thickness-length data increases the coefficient of determination to R2 = 0.98, indicating that nearly all the variation in thickness can be explained in terms of variation in trace length. The geometric mean of the aspect (length/thickness) ratio, 451, gives the best representation of the data set. With 95% confidence, the true geometric mean of the aspect ratios of the veins lies in the interval 409-497. Using elastic crack theory, appropriate elastic properties of the host rock, and the mean aspect ratio, the fluid overpressure (that is, the total fluid pressure minus the normal stress on the fracture plane) at the time of vein formation is estimated at around 18 MPa. From these results, and using the average host rock and water densities, the depth to the sources of the fluids (below the present exposures) forming the veins is estimated at between around 300 m and 1200 m. These results are in agreement to those obtained by independent isotopic studies and indicate that the fluids were of rather local origin, probably injected from sill-like sources (water sills) inside the sedimentary basin.
NASA Astrophysics Data System (ADS)
Hashemi, Seyed Naser; Baizidi, Chavare
2018-04-01
In this paper, 2-D spatial variation of the frequency and length density and frequency-length relation of large-scale faults in the Zagros region (Iran), as a typical fold-and-thrust belt, were examined. Moreover, the directional analysis of these faults as well as the scale dependence of the orientations was studied. For this purpose, a number of about 8000 faults with L ≥ 1.0 km were extracted from the geological maps covering the region, and then, the data sets were analyzed. The overall pattern of the frequency/length distribution of the total faults of the region acceptably fits with a power-law relation with exponent 1.40, with an obvious change in the gradient in L = 12.0 km. In addition, maps showing the spatial variation of fault densities over the region indicate that the maximum values of the frequency and length density of the faults are attributed to the northeastern part of the region and parallel to the suture zone, respectively, and the fault density increases towards the central parts of the belt. Moreover, the directional analysis of the fault trends gives a dominant preferred orientation trend of 300°-330° and the assessment of the scale dependence of the fault directions demonstrates that larger faults show higher degrees of preferred orientations. As a result, it is concluded that the evolutionary path of the faulting process in this region can be explained by increasing the number of faults rather than the growth in the fault lengths and also it seems that the regional-scale faults in this region are generated by a nearly steady-state tectonic stress regime.
[The effects of glycemic control on ophthalmic refraction in diabetic patients].
Li, Hai-yan; Luo, Guo-chun; Guo, Jiang; Liang, Zhen
2010-10-01
To evaluate effects of glycemic control on refraction in diabetic patients. Twenty newly diagnosed diabetic patients were included in this study. The random blood glucose, glycosylated hemoglobin A1c (HbA1c) levels, fasting C-peptide and postprandial 2 h C-peptide levels were measured before treatment. The patients with random blood glucose ≥ 12.0 mmol/L and HbA1c ≥ 10.0% were selected. Refraction, intraocular pressure, radius of the anterior corneal curvature, depth of the anterior chamber, lens thickness, vitreous length, and axial length were measured on admission and at the end of week 1, 2, 3 and 4 during glycaemic control. A transient hyperopic change occurred in all the patients receiving glycemic control with a mean maximum hyperopic changes of 1.6 D (0.50 D ∼ 3.20 D). There was a positive correlation between the magnitude of the maximum hyperopic changes and the HbA1c levels on admission (r = 0.84, P < 0.05). There was a positive correlation between the magnitude of the maximum hyperopic changes and the daily rate of blood glucose reduction over the first 7 days of the treatment (r = 0.53, P < 0.05). There was no significant correlation between the magnitude of the maximum hyperopic changes and the levels of random blood glucose on admission. No significant correlation was observed between the maximum hyperopic changes and fasting C-peptide or postprandial 2 h C-peptide. There were no significant correlations between the magnitude of the maximum hyperopic changes and age, blood press, body mass index, triglyceride, total cholesterol, low-density lipoprotein or high-density lipoprotein. No significant changes were observed in the intraocular pressure, radius of the anterior corneal curvature, depth of the anterior chamber, lens thickness, vitreous length and axial length during glycemic control. Transient hyperopic changes occur after glycemic control in diabetic patients with severe hyperglycaemia. The degrees of transient hyperopia are highly dependent on HbA1c levels before treatment and the rate of reduction of glucose level over the first 7 days of treatment. This is probably due to the decrease of refractive power by lens hydration, not morphological change of lens.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Stochastic modelling of wall stresses in abdominal aortic aneurysms treated by a gene therapy.
Mohand-Kaci, Faïza; Ouni, Anissa Eddhahak; Dai, Jianping; Allaire, Eric; Zidi, Mustapha
2012-01-01
A stochastic mechanical model using the membrane theory was used to simulate the in vivo mechanical behaviour of abdominal aortic aneurysms (AAAs) in order to compute the wall stresses after stabilisation by gene therapy. For that, both length and diameter of AAAs rats were measured during their expansion. Four groups of animals, control and treated by an endovascular gene therapy during 3 or 28 days were included. The mechanical problem was solved analytically using the geometric parameters and assuming the shape of aneurysms by a 'parabolic-exponential curve'. When compared to controls, stress variations in the wall of AAAs for treated arteries during 28 days decreased, while they were nearly constant at day 3. The measured geometric parameters of AAAs were then investigated using probability density functions (pdf) attributed to every random variable. Different trials were useful to define a reliable confidence region in which the probability to have a realisation is equal to 99%. The results demonstrated that the error in the estimation of the stresses can be greater than 28% when parameters uncertainties are not considered in the modelling. The relevance of the proposed approach for the study of AAA growth may be studied further and extended to other treatments aimed at stabilisation AAAs, using biotherapies and pharmacological approaches.
NASA Astrophysics Data System (ADS)
Snow, Michael G.; Bajaj, Anil K.
2015-08-01
This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, M.B.; Lafferty, K.D.; van, Oosterhout C.; Cable, J.
2011-01-01
Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. ?? 2011 Johnson et al.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne
2011-01-01
Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
Randomized path optimization for thevMitigated counter detection of UAVS
2017-06-01
using Bayesian filtering . The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the...Bayesian filtering . The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal...algorithm’s success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAVs position to predict its terminal location. We
Physical Models of Layered Polar Firn Brightness Temperatures from 0.5 to 2 GHz
NASA Technical Reports Server (NTRS)
Tan, Shurun; Aksoy, Mustafa; Brogioni, Marco; Macelloni, Giovanni; Durand, Michael; Jezek, Kenneth C.; Wang, Tian-Lin; Tsang, Leung; Johnson, Joel T.; Drinkwater, Mark R.;
2015-01-01
We investigate physical effects influencing 0.5-2 GHz brightness temperatures of layered polar firn to support the Ultra Wide Band Software Defined Radiometer (UWBRAD) experiment to be conducted in Greenland and in Antarctica. We find that because ice particle grain sizes are very small compared to the 0.5-2 GHz wavelengths, volume scattering effects are small. Variations in firn density over cm- to m-length scales, however, cause significant effects. Both incoherent and coherent models are used to examine these effects. Incoherent models include a 'cloud model' that neglects any reflections internal to the ice sheet, and the DMRT-ML and MEMLS radiative transfer codes that are publicly available. The coherent model is based on the layered medium implementation of the fluctuation dissipation theorem for thermal microwave radiation from a medium having a nonuniform temperature. Density profiles are modeled using a stochastic approach, and model predictions are averaged over a large number of realizations to take into account an averaging over the radiometer footprint. Density profiles are described by combining a smooth average density profile with a spatially correlated random process to model density fluctuations. It is shown that coherent model results after ensemble averaging depend on the correlation lengths of the vertical density fluctuations. If the correlation length is moderate or long compared with the wavelength (approximately 0.6x longer or greater for Gaussian correlation function without regard for layer thinning due to compaction), coherent and incoherent model results are similar (within approximately 1 K). However, when the correlation length is short compared to the wavelength, coherent model results are significantly different from the incoherent model by several tens of kelvins. For a 10-cm correlation length, the differences are significant between 0.5 and 1.1 GHz, and less for 1.1-2 GHz. Model results are shown to be able to match the v-pol SMOS data closely and predict the h-pol data for small observation angles.
Korman, Josh; Yard, Mike
2017-01-01
Article for outlet: Fisheries Research. Abstract: Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.
Fatttore, Giovanni; Numerato, Dino; Peltola, Mikko; Banks, Helen; Graziani, Rebecca; Heijink, Richard; Over, Eelco; Klitkou, Søren Toksvig; Fletcher, Eilidh; Mihalicza, Péter; Sveréus, Sofia
2015-12-01
The EuroHOPE very low birth weight and very low for gestational age infants study aimed to measure and explain variation in mortality and length of stay (LoS) in the populations of seven European nations (Finland, Hungary, Italy (only the province of Rome), the Netherlands, Norway, Scotland and Sweden). Data were linked from birth, hospital discharge and mortality registries. For each infant basic clinical and demographic information, infant mortality and LoS at 1 year were retrieved. In addition, socio-economic variables at the regional level were used. Results based on 16,087 infants confirm that gestational age and Apgar score at 5 min are important determinants of both mortality and LoS. In most countries, infants admitted or transferred to third-level hospitals showed lower probability of death and longer LoS. In the meta-analyses, the combined estimates show that being male, multiple births, presence of malformations, per capita income and low population density are significant risk factors for death. It is essential that national policies improve the quality of administrative datasets and address systemic problems in assigning identification numbers at birth. European policy should aim at improving the comparability of data across jurisdictions. Copyright © 2015 John Wiley & Sons, Ltd.
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
Kumar, Vijay; Taylor, Michael K; Mehrotra, Amit; Stagner, William C
2013-06-01
Focused beam reflectance measurement (FBRM) was used as a process analytical technology tool to perform inline real-time particle size analysis of a proprietary granulation manufactured using a continuous twin-screw granulation-drying-milling process. A significant relationship between D20, D50, and D80 length-weighted chord length and sieve particle size was observed with a p value of <0.0001 and R(2) of 0.886. A central composite response surface statistical design was used to evaluate the effect of granulator screw speed and Comil® impeller speed on the length-weighted chord length distribution (CLD) and particle size distribution (PSD) determined by FBRM and nested sieve analysis, respectively. The effect of granulator speed and mill speed on bulk density, tapped density, Compressibility Index, and Flowability Index were also investigated. An inline FBRM probe placed below the Comil-generated chord lengths and CLD data at designated times. The collection of the milled samples for sieve analysis and PSD evaluation were coordinated with the timing of the FBRM determinations. Both FBRM and sieve analysis resulted in similar bimodal distributions for all ten manufactured batches studied. Within the experimental space studied, the granulator screw speed (650-850 rpm) and Comil® impeller speed (1,000-2,000 rpm) did not have a significant effect on CLD, PSD, bulk density, tapped density, Compressibility Index, and Flowability Index (p value > 0.05).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zolghadr, S. H.; Jafari, S., E-mail: sjafari@guilan.ac.ir; Raghavi, A.
2016-05-15
Significant progress has been made employing plasmas in the free-electron lasers (FELs) interaction region. In this regard, we study the output power and saturation length of the plasma whistler wave-pumped FEL in a magnetized plasma channel. The small wavelength of the whistler wave (in sub-μm range) in plasma allows obtaining higher radiation frequency than conventional wiggler FELs. This configuration has a higher tunability by adjusting the plasma density relative to the conventional ones. A set of coupled nonlinear differential equations is employed which governs on the self-consistent evolution of an electromagnetic wave. The electron bunching process of the whistler-pumped FELmore » has been investigated numerically. The result reveals that for a long wiggler length, the bunching factor can appreciably change as the electron beam propagates through the wiggler. The effects of plasma frequency (or plasma density) and cyclotron frequency on the output power and saturation length have been studied. Simulation results indicate that with increasing the plasma frequency, the power increases and the saturation length decreases. In addition, when density of background plasma is higher than the electron beam density (i.e., for a dense plasma channel), the plasma effects are more pronounced and the FEL-power is significantly high. It is also found that with increasing the strength of the external magnetic field frequency, the power decreases and the saturation length increases, noticeably.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oakdale, James S.; Smith, Raymond F.; Forien, Jean -Baptiste
Monolithic porous bulk materials have many promising applications ranging from energy storage and catalysis to high energy density physics. High resolution additive manufacturing techniques, such as direct laser writing via two photon polymerization (DLW-TPP), now enable the fabrication of highly porous microlattices with deterministic morphology control. In this work, DLW-TPP is used to print millimeter-sized foam reservoirs (down to 0.06 g cm –3) with tailored density-gradient profiles, where density is varied by over an order of magnitude (for instance from 0.6 to 0.06 g cm –3) along a length of <100 µm. Taking full advantage of this technology, however, ismore » a multiscale materials design problem that requires detailed understanding of how the different length scales, from the molecular level to the macroscopic dimensions, affect each other. The design of these 3D-printed foams is based on the brickwork arrangement of 100 × 100 × 16 µm 3 log-pile blocks constructed from sub-micrometer scale features. A block-to-block interdigitated stitching strategy is introduced for obtaining high density uniformity at all length scales. Lastly, these materials are used to shape plasma-piston drives during ramp-compression of targets under high energy density conditions created at the OMEGA Laser Facility.« less
Oakdale, James S.; Smith, Raymond F.; Forien, Jean -Baptiste; ...
2017-09-27
Monolithic porous bulk materials have many promising applications ranging from energy storage and catalysis to high energy density physics. High resolution additive manufacturing techniques, such as direct laser writing via two photon polymerization (DLW-TPP), now enable the fabrication of highly porous microlattices with deterministic morphology control. In this work, DLW-TPP is used to print millimeter-sized foam reservoirs (down to 0.06 g cm –3) with tailored density-gradient profiles, where density is varied by over an order of magnitude (for instance from 0.6 to 0.06 g cm –3) along a length of <100 µm. Taking full advantage of this technology, however, ismore » a multiscale materials design problem that requires detailed understanding of how the different length scales, from the molecular level to the macroscopic dimensions, affect each other. The design of these 3D-printed foams is based on the brickwork arrangement of 100 × 100 × 16 µm 3 log-pile blocks constructed from sub-micrometer scale features. A block-to-block interdigitated stitching strategy is introduced for obtaining high density uniformity at all length scales. Lastly, these materials are used to shape plasma-piston drives during ramp-compression of targets under high energy density conditions created at the OMEGA Laser Facility.« less
NASA Astrophysics Data System (ADS)
Huffard, Christine L.; Kuhnz, Linda A.; Lemon, Larissa; Sherman, Alana D.; Smith, Kenneth L.
2016-03-01
Holothurians are among the most abundant benthic megafauna at abyssal depths, and important consumers and bioturbators of organic carbon on the sea floor. Significant fluctuations in abyssal holothurian density are often attributed to species-specific responses to variable particulate organic carbon flux (food supply) stemming from surface ocean events. We report changes in densities of 19 holothurian species at the abyssal monitoring site Station M in the northeast Pacific, recorded during 11 remotely operated vehicle surveys between Dec 2006 and Oct 2014. Body size demographics are presented for Abyssocucumis abyssorum, Synallactidae sp. 1, Paelopatides confundens, Elpidia sp. A, Peniagone gracilis, Peniagone papillata, Peniagone vitrea, Peniagone sp. A, Peniagone sp. 1, and Scotoplanes globosa. Densities were lower and species evenness was higher from 2006-2009 compared to 2011-2014. Food supply of freshly-settled phytodetritus was exceptionally high during this latter period. Based on relationships between median body length and density, numerous immigration and juvenile recruitment events of multiple species appeared to take place between 2011 and 2014. These patterns were dominated by elpidiids (Holothuroidea: Elasipodida: Elpidiidae), which consistently increased in density during a period of high food availability, while other groups showed inconsistent responses. We considered minimum body length to be a proxy for size at juvenile recruitment. Patterns in density clustered by this measure, which was a stronger predictor of maximum density than median and mean body length.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a MonteCarlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
The area of isodensity contours in cosmological models and galaxy surveys
NASA Technical Reports Server (NTRS)
Ryden, Barbara S.; Melott, Adrian L.; Craig, David A.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
The contour crossing statistic, defined as the mean number of times per unit length that a straight line drawn through the field crosses a given contour, is applied to model density fields and to smoothed samples of galaxies. Models in which the matter is in a bubble structure, in a filamentary net, or in clusters can be distinguished from Gaussian density distributions. The shape of the contour crossing curve in the initially Gaussian fields considered remains Gaussian after gravitational evolution and biasing, as long as the smoothing length is longer than the mass correlation length. With a smoothing length of 5/h Mpc, models containing cosmic strings are indistinguishable from Gaussian distributions. Cosmic explosion models are significantly non-Gaussian, having a bubbly structure. Samples from the CfA survey and the Haynes and Giovanelli (1986) survey are more strongly non-Gaussian at a smoothing length of 6/h Mpc than any of the models examined. At a smoothing length of 12/h Mpc, the Haynes and Giovanelli sample appears Gaussian.
Development of Prototype HTS Components for Magnetic Suspension Applications
NASA Technical Reports Server (NTRS)
Haldar, P.; Hoehn, J., Jr.; Selvamanickam, V.; Farrell, R. A.; Balachandran, U.; Iyer, A. N.; Peterson, E.; Salazar, K.
1996-01-01
We have concentrated on developing prototype lengths of bismuth and thallium based silver sheathed superconductors by the powder-in-tube approach to fabricate high temperature superconducting (HTS) components for magnetic suspension applications. Long lengths of mono and multi filament tapes are presently being fabricated with critical current densities useful for maglev and many other applications. We have recently demonstrated the prototype manufacture of lengths exceeding 1 km of Bi-2223 multi filament conductor. Long lengths of thallium based multi-filament conductor have also been fabricated with practical levels of critical current density and improved field dependence behavior. Test coils and magnets have been built from these lengths and characterized over a range of temperatures and background fields to determine their performance. Work is in progress to develop, fabricate and test HTS windings that will be suitable for magnetic suspension, levitation and other electric power related applications.
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
Estimating loop length from CryoEM images at medium resolutions.
McKnight, Andrew; Si, Dong; Al Nasr, Kamal; Chernikov, Andrey; Chrisochoides, Nikos; He, Jing
2013-01-01
De novo protein modeling approaches utilize 3-dimensional (3D) images derived from electron cryomicroscopy (CryoEM) experiments. The skeleton connecting two secondary structures such as α-helices represent the loop in the 3D image. The accuracy of the skeleton and of the detected secondary structures are critical in De novo modeling. It is important to measure the length along the skeleton accurately since the length can be used as a constraint in modeling the protein. We have developed a novel computational geometric approach to derive a simplified curve in order to estimate the loop length along the skeleton. The method was tested using fifty simulated density images of helix-loop-helix segments of atomic structures and eighteen experimentally derived density data from Electron Microscopy Data Bank (EMDB). The test using simulated density maps shows that it is possible to estimate within 0.5 Å of the expected length for 48 of the 50 cases. The experiments, involving eighteen experimentally derived CryoEM images, show that twelve cases have error within 2 Å. The tests using both simulated and experimentally derived images show that it is possible for our proposed method to estimate the loop length along the skeleton if the secondary structure elements, such as α-helices, can be detected accurately, and there is a continuous skeleton linking the α-helices.
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span ta between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay ta. Aging weakly affects the shape of probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover for the wait-first jump model we observe a different behavior of MSD when ta≪t and ta≫t .
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have P(exp -1) functional form, whereas the PDFs of eccentricities can he best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue us to their ultimate nature.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Astrophysics Data System (ADS)
Kastner, S. O.; Bhatia, A. K.
1980-08-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
Laboratory-Tutorial Activities for Teaching Probability
ERIC Educational Resources Information Center
Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.
2006-01-01
We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…
Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA
Yarra, Allyson N.; Magoulick, Daniel D.
2018-01-01
Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteushad higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance is strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
Committor of elementary reactions on multistate systems
NASA Astrophysics Data System (ADS)
Király, Péter; Kiss, Dóra Judit; Tóth, Gergely
2018-04-01
In our study, we extend the committor concept on multi-minima systems, where more than one reaction may proceed, but the feasible data evaluation needs the projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize more elementary reaction committor functions or probability densities of reactive trajectories on a single plot that helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well on the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Whisman, Mark A; Richardson, Emily D
To examine the association between depressive symptoms and salivary telomere length in a probability sample of middle-aged and older adults, and to evaluate age and sex as potential moderators of this association and test whether this association was incremental to potential confounds. Participants were 3,609 individuals from the 2008 wave of the Health and Retirement Study. Telomere length assays were performed using quantitative real-time polymerase chain reaction on DNA extracted from saliva samples. Depressive symptoms were assessed via interview, and health and lifestyle factors, traumatic life events, and neuroticism were assessed via self-report. Regression analyses were conducted to examine the associations between predictor variables and salivary telomere length. After adjusting for demographics, depressive symptoms were negatively associated with salivary telomere length (b = -.003; p = .014). Furthermore, this association was moderated by sex (b = .005; p = .011), such that depressive symptoms were significantly and negatively associated with salivary telomere length for men (b = - .006; p < .001) but not for women (b = - .001; p = .644). The negative association between depressive symptoms and salivary telomere length in men remained statistically significant after additionally adjusting for cigarette smoking, body mass index, chronic health conditions, childhood and lifetime exposure to traumatic life events, and neuroticism. Higher levels of depressive symptoms were associated with shorter salivary telomeres in men, and this association was incremental to several potential confounds. Shortened telomeres may help account for the association between depression and poor physical health and mortality.
Li, Xia; Kearney, Patricia M; Keane, Eimear; Harrington, Janas M; Fitzgerald, Anthony P
2017-06-01
The aim of this study was to explore levels and sociodemographic correlates of physical activity (PA) over 1 week using accelerometer data. Accelerometer data was collected over 1 week from 1075 8-11-year-old children in the cross-sectional Cork Children's Lifestyle Study. Threshold values were used to categorise activity intensity as sedentary, light, moderate or vigorous. Questionnaires collected data on demographic factors. Smoothed curves were used to display minute by minute variations. Binomial regression was used to identify factors correlated with the probability of meeting WHO 60 min moderate to vigorous PA guidelines. Overall, 830 children (mean (SD) age: 9.9(0.7) years, 56.3% boys) were included. From the binomial multiple regression analysis, boys were found more likely to meet guidelines (probability ratio 1.17, 95% CI 1.06 to 1.28) than girls. Older children were less likely to meet guidelines than younger children (probability ratio 0.91, CI 0.87 to 0.95). Normal weight children were more likely than overweight and obese children to meet guidelines (probability ratio 1.25, CI 1.16 to 1.34). Children in urban areas were more likely to meet guidelines than those in rural areas (probability ratio 1.19, CI 1.07 to 1.33). Longer daylight length days were associated with greater probability of meeting guidelines compared to shorter daylight length days. PA levels differed by individual factors including age, gender and weight status as well as by environmental factors including residence and daylight length. Less than one-quarter of children (26.8% boys, 16.2% girls) meet guidelines. Effective intervention policies are urgently needed to increase PA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In the work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2)more » electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.« less
Simulation study of poled low-water ionomers with different architectures
NASA Astrophysics Data System (ADS)
Allahyarov, Elshad; Taylor, Philip L.; Löwen, Hartmut
2011-11-01
The role of the ionomer architecture in the formation of ordered structures in poled membranes is investigated by molecular dynamics computer simulations. It is shown that the length of the sidechain Ls controls both the areal density of cylindrical aggregates Nc and the diameter of these cylinders in the poled membrane. The backbone segment length Lb tunes the average diameter Ds of cylindrical clusters and the average number of sulfonates Ns in each cluster. A simple empirical formula is noted for the dependence of the number density of induced rod-like aggregates on the sidechain length Ls within the parameter range considered in this study.
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern proportional in the frequency domain to [1+(iωτ0){\\hspace{0pt}}α]-β with τ0 > 0 being some characteristic time. For α = l/k< 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1){\\hspace{0pt}}1/α (1 < q < 2) we show that for 0 < α < 1 the response functions fα, β(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities gα, β(u) , we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
Anomaly in the band centre of the one-dimensional Anderson model
NASA Astrophysics Data System (ADS)
Kappus, M.; Wegner, F.
1981-03-01
We calculate the density of states and various characteristic lengths of the one-dimensional Anderson model in the limit of weak disorder. All these quantities show anomalous fluctuations near the band centre. This has already been observed for the density of states in a different model by Gorkov and Dorokhov, and is in close agreement with a Monte-Carlo calculation for the localization length by Czycholl, Kramer and Mac-Kinnon.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end matlab packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low density parity check) codes . Realizing that the power of LDPC codes come at the price of decoder complexity, we also...Channel Coding Binary Convolution Code or LDPC Packet Length 0 - 216-1, bytes Coding Rate 1/2, 2/3, 3/4, 5/6 MIMO Channel Training Length 0 - 4, symbols
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.
NASA Astrophysics Data System (ADS)
Kazemikia, Kaveh; Bonabi, Fahimeh; Asadpoorchallo, Ali; Shokrzadeh, Majid
2015-02-01
In this work, an optimized pulsed magnetic field production apparatus is designed based on a RLC (Resistance/Self-inductance/Capacitance) discharge circuit. An algorithm for designing an optimum magnetic coil is presented. The coil is designed to work at room temperature. With a minor physical reinforcement, the magnetic flux density can be set up to 12 Tesla with 2 ms duration time. In our design process, the magnitude and the length of the magnetic pulse are the desired parameters. The magnetic field magnitude in the RLC circuit is maximized on the basis of the optimal design of the coil. The variables which are used in the optimization process are wire diameter and the number of coil layers. The coil design ensures the critically damped response of the RLC circuit. The electrical, mechanical, and thermal constraints are applied to the design process. A locus of probable magnetic flux density values versus wire diameter and coil layer is provided to locate the optimum coil parameters. Another locus of magnetic flux density values versus capacitance and initial voltage of the RLC circuit is extracted to locate the optimum circuit parameters. Finally, the application of high magnetic fields on carbon nanotube-PolyPyrrole (CNT-PPy) nano-composite is presented. Scanning probe microscopy technique is used to observe the orientation of CNTs after exposure to a magnetic field. The result shows alignment of CNTs in a 10.3 Tesla, 1.5 ms magnetic pulse.
NASA Astrophysics Data System (ADS)
Khandkar, Mahendra D.; Stinchcombe, Robin; Barma, Mustansir
2017-01-01
We demonstrate the large-scale effects of the interplay between shape and hard-core interactions in a system with left- and right-pointing arrowheads <> on a line, with reorientation dynamics. This interplay leads to the formation of two types of domain walls, >< (A ) and <> (B ). The correlation length in the equilibrium state diverges exponentially with increasing arrowhead density, with an ordered state of like orientations arising in the limit. In this high-density limit, the A domain walls diffuse, while the B walls are static. In time, the approach to the ordered state is described by a coarsening process governed by the kinetics of domain-wall annihilation A +B →0 , quite different from the A +A →0 kinetics pertinent to the Glauber-Ising model. The survival probability of a finite set of walls is shown to decay exponentially with time, in contrast to the power-law decay known for A +A →0 . In the thermodynamic limit with a finite density of walls, coarsening as a function of time t is studied by simulation. While the number of walls falls as t-1/2, the fraction of persistent arrowheads decays as t-θ where θ is close to 1/4 , quite different from the Ising value. The global persistence too has θ =1/4 , as follows from a heuristic argument. In a generalization where the B walls diffuse slowly, θ varies continuously, increasing with increasing diffusion constant.
Khandkar, Mahendra D; Stinchcombe, Robin; Barma, Mustansir
2017-01-01
We demonstrate the large-scale effects of the interplay between shape and hard-core interactions in a system with left- and right-pointing arrowheads <> on a line, with reorientation dynamics. This interplay leads to the formation of two types of domain walls, >< (A) and <> (B). The correlation length in the equilibrium state diverges exponentially with increasing arrowhead density, with an ordered state of like orientations arising in the limit. In this high-density limit, the A domain walls diffuse, while the B walls are static. In time, the approach to the ordered state is described by a coarsening process governed by the kinetics of domain-wall annihilation A+B→0, quite different from the A+A→0 kinetics pertinent to the Glauber-Ising model. The survival probability of a finite set of walls is shown to decay exponentially with time, in contrast to the power-law decay known for A+A→0. In the thermodynamic limit with a finite density of walls, coarsening as a function of time t is studied by simulation. While the number of walls falls as t^{-1/2}, the fraction of persistent arrowheads decays as t^{-θ} where θ is close to 1/4, quite different from the Ising value. The global persistence too has θ=1/4, as follows from a heuristic argument. In a generalization where the B walls diffuse slowly, θ varies continuously, increasing with increasing diffusion constant.
Densities inferred from ESA's Venus Express aerobraking campaign at 130 km altitude
NASA Astrophysics Data System (ADS)
Bruinsma, Sean; Marty, Jean-Charles; Svedhem, Håkan; Williams, Adam; Mueller-Wodarg, Ingo
2015-04-01
In June-July 2014, ESA performed a planned aerobraking campaign with Venus Express to measure neutral densities above 130 km in Venus' atmosphere by means of the engineering accelerometers. To that purpose, the orbit perigee was lowered to approximately 130 km in order to enhance the atmospheric drag effect to the highest tolerable levels for the spacecraft; the accelerometer resolution and precision were not sufficient at higher altitudes. This campaign was requested as part of the Venus Express Atmospheric Drag Experiment (VExADE). A total of 18 orbits (i.e. days) were processed using the attitude quaternions to correctly orient the spacecraft bus and solar arrays in inertial space, which is necessary to accurately compute the exposed surface in the ram direction. The accelerometer data provide good measurements approximately from 130-140 km altitude; the length of the profiles is about 85 seconds, and they are on the early morning side (LST=4.5) at high northern latitude (70°N-82°N). The densities are a factor 2-3 larger than Hedin's VTS-3 thermosphere model, which is consistent with earlier results obtained via classical precise orbit determination at higher altitudes. Wavelike structures with amplitudes of 20% and more are detected, with wavelengths of about 100-500 km. We cannot entirely rule out that these waves are caused by the spacecraft or due to some unknown instrumental effect, but we estimate this probability to be very low.
Dong, Jia; Jones, Robert H.; Mou, Pu
2018-01-01
(1) Background: Plant roots respond to nutrients through root architecture that is regulated by hormones. Strong inter-specific variation in root architecture has been well documented, but physiological mechanisms that may control the variation have not. (2) Methods: We examined correlations between root architecture and hormones to seek clues on mechanisms behind root foraging behavior. In the green house at Beijing Normal University, hydroponic culture experiments were used to examine the root responses of four species—Callistephus chinensis, Solidago canadensis, Ailanthus altissima, Oryza sativa—to two nitrogen types (NO3− or NH4+), three nitrogen concentrations (low, medium, and high concentrations of 0.2, 1, and 18 mM, respectively) and two ways of nitrogen application (stable vs. variable). The plants were harvested after 36 days to measure root mass, 1st order root length, seminal root length for O. sativa, density of the 1st order laterals, seminal root number for O. sativa, the inter-node length of the 1st order laterals, and root hormone contents of indole-3-acetic acid, abscisic acid, and cytokinins (zeatin + zeatinriboside). (3) Results: Species differed significantly in their root architecture responses to nitrogen treatments. They also differed significantly in hormone responses to the nitrogen treatments. Additionally, the correlations between root architecture and hormone responses were quite variable across the species. Each hormone had highly species-specific relationships with root responses. (4) Conclusions: Our finding implies that a particular root foraging behavior is probably not controlled by the same biochemical pathway in all species. PMID:29495558
Grossman, Gary D.; Carline, Robert F.; Wagner, Tyler
2017-01-01
We examined the relationship between density-independent and density-dependent factors on the demography of a dense, relatively unexploited population of brown trout in Spruce Creek Pennsylvania between 1985 and 2011.Individual PCAs of flow and temperature data elucidated groups of years with multiple high flow versus multiple low flow characteristics and high versus low temperature years, although subtler patterns of variation also were observed.Density and biomass displayed similar temporal patterns, ranging from 710 to 1,803 trout/ha and 76–263 kg/ha. We detected a significantly negative linear stock-recruitment relationship (R2 = .39) and there was no evidence that flow or water temperature affected recruitment.Both annual survival and the per-capita rate of increase (r) for the population varied over the study, and density-dependent mechanisms possessed the greatest explanatory power for annual survival data. Temporal trends in population r suggested it displayed a bounded equilibrium with increases observed in 12 years and decreases detected in 13 years.Model selection analysis of per-capita rate of increase data for age 1, and adults (N = eight interpretable models) indicated that both density-dependent (five of eight) and negative density-independent processes (five of eight, i.e. high flows or temperatures), affected r. Recruitment limitation also was identified in three of eight models. Variation in the per-capita rate of increase for the population was most strongly affected by positive density independence in the form of increasing spring–summer temperatures and recruitment limitation.Model selection analyses describing annual variation in both mean length and mass data yielded similar results, although maximum wi values were low ranging from 0.09 to 0.23 (length) and 0.13 to 0.22 (mass). Density-dependence was included in 15 of 15 interpretable models for length and all ten interpretable models for mass. Similarly, positive density-independent effects in the form of increasing autumn–winter flow were present in seven of 15 interpretable models for length and five of ten interpretable models for mass. Negative density independent effects also were observed in the form of high spring–summer flows or temperatures (N = 4), or high autumn–winter temperatures (N = 1).Our analyses of the factors affecting population regulation in an introduced population of brown trout demonstrate that density-dependent forces affected every important demographic characteristic (recruitment, survivorship, the rate of increase, and size) within this population. However, density-independent forces in the form of seasonal variations in flow and temperature also helped explain annual variation in the per-capita rate of increase, and mean length and mass data. Consequently, population regulation within this population is driven by a complex of biotic and environmental factors, although it seems clear that density-dependent factors play a dominant role.
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving method density slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
Two Point Space-Time Correlation of Density Fluctuations Measured in High Velocity Free Jets
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2006-01-01
Two-point space-time correlations of air density fluctuations in unheated, fully-expanded free jets at Mach numbers M(sub j) = 0.95, 1.4, and 1.8 were measured using a Rayleigh scattering based diagnostic technique. The molecular scattered light from two small probe volumes of 1.03 mm length was measured for a completely non-intrusive means of determining the turbulent density fluctuations. The time series of density fluctuations were analyzed to estimate the integral length scale L in a moving frame of reference and the convective Mach number M(sub c) at different narrow Strouhal frequency (St) bands. It was observed that M(sub c) and the normalized moving frame length scale L*St/D, where D is the jet diameter, increased with Strouhal frequency before leveling off at the highest resolved frequency. Significant differences were observed between data obtained from the lip shear layer and the centerline of the jet. The wave number frequency transform of the correlation data demonstrated progressive increase in the radiative part of turbulence fluctuations with increasing jet Mach number.
Effect of the environment on the dendritic morphology of the rat auditory cortex
Bose, Mitali; Muñoz-Llancao, Pablo; Roychowdhury, Swagata; Nichols, Justin A.; Jakkamsetti, Vikram; Porter, Benjamin; Byrapureddy, Rajasekhar; Salgado, Humberto; Kilgard, Michael P.; Aboitiz, Francisco; Dagnino-Subiabre, Alexies; Atzori, Marco
2010-01-01
The present study aimed to identify morphological correlates of environment-induced changes at excitatory synapses of the primary auditory cortex (A1). We used the Golgi-Cox stain technique to compare pyramidal cells dendritic properties of Sprague-Dawley rats exposed to different environmental manipulations. Sholl analysis, dendritic length measures, and spine density counts were used to monitor the effects of sensory deafness and an auditory version of environmental enrichment (EE). We found that deafness decreased apical dendritic length leaving basal dendritic length unchanged, whereas EE selectively increased basal dendritic length without changing apical dendritic length. On the contrary, deafness decreased while EE increased spine density in both basal and apical dendrites of A1 layer 2/3 (LII/III) neurons. To determine whether stress contributed to the observed morphological changes in A1, we studied neural morphology in a restraint-induced model that lacked behaviorally relevant acoustic cues. We found that stress selectively decreased apical dendritic length in the auditory but not in the visual primary cortex. Similar to the acoustic manipulation, stress-induced changes in dendritic length possessed a layer specific pattern displaying LII/III neurons from stressed animals with normal apical dendrites but shorter basal dendrites, while infragranular neurons (layers V and VI) displayed shorter apical dendrites but normal basal dendrites. The same treatment did not induce similar changes in the visual cortex, demonstrating that the auditory cortex is an exquisitely sensitive target of neocortical plasticity, and that prolonged exposure to different acoustic as well as emotional environmental manipulation may produce specific changes in dendritic shape and spine density. PMID:19771593
Radiative transition of hydrogen-like ions in quantum plasma
NASA Astrophysics Data System (ADS)
Hu, Hongwei; Chen, Zhanbin; Chen, Wencong
2016-12-01
At fusion plasma electron temperature and number density regimes of 1 × 103-1 × 107 K and 1 × 1028-1 × 1031/m3, respectively, the excited states and radiative transition of hydrogen-like ions in fusion plasmas are studied. The results show that quantum plasma model is more suitable to describe the fusion plasma than the Debye screening model. Relativistic correction to bound-state energies of the low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude in the same number density regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. A good agreement with Monte Carlo Simulations shows accuracy of the PDF method.
Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows.
Rolland, Joran
2018-02-01
This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted on these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and domains initially containing their steady turbulent fraction; (ii) the puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing variance noise pushes the state from the laminar base flow to the most probable germ of turbulence which in turn develops into a full blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, is determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of mean first passage time T before collapse is separated in two. One finds that ln(T)=A_{p}r-B_{p}, with A_{p} and B_{p} positive. Moreover, A_{p} and B_{p} are affine in the spatial integral of turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse is also separated. The author finds that T≍exp[L(Ar-B)] with A and B positive. The length and Reynolds number dependence of T are then discussed in view of the large deviations theoretical approaches of the study of mean first passage times and multistability, where ln(T) in the limit of small variance noise is studied. Two points of view, local noise of small variance and large length, can be used to discuss the exponential dependence in L of T. In particular, it is shown how a T≍exp[L(A^{'}R-B^{'})] can be derived in a conceptual two degrees of freedom model of a transitional wall flow proposed by Dauchot and Manneville. This is done by identifying a quasipotential in low variance noise, large length limit. This pinpoints the physical effects controlling collapse and build-up trajectories and corresponding passage times with an emphasis on the saddle points between laminar and turbulent states. This analytical analysis also shows that these effects lead to the asymmetric probability density function of kinetic energy of turbulence.
Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows
NASA Astrophysics Data System (ADS)
Rolland, Joran
2018-02-01
This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted on these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and domains initially containing their steady turbulent fraction; (ii) the puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing variance noise pushes the state from the laminar base flow to the most probable germ of turbulence which in turn develops into a full blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, is determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of mean first passage time T before collapse is separated in two. One finds that ln(T ) =Apr -Bp , with Ap and Bp positive. Moreover, Ap and Bp are affine in the spatial integral of turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse is also separated. The author finds that T ≍exp[L (A r -B )] with A and B positive. The length and Reynolds number dependence of T are then discussed in view of the large deviations theoretical approaches of the study of mean first passage times and multistability, where ln(T ) in the limit of small variance noise is studied. Two points of view, local noise of small variance and large length, can be used to discuss the exponential dependence in L of T . In particular, it is shown how a T ≍exp[L (A'R -B') ] can be derived in a conceptual two degrees of freedom model of a transitional wall flow proposed by Dauchot and Manneville. This is done by identifying a quasipotential in low variance noise, large length limit. This pinpoints the physical effects controlling collapse and build-up trajectories and corresponding passage times with an emphasis on the saddle points between laminar and turbulent states. This analytical analysis also shows that these effects lead to the asymmetric probability density function of kinetic energy of turbulence.
De Jager, Nathan R.; Rohweder, Jason J.
2011-01-01
Different organisms respond to spatial structure in different terms and across different spatial scales. As a consequence, efforts to reverse habitat loss and fragmentation through strategic habitat restoration ought to account for the different habitat density and scale requirements of various taxonomic groups. Here, we estimated the local density of floodplain forest surrounding each of ~20 million 10-m forested pixels of the Upper Mississippi and Illinois River floodplains by using moving windows of multiple sizes (1–100 ha). We further identified forest pixels that met two local density thresholds: 'core' forest pixels were nested in a 100% (unfragmented) forested window and 'dominant' forest pixels were those nested in a >60% forested window. Finally, we fit two scaling functions to declines in the proportion of forest cover meeting these criteria with increasing window length for 107 management-relevant focal areas: a power function (i.e. self-similar, fractal-like scaling) and an exponential decay function (fractal dimension depends on scale). The exponential decay function consistently explained more variation in changes to the proportion of forest meeting both the 'core' and 'dominant' criteria with increasing window length than did the power function, suggesting that elevation, soil type, hydrology, and human land use constrain these forest types to a limited range of scales. To examine these scales, we transformed the decay constants to measures of the distance at which the probability of forest meeting the 'core' and 'dominant' criteria was cut in half (S 1/2, m). S 1/2 for core forest was typically between ~55 and ~95 m depending on location along the river, indicating that core forest cover is restricted to extremely fine scales. In contrast, half of all dominant forest cover was lost at scales that were typically between ~525 and 750 m, but S 1/2 was as long as 1,800 m. S 1/2 is a simple measure that (1) condenses information derived from multi-scale analyses, (2) allows for comparisons of the amount of forest habitat available to species with different habitat density and scale requirements, and (3) can be used as an index of the spatial continuity of habitat types that do not scale fractally.
High-power, kilojoule laser interactions with near-critical density plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willingale, L.; Thomas, A. G. R.; Maksimchuk, A.
Experiments were performed using the Omega EP laser, which provided pulses containing 1kJ of energy in 9ps and was used to investigate high-power, relativistic intensity laser interactions with near-critical density plasmas, created from foam targets with densities of 3-100 mg/cm{sup 3}. The effect of changing the plasma density on both the laser light transmitted through the targets and the proton beam accelerated from the interaction was investigated. Two-dimensional particle-in-cell simulations enabled the interaction dynamics and laser propagation to be studied in detail. The effect of the laser polarization and intensity in the two-dimensional simulations on the channel formation and electronmore » heating are discussed. In this regime, where the plasma density is above the critical density, but below the relativistic critical density, the channel formation speed and therefore length are inversely proportional to the plasma density, which is faster than the hole boring model prediction. A general model is developed to describe the channel length in this regime.« less
Epidemics in interconnected small-world networks.
Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong
2015-01-01
Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. ?? CSIRO 2008.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Horn, J. E.; Harter, T.
2011-06-01
Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30 % of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balances calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A psuedo noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Formulation 7 3.2.2 Methods of Accelerating Convergence 8 3.2.3 Application to Image Deblurring 8 3.2.4 Extensions 11 3.3 Convergence of Iterative Signal... noise -driven linear filters, permit development of the joint probability density function oz " kelihood function for the image. With an expression...spatial linear filter driven by white noise (see Fig. i). If the probability density function for the white noise is known, Fig. t. Model for image
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function based as the infimum of path integrals of the entropy function H( p) over all admissible paths p(t), 0 < or = t< or =1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance //b-a//(2). We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendograms for our minimal entropy metric are compared with dendograms based on BLAST and BLAST identity scores.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov model well-known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Mead, John D.; Dengerink, Harold A.
1977-01-01
The major intent of this research was to provide a further test of the relationships between physiological arousal and event probability by experimentally generating subjective expectancies for shock. The relationship of event probability to stress was discussed with respect to length of the anticipatory periods and methods used to establish…
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5× {10}51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ˜0.1 cm-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Point Count Length and Detection of Forest Neotropical Migrant Birds
Deanna K. Dawson; David R. Smith; Chandler S. Robbins
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences...
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Predictions of malaria vector distribution in Belize based on multispectral satellite data
NASA Technical Reports Server (NTRS)
Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.
1996-01-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation
Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.
1998-01-01
We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.
Scott, David; Shore-Lorenti, Catherine; McMillan, Lachlan B; Mesinovic, Jakub; Clark, Ross A; Hayes, Alan; Sanders, Kerrie M; Duque, Gustavo; Ebeling, Peter R
2018-03-01
To determine whether associations of calf muscle density with physical function are independent of other determinants of functional decline in overweight and obese older adults. This was a secondary analysis of a cross-sectional study of 85 community-dwelling overweight and obese adults (mean±SD age 62.8±7.9 years; BMI 32.3±6.1 kg/m2; 58% women). Peripheral quantitative computed tomography assessed mid-calf muscle density (66% tibial length) and dual-energy X-ray absorptiometry determined visceral fat area. Fasting glucose, Homeostatic Model Assessment of Insulin Resistance (HOMA-IR) and C-reactive protein (CRP) were analysed. Physical function assessments included hand grip and knee extension strength, balance path length (computerised posturography), stair climb test, Short Physical Performance Battery (SPPB) and self-reported falls efficacy (Modified Falls Efficacy Scale; M-FES). Visceral fat area, not muscle density, was independently associated with CRP and fasting glucose (B=0.025; 95% CI 0.009-0.042 and B=0.009; 0.001-0.017, respectively). Nevertheless, higher muscle density was independently associated with lower path length and stair climb time, and higher SPPB and M-FES scores (all P⟨0.05). Visceral fat area, fasting glucose and CRP did not mediate these associations. Higher calf muscle density predicts better physical function in overweight and obese older adults independent of insulin resistance, visceral adiposity or inflammation.
Quantification of surface charge density and its effect on boundary slip.
Jing, Dalei; Bhushan, Bharat
2013-06-11
Reduction of fluid drag is important in the micro-/nanofluidic systems. Surface charge and boundary slip can affect the fluid drag, and surface charge is also believed to affect boundary slip. The quantification of surface charge and boundary slip at a solid-liquid interface has been widely studied, but there is a lack of understanding of the effect of surface charge on boundary slip. In this paper, the surface charge density of borosilicate glass and octadecyltrichlorosilane (OTS) surfaces immersed in saline solutions with two ionic concentrations and deionized (DI) water with different pH values and electric field values is quantified by fitting experimental atomic force microscopy (AFM) electrostatic force data using a theoretical model relating the surface charge density and electrostatic force. Results show that pH and electric field can affect the surface charge density of glass and OTS surfaces immersed in saline solutions and DI water. The mechanisms of the effect of pH and electric field on the surface charge density are discussed. The slip length of the OTS surface immersed in saline solutions with two ionic concentrations and DI water with different pH values and electric field values is measured, and their effects on the slip length are analyzed from the point of surface charge. Results show that a larger absolute value of surface charge density leads to a smaller slip length for the OTS surface.
Structure and yarn sensor for fabric
Mee, David K.; Allgood, Glenn O.; Mooney, Larry R.; Duncan, Michael G.; Turner, John C.; Treece, Dale A.
1998-01-01
A structure and yarn sensor for fabric directly determines pick density in a fabric thereby allowing fabric length and velocity to be calculated from a count of the picks made by the sensor over known time intervals. The structure and yarn sensor is also capable of detecting full length woven defects and fabric. As a result, an inexpensive on-line pick (or course) density measurement can be performed which allows a loom or knitting machine to be adjusted by either manual or automatic means to maintain closer fiber density tolerances. Such a sensor apparatus dramatically reduces fabric production costs and significantly improves fabric consistency and quality for woven or knitted fabric.
Structure and yarn sensor for fabric
Mee, D.K.; Allgood, G.O.; Mooney, L.R.; Duncan, M.G.; Turner, J.C.; Treece, D.A.
1998-10-20
A structure and yarn sensor for fabric directly determines pick density in a fabric thereby allowing fabric length and velocity to be calculated from a count of the picks made by the sensor over known time intervals. The structure and yarn sensor is also capable of detecting full length woven defects and fabric. As a result, an inexpensive on-line pick (or course) density measurement can be performed which allows a loom or knitting machine to be adjusted by either manual or automatic means to maintain closer fiber density tolerances. Such a sensor apparatus dramatically reduces fabric production costs and significantly improves fabric consistency and quality for woven or knitted fabric. 13 figs.
Electromagnetic Compatibility Testing Studies
NASA Technical Reports Server (NTRS)
Trost, Thomas F.; Mitra, Atindra K.
1996-01-01
This report discusses the results on analytical models and measurement and simulation of statistical properties from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna to D-dot sensor, were experimentally validated in our chamber. Two examples are presented of the measurement and calculation of chamber Q, one for each of the models. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points which validate a theoretical spatial correlation function on the chamber walls and also yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation, employing a rectangular cavity with a moving wall shows agreement with the measurements. The determination that the lowest frequency at which the theoretical spatial correlation function is valid in our chamber is considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions have been made for future studies related to EMC testing.
Convection due to an unstable density difference across a permeable membrane
NASA Astrophysics Data System (ADS)
Puthenveettil, Baburaj A.; Arakeri, Jaywant H.
We study natural convection driven by unstable concentration differences of sodium chloride (NaCl) across a horizontal permeable membrane at Rayleigh numbers (Ra) of 1010 to 1011 and Schmidt number (Sc)=600. A layer of brine lies over a layer of distilled water, separated by the membrane, in square-cross-section tanks. The membrane is permeable enough to allow a small flow across it at higher driving potentials. Based on the predominant mode of transport across the membrane, three regimes of convection, namely an advection regime, a diffusion regime and a combined regime, are identified. The near-membrane flow in all the regimes consists of sheet plumes formed from the unstable layers of fluid near the membrane. In the advection regime observed at higher concentration differences (Bb) show a common log-normal probability density function at all Ra. We propose a phenomenology which predicts /line{lambda}_b sqrt{Z_w Z_{V_i}}, where Zw and Z_{V_i} are, respectively, the near-wall length scales in Rayleighnard convection (RBC) and due to the advection velocity. In the combined regime, which occurs at intermediate values of C/2)4/3. At lower driving potentials, in the diffusion regime, the flux scaling is similar to that in turbulent RBC.
Microstructure characterization via stereological relations — A shortcut for beginners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pabst, Willi, E-mail: pabstw@vscht.cz; Gregorová, Eva; Uhlířová, Tereza
Stereological relations that can be routinely applied for the quantitative characterization of microstructures of heterogeneous single- and two-phase materials via global microstructural descriptors are reviewed. It is shown that in the case of dense, single-phase polycrystalline materials (e.g., transparent yttrium aluminum garnet ceramics) two quantities have to be determined, the interface density (or, equivalently, the mean chord length of the grains) and the mean curvature integral density (or, equivalently, the Jeffries grain size), while for two-phase materials (e.g., highly porous, cellular alumina ceramics), one additional quantity, the volume fraction (porosity), is required. The Delesse–Rosiwal law is recalled and size measuresmore » are discussed. It is shown that the Jeffries grain size is based on the triple junction line length density, while the mean chord length of grains is based on the interface density (grain boundary area density). In contrast to widespread belief, however, these two size measures are not alternative, but independent (and thus complementary), measures of grain size. Concomitant with this fact, a clear distinction between linear and planar grain size numbers is proposed. Finally, based on our concept of phase-specific quantities, it is shown that under certain conditions it is possible to define a Jeffries size also for two-phase materials and that the ratio of the mean chord length and the Jeffries size has to be considered as an invariant number for a certain type of microstructure, i.e., a characteristic value that is independent of the absolute size of the microstructural features (e.g., grains, inclusions or pores). - Highlights: • Stereology-based image analysis is reviewed, including error considerations. • Recipes are provided for measuring global metric microstructural descriptors. • Size measures are based on interface density and mean curvature integral density. • Phase-specific quantities and a generalized Jeffries size are introduced. • Linear and planar grain size numbers are clearly distinguished and explained.« less
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great meaning for prognostics and health management (PHM) technologies to ensure structural integrity, which is a challenging task because of uncertainties which arise from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool to deal with prognostic problems those are affected by uncertainties. However, most studies adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and the active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed to be the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on the fatigue test of attachment lugs which are a kind of important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
Yang, Qingling; Zhang, Nan; Zhao, Feifei; Zhao, Wanli; Dai, Shanjun; Liu, Jinhao; Bukhari, Ihtisham; Xin, Hang; Niu, Wenbing; Sun, Yingpu
2015-07-01
The ends of eukaryotic chromosomes contain specialized chromatin structures called telomeres, the length of which plays a key role in early human embryonic development. Although the effect of sperm preparation techniques on major sperm characteristics, such as concentration, motility and morphology have been previously documented, the possible status of telomere length and its relation with sperm preparation techniques is not well-known for humans. The aim of this study was to investigate the role of density gradient centrifugation in the selection of spermatozoa with longer telomeres for use in assisted reproduction techniques in 105 samples before and after sperm processing. After density gradient centrifugation, the average telomere length of the sperm was significantly longer (6.51 ± 2.54 versus 5.16 ± 2.29, P < 0.01), the average motile sperm rate was significantly higher (77.9 ± 11.8 versus 44.6 ± 11.2, P < 0.01), but average DNA fragmentation rate was significantly lower (11.1 ± 5.9 versus 25.9 ± 12.9, P < 0.01) compared with raw semen. Additionally, telomere length was positively correlated with semen sperm count (rs = 0.58; P < 0.01). In conclusion, density gradient centrifugation is a useful technique for selection of sperm with longer telomeres. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Laso, Manuel; Karayiannis, Nikos Ch
2008-05-07
We present predictions for the static scaling exponents and for the cross-over polymer volumetric fractions in the marginal and concentrated solution regimes. Corrections for finite chain length are made. Predictions are based on an analysis of correlated fluctuations in density and chain length, in a semigrand ensemble in which mers and solvent sites exchange identities. Cross-over volumetric fractions are found to be chain length independent to first order, although reciprocal-N corrections are also estimated. Predicted scaling exponents and cross-over regimes are compared with available data from extensive off-lattice Monte Carlo simulations [Karayiannis and Laso, Phys. Rev. Lett. 100, 050602 (2008)] on freely jointed, hard-sphere chains of average lengths from N=12-500 and at packing densities from dilute ones up to the maximally random jammed state.
On the normalization of the minimum free energy of RNAs by sequence length.
Trotta, Edoardo
2014-01-01
The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
On the Normalization of the Minimum Free Energy of RNAs by Sequence Length
Trotta, Edoardo
2014-01-01
The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size. PMID:25405875
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
Underscreening in ionic liquids: a first principles analysis.
Rotenberg, Benjamin; Bernard, Olivier; Hansen, Jean-Pierre
2018-02-07
An attempt is made to understand the underscreening effect, observed in concentrated electrolyte solutions or melts, on the basis of simple, admittedly crude models involving charged (for the ions) and neutral (for the solvent molecules) hard spheres. The thermodynamic and structural properties of these 'primitive' and 'semi-primitive' models are calculated within mean spherical approximation, which provides the basic input required to determine the partial density response functions. The screening length [Formula: see text], which is unambiguously defined in terms of the wave-number-dependent response functions, exhibits a cross-over from a low density, Debye-like regime, to a regime where [Formula: see text] increases with density beyond a critical density at which the Debye length [Formula: see text] becomes comparable to the ion diameter. In this high density regime the ratio [Formula: see text] increases according to a power law, in qualitative agreement with experimental measurements, albeit at a much slower rate.
Underscreening in ionic liquids: a first principles analysis
NASA Astrophysics Data System (ADS)
Rotenberg, Benjamin; Bernard, Olivier; Hansen, Jean-Pierre
2018-02-01
An attempt is made to understand the underscreening effect, observed in concentrated electrolyte solutions or melts, on the basis of simple, admittedly crude models involving charged (for the ions) and neutral (for the solvent molecules) hard spheres. The thermodynamic and structural properties of these ‘primitive’ and ‘semi-primitive’ models are calculated within mean spherical approximation, which provides the basic input required to determine the partial density response functions. The screening length λS , which is unambiguously defined in terms of the wave-number-dependent response functions, exhibits a cross-over from a low density, Debye-like regime, to a regime where λS increases with density beyond a critical density at which the Debye length λD becomes comparable to the ion diameter. In this high density regime the ratio λ_S/λD increases according to a power law, in qualitative agreement with experimental measurements, albeit at a much slower rate.
A Theoretical Study of Flow Structure and Radiation for Multiphase Turbulent Diffusion Flames
1990-03-01
density function. According to the axial void fraction profile in Fig. 24, the flame length (the total penetration length) extends to x/d=150. By referring...temperature because of subcooling effect. Decreasing liquid temperature will increase condensation which in turn reduces the flame length as defined by
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different- density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Drying and wetting transitions of a Lennard-Jones fluid: Simulations and density functional theory
NASA Astrophysics Data System (ADS)
Evans, Robert; Stewart, Maria C.; Wilding, Nigel B.
2017-07-01
We report a theoretical and simulation study of the drying and wetting phase transitions of a truncated Lennard-Jones fluid at a flat structureless wall. Binding potential calculations predict that the nature of these transitions depends on whether the wall-fluid attraction has a long ranged (LR) power law decay or is instead truncated, rendering it short ranged (SR). Using grand canonical Monte Carlo simulation and classical density functional theory, we examine both cases in detail. We find that for the LR case wetting is first order, while drying is continuous (critical) and occurs exactly at zero attractive wall strength, i.e., in the limit of a hard wall. In the SR case, drying is also critical but the order of the wetting transition depends on the truncation range of the wall-fluid potential. We characterize the approach to critical drying and wetting in terms of the density and local compressibility profiles and via the finite-size scaling properties of the probability distribution of the overall density. For the LR case, where the drying point is known exactly, this analysis allows us to estimate the exponent ν∥, which controls the parallel correlation length, i.e., the extent of vapor bubbles at the wall. Surprisingly, the value we obtain is over twice that predicted by mean field and renormalization group calculations, despite the fact that our three dimensional system is at the upper critical dimension where mean field theory for critical exponents is expected to hold. Possible reasons for this discrepancy are discussed in the light of fresh insights into the nature of near critical finite-size effects.
Drying and wetting transitions of a Lennard-Jones fluid: Simulations and density functional theory.
Evans, Robert; Stewart, Maria C; Wilding, Nigel B
2017-07-28
We report a theoretical and simulation study of the drying and wetting phase transitions of a truncated Lennard-Jones fluid at a flat structureless wall. Binding potential calculations predict that the nature of these transitions depends on whether the wall-fluid attraction has a long ranged (LR) power law decay or is instead truncated, rendering it short ranged (SR). Using grand canonical Monte Carlo simulation and classical density functional theory, we examine both cases in detail. We find that for the LR case wetting is first order, while drying is continuous (critical) and occurs exactly at zero attractive wall strength, i.e., in the limit of a hard wall. In the SR case, drying is also critical but the order of the wetting transition depends on the truncation range of the wall-fluid potential. We characterize the approach to critical drying and wetting in terms of the density and local compressibility profiles and via the finite-size scaling properties of the probability distribution of the overall density. For the LR case, where the drying point is known exactly, this analysis allows us to estimate the exponent ν ∥ , which controls the parallel correlation length, i.e., the extent of vapor bubbles at the wall. Surprisingly, the value we obtain is over twice that predicted by mean field and renormalization group calculations, despite the fact that our three dimensional system is at the upper critical dimension where mean field theory for critical exponents is expected to hold. Possible reasons for this discrepancy are discussed in the light of fresh insights into the nature of near critical finite-size effects.
Asymmetric simple exclusion process on chains with a shortcut
NASA Astrophysics Data System (ADS)
Bunzarova, Nadezhda; Pesheva, Nina; Brankov, Jordan
2014-03-01
We consider the asymmetric simple exclusion process (TASEP) on an open network consisting of three consecutively coupled macroscopic chain segments with a shortcut between the tail of the first segment and the head of the third one. The model was introduced by Y.-M. Yuan et al. [J. Phys. A 40, 12351 (2007), 10.1088/1751-8113/40/41/006] to describe directed motion of molecular motors along twisted filaments. We report here unexpected results which revise the previous findings in the case of maximum current through the network. Our theoretical analysis, based on the effective rates' approximation, shows that the second (shunted) segment can exist in both low- and high-density phases, as well as in the coexistence (shock) phase. Numerical simulations demonstrate that the last option takes place in finite-size networks with head and tail chains of equal length, provided the injection and ejection rates at their external ends are equal and greater than one-half. Then the local density distribution and the nearest-neighbor correlations in the middle chain correspond to a shock phase with completely delocalized domain wall. Upon moving the shortcut to the head or tail of the network, the density profile takes a shape typical of a high- or low-density phase, respectively. Surprisingly, the main quantitative parameters of that shock phase are governed by a positive root of a cubic equation, the coefficients of which linearly depend on the probability of choosing the shortcut. Alternatively, they can be expressed in a universal way through the shortcut current. The unexpected conclusion is that a shortcut in the bulk of a single lane may create traffic jams.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal and the Monte Carlo method is used to solve the stochastic equations allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly due to two things. The first one is related to the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured direct and the second one is related to the construction of the likelihood function. In general, it is predefined using the known pdf. Here, it is constructed in a new and different manner, using the own system considered.
Spatial distribution of galls caused by Aculus tetanothrix (Acari: Eriophyoidea) on arctic willows.
Kuczyński, Lechosław; Skoracka, Anna
2005-01-01
The distribution of galls caused by Aculus tetanothrix (Acari: Eriophyoidea) on three Salix species was studied. The factors influencing this distribution were analysed, i.e. willow species, study area and shoot length. Spatial pattern of gall distribution within the shoot was also examined. The study was conducted in Russia, Kola Peninsula. Densities of galls caused by A. tetanothrix differed significantly among willow species. Considerably higher gall density was recorded in the White Sea coast than in the Khibiny Mountains. This may be explained by the influence of a milder maritime climate that favors mite occurrence compared to a harsh and variable mountain climate that limits mite abundance. There was no relationship between the gall density and the shoot length. The highest density of galls was recorded on the inner offshoots; within the offshoot, there was a maximum density on the fifth leaf. This pattern was repeatable for all shoots studied, independent of the study area, willow species and length of shoots, suggesting the optimal conditions for A. tetanothrix exist on leaves in the middle part of a shoot. This distribution pattern may be an effect of the trade-off between the costs and benefits resulting from leaf quality and mite movement along the shoot. This hypothesis, however, needs to be tested experimentally.
Mode-coupling theory for active Brownian particles
NASA Astrophysics Data System (ADS)
Liluashvili, Alexander; Ónody, Jonathan; Voigtmann, Thomas
2017-12-01
We present a mode-coupling theory (MCT) for the slow dynamics of two-dimensional spherical active Brownian particles (ABPs). The ABPs are characterized by a self-propulsion velocity v0 and by their translational and rotational diffusion coefficients Dt and Dr, respectively. Based on the integration-through-transients formalism, the theory requires as input only the equilibrium static structure factors of the passive system (where v0=0 ). It predicts a nontrivial idealized-glass-transition diagram in the three-dimensional parameter space of density, self-propulsion velocity, and rotational diffusivity that arise because at high densities, the persistence length of active swimming ℓp=v0/Dr interferes with the interaction length ℓc set by the caging of particles. While the low-density dynamics of ABPs is characterized by a single Péclet number Pe=v02/DrDt , close to the glass transition the dynamics is found to depend on Pe and ℓp separately. At fixed density, increasing the self-propulsion velocity causes structural relaxation to speed up, while decreasing the persistence length slows down the relaxation. The active-MCT glass is a nonergodic state that is qualitatively different from the passive glass. In it, correlations of initial density fluctuations never fully decay, but also an infinite memory of initial orientational fluctuations is retained in the positions.
Wu, Dan; Katsumi, Hidemasa; Quan, Ying-Shu; Kamiyama, Fumio; Kusamori, Kosuke; Sakane, Toshiyasu; Yamamoto, Akira
2016-09-01
Available formulations of sumatriptan succinate (SS) have low bioavailability or are associated with site reactions. We developed various types of self-dissolving microneedle arrays (MNs) fabricated from sodium hyaluronate as a new delivery system for SS and evaluated their skin permeation and irritation in terms of clinical application. In vitro permeation studies with human skin, physicochemical properties (needle length, thickness and density), and penetration enhancers (glycerin, sodium dodecyl sulfate and lauric acid diethanolamide) were investigated. SS-loaded high-density MNs of 800 µm in length were the optimal formulation and met clinical therapeutic requirements. Penetration enhancers did not significantly affect permeation of SS from MNs. Optical coherence tomography images demonstrated that SS-loaded high-density MNs (800 µm) uniformly created drug permeation pathways for the delivery of SS into the skin. SS-loaded high-density MNs induced moderate primary skin irritations in rats, but the skin recovered within 72 h of removal of the MNs. These findings suggest that high-density MNs of 800 µm in length are an effective and promising formulation for transdermal delivery of SS. To our knowledge, this is the first report of SS permeation across human skin using self-dissolving MNs.
Bunch Length Measurements Using CTR at the AWA with Comparison to Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neveu, N.; Spentzouris, L.; Halavanau, A.
In this paper we present electron bunch length measurements at the Argonne Wakefield Accelerator (AWA) photoinjector facility. The AWA accelerator has a large dynamic charge density range, with electron beam charge varying between 0.1 nC - 100 nC, and laser spot size diameter at the cathode between 0.1 mm - 18 mm. The bunch length measurements were taken at different charge densities using a metallic screen and a Michelson interferometer to perform autocorrelation scans of the corresponding coherent transition radiation (CTR). A liquid helium-cooled 4K bolometer was used to register the interferometer signal. The experimental results are compared with OPAL-Tmore » numerical simulations.« less
Brillouin Scattering of Picosecond Laser Pulses in Preformed, Short-Scale-Length Plasmas
NASA Astrophysics Data System (ADS)
Gaeris, A. C.; Fisher, Y.; Delettrez, J. A.; Meyerhofer, D. D.
1996-11-01
Brillouin scattering (BS) has been studied in short-scale-length, preformed plasmas. The backscattered and specularly reflected light resulting from the interaction of high-power picosecond pulses with preformed silicon plasmas has been measured. A first laser pulse forms a short-scale-length plasma -- without significant BS -- while a second delayed pulse interacts with an expanded, drifting underdense region of the plasma with density scale length (0 <= Ln <= 600 λ _L). The pulses are generated at λ L = 1054 nm, with intensities up to 10^16 W/cm^2. The backscattered light spectra, threshold intensities, and enhanced reflectivities have been determined for different plasma-density scale lengths and are compared to Liu, Rosenbluth, and White's(C. S. Liu, M. N. Rosenbluth, and R. B. White, Phys. Fluids 17, 1211 (1974).) WKB treatment of stimulated Brillouin scattering in inhomogeneous drifting plasmas. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC03-92SF19460.
Turbulence Generation in Combustion.
1987-07-22
flame length . This work is summarized in this section. I1.1 Model for Turbulent Burning Velocity For a range of turbulence conditions including...Variable density effects have been added in an approximation, and an expression for the length of jet flames has been developed. The flame length expression...of jet mixing and jet flame length data using fractals, College of Engineering, Energy Report E-86-02, Comell University, Ithaca, NY, 1986. Results
Inversion and Application of Muon Tomography Data for Cave Exploration in Budapest, Hungary
NASA Astrophysics Data System (ADS)
Molnár, Gábor; Surányi, Gergely; Gábor Barnaföldi, Gergely; Oláh, László; Hamar, Gergö; Varga, Dezsö
2016-04-01
In this contribution we present a prospecting muon-tomograph and its application for cave exploration in Budapest, Hungary. The more than 50 years old basic idea behind muon tomography is the ability of muon particles, generated in the upper atmosphere to penetrate tens of meters into rocks with continuous attenuation before decay. This enables us placing a detector in a tunnel and measure muon fluxes from different directions and convert these fluxes to rock density data. The lightweight, 51x46x32 cm3 size, muon tomograph containing 5 detector layers was developed by Wigner Research Centre for Physics, Budapest, Hungary. A muon passing at least 4 of the 5 detector layers along one line are classified as unique muon detection. Its angular resolution is approximately 1 degree and it is effective up to 50 degrees off zenith. During the measurement campaign we installed the muon detector at seventeen locations along an abandoned, likely Cold War air raid shelter tunnel for 10-15 days at each location, collecting large set of events. The measured fluxes are converted to apparent density lengths (multiplication of rock densities by along path lengths) using an empirically tested relationship. For inverting measurements, a 3D block model of the subsurface was developed. It consisted of cuboids, with equal horizontal size, equal number in every line and in every row of the model. Additionally it consisted of blocks with different heights, equal number of blocks in every column. (Block height was constant in a column, but varied from column to column.) The heights of the blocks in a column were chosen, that top face of the uppermost blocks has an elevation defined by a Digital Elevation Model. Initially the density of every model blocks was set to a realistic value. We calculated the theoretical density length for every detector location and for a subset of flux measurement directions. We also calculated the partial derivatives of these theoretical density length values with respect to the densities of every model block. This is the Jacobian of the problem and these values were proportional to the path length in the respective block. A regularized least squares solution returns the corrections of the densities of the blocks. If the corrected density of a block is significantly smaller than the typical rock density of the subsurface, the block is dedicated as a cave. According to our results a supposed cave exists some 7 meters above the tunnel. This work has been supported by the Lendület Program of the Hungarian Academy of Sciences (LP2013-60) and the OTKA NK-106119 grant. Gergely Gábor Barnaföld and Dezsö Varga thank for the support of the Bolyai Fellowship of the Hungarian Academy of Sciences.
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. Health benefits of vegetarian diets have well-documented in the Western countries where both obesity and hyperlipidemia were prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability in lean, low risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivore men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipid/ lipoprotein and the probability of coronary heart disease.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
Coupling of laser energy into plasma channels
NASA Astrophysics Data System (ADS)
Dimitrov, D. A.; Giacone, R. E.; Bruhwiler, D. L.; Busby, R.; Cary, J. R.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.
2007-04-01
Diffractive spreading of a laser pulse imposes severe limitations on the acceleration length and maximum electron energy in the laser wake field accelerator (LWFA). Optical guiding of a laser pulse via plasma channels can extend the laser-plasma interaction distance over many Rayleigh lengths. Energy efficient coupling of laser pulses into and through plasma channels is very important for optimal LWFA performance. Results from simulation parameter studies on channel guiding using the particle-in-cell (PIC) code VORPAL [C. Nieter and J. R. Cary, J. Comput. Phys. 196, 448 (2004)] are presented and discussed. The effects that density ramp length and the position of the laser pulse focus have on coupling into channels are considered. Moreover, the effect of laser energy leakage out of the channel domain and the effects of tunneling ionization of a neutral gas on the guided laser pulse are also investigated. Power spectral diagnostics were developed and used to separate pump depletion from energy leakage. The results of these simulations show that increasing the density ramp length decreases the efficiency of coupling a laser pulse to a channel and increases the energy loss when the pulse is vacuum focused at the channel entrance. Then, large spot size oscillations result in increased energy leakage. To further analyze the coupling, a differential equation is derived for the laser spot size evolution in the plasma density ramp and channel profiles are simulated. From the numerical solution of this equation, the optimal spot size and location for coupling into a plasma channel with a density ramp are determined. This result is confirmed by the PIC simulations. They show that specifying a vacuum focus location of the pulse in front of the top of the density ramp leads to an actual focus at the top of the ramp due to plasma focusing, resulting in reduced spot size oscillations. In this case, the leakage is significantly reduced and is negligibly affected by ramp length, allowing for efficient use of channels with long ramps.
Seker, Fatih; Pfaff, Johannes; Wolf, Marcel; Schönenberger, Silvia; Nagel, Simon; Herweh, Christian; Pham, Mirko; Bendszus, Martin; Möhlenbruch, Markus A
2017-10-01
The impact of thrombus length on recanalization in IV thrombolysis for acute intracranial artery occlusion has been well studied. Here we analyzed the influence of thrombus length on the number of thrombectomy maneuvers needed for recanalization, intraprocedural complications, recanalization success, and clinical outcome after mechanical thrombectomy. We retrospectively analyzed angiographic and clinical data from 72 consecutive patients with acute occlusion of the M1 segment of the middle cerebral artery who were treated with mechanical thrombectomy using stent retrievers. Successful recanalization was defined as a Thrombolysis in Cerebral Infarction score of 2b or 3. Good neurological outcome was defined as a modified Rankin Scale score of ≤2 at 90 days after stroke onset. Mean thrombus length was 13.4±5.2 mm. Univariate binary logistic regression did not show an association of thrombus length with the probability of a good clinical outcome (OR 0.95, 95% CI 0.84 to 1.03, p=0.176) or successful recanalization (OR 0.92, 95% CI 0.81 to 1.05, p=0.225). There was no significant correlation between thrombus length and the number of thrombectomy maneuvers needed for recanalization (p=0.112). Furthermore, thrombus length was not correlated with the probability of intraprocedural complications (p=0.813), including embolization in a new territory (n=3). In this study, thrombus length had no relevant impact on recanalization, neurological outcome, or intraprocedural complications following mechanical thrombectomy of middle cerebral artery occlusions. Therefore, mechanical thrombectomy with stent retrievers can be attempted with large clots. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Whisman, Mark A.; Richardson, Emily D.
2016-01-01
Objective To examine the association between depressive symptoms and salivary telomere length in a probability sample of middle-aged and older adults, evaluate age and sex as potential moderators of this association, and test whether this association was incremental to potential confounds. Methods Participants were 3,609 individuals from the 2008 wave of the Health and Retirement Study. Telomere length assays were performed using quantitative real-time polymerase chain reaction (qPCR) on DNA extracted from saliva samples. Depressive symptoms were assessed via interview, and health and lifestyle factors, traumatic life events, and neuroticism were assessed via self-report. Regression analyses were conducted to examine the associations between predictor variables and salivary telomere length. Results After adjusting for demographics, depressive symptoms were negatively associated with salivary telomere length (b = −.003, p = .014). Furthermore, this association was moderated by sex (b = .005, p = .011), such that depressive symptoms were significantly and negatively associated with salivary telomere length for men (b = −.006, p < .001) but not for women (b = −.001, p = .644). The negative association between depressive symptoms and salivary telomere length in men remained statistically significant after additionally adjusting for cigarette smoking, body mass index, chronic health conditions, childhood and lifetime exposure to traumatic life events, and neuroticism. Conclusions Higher levels of depressive symptoms were associated with shorter salivary telomeres in men and this association was incremental to several potential confounds. Shortened telomeres may help account for the association between depression and poor physical health and mortality. PMID:28029664
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.
NASA Astrophysics Data System (ADS)
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3 and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatters obtained with standard echosounding techniques, density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
The role of demographic compensation theory in incidental take assessments for endangered species
McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts
2011-01-01
Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering is presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring in extra constraints in the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are alolwed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables, but put into a delta distribution at the truncation point. This delta distribution can easily be handled with in Bayes theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
Factors affecting detectability of river otters during sign surveys
Jeffress, Mackenzie R.; Paukert, Craig P.; Sandercock, Brett K.; Gipson, Philip S.
2011-01-01
Sign surveys are commonly used to study and monitor wildlife species but may be flawed when surveys are conducted only once and cover short distances, which can lead to a lack of accountability for false absences. Multiple observers surveyed for river otter (Lontra canadensis) scat and tracks along stream and reservoir shorelines at 110 randomly selected sites in eastern Kansas from January to April 2008 and 2009 to determine if detection probability differed among substrates, sign types, observers, survey lengths, and near access points. We estimated detection probabilities (p) of river otters using occupancy models in Program PRESENCE. Mean detection probability for a 400-m survey was highest in mud substrates (p = 0.60) and lowest in snow (p = 0.18) and leaf litter substrates (p = 0.27). Scat had a higher detection probability (p = 0.53) than tracks (p = 0.18), and experienced observers had higher detection probabilities (p < 0.71) than novice observers (p < 0.55). Detection probabilities increased almost 3-fold as survey length increased from 200 m to 1,000 m, and otter sign was not concentrated near access points. After accounting for imperfect detection, our estimates of otter site occupancy based on a 400-m survey increased >3-fold, providing further evidence of the potential negative bias that can occur in estimates from sign surveys when imperfect detection is not addressed. Our study identifies areas for improvement in sign survey methodologies and results are applicable for sign surveys commonly used for many species across a range of habitats.
Zheng, Zhiqiang; Xu, Qiming; Guo, Jiangna; Qin, Jing; Mao, Hailei; Wang, Bin; Yan, Feng
2016-05-25
The structure-antibacterial activity relationship between the small molecular compounds and polymers are still elusive. Here, imidazolium-type ionic liquid (IL) monomers and their corresponding poly(ionic liquids) (PILs) and poly(ionic liquid) membranes were synthesized. The effect of chemical structure, including carbon chain length of substitution at the N3 position and charge density of cations (mono- or bis-imidazolium) on the antimicrobial activities against both Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus) was investigated by determination of minimum inhibitory concentration (MIC). The antibacterial activities of both ILs and PILs were improved with the increase of the alkyl chain length and higher charge density (bis-cations) of imidazolium cations. Moreover, PILs exhibited lower MIC values relative to the IL monomers. However, the antibacterial activities of PIL membranes showed no correlation to those of their analogous small molecule IL monomers and PILs, which increased with the charge density (bis-cations) while decreasing with the increase of alkyl chain length. The results indicated that antibacterial property studies on small molecules and homopolymers may not provide a solid basis for evaluating that in corresponding polymer membranes.
LPI Thresholds in Longer Scale Length Plasmas Driven by the Nike Laser*
NASA Astrophysics Data System (ADS)
Weaver, J.; Oh, J.; Phillips, L.; Afeyan, B.; Seely, J.; Kehne, D.; Brown, C.; Obenschain, S.; Serlin, V.; Schmitt, A. J.; Feldman, U.; Holland, G.; Lehmberg, R. H.; McLean, E.; Manka, C.
2010-11-01
The Krypton-Fluoride (KrF) laser is an attractive driver for inertial confinement fusion due to its short wavelength (248nm), large bandwidth (1-3 THz), and beam smoothing by induced spatial incoherence. Experiments with the Nike KrF laser have demonstrated intensity thresholds for laser plasma instabilities (LPI) higher than reported for other high power lasers operating at longer wavelengths (>=351 nm). The previous Nike experiments used short pulses (350 ps FWHM) and small spots (<260 μm FWHM) that created short density scale length plasmas (Ln˜50-70 μm) from planar CH targets and demonstrated the onset of two-plasmon decay (2φp) at laser intensities ˜2x10^15 W/cm^2. This talk will present an overview of the current campaign that uses longer pulses (0.5-4.0 ns) to achieve greater density scale lengths (Ln˜100-200 μm). X-rays, emission near ^1/2φo and ^3/2φo harmonics, and reflected laser light have been monitored for onset of 2φp. The longer density scale lengths will allow better comparison to results from other laser facilities. *Work supported by DoE/NNSA and ONR.
Mark A. Finney; Charles W. McHugh; Isaac Grenfell; Karin L. Riley
2010-01-01
Components of a quantitative risk assessment were produced by simulation of burn probabilities and fire behavior variation for 134 fire planning units (FPUs) across the continental U.S. The system uses fire growth simulation of ignitions modeled from relationships between large fire occurrence and the fire danger index Energy Release Component (ERC). Simulations of 10,...
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Frame synchronization methods based on channel symbol measurements
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.
1989-01-01
The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx super K (l-x) super m and show that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Information Density and Syntactic Repetition.
Temperley, David; Gildea, Daniel
2015-11-01
In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Ghigo, G.; Chiodoni, A.; Gerbaldo, R.; Gozzelino, L.; Laviano, F.; Mezzetti, E.; Minetti, B.; Camerlingo, C.
This paper deals with the mechanisms controlling the critical current density vs. field behavior in YBCO films. We base our analysis on a suitable model concerning the existence of a network of intergrain Josephson junctions whose length is modulated by defects. Irradiation with 0.25 GeV Au ions provide a useful tool to check the texture of the sample, in particular to give a gauge length reference to separate “weak” links and high- J c links.
Quantitative analysis of drainage obtained from aerial photographs and RBV/LANDSAT images
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Formaggio, A. R.; Epiphanio, J. C. N.; Filho, M. V.
1981-01-01
Data obtained from aerial photographs (1:60,000) and LANDSAT return beam vidicon imagery (1:100,000) concerning drainage density, drainage texture, hydrography density, and the average length of channels were compared. Statistical analysis shows that significant differences exist in data from the two sources. The highly drained area lost more information than the less drained area. In addition, it was observed that the loss of information about the number of rivers was higher than that about the length of the channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284 --500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t/sub i/j, related to ''taboo'' probabilities of Markov chain theory. The t/sub i/j are here evaluated for a real atomic system, being therefore of potentialmore » interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.« less
Bodoriková, Silvia; Tibenská, Kristína Domonkosová; Katina, Stanislav; Uhrová, Petra; Dörnhöferová, Michaela; Takács, Michal; Urminský, Jozef
2013-01-01
The aim of the study was to determine the diet of an historical human population using the trace elements in dental tissues and dental buccal microwear. Although 38 individuals had been buried in the cemetery, preservation of the remains did not allow analysis of all of them. A total of 13 individuals were analysed, of which the samples for trace-element analysis consisted of 12 permanent premolars from 12 individuals. Buccal microwear was studied in a sample of nine teeth from nine individuals. Both trace-element and microwear analyses were performed on eight individuals. All analyzed teeth were intact, with fully developed roots, without dental calculus and macro-abrasion. Concentrations of Sr, Zn, and Ca, and their ratios, were used to determine the relative proportions of plant and animal protein in the diet. Samples were analyzed using optical emission spectrometry with inductively coupled plasma. The values of the Sr and Zn concentrations indicate that a diet of the investigated population was of a mixed character with approximately the same proportion of plants and meat in their food. Buccal microwear was studied in molds ofbuccal surfaces and observed at 100x magnification with a scanning electron microscope (SEM). Length and orientation of striations were determined with the SigmaScan Pro 5.0 image analysis program. The results obtained from microwear analysis correspond with those from trace-element analysis and showed that the population consumed a mixed diet. The density of the scratches indicates that the diet contained a considerable vegetable component. The high number of vertical scratches and their high average length suggest that individuals also consumed a large portion of meat. The results of both analyses showed that there were also individuals whose diet had probably been poor, i.e. richer in animal protein, which probably could be related to their health or social status in the population.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
The Pearson walk with shrinking steps in two dimensions
NASA Astrophysics Data System (ADS)
Serino, C. A.; Redner, S.
2010-01-01
We study the shrinking Pearson random walk in two dimensions and greater, in which the direction of the Nth step is random and its length equals λN-1, with λ<1. As λ increases past a critical value λc, the endpoint distribution in two dimensions, P(r), changes from having a global maximum away from the origin to being peaked at the origin. The probability distribution for a single coordinate, P(x), undergoes a similar transition, but exhibits multiple maxima on a fine length scale for λ close to λc. We numerically determine P(r) and P(x) by applying a known algorithm that accurately inverts the exact Bessel function product form of the Fourier transform for the probability distributions.
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.; Spellman, Regina L.
2004-01-01
A study was performed to examine the influence of varying mesh density on an LS-DYNA simulation of a rectangular-shaped foam projectile impacting the space shuttle leading edge Panel 6. The shuttle leading-edge panels are fabricated of reinforced carbon-carbon (RCC) material. During the study, nine cases were executed with all possible combinations of coarse, baseline, and fine meshes of the foam and panel. For each simulation, the same material properties and impact conditions were specified and only the mesh density was varied. In the baseline model, the shell elements representing the RCC panel are approximately 0.2-in. on edge, whereas the foam elements are about 0.5-in. on edge. The element nominal edge-length for the baseline panel was halved to create a fine panel (0.1-in. edge length) mesh and doubled to create a coarse panel (0.4-in. edge length) mesh. In addition, the element nominal edge-length of the baseline foam projectile was halved (0.25-in. edge length) to create a fine foam mesh and doubled (1.0- in. edge length) to create a coarse foam mesh. The initial impact velocity of the foam was 775 ft/s. The simulations were executed in LS-DYNA version 960 for 6 ms of simulation time. Contour plots of resultant panel displacement and effective stress in the foam were compared at five discrete time intervals. Also, time-history responses of internal and kinetic energy of the panel, kinetic and hourglass energy of the foam, and resultant contact force were plotted to determine the influence of mesh density. As a final comparison, the model with a fine panel and fine foam mesh was executed with slightly different material properties for the RCC. For this model, the average degraded properties of the RCC were replaced with the maximum degraded properties. Similar comparisons of panel and foam responses were made for the average and maximum degraded models.
Evolution of Structure in the Intergalactic Medium and the Nature of the LY-Alpha Forest
NASA Technical Reports Server (NTRS)
Bi, Hongguang; Davidsen, Arthur F.
1997-01-01
We have performed a detailed statistical study of the evolution of structure in a photoionized intergalactic medium (IGM) using analytical simulations to extend the calculation into the mildly nonlinear density regime found to prevail at z = 3. Our work is based on a simple fundamental conjecture: that the probability distribution function of the density of baryonic diffuse matter in the universe is described by a lognormal (LN) random field. The LN distribution has several attractive features and follows plausibly from the assumption of initial linear Gaussian density and velocity fluctuations at arbitrarily early times. Starting with a suitably normalized power spectrum of primordial fluctuations in a universe dominated by cold dark matter (CDM), we compute the behavior of the baryonic matter, which moves slowly toward minima in the dark matter potential on scales larger than the Jeans length. We have computed two models that succeed in matching observations. One is a nonstandard CDM model with OMEGA = 1, h = 0.5, and GAMMA = 0.3, and the other is a low-density flat model with a cosmological constant (LCDM), with OMEGA = 0.4, OMEGA(sub LAMBDA) = 0.6, and h = 0.65. In both models, the variance of the density distribution function grows with time, reaching unity at about z = 4, where the simulation yields spectra that closely resemble the Ly-alpha forest absorption seen in the spectra of high-z quasars. The calculations also successfully predict the observed properties of the Ly-alpha forest clouds and their evolution from z = 4 down to at least z = 2, assuming a constant intensity for the metagalactic UV background over this redshift range. However, in our model the forest is not due to discrete clouds, but rather to fluctuations in a continuous intergalactic medium. At z = 3; typical clouds with measured neutral hydrogen column densities N(sub H I) = 10(exp 13.3), 10(exp 13.5), and 10(exp 11.5) /sq cm correspond to fluctuations with mean total densities approximately 10, 1, and 0.1 times the universal mean baryon density. Perhaps surprisingly, fluctuations whose amplitudes are less than or equal to the mean density still appear as "clouds" because in our model more than 70% of the volume of the IGM at z = 3 is filled with gas at densities below the mean value.
F.F. Wangaard; George E. Woodson
1972-01-01
Based on a model developed for hardwood fiber strength-pulp property relationships, multiple-regression equations involving fiber strength, fiber length, and sheet density were determined to predict the properties of kraft pulps of slash pine (Pinus elliottii). Regressions for breaking length and burst factor accounted for 88 and 90 percent,...