NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) approach is applied to large eddy simulation of piloted premixed jet burner flames at high Karlovitz number. Thermo-chemical states are tabulated by combining an auto-ignition model with an extended auto-ignition model. To evaluate how well the proposed tabulation represents the thermo-chemical states at different fresh-gas temperatures, an a priori study is conducted using idealised transient one-dimensional premixed flame simulations. The interaction of turbulence and flame is incorporated through a presumed PDF, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that both presumed PDF models for the two mixture fractions are capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation reasonably captures the peak values and trends of intermediate species. Model extensions to adequately predict the peak locations of intermediate species are discussed.
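The beta-PDF weighting of a tabulated quantity over the progress variable, as used in closures like the one above, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the grid, moments, and the tabulated quantity are placeholders.

```python
import numpy as np

def beta_weights(c, mean, var):
    """Unnormalized Beta PDF on the progress-variable grid.
    Requires 0 < mean < 1 and 0 < var < mean * (1 - mean)."""
    f = mean * (1.0 - mean) / var - 1.0
    a, b = mean * f, (1.0 - mean) * f
    return c ** (a - 1.0) * (1.0 - c) ** (b - 1.0)

def presumed_beta_average(phi_table, c_grid, c_mean, c_var):
    """Filtered value of a tabulated quantity phi(c), weighted by a
    presumed Beta PDF of the reaction progress variable."""
    w = beta_weights(c_grid, c_mean, c_var)
    return np.trapz(phi_table * w, c_grid) / np.trapz(w, c_grid)

# Sanity check: averaging phi(c) = c should recover the prescribed mean.
c = np.linspace(1e-3, 1.0 - 1e-3, 400)
avg = presumed_beta_average(c, c, c_mean=0.4, c_var=0.04)
```

In practice the same weighting is applied to every tabulated column (species mass fractions, source terms), and the results are stored in a turbulent look-up table indexed by the mean and variance.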
The role of demographic compensation theory in incidental take assessments for endangered species
McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts
2011-01-01
Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions, as long as the take results from the unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not appreciably reduced. Demographic compensation requires some density-dependent limit on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline, or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rate. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances, because compensation is the only mechanism by which a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. For many endangered species, little is probably known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take, and implementing follow-up monitoring to assess the species' response, may be valuable in increasing knowledge and improving future decision making.
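The qualitative contrast described above can be reproduced with a minimal Monte Carlo sketch. This is not the authors' piping plover model: the Ricker-type density dependence and every demographic parameter below are invented for illustration.

```python
import numpy as np

def quasi_extinction_prob(take_rate, density_dependent, n0=500, K=1000, r=0.3,
                          lam=1.0, years=50, reps=1000, threshold=50, seed=1):
    """Probability a population falls below a quasi-extinction threshold.
    With density dependence (Ricker growth), compensation can absorb take;
    without it (constant multiplier lam), any take forces a steady decline."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            growth = np.exp(r * (1.0 - n / K)) if density_dependent else lam
            n = rng.poisson(n * growth * (1.0 - take_rate))
            if n < threshold:
                hits += 1
                break
    return hits / reps
```

With these toy values, a 10% take rate leaves the density-dependent population near a lowered equilibrium (quasi-extinction probability near zero), while the density-independent population declines to quasi-extinction in nearly every replicate, mirroring the contrast reported above.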
Study of discharge cleaning process in JIPP T-2 Torus by residual gas analyzer
NASA Astrophysics Data System (ADS)
Noda, N.; Hirokura, S.; Taniguchi, Y.; Tanahashi, S.
1982-12-01
During discharge cleaning, the decay time of the water vapor pressure changes when the pressure reaches a certain level. The long decay time observed in the later phase can be interpreted as the result of a slow deoxidization rate of chromium oxide, which may dominate the cleaning process in this phase. Optimization of the plasma density for cleaning is discussed by comparing experimental results on the density dependence of water vapor pressure with a zero-dimensional particle-balance calculation. One essential point for effective cleaning is raising the plasma electron density high enough that the dissociation loss rate of H2O is as large as the sticking loss rate. A density as high as 10^11 cm^-3 is required for a clean surface condition, where the sticking probability is presumed to be around 0.5.
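The density criterion can be checked with an order-of-magnitude balance between the two loss channels. Every number below (wall scale, dissociation rate coefficient, temperature) is an assumed round value, not taken from the experiment; only the sticking probability of 0.5 comes from the passage.

```python
import math

k_B, T, m_h2o = 1.38e-23, 300.0, 18 * 1.66e-27      # SI; room-temperature H2O
v_th = math.sqrt(8 * k_B * T / (math.pi * m_h2o))    # mean thermal speed, ~590 m/s
s, L = 0.5, 0.2                                      # sticking probability; wall scale (m), assumed
nu_stick = s * v_th / (4 * L)                        # sticking loss frequency at the wall, 1/s
k_diss = 1e-15                                       # assumed e + H2O dissociation coeff, m^3/s
n_e = nu_stick / k_diss                              # density where dissociation matches sticking loss
print(f"required n_e ~ {n_e / 1e6:.1e} cm^-3")
```

With these assumptions the break-even density lands at a few times 10^11 cm^-3, the same order as the requirement quoted above.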
Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian
2009-07-15
Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing in the probability density function used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referred to as ADF-PCMχ. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCMχ model accurately reproduces the experimental lift-off height, while the basic ADF-PCM model underpredicts it. The ADF-PCMχ model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCMχ reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape.
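The lognormal averaging over the stoichiometric scalar dissipation rate that distinguishes ADF-PCMχ from the basic formulation can be sketched as below. The variance parameter and quadrature are illustrative choices, not the published model constants.

```python
import numpy as np

def lognormal_average(phi_of_chi, chi_mean, sigma=1.0, n=400):
    """Average a flamelet quantity phi(chi_st) over a lognormal PDF of the
    stoichiometric scalar dissipation rate, integrating in log space."""
    mu = np.log(chi_mean) - 0.5 * sigma ** 2            # chosen so E[chi] = chi_mean
    x = np.linspace(mu - 5.0 * sigma, mu + 5.0 * sigma, n)
    pdf = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return np.trapz(phi_of_chi(np.exp(x)) * pdf, x)

# Sanity check: averaging phi(chi) = chi recovers the prescribed mean.
mean_chi = lognormal_average(lambda chi: chi, chi_mean=2.0)
```

A Dirac closure would instead evaluate phi at chi_mean directly; for quantities nonlinear in chi, such as ignition delay, the two closures can differ markedly, which is the sense in which the lognormal version improves the lift-off prediction.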
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, accounts neither for the interaction between turbulence and chemical kinetics nor for the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
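The difference between a Dirac and a β closure for the progress variable can be made concrete with a small numerical sketch. Independence of Z and C is assumed here purely for illustration, and the grids and moments are arbitrary; neither is taken from the paper.

```python
import numpy as np

def beta_pdf(x, mean, var):
    """Beta PDF on a grid, normalized by quadrature (needs var < mean*(1-mean))."""
    f = mean * (1.0 - mean) / var - 1.0
    w = x ** (mean * f - 1.0) * (1.0 - x) ** ((1.0 - mean) * f - 1.0)
    return w / np.trapz(w, x)

def mean_delta_c(f, z, z_mean, z_var, c_mean):
    """Standard closure: beta PDF in Z, Dirac distribution in C."""
    return np.trapz(f(z, c_mean) * beta_pdf(z, z_mean, z_var), z)

def mean_beta_c(f, z, c, z_mean, z_var, c_mean, c_var):
    """Beta PDFs in both Z and C, assumed statistically independent."""
    pz, pc = beta_pdf(z, z_mean, z_var), beta_pdf(c, c_mean, c_var)
    inner = np.trapz(f(z[:, None], c[None, :]) * pc[None, :], c, axis=1)
    return np.trapz(inner * pz, z)

z = c = np.linspace(1e-3, 1.0 - 1e-3, 300)
g = lambda z, c: c ** 2 + 0.0 * z     # any quantity nonlinear in C
d = mean_delta_c(g, z, 0.5, 0.02, c_mean=0.4)                  # Dirac in C
b = mean_beta_c(g, z, c, 0.5, 0.02, c_mean=0.4, c_var=0.04)    # beta in C
```

For this nonlinear test quantity the Dirac closure returns c_mean² = 0.16, while the β closure returns c_mean² + c_var = 0.20: the Dirac model discards exactly the variance dependence of C that the abstract identifies as missing from the standard approach.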
Hensel, Mario; Geppert, Daniel; Kersten, Jan F; Stuhr, Markus; Lorenz, Jürgen; Wirtz, Sebastian; Kerner, Thoralf
2018-01-01
The objective of this study was to determine the association between weather-related factors and out-of-hospital cardiac arrest (OHCA) of presumed cardiac etiology. This was a prospective observational study performed in a prehospital setting. Data from the Emergency Medical Service in Hamburg (Germany) and data from the local weather station were evaluated over a 5-year period. Weather data (temperature, humidity, air pressure, wind speed) were obtained every minute and matched with the associated rescue mission data. Lowess regression analysis was performed to assess the relationship between the above-mentioned weather-related factors and OHCA of presumed cardiac etiology. Additionally, varying measurement ranges were defined for each weather-related factor in order to compare them with each other with regard to the probability of OHCA occurrence. During the observation period, 1,558 OHCA with presumed cardiac etiology were registered (age: 67 ± 19 yrs; 62% male; hospital admission: 37%; survival to hospital discharge: 6.7%). Compared to moderate temperatures (5 - 25°C), the probability of OHCA occurrence increased significantly at temperatures above 25°C (p = 0.028) and below 5°C (p = 0.011). Regarding air humidity, the probability of OHCA occurrence increased below a threshold value of 75% compared to values above this cut-off (p = 0.006). Decreased probability was seen at moderate atmospheric pressure (1000 hPa - 1020 hPa), whereas increased probability was seen above 1020 hPa (p = 0.023) and below 1000 hPa (p = 0.035). The probability of OHCA occurrence increased continuously with increasing wind speed (p < 0.001). There are associations between several weather-related factors, such as temperature, humidity, air pressure, and wind speed, and the occurrence of OHCA of presumed cardiac etiology. Cold weather, dry air, and strong wind appear to be particularly dangerous.
Primary gamma rays [resulting from cosmic ray interaction with interstellar matter]
NASA Technical Reports Server (NTRS)
Fichtel, C. E.
1974-01-01
Within this galaxy, cosmic rays reveal their presence in interstellar space and probably in source regions by their interactions with interstellar matter which lead to gamma rays with a very characteristic energy spectrum. From the study of the intensity of the high energy gamma radiation as a function of galactic longitude, it is already clear that cosmic rays are almost certainly not uniformly distributed in the galaxy and are not concentrated in the center of the galaxy. The galactic cosmic rays appear to be tied to galactic structural features, presumably by the galactic magnetic fields which are in turn held by the matter in the arm segments and the clouds. On the extragalactic scale, it is now possible to say that cosmic rays are not universal at the density seen near the earth. The diffuse celestial gamma ray spectrum that is observed presents the interesting possibility of cosmological studies and possible evidence for a residual universal cosmic ray density, which is much lower than the present galactic cosmic ray density.
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
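How density and spatial clustering drive detection probability and precision can be explored with a toy quadrat-survey simulation. Negative-binomial patchiness and all parameter values below are invented for illustration; this is not the authors' UMR simulation, and it uses simple random sampling rather than the adaptive designs they evaluate.

```python
import numpy as np

def survey_performance(density, clustering, n_quadrats=50, grid=40,
                       reps=2000, seed=7):
    """Simple random quadrat sampling of a patchy population. Returns the
    species-detection probability and the CV of the abundance estimate.
    Smaller `clustering` means a patchier (more overdispersed) population."""
    rng = np.random.default_rng(seed)
    detected, estimates = 0, []
    for _ in range(reps):
        p = clustering / (clustering + density)   # NB parameterized so mean = density
        counts = rng.negative_binomial(clustering, p, size=grid * grid)
        sample = rng.choice(counts, size=n_quadrats, replace=False)
        detected += sample.sum() > 0
        estimates.append(sample.mean())
    estimates = np.array(estimates)
    return detected / reps, estimates.std() / estimates.mean()
```

Detection probability rises with density, and the CV of the estimate worsens as clustering strengthens, the two drivers the abstract identifies; comparing designs would mean repeating such simulations with adaptive or two-stage selection rules and adding a travel-distance cost.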
Baskerville, Jerry Ray; Herrick, John
2012-02-01
This study focuses on clinically assigned prospective estimated pretest probability and pretest perception of legal risk as independent variables in the ordering of multidetector computed tomographic (MDCT) head scans. Our primary aim is to measure the association between the pretest probability of a significant finding and the pretest perception of legal risk. Secondarily, we measure the percentage of MDCT scans that physicians would not order if there were no legal risk. This study is a prospective, cross-sectional, descriptive analysis of patients 18 years and older for whom emergency medicine physicians ordered a head MDCT. We collected a sample of 138 patients subjected to head MDCT scans. The prevalence of a significant finding in our population was 6%, yet the pretest probability expectation of a significant finding was 33%. The presumed legal risk was even more dramatic at 54%. These data support the hypothesis that physicians presume the legal risk to be significantly higher than the risk of a significant finding. A total of 21 patients (15%; 95% confidence interval, ±5.9%) would not have been subjected to MDCT if there were no legal risk. Physicians overestimated the probability that the computed tomographic scan would yield a significant result and indicated an even greater perceived medicolegal risk if the scan was not obtained. Physician test-ordering behavior is complex, and our study queries pertinent aspects of MDCT testing. The magnification of legal risk versus the pretest probability of a significant finding is demonstrated. Physicians significantly overestimated the pretest probability of a significant finding on head MDCT scans and the presumed legal risk.
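The reported ±5.9% interval is consistent with a standard normal-approximation confidence interval for 21 of 138 patients; the sketch below simply re-derives it (textbook formula, not code from the paper).

```python
import math

n, k = 138, 21                      # sample size; scans physicians would forgo absent legal risk
p_hat = k / n                       # ~0.15
half_width = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)   # 95% normal-approximation CI
print(f"{p_hat:.0%} +/- {half_width:.1%}")
```

The computed half-width is about 6%, matching the reported ±5.9% to rounding, which supports reading the statistic as 21 patients, i.e. 15% of the sample.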
Fleming, C; Momin, Z A; Brensilver, J M; Brandstetter, R D
1995-03-01
Decisional capacity includes ability to comprehend information, to make an informed choice, and to communicate that choice; it is specific to the decision at hand. Presume a patient has decisional capacity; an evaluation of incapacity must be justified. Administer a standardized mental status test to help assess alertness, attention, memory, and reasoning ability. A patient scoring below 10 on the Folstein Mini-Mental State Examination (maximum score, 30) probably does not have decisional capacity; one scoring from 10 to 15 probably can designate a proxy but not make complex health care decisions. Obtain psychiatric consultations for a patient who exhibits psychological barriers to decision making.
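The score thresholds in the passage map to a simple screening sketch. This is an illustration of the stated cut-offs only, not a clinical decision rule.

```python
def mmse_capacity_guidance(score):
    """Rough guidance from a Folstein MMSE score (0-30), per the thresholds above."""
    if not 0 <= score <= 30:
        raise ValueError("Folstein MMSE scores range from 0 to 30")
    if score < 10:
        return "probably lacks decisional capacity"
    if score <= 15:
        return "probably can designate a proxy, but not make complex care decisions"
    return "no score-based presumption; assess capacity for the decision at hand"
```

The final branch reflects the passage's framing that capacity is presumed and is specific to the decision at hand, so a high score triggers no presumption of incapacity.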
LES, DNS and RANS for the analysis of high-speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Taulbee, Dale B.; Adumitroaie, Virgil; Sabini, George J.; Shieh, Geoffrey S.
1994-01-01
The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Sep. 1993 - 1 Sep. 1994, we have focused our efforts on two research problems: (1) developments of 'algebraic' moment closures for statistical descriptions of nonpremixed reacting systems, and (2) assessments of the Dirichlet frequency in presumed scalar probability density function (PDF) methods in stochastic description of turbulent reacting flows. This report provides a complete description of our efforts during this past year as supported by the NASA Langley Research Center under Grant NAG1-1122.
Anomalous sea surface structures as an object of statistical topography
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.; Koshel, K. V.
2015-06-01
By exploiting ideas of statistical topography, we analyze the stochastic boundary problem of the emergence of anomalously high structures on the sea surface. The kinematic boundary condition on the sea surface is assumed to be a closed stochastic quasilinear equation. Applying the stochastic Liouville equation, and treating the hydrodynamic velocity field as stochastic within the diffusion approximation, we derive an equation for the spatially single-point, simultaneous joint probability density of the surface elevation field and its gradient. An important feature of the model is that it accounts for stochastic bottom irregularities as one perturbation among others, rather than as the sole perturbation. We then adopt the assumption of an infinitely deep ocean to obtain statistical features of the surface elevation field and the squared elevation gradient field. According to the calculations, clustering in the absolute surface elevation gradient field happens with unit probability. This results in the emergence of rare events, such as anomalously high structures and deep gaps on the sea surface, in almost every realization of the stochastic velocity field.
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability to contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at the timely and effective detection of (un)intentional intrusion events. However, available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While simple to implement, this assumption may not reflect the fact that nodal contamination probability can vary significantly by region owing to variations in population density and user properties. Furthermore, low computational efficiency is another important factor that has seriously hampered the practical application of currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method that explicitly accounts for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than with the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for WDSs, and also builds knowledge regarding the impacts of different CPFs on sensor deployments.
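The idea of replacing the equal-probability assumption with a weighted CPF can be sketched as follows. The node data, the population-proportional weighting, and the detection vector are all invented for illustration; the paper's four CPFs may take other functional forms.

```python
import numpy as np

def contamination_probabilities(population, mode="uniform"):
    """Nodal contamination probabilities: 'uniform' is the traditional
    equal-probability assumption; 'population' weights nodes by local
    population (one simple CPF among the several forms discussed)."""
    population = np.asarray(population, dtype=float)
    w = np.ones_like(population) if mode == "uniform" else population
    return w / w.sum()

def expected_detection(node_probs, detects):
    """Expected event-detection probability of a sensor layout, where
    detects[i] = 1 if an intrusion at node i would be detected in time."""
    return float(np.dot(node_probs, detects))

pop = [100, 400, 500]     # hypothetical per-node populations
detects = [1, 0, 1]       # hypothetical detection outcomes for one sensor layout
```

The same layout scores 2/3 under the uniform CPF but 0.6 under the population-weighted CPF, which is why the optimal sensor set shifts with the assumed contamination probabilities.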
FORMATION OF INTRACYTOPLASMIC MEMBRANE SYSTEM OF MYCOBACTERIA RELATED TO CELL DIVISION
Imaeda, Tamotsu; Ogura, Mituo
1963-01-01
Imaeda, Tamotsu (Instituto Venezolano de Investigaciones Científicas, Caracas, Venezuela) and Mituo Ogura. Formation of intracytoplasmic membrane system of mycobacteria related to cell division. J. Bacteriol. 85:150–163. 1963.—Mycobacterium leprae, M. lepraemurium, and a Mycobacterium sp. were observed with an electron microscope. In these bacilli, the three-dimensional structure of the intracytoplasmic membrane system consists of tubular infoldings of the invaginated plasma membrane. The moderately dense substance, presumably representing the cell-wall precursor, is found in the membranous system, especially in the rapid growth phase of mycobacteria. This system always shows an intimate relationship with cell division. A low-density zone, probably corresponding to the low-density substance which coats the cell wall, appears in the connecting regions of the system and in the longitudinal portion of the cell wall. These zones extend centripetally, and the separation of the cell wall occurs after the two zones meet. Based on these results, we hypothesize that the intracytoplasmic membrane system may produce cell-wall material during cell division of mycobacteria. PMID: 13956365
NASA Astrophysics Data System (ADS)
Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng
2016-11-01
The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
Fritts, Karen R.; Kilb, Debi
2009-01-01
It has been traditionally held that aftershocks occur within one to two fault lengths of the mainshock. Here we demonstrate that this perception has been shaped by the sensitivity of seismic networks. The 31 October 2001 Mw 5.0 and 12 June 2005 Mw 5.2 Anza mainshocks in southern California occurred in the middle of the densely instrumented ANZA seismic network and thus were unusually well recorded. For the June 2005 event, aftershocks as small as M 0.0 could be observed stretching for at least 50 km along the San Jacinto fault, even though the mainshock fault was only ∼4.5 km long. It was hypothesized that an observed aseismic slipping patch produced a spatially extended aftershock-triggering source, presumably slowing the decay of aftershock density with distance and leading to a broader aftershock zone. We find, however, that the decay of aftershock density with distance for both Anza sequences is similar to that observed elsewhere in California. This indicates there is no need for an additional triggering mechanism and suggests that, given widespread dense instrumentation, aftershock sequences would routinely have footprints much larger than currently expected. Despite the large 2005 aftershock zone, we find the probability that the 2005 Anza mainshock triggered the M 4.9 Yucaipa mainshock, which occurred 4.2 days later and 72 km away, to be only 14% ± 1%. This probability is a strong function of the time delay; had the earthquakes been separated by only an hour, the probability of triggering would have been 89%.
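The strong time-delay dependence of the triggering probability can be illustrated by weighing an Omori-law aftershock rate against a constant background rate. The parameter values below are toy numbers tuned only to echo the quoted 14% and 89% figures; they are not the authors' rates or calculation.

```python
def triggering_probability(dt_days, background_rate=0.01, K=0.0069, c=0.044, p=1.0):
    """Chance a later event was triggered rather than background: the Omori
    rate K/(t+c)^p decays quickly, so the probability drops sharply with delay."""
    aftershock_rate = K / (dt_days + c) ** p
    return aftershock_rate / (aftershock_rate + background_rate)

p_4days = triggering_probability(4.2)      # the observed Yucaipa delay
p_1hour = triggering_probability(1 / 24)   # the hypothetical one-hour delay
```

With the 1/t Omori decay, shrinking the delay from 4.2 days to one hour raises the triggered-versus-background odds by roughly two orders of magnitude, which is the sense in which the probability is "a strong function of the time delay."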
NASA Astrophysics Data System (ADS)
Kim, Jeonglae; Pope, Stephen B.
2014-05-01
A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
Simulations of material mixing in laser-driven reshock experiments
NASA Astrophysics Data System (ADS)
Haines, Brian M.; Grinstein, Fernando F.; Welser-Sherrill, Leslie; Fincke, James R.
2013-02-01
We perform simulations of a laser-driven reshock experiment [Welser-Sherrill et al., High Energy Density Phys. (unpublished)] in the strong-shock high energy-density regime to better understand material mixing driven by the Richtmyer-Meshkov instability. Validation of the simulations is based on direct comparison of simulation and radiographic data. Simulations are also compared with published direct numerical simulation and the theory of homogeneous isotropic turbulence. Despite the fact that the flow is neither homogeneous, isotropic nor fully turbulent, there are local regions in which the flow demonstrates characteristics of homogeneous isotropic turbulence. We identify and isolate these regions by the presence of high levels of turbulent kinetic energy (TKE) and vorticity. After reshock, our analysis shows characteristics consistent with those of incompressible isotropic turbulence. Self-similarity and effective Reynolds number assessments suggest that the results are reasonably converged at the finest resolution. Our results show that in shock-driven transitional flows, turbulent features such as self-similarity and isotropy only fully develop once de-correlation, characteristic vorticity distributions, and integrated TKE, have decayed significantly. Finally, we use three-dimensional simulation results to test the performance of two-dimensional Reynolds-averaged Navier-Stokes simulations. In this context, we also test a presumed probability density function turbulent mixing model extensively used in combustion applications.
Territorial organization of the lowland classic maya.
Marcus, J
1973-06-01
Thus far I have discussed ancient Maya sociopolitical structure from the upper levels of the hierarchy downward. Let me now summarize their territorial organization from the bottom upward, starting at the hamlet level (Fig. 8). The smallest unit of settlement, one usually overlooked by archeological surveys in the lowland rain forest, was probably a cluster of thatched huts occupied by a group of related families; larger clusters may have been divided into four quadrants along the lines suggested by Coe (26). Because of the long fallow period (6 to 8 years) characteristic of slash-and-burn agriculture in the Petén, these small hamlets are presumed to have changed location over the years, although they probably shifted in a somewhat circular fashion around a tertiary ceremonial-civic center for whose maintenance they were partly responsible. These tertiary centers were spaced at fairly regular intervals around secondary ceremonial-civic centers with pyramids, carved monuments, and palace-like residences. In turn, the secondary centers occurred at such regular intervals as to form hexagonal patterns around primary centers, which were still larger, with acropolises, multiple ceremonial plazas, and greater numbers of monuments. In some cases, the distance between secondary centers was roughly twice the distance between secondary and tertiary centers, creating a lattice of nested hexagonal cells. This pattern, which conforms to a Western theoretical construct, was presumably caused by factors of service function, travel, and transport. The pattern was not recognized by the Maya at all. They simply recognized that a whole series of smaller centers were dependent on a primary center and therefore mentioned its emblem glyph. Linking the centers of the various hexagons were marriage alliances between members of royal dynasties, who had no kinship ties with the farmers in the hamlets.
Out of the large number of primary centers available to them, the Maya selected four as regional capitals. True to their cosmology, the Maya regarded these capitals as associated with the four quadrants of their realm, regardless of their actual location. Each was the home city for a very important dynasty whose junior members probably ruled secondary centers. Since the hexagonal lattices were probably adjusted to variations in population density, each of the four quadrants of the Maya realm probably controlled a comparable number of persons. So strong was the cognized model that, despite the rise and fall of individual centers, there seem always to have been four capitals, each associated with a direction and, presumably, with a color. There is still a great deal to learn about the social, political, and territorial organization of the lowland Maya, and parts of the picture presented here need far more data for their confirmation. What seems likely is that the Maya had an overall quadripartite organization (rather than a core and buffer zone) and that within each quadrant there was at least a five-tiered administrative hierarchy of capital, secondary center, tertiary center, village, and hamlet. Perhaps most significant, there was no real conflict between the lattice-like network predicted by locational analysis and the cosmological four-part structure predicted by epigraphy and ethnology.
NASA Astrophysics Data System (ADS)
Donini, A.; Martin, S. M.; Bastiaans, R. J. M.; van Oijen, J. A.; de Goey, L. P. H.
2013-10-01
In the present paper a computational analysis of high-pressure confined premixed turbulent methane/air jet flames is presented. To this end, chemistry is reduced by the use of the Flamelet Generated Manifold (FGM) method [1] and the fluid flow is modeled in an LES and RANS context. The reaction evolution is described by the reaction progress variable, heat loss is described by the enthalpy, and the effect of turbulence on the reaction is represented by the progress variable variance. The interaction between chemistry and turbulence is considered through a presumed probability density function (PDF) approach. The use of FGM as a combustion model shows that combustion features at gas turbine conditions can be satisfactorily reproduced with a reasonable computational effort. Furthermore, the present analysis indicates that the physical and chemical processes controlling carbon monoxide (CO) emissions can be captured only by means of unsteady simulations.
Modelling thermal radiation from one-meter diameter methane pool fires
NASA Astrophysics Data System (ADS)
Consalvi, J. L.; Demarco, R.
2012-06-01
The first objective of this article is to implement a comprehensive radiation model in order to predict the radiant fractions and radiative fluxes on remote surfaces in large-scale methane pool fires. The second aim is to quantify the importance of Turbulence-Radiation Interactions (TRIs) in such buoyant flames. The fire-induced flow is modelled by using a buoyancy-modified k-ɛ model and the Steady Laminar Flamelet (SLF) model coupled with a presumed probability density function (pdf) approach. Spectral radiation is modelled by using the Full-Spectrum Correlated-k (FSCK) method. TRIs are taken into account by considering the Optically-Thin Fluctuation Approximation (OTFA). The emission term and the mean absorption coefficient are closed by using a presumed pdf of the mixture fraction, scalar dissipation rate and enthalpy defect. Two 1-m-diameter fires with Heat Release Rates (HRR) of 49 kW and 162 kW were simulated. Predicted radiant fractions and radiative heat fluxes are found to be in reasonable agreement with experimental data. The importance of TRIs is evident, with computed radiant fractions and radiative heat fluxes being considerably higher than those obtained from calculations based on mean properties. Finally, model results show that the complete absorption coefficient-Planck function correlation should be considered in order to properly take into account the influence of TRIs on the emission term, whereas the absorption coefficient self-correlation in the absorption term significantly reduces the radiant fractions.
Resinless section electron microscopy reveals the yeast cytoskeleton.
Penman, J; Penman, S
1997-04-15
The cytoskeleton of Saccharomyces cerevisiae is essentially invisible using conventional microscopy techniques. A similar problem was solved for the mammalian cell cytoskeleton using resinless section electron microscopy, a technique applied here to yeast. In the resinless image, soluble proteins are no longer cloaked by embedding medium and must be removed by selective detergent extraction. In yeast, this requires breaching the cell wall by digesting with Zymolyase sufficiently to allow detergent extraction of the plasma membrane lipids. Gel electropherograms show that the extracted or "soluble" proteins are distinct from the retained or "structural" proteins that presumably comprise the cytoskeleton. These putative cytoskeleton proteins include the major portions of a 43-kDa protein, which is presumably actin, and of proteins in a band appearing at 55 kDa, as well as numerous less abundant, nonactin proteins. Resinless section electron micrographs show a dense, three-dimensional web of anastomosing, polymorphic filaments bounded by the remnant cell wall. Although the filament network is very heterogeneous, there appear to be two principal classes of filament diameters, 5 nm and 15-20 nm, which may correspond to actin and intermediate filaments, respectively. A large oval region of lower filament density probably corresponds to the vacuole, and an electron-dense spheroidal body, 300-500 nm in diameter, is likely the nucleus. The techniques detailed in this report afford new approaches to the study of yeast cytoarchitecture.
Persuasion, Surveillance, and Voting Behavior
ERIC Educational Resources Information Center
Gross, Alan E.; And Others
1974-01-01
The present study was designed to test the efficacy of two basic strategies which might be employed to increase the probability that a potential voter will act in accordance with his presumed belief that it is right, good, or desirable to exercise his franchise. (Author)
Symptomatic Raccoon Dogs and Sarcoptic Mange Along an Urban Gradient.
Saito, Masayuki U; Sonoda, Yoichi
2017-06-01
We quantitatively evaluated the effects of landscape factors on the distribution of symptomatic raccoon dogs with sarcoptic mange along an urban gradient. We used 246 camera traps (182 traps from April 2005 to December 2006; 64 traps from September 2009 to October 2010) to record the occurrence of asymptomatic and symptomatic raccoon dogs at 21 survey sites along an urban-rural gradient in the Tama Hills area of Tokyo. Each occurrence was explained in terms of the surrounding forest, agricultural, and grassland areas and additional factors (i.e., seasonal variations and survey methods) at various spatial scales using a generalized additive mixed model (GAMM). In our analysis, a 1000-m radius was identified as the important spatial scale for asymptomatic and symptomatic raccoon dog occurrence. The peak of the predicted occurrence probability of asymptomatic raccoon dogs appeared in the intermediate forest landscape as opposed to non-forest and forest landscapes. However, a high occurrence probability of symptomatic raccoon dogs was detected in non-forest and intermediate forest landscapes (i.e., urban and suburban) as opposed to a forest landscape, presumably because of animals occurring at much higher densities in more urbanized areas. Therefore, our results suggest that human-modified landscapes play an important role in the high occurrence of sarcoptic mange in raccoon dogs.
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density: word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Population density predicts outcome from out-of-hospital cardiac arrest in Victoria, Australia.
Nehme, Ziad; Andrew, Emily; Cameron, Peter A; Bray, Janet E; Bernard, Stephen A; Meredith, Ian T; Smith, Karen
2014-05-05
To examine the impact of population density on the incidence and outcome of out-of-hospital cardiac arrest (OHCA), data were extracted from the Victorian Ambulance Cardiac Arrest Registry for all adult OHCA cases of presumed cardiac aetiology attended by the emergency medical service (EMS) between 1 January 2003 and 31 December 2011. Cases were allocated into one of five population density groups according to their statistical local area: very low density (≤ 10 people/km²), low density (11-200 people/km²), medium density (201-1000 people/km²), high density (1001-3000 people/km²), and very high density (> 3000 people/km²). The main outcome measures were survival to hospital and survival to hospital discharge. The EMS attended 27 705 adult presumed cardiac OHCA cases across 204 Victorian regions. In 12 007 of these (43.3%), resuscitation was attempted by the EMS. Incidence was lower and arrest characteristics were consistently less favourable for lower population density groups. Survival outcomes, including return of spontaneous circulation, survival to hospital and survival to hospital discharge, were significantly poorer in less densely populated groups (P < 0.001 for all comparisons). When compared with very low density populations, the risk-adjusted odds ratios of surviving to hospital discharge were: low density, 1.88 (95% CI, 1.15-3.07); medium density, 2.49 (95% CI, 1.55-4.02); high density, 3.47 (95% CI, 2.20-5.48); and very high density, 4.32 (95% CI, 2.67-6.99). Population density is independently associated with survival after OHCA, and significant variation in the incidence and characteristics of these events is observed across the state.
USDA-ARS?s Scientific Manuscript database
This study was part of a larger project investigating breed-related differences in feeding habits of Raramuri Criollo (RC) versus Angus x Hereford (AH) cows. Seed densities in fecal samples collected in July and August 2015 were analyzed to compare presumed mesquite bean consumption of RC and AH cow...
Shunting normal pressure hydrocephalus: the predictive value of combined clinical and CT data.
Vanneste, J; Augustijn, P; Tan, W F; Dirven, C
1993-03-01
The value of an ordinal global scale derived from combined clinical and CT data (clin/CT scale) to predict the clinical outcome in 112 patients shunted for presumed normal pressure hydrocephalus (NPH) was analysed. The clinical data were retrospectively collected, all CT scans were re-evaluated, and the clin/CT scale was determined blind to the results of further ancillary tests and to the post-surgical outcome. The scale ranked three classes of prediction: on the basis of clinical and CT characteristics, improvement after shunting was probable, possible, or improbable. The predictive value of the clin/CT scale for the subgroup of communicating NPH was established for two different strategies, depending on the strictness of selection criteria for shunting. In the subgroup of patients with presumed communicating NPH, the prevalence of shunt responsiveness was 29%; the best strategy was to shunt only patients with probable shunt-responsive NPH: the sensitivity was 0.54, the specificity 0.84, and the predictive accuracy 0.75, with a limited number of ineffective shunts (11%) and missed improvements (13%). The study illustrates the need to assess the pre-test probability of NPH based on combined clinical and CT data, before establishing the clinical usefulness of an ancillary test.
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
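The mate-finding mechanism described above can be illustrated with the saturating functional form commonly used for a component Allee effect (a minimal Python sketch; the half-saturation form and the parameter theta are illustrative assumptions, not the paper's spatially explicit model or its wolf parameterization):

```python
def mate_finding_prob(density, theta):
    """Probability that a disperser finds a mate, increasing and
    saturating with local density; theta is the density at which
    the probability reaches 1/2 (an assumed, illustrative form)."""
    return density / (density + theta)

# At low densities the probability is small, so few new breeding
# units are established; it saturates toward 1 at high densities.
for n in (0.1, 1.0, 10.0):
    print(n, mate_finding_prob(n, theta=1.0))
```

In the paper's framework, only the establishment of new breeding units is discounted by such a probability; subsequent within-group growth is unaffected, which is what slows the recolonization front at low densities.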
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection, in an entanglement-breaking environment, whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture, based on sum-frequency generation (SFG) and feedforward (FF) processing, for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic (detection probability versus false-alarm probability) for optimum QI target detection under the Neyman-Pearson criterion.
Data-driven probability concentration and sampling on manifold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2016-09-15
A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
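The first ingredient of that pipeline, drawing new realizations consistent with a dataset via a Gaussian kernel-density estimate, can be sketched as follows (a simplified illustration only: the paper's method adds MCMC sampling and a diffusion-maps reduction, both omitted here, and the circle dataset and bandwidth are assumptions for the demo):

```python
import numpy as np

def kde_sample(data, n_samples, bandwidth, seed=None):
    """Draw samples from a Gaussian kernel-density estimate of `data`.

    Sampling from a Gaussian KDE reduces to picking a data point at
    random and perturbing it with Gaussian noise of the kernel width.
    `data` has shape (n_points, dim).
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(data), size=n_samples)
    noise = rng.normal(scale=bandwidth, size=(n_samples, data.shape[1]))
    return data[idx] + noise

# Toy dataset concentrated near the unit circle (a "manifold" in 2-D).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
data = np.column_stack([np.cos(theta), np.sin(theta)])
data += rng.normal(scale=0.05, size=data.shape)

new = kde_sample(data, 1000, bandwidth=0.05, seed=1)
radii = np.linalg.norm(new, axis=1)
print(radii.mean())  # stays close to 1: samples concentrate near the circle
```

With a small bandwidth the generated points stay concentrated near the data manifold, which is the property the paper's diffusion-maps reduction is designed to preserve more robustly in higher dimensions.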
Density-dependent natural selection and trade-offs in life history traits.
Mueller, L D; Guo, P Z; Ayala, F J
1991-07-26
Theories of density-dependent natural selection state that at extreme population densities evolution produces alternative life histories due to trade-offs. The trade-offs are presumed to arise because those genotypes with highest fitness at high population densities will not also have high fitness at low density and vice versa. These predictions were tested by taking samples from six populations of Drosophila melanogaster kept at low population densities (r-populations) for nearly 200 generations and placing them in crowded cultures (K-populations). After 25 generations in the crowded cultures, the derived K-populations showed growth rate and productivity that at high densities were elevated relative to the controls, but at low density were depressed.
Modelling thermal radiation in buoyant turbulent diffusion flames
NASA Astrophysics Data System (ADS)
Consalvi, J. L.; Demarco, R.; Fuentes, A.
2012-10-01
This work focuses on the numerical modelling of radiative heat transfer in laboratory-scale buoyant turbulent diffusion flames. Spectral gas and soot radiation is modelled by using the Full-Spectrum Correlated-k (FSCK) method. Turbulence-Radiation Interactions (TRI) are taken into account by considering the Optically-Thin Fluctuation Approximation (OTFA), the resulting time-averaged Radiative Transfer Equation (RTE) being solved by the Finite Volume Method (FVM). Emission TRIs and the mean absorption coefficient are then closed by using a presumed probability density function (pdf) of the mixture fraction. The mean gas flow field is modelled by the Favre-averaged Navier-Stokes (FANS) equation set closed by a buoyancy-modified k-ɛ model with algebraic stress/flux models (ASM/AFM), the Steady Laminar Flamelet (SLF) model coupled with a presumed pdf approach to account for Turbulence-Chemistry Interactions, and an acetylene-based semi-empirical two-equation soot model. Two sets of experimental pool fire data are used for validation: propane pool fires 0.3 m in diameter with Heat Release Rates (HRR) of 15, 22 and 37 kW and methane pool fires 0.38 m in diameter with HRRs of 34 and 176 kW. Predicted flame structures, radiant fractions, and radiative heat fluxes on surrounding surfaces are found to be in satisfactory agreement with available experimental data across all the flames. In addition, further computations indicate that, for the present flames, the gray approximation can be applied for soot with a minor influence on the results, resulting in a substantial gain in Computer Processing Unit (CPU) time when the FSCK is used to treat gas radiation.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words: Granular Physics, Probability Density Functions, Fourier Transforms
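The exponential-to-Bessel correspondence mentioned above can be checked numerically. Assuming an isotropic force direction in 2-D, a direction average relates the Cartesian component density to the magnitude density, and for an exponential magnitude distribution it reduces to K0(|fx|)/pi, with K0 the modified Bessel function of the second kind (a consistency-check sketch, not the paper's derivation; the isotropy assumption is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def cartesian_density(x):
    """Density of fx = f*cos(theta) when f is exponentially distributed
    and theta is uniform:
        p(x) = (1/pi) * integral_{|x|}^{inf} exp(-f)/sqrt(f^2 - x^2) df.
    The integrable sqrt singularity at f = |x| is handled by quad."""
    val, _ = quad(lambda f: np.exp(-f) / np.sqrt(f**2 - x**2),
                  abs(x), np.inf)
    return val / np.pi

# Check agreement with the closed form K0(|x|)/pi at a few points.
for x in (0.5, 1.0, 2.0):
    assert abs(cartesian_density(x) - k0(x) / np.pi) < 1e-6
print("exponential magnitude <-> K0(|fx|)/pi Cartesian density confirmed")
```

This is exactly the pairing the abstract names: an exponential force magnitude distribution transforms into a modified-Bessel-of-the-second-kind Cartesian density.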
Turbulent flame spreading mechanisms after spark ignition
NASA Astrophysics Data System (ADS)
Subramanian, V.; Domingo, Pascale; Vervisch, Luc
2009-12-01
Numerical simulation of forced ignition is performed in the framework of Large-Eddy Simulation (LES) combined with a tabulated detailed chemistry approach. The objective is to reproduce the flame properties observed in a recent experimental work reporting the probability of ignition in a laboratory-scale burner operating with a methane/air non-premixed mixture [1]. The smallest scales of chemical phenomena, which are unresolved by the LES grid, are approximated with a flamelet model combined with presumed probability density functions, to account for the unresolved part of turbulent fluctuations of species and temperature. One-dimensional flamelets are simulated using GRI-3.0 [2] and tabulated under a set of parameters describing the local mixing and progress of reaction. A non-reacting case was simulated first, to study the unsteady velocity and mixture fields. The time-averaged velocity and mixture fraction, and their respective turbulent fluctuations, are compared against the experimental measurements in order to estimate the prediction capabilities of LES. The time history of the axial and radial components of velocity and of the mixture fraction is accumulated and analyzed for different burner regimes. Based on this information, spark ignition is mimicked at selected ignition spots, and the dynamics of kernel development is analyzed and compared against the experimental observations. The possible link between the success or failure of the ignition and the flow conditions (in terms of velocity and composition) at the sparking time is then explored.
ERIC Educational Resources Information Center
Brick, John
Alcohol intoxication increases the risk of highway accidents, the relative risk of crash probability increasing as a function of blood alcohol content (BAC). Because alcohol use is more prevalent than use of other drugs, more is known about the relationship between alcohol use and driving. Most states presume a BAC of .10% to be evidence of drunk…
Prehn, Richmond T
2010-01-01
All nascent neoplasms probably elicit at least a weak immune reaction. However, the initial effect of the weak immune reaction on a nascent tumor is always stimulatory rather than inhibitory to tumor growth, assuming only that exposure to the tumor antigens did not antedate the initiation of the neoplasm (as may occur in some virally induced tumors). This conclusion derives from the observation that the relationship between the magnitude of an adaptive immune reaction and tumor growth is not linear but varies such that while large quantities of antitumor immune reactants tend to inhibit tumor growth, smaller quantities of the same reactants are, for unknown reasons, stimulatory. Any immune reaction must presumably be small before it can become large; hence the initial reaction to the first presentation of a tumor antigen must always be small and in the stimulatory portion of this nonlinear relationship. In mouse-skin carcinogenesis experiments it was found that premalignant papillomas were variously immunogenic, but that the carcinomas that arose in them were, presumably because of induced immune tolerance, nonimmunogenic in the animal of origin.
Automatic target recognition apparatus and method
Baumgart, Chris W.; Ciarcia, Christopher A.
2000-01-01
An automatic target recognition apparatus (10) is provided, having a video camera/digitizer (12) for producing a digitized image signal (20) representing an image containing therein objects which objects are to be recognized if they meet predefined criteria. The digitized image signal (20) is processed within a video analysis subroutine (22) residing in a computer (14) in a plurality of parallel analysis chains such that the objects are presumed to be lighter in shading than the background in the image in three of the chains and further such that the objects are presumed to be darker than the background in the other three chains. In two of the chains the objects are defined by surface texture analysis using texture filter operations. In another two of the chains the objects are defined by background subtraction operations. In yet another two of the chains the objects are defined by edge enhancement processes. In each of the analysis chains a calculation operation independently determines an error factor relating to the probability that the objects are of the type which should be recognized, and a probability calculation operation combines the results of the analysis chains.
Humphrey, L. David; Pyke, David A.
1998-01-01
The advantages of guerrilla and phalanx growth for the guerrilla Elymus lanceolatus ssp. lanceolatus and phalanx E. l. ssp. wawawaiensis were evaluated over 2 years in two taxon mixtures with a range of densities of each subspecies and under two levels of watering. Ramet numbers and biomass of the guerrilla subspecies were higher than those of the phalanx grass in the first year but in the second year declined greatly, while the phalanx grass showed no change in biomass and an increase in ramet numbers. High neighbour densities affected the phalanx subspecies more strongly than the guerrilla subspecies in the first year, but in the second year there were few differences between subspecies. Biomass of the guerrilla grass remained greater than that of the phalanx grass but ramet numbers were similar in the second year. For both subspecies in both years, probability of flowering decreased at higher neighbour densities, indicating adaptation for competitive ability. In the first year, biomass was more strongly reduced by densities than flowering was, but in the second year, when crowding was apparently greater, flowering was more severely affected. Genet survival was high and similar for both subspecies. The presumed advantage of guerrilla subspecies in exploiting open space was supported. The guerrilla grass exploited resources more quickly in the first year by faster growth and greater ramet production, but its biomass, ramet numbers and rhizome growth, and thus its advantage, were reduced in the second year. The phalanx subspecies had slower growth, produced more ramets in later years, and delayed flowering until later years. Although less able to exploit open resources, it appeared adapted to more stressful conditions, and may be able to exploit temporal resource pulses more effectively.
Reconciling Long-Wavelength Dynamic Topography, Geoid Anomalies and Mass Distribution on Earth
NASA Astrophysics Data System (ADS)
Hoggard, M.; Richards, F. D.; Ghelichkhan, S.; Austermann, J.; White, N.
2017-12-01
Since the first satellite observations in the late 1950s, we have known that the Earth's non-hydrostatic geoid is dominated by spherical harmonic degree 2 (wavelengths of 16,000 km). Peak amplitudes are approximately ± 100 m, with highs centred on the Pacific Ocean and Africa, encircled by lows in the vicinity of the Pacific Ring of Fire and at the poles. Initial seismic tomography models revealed that the shear-wave velocity, and therefore presumably the density structure, of the lower mantle is also dominated by degree 2. Anti-correlation of slow, probably low-density regions beneath geoid highs indicates that the mantle is affected by large-scale flow. Thus, buoyant features are rising and exert viscous normal stresses that act to deflect the surface and core-mantle boundary (CMB). Pioneering studies in the 1980s showed that a viscosity jump between the upper and lower mantle is required to reconcile these geoid and tomographically inferred density anomalies. These studies also predict 1-2 km of dynamic topography at the surface, dominated by degree 2. In contrast to this prediction, a global observational database of oceanic residual depth measurements indicates that degree 2 dynamic topography has peak amplitudes of only 500 m. Here, we attempt to reconcile observations of dynamic topography, geoid, gravity anomalies and CMB topography using instantaneous flow kernels. We exploit a density structure constructed from blended seismic tomography models, combining deep mantle imaging with higher resolution upper mantle features. Radial viscosity structure is discretised, and we invert for the best-fitting viscosity profile using a conjugate gradient search algorithm, subject to damping. Our results suggest that, due to strong sensitivity to radial viscosity structure, the Earth's geoid seems to be compatible with only ± 500 m of degree 2 dynamic topography.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
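The "plain fact" the abstract closes with can be sketched for a single observation: compute the model density at the observed point, then ask how often a genuine model draw has density that small (a minimal one-point illustration in Python; the paper's actual tests aggregate over all draws, and the standard-normal model here is an assumption for the demo):

```python
import numpy as np
from scipy.stats import norm

def density_tail_pvalue(x_obs, pdf, sampler, n_mc=100_000, seed=0):
    """Monte Carlo estimate of P[pdf(X) <= pdf(x_obs)] under the model.

    If the density at x_obs sits in the far lower tail of the
    distribution of pdf(X), then x_obs is unlikely to be a model draw.
    """
    rng = np.random.default_rng(seed)
    draws = sampler(rng, n_mc)
    return np.mean(pdf(draws) <= pdf(x_obs))

pdf = norm.pdf                                  # model: standard normal
sampler = lambda rng, n: rng.standard_normal(n)

print(density_tail_pvalue(0.1, pdf, sampler))   # typical draw: large p-value
print(density_tail_pvalue(5.0, pdf, sampler))   # low-density draw: near 0
```

A Kolmogorov-Smirnov or Kuiper statistic, built from the cumulative distribution, can smooth over exactly this kind of low-density discrepancy, which is the gap the paper's tests are meant to fill.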
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
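The negativity problem described above is easy to demonstrate numerically. Below is a minimal sketch of a Gram-Charlier A series truncated after the third-order Hermite term; the skewness value of 1.5 is an arbitrary assumption chosen to make the "corrected" density dip below zero in the left tail.

```python
import math

def gram_charlier_a(x, skew):
    """Gaussian density 'corrected' for skewness via the Gram-Charlier A
    series, truncated after the third probabilists' Hermite term."""
    phi = math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
    he3 = x**3 - 3 * x          # probabilists' Hermite polynomial He3(x)
    return phi * (1 + (skew / 6) * he3)

# With skewness 1.5 the truncated series goes negative in the left tail,
# so it is no longer a valid probability density:
value_at_minus_3 = gram_charlier_a(-3.0, 1.5)   # negative
value_at_zero = gram_charlier_a(0.0, 1.5)       # positive
```

At x = -3, He3(-3) = -18, so the bracket becomes 1 + 0.25 × (-18) = -3.5 and the whole expression is negative, which is precisely the failure mode the paper sets out to avoid.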
Gaussian model for emission rate measurement of heated plumes using hyperspectral data
NASA Astrophysics Data System (ADS)
Grauer, Samuel J.; Conrad, Bradley M.; Miguel, Rodrigo B.; Daun, Kyle J.
2018-02-01
This paper presents a novel model for measuring the emission rate of a heated gas plume using hyperspectral data from an FTIR imaging spectrometer. The radiative transfer equation (RTE) is used to relate the spectral intensity of a pixel to presumed Gaussian distributions of volume fraction and temperature within the plume, along a line-of-sight that corresponds to the pixel, whereas previous techniques exclusively presume uniform distributions for these parameters. Estimates of volume fraction and temperature are converted to a column density by integrating the local molecular density along each path. Image correlation velocimetry is then employed on raw spectral intensity images to estimate the volume-weighted normal velocity at each pixel. Finally, integrating the product of velocity and column density along a control surface yields an estimate of the instantaneous emission rate. For validation, emission rate estimates were derived from synthetic hyperspectral images of a heated methane plume, generated using data from a large-eddy simulation. Calculating the RTE with Gaussian distributions of volume fraction and temperature, instead of uniform distributions, improved the accuracy of column density measurement by 14%. Moreover, the mean methane emission rate measured using our approach was within 4% of the ground truth. These results support the use of Gaussian distributions of thermodynamic properties in calculation of the RTE for optical gas diagnostics.
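For a presumed Gaussian distribution of volume fraction along a line of sight, the path integral that yields a column density has a closed form: a Gaussian of peak amplitude A and standard deviation σ integrates to A·σ·√(2π). The sketch below checks this against simple midpoint quadrature; the profile parameters are illustrative assumptions, not values from the paper, and the code is a toy stand-in for the full RTE-based retrieval.

```python
import math

def gaussian_column_density(peak, sigma, n_steps=100000, half_width=10.0):
    """Midpoint-rule integral of a Gaussian volume-fraction profile along a
    line of sight, truncated at +/- half_width standard deviations."""
    s_max = half_width * sigma
    ds = 2 * s_max / n_steps
    total = 0.0
    for i in range(n_steps):
        s = -s_max + (i + 0.5) * ds
        total += peak * math.exp(-s**2 / (2 * sigma**2)) * ds
    return total

peak, sigma = 0.02, 1.5          # illustrative plume parameters (assumed)
numeric = gaussian_column_density(peak, sigma)
closed_form = peak * sigma * math.sqrt(2 * math.pi)
```

The agreement between the quadrature and the closed form is what lets a retrieval parameterize each pixel's path by just a peak and a width rather than a full profile.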
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
A kiloparsec-scale hyper-starburst in a quasar host less than 1 gigayear after the Big Bang.
Walter, Fabian; Riechers, Dominik; Cox, Pierre; Neri, Roberto; Carilli, Chris; Bertoldi, Frank; Weiss, Axel; Maiolino, Roberto
2009-02-05
The host galaxy of the quasar SDSS J114816.64+525150.3 (at redshift z = 6.42, when the Universe was less than a billion years old) has an infrared luminosity of 2.2 × 10¹³ times that of the Sun, presumably significantly powered by a massive burst of star formation. In local examples of extremely luminous galaxies, such as Arp 220, the burst of star formation is concentrated in a relatively small central region of <100 pc radius. It is not known on which scales stars are forming in active galaxies in the early Universe, at a time when they are probably undergoing their initial burst of star formation. We do know that at some early time, structures comparable to the spheroidal bulge of the Milky Way must have formed. Here we report a spatially resolved image of [C ii] emission of the host galaxy of J114816.64+525150.3 that demonstrates that its star-forming gas is distributed over a radius of about 750 pc around the centre. The surface density of the star formation rate averaged over this region is approximately 1,000 solar masses per year per square kiloparsec. This surface density is comparable to the peak in Arp 220, although about two orders of magnitude larger in area. This vigorous star-forming event is likely to give rise to a massive spheroidal component in this system.
ELECTRON MICROSCOPE STUDY OF MYCOBACTERIUM LEPRAE AND ITS ENVIRONMENT IN A VESICULAR LEPROUS LESION
Imaeda, Tamotsu; Convit, Jacinto
1962-01-01
Imaeda, Tamotsu (Instituto Venezolano de Investigaciones Cientificas, Caracas, Venezuela) and Jacinto Convit. Electron microscope study of Mycobacterium leprae and its environment in a vesicular leprous lesion. J. Bacteriol. 83:43–52. 1962.—Biopsied specimens of a borderline leprosy lesion were observed with the electron microscope. In this lesion, the majority of Mycobacterium leprae were laden with cytoplasmic components. The bacilli were separated from the cytoplasm of host cells by an enclosing membrane, thus differing from the environment of well-developed lepra cells in lepromatous lesions. The cell wall is composed of a moderately dense layer. A diffuse layer is discernible outside the cell wall, separated from it by a low density space. It is suggested that the cell wall is further coated by a low density layer, although the nature of the outermost diffuse layer has not yet been determined. The plasma membrane consists of a double layer, i.e., dense inner and outer layers separated by a low density space. The outer layer is closely adjacent to the cell wall. In the region where the outer layer of the plasma membrane enters the cytoplasm and is transformed into a complex membranous structure, the inner layer encloses this membranous configuration. Together they form the intracytoplasmic membrane system. In the bacterial cytoplasm, moderately dense, presumably polyphosphate bodies are apparent. As neither these bodies nor the intracytoplasmic membrane system are visible in the degenerating bacilli, it seems probable that these two components represent indicators of the state of bacillary activity. PMID:16561926
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method has been proposed, which solves pdf transport based on Euler fields and eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.
Geological Mapping of Pluto and Charon Using New Horizons Data
NASA Astrophysics Data System (ADS)
Moore, J. M.; Spencer, J. R.; McKinnon, W. B.; Howard, A. D.; White, O. M.; Umurhan, O. M.; Schenk, P. M.; Beyer, R. A.; Singer, K.; Stern, S. A.; Weaver, H. A.; Young, L. A.; Ennico Smith, K.; Olkin, C.; New Horizons Geology, Geophysics and Imaging Team
2016-06-01
Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Systematic mapping has revealed that much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many mapped valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident in the mapping of Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.
The Geology of Pluto and Charon as Revealed by New Horizons
NASA Astrophysics Data System (ADS)
Moore, Jeffrey M.; Spencer, John R.; McKinnon, William B.; Stern, S. Alan; Young, Leslie A.; Weaver, Harold A.; Olkin, Cathy B.; Ennico, Kim; New Horizons GGI Team
2016-04-01
NASA's New Horizons spacecraft has revealed that Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident on Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.
The Geology of Pluto and Charon as Revealed by New Horizons
NASA Technical Reports Server (NTRS)
Moore, Jeffrey M.; Spencer, John R.; McKinnon, William B.; Stern, S. Alan; Young, Leslie A.; Weaver, Harold A.; Olkin, Cathy B.; Ennico, Kim
2016-01-01
NASA's New Horizons spacecraft has revealed that Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident on Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
Affective and cognitive factors influencing sensitivity to probabilistic information.
Tyszka, Tadeusz; Sawicki, Przemyslaw
2011-11-01
In study 1 different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (within-subject factor). After the presentation of the probability level, participants were requested to indicate the acceptable level of pain they would tolerate to avoid the disease (in their unborn child), their subjective evaluation of the disease risk, and their subjective evaluation of being worried by this risk. The results of study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that for the emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing: the rational deliberative versus the affective experiential and the principle of stimulus-response compatibility. © 2011 Society for Risk Analysis.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
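The substitution strategy described above can be sketched concretely. The L1-median (geometric median) is classically computed by the Weiszfeld fixed-point iteration; the sketch below is an assumed minimal implementation (iteration counts, tolerances, and the toy data with one gross outlier are my choices, not the paper's), showing that the L1-median resists the outlier while the sample mean is dragged away.

```python
import numpy as np

def l1_median(points, n_iter=200, eps=1e-12):
    """Weiszfeld fixed-point iteration for the L1-median of points (rows)."""
    y = points.mean(axis=0)                     # start from the sample mean
    for _ in range(n_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)                  # guard against division by zero
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < 1e-10:
            break
        y = y_new
    return y

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(100, 2))
data[0] = [1000.0, 1000.0]                      # one gross outlier

robust = l1_median(data)       # stays near the bulk of the data
naive = data.mean(axis=0)      # pulled far toward the outlier
```

Because each point's influence is weighted by the reciprocal of its distance, a remote outlier contributes almost nothing to the fixed point, which is exactly the robustness property the estimator is built on.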
Prehn, Richmond T.
2010-01-01
All nascent neoplasms probably elicit at least a weak immune reaction. However, the initial effect of the weak immune reaction on a nascent tumor is always stimulatory rather than inhibitory to tumor growth, assuming only that exposure to the tumor antigens did not antedate the initiation of the neoplasm (as may occur in some virally induced tumors). This conclusion derives from the observation that the relationship between the magnitude of an adaptive immune reaction and tumor growth is not linear but varies such that while large quantities of antitumor immune reactants tend to inhibit tumor growth, smaller quantities of the same reactants are, for unknown reasons, stimulatory. Any immune reaction must presumably be small before it can become large; hence the initial reaction to the first presentation of a tumor antigen must always be small and in the stimulatory portion of this nonlinear relationship. In mouse-skin carcinogenesis experiments it was found that premalignant papillomas were variously immunogenic, but that the carcinomas that arose in them were, presumably because of induced immune tolerance, nonimmunogenic in the animal of origin. PMID:20811480
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
ELECTRON MICROSCOPIC OBSERVATIONS OF AMOEBA PROTEUS IN GROWTH AND INANITION
Cohen, Adolph I.
1957-01-01
Electron microscopic observations have been made on growing and dividing specimens of Amoeba proteus and also on starving animals. Structures presumably corresponding to the mitochondria, alpha particles, vacuoles, and Golgi material are described. A new entity, designated as a foamy particle, is noted. Descriptions are given of the cytoplasmic and nuclear membranes. During division the inner, thick nuclear membrane component is seen to vanish while the outer membrane persists. Measurements suggest a gradual reappearance of the inner component with growth. Starving animals show a loss of cytoplasmic granularity and an increase in the electron density of mitochondria, presumably due to lipide accumulation. PMID:13481020
Electron microscopic observations of amoeba proteus in growth and inanition.
COHEN, A I
1957-11-25
Electron microscopic observations have been made on growing and dividing specimens of Amoeba proteus and also on starving animals. Structures presumably corresponding to the mitochondria, alpha particles, vacuoles, and Golgi material are described. A new entity, designated as a foamy particle, is noted. Descriptions are given of the cytoplasmic and nuclear membranes. During division the inner, thick nuclear membrane component is seen to vanish while the outer membrane persists. Measurements suggest a gradual reappearance of the inner component with growth. Starving animals show a loss of cytoplasmic granularity and an increase in the electron density of mitochondria, presumably due to lipide accumulation.
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density that exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent, in contrast to many of the probability density functions on which the received theory of finance is based.
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose: Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method: Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results: The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions: As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
NASA Astrophysics Data System (ADS)
Barth, H.
A hypothesis is presented concerning the crucial influence of tides on the evolutionary transition from aquatic to land animal forms. The hypothesis suggests that the evolution of higher forms of life on a planet also depends on the existence of a planet-moon system in which the mass ratio of the two constituents is approximately equal to that of the earth-moon system, which is 81:1. The hypothesis is taken into account in the form of the probability factor fb in Drake's formula for estimating the number of presumed extraterrestrial civilizations in the Milky Way which may conceivably make contact.
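Drake's formula is a plain product of factors, so the proposed tidal factor fb slots in as one more multiplicand. A minimal sketch follows; all numerical inputs are illustrative assumptions, not estimates from the text.

```python
def drake_estimate(r_star, f_p, n_e, f_l, f_i, f_c, lifetime, f_b=1.0):
    """Number of communicating civilizations: the classical Drake product,
    extended by an extra factor f_b for the probability of a suitable
    planet-moon system (the hypothesis discussed above)."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime * f_b

# Illustrative inputs (assumed values, for demonstration only):
# 1 star/yr, half with planets, 2 habitable planets each, and so on.
without_moon_factor = drake_estimate(1.0, 0.5, 2.0, 0.3, 0.1, 0.1, 10000.0)
with_moon_factor = drake_estimate(1.0, 0.5, 2.0, 0.3, 0.1, 0.1, 10000.0,
                                  f_b=0.1)
```

Because the formula is multiplicative, any fb below 1 scales the final estimate down by exactly that factor, which is how the hypothesis would reduce the expected number of contactable civilizations.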
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζb(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζb(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
OZARKS ISOPRENE EXPERIMENT (OZIE): MEASUREMENTS AND MODELING OF THE ISOPRENE VOLCANO
The Ozarks Isoprene Experiment (OZIE) was conducted in July 1998 in Missouri, Illinois, Indiana, and Oklahoma. OZIE was designed to investigate the presumed strong isoprene emission rates from the Missouri Ozarks, where there is a high density of oak trees that are efficient isop...
OZARK ISOPRENE EXPERIMENT ( OZIE ): MEASUREMENTS AND MODELING OF THE "ISOPRENE VOLCANO"
The Ozarks Isoprene Experiment (OZIE) was conducted in July 1998 in Missouri, Illinois, Indiana, and Oklahoma. OZIE was designed to investigate the presumed strong isoprene emission rates from the Missouri Ozarks, where there is a high density of oak trees that are efficient isop...
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
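The central mechanism, that a correct trial CDF maps sorted samples onto roughly evenly spaced uniform order statistics, can be sketched as follows. This is a simplified stand-in for the paper's scoring function: the mean squared deviation from the expected order statistics i/(n+1) is my assumption, not the authors' statistic, and the Gaussian test data are illustrative.

```python
import math
import random

def order_statistic_score(samples, trial_cdf):
    """Mean squared deviation of trial_cdf applied to sorted samples from the
    expected uniform order statistics i/(n+1); small scores indicate that the
    trial CDF is close to the true one."""
    u = sorted(trial_cdf(x) for x in samples)
    n = len(u)
    return sum((u[i] - (i + 1) / (n + 1)) ** 2 for i in range(n)) / n

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

normal_cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))          # true
shifted_cdf = lambda x: 0.5 * (1 + math.erf((x - 2.0) / math.sqrt(2)))  # wrong

good = order_statistic_score(data, normal_cdf)   # small
bad = order_statistic_score(data, shifted_cdf)   # much larger
```

Iteratively adjusting the trial CDF to drive such a score down is the basic loop the abstract describes; the paper's contribution is a scoring function calibrated to be invariant to sample size.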
High throughput nonparametric probability density estimation
Farmer, Jenny
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
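The point that a uniform density does not stay uniform under reparameterization follows from the standard change-of-variables theorem: if Y = g(X) with inverse x = g⁻¹(y), then p_Y(y) = p_X(g⁻¹(y))·|dg⁻¹/dy|. A sketch under assumed toy quantities: X uniform on (0, 1) and Y = X², giving p_Y(y) = 1/(2√y), which is far from uniform.

```python
import random

def transformed_density(y):
    """Analytic density of Y = X**2 when X is uniform on (0, 1):
    p_Y(y) = p_X(sqrt(y)) * |d sqrt(y)/dy| = 1 / (2 * sqrt(y))."""
    return 1.0 / (2.0 * y ** 0.5)

# Monte Carlo check: the fraction of Y samples below 0.25 should match the
# integral of p_Y from 0 to 0.25, which is sqrt(0.25) = 0.5.
random.seed(0)
samples = [random.random() ** 2 for _ in range(200000)]
empirical = sum(1 for y in samples if y < 0.25) / len(samples)
analytic = 0.25 ** 0.5   # CDF of Y evaluated at 0.25
```

The density piles up near y = 0, illustrating why a "uniform" admissible region is uniform only in the particular state space in which it was written down, and why an uninformative prior must be constructed to survive such reparameterizations.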
Residual Defect Density in Random Disks Deposits.
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A C
2015-08-03
We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10⁹ particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed.
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
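Project (3) concerns Specht-type estimators, which superpose Gaussian kernels on the observed data points; a minimal sketch (the sample values and bandwidth are invented, not taken from the report):

```python
import math

# Gaussian-kernel ("Specht"/Parzen-type) density estimate: the estimated PDF
# is the average of Gaussian kernels centered on the observations.

def specht_estimate(x, samples, sigma=0.5):
    norm = 1.0 / (len(samples) * sigma * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / sigma) ** 2) for s in samples)

samples = [0.1, 0.4, 0.5, 0.9, 1.2]
# the estimate is a proper density: it integrates to ~1 over a wide grid
h = 0.01
mass = sum(specht_estimate(-5.0 + (i + 0.5) * h, samples) * h
           for i in range(1200))
print(round(mass, 3))  # → 1.0
```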
Current source density correlates of cerebellar Golgi and Purkinje cell responses to tactile input
Tahon, Koen; Wijnants, Mike; De Schutter, Erik
2011-01-01
The overall circuitry of the cerebellar cortex has been known for over a century, but the function of many synaptic connections remains poorly characterized in vivo. We used a one-dimensional multielectrode probe to estimate the current source density (CSD) of Crus IIa in response to perioral tactile stimuli in anesthetized rats and to correlate current sinks and sources to changes in the spike rate of co-recorded Golgi and Purkinje cells. The punctate stimuli evoked two distinct early waves of excitation (at <10 and ∼20 ms) associated with current sinks in the granular layer. The second wave was putatively of corticopontine origin, and its associated sink was located higher in the granular layer than the first trigeminal sink. The distinctive patterns of granular-layer sinks correlated with the spike responses of co-recorded Golgi cells. In general, Golgi cell spike responses could be linearly reconstructed from the CSD profile. A dip in simple-spike activity of co-registered Purkinje cells correlated with a current source deep in the molecular layer, probably generated by basket cell synapses, interspersed between sparse early sinks presumably generated by synapses from granule cells. The late (>30 ms) enhancement of simple-spike activity in Purkinje cells was characterized by the absence of simultaneous sinks in the granular layer and by the suppression of co-recorded Golgi cell activity, pointing at inhibition of Golgi cells by Purkinje axon collaterals as a likely mechanism of late Purkinje cell excitation. PMID:21228303
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
… and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum … particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output … an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for …
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km²) and low (≤1 marten/10.2 km²) densities within eight 10.2-km² quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
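The reported per-survey detection probabilities translate directly into the chance of confirming presence over repeated visits; a small sketch using the abstract's estimates (the survey counts are illustrative):

```python
# With per-survey detection probability p, the chance of at least one
# detection in k independent surveys is 1 - (1-p)^k.

def prob_detected_at_least_once(p, k):
    return 1.0 - (1.0 - p) ** k

p_low, p_high = 0.333, 0.952  # low- and high-density quadrats (abstract)
for k in (1, 2, 3, 5):
    print(k, round(prob_detected_at_least_once(p_low, k), 3))

# a single survey suffices at high density, but not at low density
assert prob_detected_at_least_once(p_high, 1) > 0.95
assert prob_detected_at_least_once(p_low, 1) < 0.35
```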
Fast ion beta limit measurements by collimated neutron detection in MST plasmas
NASA Astrophysics Data System (ADS)
Capecchi, William; Anderson, Jay; Bonofiglo, Phillip; Kim, Jungha; Sears, Stephanie
2015-11-01
Fast ion orbits in the reversed field pinch (RFP) are well ordered and classically confined despite magnetic field stochasticity generated by multiple tearing modes. Classical TRANSP modeling of a 1 MW tangentially injected hydrogen neutral beam in MST deuterium plasmas predicts a core-localized fast ion density that can be up to 25% of the electron density and a fast ion beta of many times the local thermal beta. However, neutral particle analysis of an NBI-driven mode (presumably driven by a fast ion pressure gradient) shows mode-induced transport of core-localized fast ions and a saturated fast ion density. The TRANSP modeling is presumed valid until the onset of the beam-driven mode and gives an initial estimate of the volume-averaged fast ion beta of 1-2% (local core value up to 10%). A collimated neutron detector for fusion product profile measurements will be used to determine the spatial distribution of fast ions, allowing for a first measurement of the critical fast-ion pressure gradient required for mode destabilization. Testing/calibration data and initial fast-ion profiles will be presented. Characterization of both the local and global fast ion beta will be done for deuterium beam injection into deuterium plasmas for comparison to TRANSP predictions. Work supported by US DOE.
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
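The multimodel step above assigns each candidate density a probability of being the best model in the Kullback-Leibler sense; one common concrete form is Akaike-type weights. A hedged sketch (the candidate models and criterion values are invented, and the paper's full importance-sampling machinery is not reproduced):

```python
import math

# Akaike weights: each candidate model gets weight exp(-delta_i / 2),
# normalized, where delta_i is its information-criterion score minus the
# best (lowest) score among the candidates.

def model_weights(scores):
    best = min(scores)
    raw = [math.exp(-(s - best) / 2.0) for s in scores]
    total = sum(raw)
    return [r / total for r in raw]

# invented AIC values for three plausible candidate densities
aics = {"lognormal": 210.3, "gamma": 211.1, "weibull": 215.8}
w = model_weights(list(aics.values()))
print({name: round(wi, 3) for name, wi in zip(aics, w)})

# the weights form a probability distribution over the candidate models,
# which would then define the mixture used as the sampling density
assert abs(sum(w) - 1.0) < 1e-12
```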
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
Algur, Nurit; Avraham, Irit; Hammerman, Cathy; Kaplan, Michael
2012-08-01
To determine enzyme assay reference values for newborns in a Sephardic Jewish population at high risk for glucose-6-phosphate dehydrogenase (G6PD) deficiency. Quantitative G6PD testing was performed on umbilical cord blood. The reduction of nicotinamide adenine dinucleotide phosphate to nicotinamide adenine dinucleotide phosphate-oxidase, reflecting G6PD activity, was measured spectrophotometrically. Hemoglobin (Hb) was measured on the same sample. G6PD activity was recorded as U/g Hb. Males (N = 1502) were separated into 2 distinct groups: those <7 U/g Hb (n = 243 [16.2%], median 0.28 U/g Hb), designated G6PD deficient, presumably hemizygotes; and those ≥9 U/g Hb (n = 1256 [83.8%], 18.76 U/g Hb), designated G6PD normal, presumably hemizygotes. Female (n = 1298) values were a continuum and were categorized based on the male distribution: those <7 U/g Hb (n = 81 [6.2%], 4.84 U/g Hb), G6PD deficient, probably homozygotes; those ≥9.5 U/g Hb, equivalent to 50% of the male normal value (n = 1153 [88.8%], 18.36 U/g Hb), G6PD normal, probably homozygotes; and those with intermediate values (n = 64 [4.9%], 8.61 U/g Hb), probable heterozygotes. Accurate identification of the male G6PD-deficient state was possible despite high normal neonatal G6PD values. Female values were presented as a continuum preventing accurate classification but were classified based on male phenotype for practical use. Copyright © 2012 Mosby, Inc. All rights reserved.
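The classification scheme described above reduces to a few threshold comparisons; a minimal sketch (the function and its return labels are ours; the cutoffs in U/g Hb are those reported in the abstract):

```python
# Cutoff-based G6PD classification from cord-blood enzyme activity (U/g Hb).
# Males: <7 deficient, >=9 normal (the two reported groups were distinct).
# Females: <7 deficient, >=9.5 normal, intermediate otherwise.

def classify_g6pd(activity, sex):
    if sex == "male":
        if activity < 7.0:
            return "deficient (presumed hemizygote)"
        if activity >= 9.0:
            return "normal"
        return "indeterminate"
    # females: thresholds derived from the male distribution
    if activity < 7.0:
        return "deficient (probable homozygote)"
    if activity >= 9.5:
        return "normal"
    return "intermediate (probable heterozygote)"

print(classify_g6pd(0.28, "male"))    # → deficient (presumed hemizygote)
print(classify_g6pd(8.61, "female"))  # → intermediate (probable heterozygote)
```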
Adverse reactions of α2-adrenoceptor agonists in cats reported in 2003-2013 in Finland.
Raekallio, Marja R; Virtanen, Marika; Happonen, Irmeli; Vainio, Outi M
2017-07-01
To describe suspected adverse drug reactions in cats associated with use of α2-adrenoceptor agonists. Retrospective study. A total of 90 cats. Data were collected from reports on adverse reactions to veterinary medicines sent to the Finnish Medicines Agency during 2003-2013. All reports of suspected adverse reactions associated with use of α2-adrenoceptor agonists in cats were included. Probable pulmonary oedema was diagnosed based on post mortem or radiological examination, or presence of frothy or excess fluid from the nostrils or trachea. If only dyspnoea and crackles on auscultation were reported, possible pulmonary oedema was presumed. Pulmonary oedema was suspected in 61 cases. Of these cats, 37 were categorised as probable and 24 as possible pulmonary oedema. The first clinical signs had been noted between 1 minute and 2 days (median, 15 minutes) after α2-adrenoceptor agonist administration. Many cats probably had no intravenous overhydration when the first clinical signs were detected, as either they presumably had no intravenous cannula or the signs appeared before, during or immediately after cannulation. Of the 61 cats, 43 survived, 14 died and for four the outcome was not clearly stated. Pulmonary oedema is a perilous condition that may appear within minutes of an intramuscular administration of a sedative or anaesthetic agent in cats. The symptoms were not caused by intravenous overhydration, at least in cats having no venous cannula when the first clinical signs were detected. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
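The quoted Gaussian form can be checked numerically: in three dimensions, P(F) ∝ exp(−AF²) for the force vector implies a Maxwell-like distribution for the force magnitude, W(F) = 4πF²(A/π)^{3/2}exp(−AF²). A short verification sketch (the value of A is arbitrary, not one from the paper):

```python
import math

# Magnitude distribution implied by an isotropic Gaussian vector density
# P(F) ∝ exp(-A F^2) in 3-D: W(F) = 4*pi*F^2 * (A/pi)^1.5 * exp(-A F^2).

A = 2.0  # illustrative constant; in the paper A depends on density, T, ...

def W(f):
    return 4.0 * math.pi * f * f * (A / math.pi) ** 1.5 * math.exp(-A * f * f)

# W(F) is normalized: integrate numerically out to F = 10 (tail negligible)
h = 1e-4
mass = sum(W((i + 0.5) * h) * h for i in range(100_000))
print(round(mass, 4))  # → 1.0
```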
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional time dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results of moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable, as traditionally believed, to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. Flame conditions studied have fuel equivalence ratios in the range φ = 0.6 to 0.9 at standard temperature and pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.
We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.
NASA Astrophysics Data System (ADS)
Kim, Vitaly P.; Hegai, Valery V.; Liu, Jann Yenq; Ryu, Kwangsun; Chung, Jong-Kyun
2017-12-01
The electric coupling between the lithosphere and the ionosphere is examined. The electric field is considered as a time-varying irregular vertical Coulomb field presumably produced on the Earth's surface before an earthquake within its epicentral zone by some micro-processes in the lithosphere. It is shown that the Fourier component of this electric field with a frequency of 500 Hz and a horizontal scale-size of 100 km produces in the nighttime ionosphere of high and middle latitudes a transverse electric field with a magnitude of 20 mV/m if the peak value of the amplitude of this Fourier component is just 30 V/m. The time-varying vertical Coulomb field with a frequency of 500 Hz penetrates from the ground into the ionosphere by a factor of 7×10⁵ more efficiently than a time-independent vertical electrostatic field of the same scale size. The transverse electric field with an amplitude of 20 mV/m will cause perturbations in the nighttime F region electron density through heating of the F region plasma, resulting in a reduction of the downward plasma flux from the protonosphere and an excitation of acoustic gravity waves.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Quantifying bushfire penetration into urban areas in Australia
NASA Astrophysics Data System (ADS)
Chen, Keping; McAneney, John
2004-06-01
The extent and trajectory of bushfire penetration at the bushland-urban interface are quantified using data from major historical fires in Australia. We find that the maximum distance at which homes are destroyed is typically less than 700 m. The probability of home destruction emerges as a simple linear and decreasing function of distance from the bushland-urban boundary but with a variable slope that presumably depends upon fire regime and human intervention. The collective data suggest that the probability of home destruction at the forest edge is around 60%. Spatial patterns of destroyed homes display significant neighbourhood clustering. Our results provide revealing spatial evidence for estimating fire risk to properties and suggest an ember-attack model.
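The linear decay described above can be written as a one-line model; a sketch using the abstract's edge probability (~60%) and maximum distance (~700 m), with the caveat stated in the abstract that the true slope varies with fire regime and human intervention:

```python
# Linear, decreasing destruction-probability model: ~60% at the
# bushland-urban boundary, falling to zero by the maximum observed
# distance of about 700 m.

def p_destruction(distance_m, p_edge=0.60, max_dist_m=700.0):
    if distance_m >= max_dist_m:
        return 0.0
    return p_edge * (1.0 - distance_m / max_dist_m)

for d in (0, 350, 700):
    print(d, round(p_destruction(d), 2))  # prints "0 0.6", "350 0.3", "700 0.0"
```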
Christopher J. Fettig; Stephen R. McKelvey; Christopher P. Dabney; Dezene P.W. Huber
2012-01-01
Currently, techniques for managing western pine beetle, Dendroctonus brevicomis LeConte (Coleoptera: Curculionidae, Scolytinae), infestations are limited to tree removals (thinning) that reduce stand density and presumably host susceptibility, and/or the use of insecticides to protect individual trees. There continues to be significant interest in...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
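The construction described above can be sketched end to end: a skew-normal PDF (one generalized skew-symmetric choice) models the interface gradient, its integral gives the continuous density profile, and sampling that profile yields the thin constant-density slices that would feed Parratt's recursion. All parameters are illustrative, and Parratt's formula itself is not reproduced here.

```python
import math

# Skew-normal PDF: 2 * phi(x) * Phi(alpha * x). The skewness parameter
# alpha controls how asymmetrically the density penetrates the adjacent
# layer (alpha = 4.0 is an arbitrary illustration).

def skew_normal_pdf(x, alpha=4.0):
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

def density_profile(z, rho_top=0.0, rho_bottom=1.0):
    """Continuous profile: cumulative integral of the interface PDF."""
    h, acc, x = 0.01, 0.0, -6.0
    while x < z:
        acc += skew_normal_pdf(x + h / 2.0) * h
        x += h
    return rho_top + (rho_bottom - rho_top) * acc

# discretize into slices of constant density with sharp interfaces
slices = [density_profile(-6.0 + i * 0.5) for i in range(25)]
assert slices[0] < 0.01 and slices[-1] > 0.99  # spans the two bulk densities
assert all(b >= a for a, b in zip(slices, slices[1:]))  # monotonic profile
```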
Cohen, Bradley S.; Belser, Emily H.; Killmaster, Charlie H.; Bowers, John W.; Irwin, Brian J.; Yabsley, Michael J.; Miller, Karl V.
2015-01-01
Intracranial abscess disease is a cause of natural mortality for mature male white-tailed deer (Odocoileus virginianus). Most cases of abscesses are associated with bacterial infection by Trueperella (Arcanobacterium) pyogenes, but a complete understanding of the epidemiology of this disease is lacking. We quantified the effects of individual characteristics, site-specific herd demographics, land cover, and soil variables in estimating the probability of this disease. We examined 7,545 white-tailed deer from 60 sites throughout Georgia, US, for signs of cranial abscesses, the predecessor of intracranial abscesses, and recorded the presence or absence of cranial abscesses for each individual examined. We detected no cranial abscesses in 2,562 female deer but 91 abscesses in 4,983 male deer examined (1.8%). A generalized linear mixed model, treating site as a random effect, was used to examine several potential explanatory risk factors including site-level landscape and soil characteristics (soil and forest type), demographic factors (deer density and male to female ratio), and individual host factors (deer sex and age). Model results indicated that the probability of a male having a cranial abscess increased with age and that adult sex ratio (male:female) was positively associated with this disease. Site-specific variables for land cover and soil types were not strongly associated with observations of the disease at the scale measured and a large amount of among-site variability remained. Given the demonstrated effect of age, gender, and local sex ratios but the remaining unexplained spatial variability, additional investigation into spatiotemporal variation of the presumed bacterial causative agent of cranial abscesses appears warranted.
Probability and Quantum Paradigms: the Interplay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kracklauer, A. F.
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
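A two-class special case makes the criterion concrete: project each class density onto a direction b, then score b by the resulting one-dimensional misclassification probability. This sketch assumes equal priors and a shared covariance with a midpoint threshold (the general technique in the report handles m classes and estimated parameters):

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def projected(mu, cov, b):
    """Mean and variance of b' x when x ~ N(mu, cov)."""
    m = sum(bi * mi for bi, mi in zip(b, mu))
    v = sum(bi * sum(cov[i][j] * bj for j, bj in enumerate(b))
            for i, bi in enumerate(b))
    return m, v

def misclassification(mu1, mu2, cov, b):
    """1-D error probability with a threshold midway between projected means."""
    m1, v = projected(mu1, cov, b)
    m2, _ = projected(mu2, cov, b)
    return normal_cdf(-abs(m2 - m1) / (2.0 * math.sqrt(v)))

mu1, mu2 = [0.0, 0.0], [2.0, 1.0]
cov = [[1.0, 0.0], [0.0, 1.0]]
# projecting along the mean difference beats an off-axis direction
err_good = misclassification(mu1, mu2, cov, [2.0, 1.0])
err_bad = misclassification(mu1, mu2, cov, [0.0, 1.0])
assert err_good < err_bad
print(round(err_good, 3), round(err_bad, 3))
```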
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
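A toy discrete version of the contrast drawn above (all numbers invented): classic MaxEnt with an exact mean constraint yields point probabilities p_i ∝ exp(λx_i); letting the constraint value be Gaussian-uncertain spreads those point probabilities into a distribution over MaxEnt solutions.

```python
import math
import random

XS = [1, 2, 3, 4, 5, 6]  # finite outcome space

def maxent(mean):
    """Classic MaxEnt on XS subject to E[x] = mean: p_i ∝ exp(lam * x_i)."""
    def avg(lam):
        w = [math.exp(lam * x) for x in XS]
        return sum(x * wi for x, wi in zip(XS, w)) / sum(w)
    lo, hi = -10.0, 10.0
    for _ in range(200):               # bisection on the Lagrange multiplier
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if avg(mid) < mean else (lo, mid)
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * x) for x in XS]
    s = sum(w)
    return [wi / s for wi in w]

# classic MaxEnt: exact constraint E[x] = 4.5 gives one point distribution
p = maxent(4.5)
assert abs(sum(x * pi for x, pi in zip(XS, p)) - 4.5) < 1e-6

# uncertain constraint E[x] ~ N(4.5, 0.2): the point probability p(6)
# becomes a spread of values, i.e. a density over MaxEnt solutions
random.seed(0)
p6 = [maxent(random.gauss(4.5, 0.2))[5] for _ in range(200)]
print(round(min(p6), 2), round(max(p6), 2))
```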
Ruddy, M.; McHugh, T. D.; Dale, J. W.; Banerjee, D.; Maguire, H.; Wilson, P.; Drobniewski, F.; Butcher, P.; Gillespie, S. H.
2002-01-01
Isolates from patients with confirmed tuberculosis from London were collected over 2.5 years between 1995 and 1997. Restriction fragment length polymorphism (RFLP) analysis was performed by the international standard technique as part of a multicenter epidemiological study. A total of 2,779 samples representing 2,500 individual patients from 56 laboratories were examined. Analysis of these samples revealed a laboratory cross-contamination rate of between 0.54%, when only presumed cases of cross-contamination were considered, and 0.93%, when presumed and possible cases were counted. Previous studies suggest an extremely wide range of laboratory cross-contamination rates of between 0.1 and 65%. These data indicate that laboratory cross-contamination has not been a common problem in routine practice in the London area, but in several incidents patients did receive full courses of therapy that were probably unnecessary. PMID:12409381
Meningitis caused by Oerskovia xanthineolytica.
Kailath, E J; Goldstein, E; Wagner, F H
1988-03-01
In summary, we describe a case of central nervous system infection with O. xanthineolytica in which the infecting microbe probably was engrafted on a ventricular shunt. The bacteria caused a smoldering meningitis that did not respond to penicillin and rifampin despite in vitro sensitivity, presumably because of inadequate cerebrospinal fluid penetration of the penicillin and the recognized difficulty of eradicating bacteria from contaminated shunts. Removal of the shunt and continued treatment with penicillin and rifampin resulted in cure.
A constraint on impact theories of chondrule formation
NASA Technical Reports Server (NTRS)
Kerridge, J. F.; Kieffer, S. W.
1977-01-01
The association between agglutinates and chondrule-like spherules, which characterizes the assemblage of impact-derived melt products in lunar regolith samples and some gas-rich achondrites, is not found in primitive chondrites. This observation suggests that impacts into a parent-body regolith are unlikely to have produced the chondrules. We believe that if chondrules were formed from impact melt, it was probably generated by jetting during particle-to-particle collisions, presumably in the nebula.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabyrov, Kairat; Musselwhite, Nathan; Melaet, Gérôme
As the impact of acids on catalytically driven chemical transformations is tremendous, fundamental understanding of catalytically relevant factors is essential for the design of more efficient solid acid catalysts. In this work, we employed a post-synthetic doping method to synthesize a highly selective hydroisomerization catalyst and to demonstrate the effect of acid strength and density, catalyst microstructure, and platinum nanoparticle size on the reaction rate and selectivity. Aluminum doped mesoporous silica catalyzed gas-phase n-hexadecane isomerization with remarkably high selectivity to monobranched isomers (~95%), producing a substantially higher amount of isomers than traditional zeolite catalysts. Mildly acidic sites generated by post-synthetic aluminum grafting were found to be the main reason for its high selectivity. The flexibility of the post-synthetic doping method enabled us to systematically explore the effect of the acid site density on the reaction rate and selectivity, which has been extremely difficult to achieve with zeolite catalysts. We found that a higher density of Brønsted acid sites leads to higher cracking of n-hexadecane presumably due to an increased surface residence time. Furthermore, regardless of pore size and microstructure, hydroisomerization turnover frequency linearly increased as a function of Brønsted acid site density. In addition to strength and density of acid sites, platinum nanoparticle size affected catalytic activity and selectivity. The smallest platinum nanoparticles produced the most effective bifunctional catalyst presumably because of higher percolation into aluminum doped mesoporous silica, generating more 'intimate' metallic and acidic sites. Finally, the aluminum doped silica catalyst was shown to retain its remarkable selectivity towards isomers even at increased reaction conversions.
Sabyrov, Kairat; Musselwhite, Nathan; Melaet, Gérôme; ...
2017-01-01
As the impact of acids on catalytically driven chemical transformations is tremendous, fundamental understanding of catalytically relevant factors is essential for the design of more efficient solid acid catalysts. In this work, we employed a post-synthetic doping method to synthesize a highly selective hydroisomerization catalyst and to demonstrate the effect of acid strength and density, catalyst microstructure, and platinum nanoparticle size on the reaction rate and selectivity. Aluminum doped mesoporous silica catalyzed gas-phase n-hexadecane isomerization with remarkably high selectivity to monobranched isomers (~95%), producing a substantially higher amount of isomers than traditional zeolite catalysts. Mildly acidic sites generated by post-synthetic aluminum grafting were found to be the main reason for its high selectivity. The flexibility of the post-synthetic doping method enabled us to systematically explore the effect of the acid site density on the reaction rate and selectivity, which has been extremely difficult to achieve with zeolite catalysts. We found that a higher density of Brønsted acid sites leads to higher cracking of n-hexadecane presumably due to an increased surface residence time. Furthermore, regardless of pore size and microstructure, hydroisomerization turnover frequency linearly increased as a function of Brønsted acid site density. In addition to strength and density of acid sites, platinum nanoparticle size affected catalytic activity and selectivity. The smallest platinum nanoparticles produced the most effective bifunctional catalyst presumably because of higher percolation into aluminum doped mesoporous silica, generating more 'intimate' metallic and acidic sites. Finally, the aluminum doped silica catalyst was shown to retain its remarkable selectivity towards isomers even at increased reaction conversions.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
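The expansion idea can be sketched numerically: project an axisymmetric density over x = cos(theta) onto Legendre polynomials and read off the mass of the reversed hemisphere as the switching probability. The density below is a made-up example, not the paper's Fokker-Planck solution.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Hypothetical axisymmetric free-layer density rho(x), x = cos(theta) in [-1, 1],
# concentrated near the initial easy-axis direction x = +0.6.
rho = lambda x: np.exp(-4.0 * (x - 0.6) ** 2)

x, w = leg.leggauss(64)          # Gauss-Legendre quadrature nodes/weights on [-1, 1]
N = 16                           # truncation order of the Legendre expansion
# Projection coefficients: c_n = (2n + 1)/2 * \int rho(x) P_n(x) dx
coef = np.array([(2 * n + 1) / 2.0 * np.sum(w * rho(x) * leg.legval(x, np.eye(N)[n]))
                 for n in range(N)])
coef /= np.sum(w * leg.legval(x, coef))   # normalize so the density integrates to 1

# Switching probability: probability mass of the reversed hemisphere x < 0,
# i.e. the integral of the expanded density over [-1, 0].
xs, ws = 0.5 * (x - 1.0), 0.5 * w         # quadrature mapped from [-1, 1] to [-1, 0]
p_switch = np.sum(ws * leg.legval(xs, coef))
```

With the density peaked well inside the un-switched hemisphere, `p_switch` comes out small, as expected for a metastable initial state.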
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
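The training-then-surveillance loop can be sketched as follows; the Gaussian density fit and Wald-style sequential probability ratio test below are generic stand-ins for illustration, not the patented procedure itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training phase: fit a Gaussian density to residuals observed during normal
# asset operation (a simple stand-in for the numerically fitted density).
train = rng.normal(0.0, 1.0, 5000)
mu0, sigma = train.mean(), train.std()

# Surveillance phase: sequential probability ratio test for a hypothesized
# mean shift of the residual distribution (degraded-mode parameters assumed).
mu1 = mu0 + 2.0 * sigma                  # assumed degraded-mode mean
A, B = np.log(99.0), np.log(1.0 / 99.0)  # decision thresholds, ~1% error rates

def sprt(stream):
    llr = 0.0
    for i, r in enumerate(stream, 1):
        # log-likelihood-ratio increment, H1 (shifted) vs H0 (normal)
        llr += ((r - mu0) ** 2 - (r - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= A:
            return "alarm", i
        if llr <= B:
            return "normal", i
    return "undecided", len(stream)

decision, n_obs = sprt(rng.normal(2.0, 1.0, 200))   # stream with a fault-like shift
```

Because evidence accumulates sequentially, a genuine shift typically triggers the alarm within a handful of observations.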
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.
IRT-LR-DIF with Estimation of the Focal-Group Density as an Empirical Histogram
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Item response theory-likelihood ratio-differential item functioning (IRT-LR-DIF) is used to evaluate the degree to which items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Usually, the latent distribution is presumed normal for both…
Christopher J. Fettig; Stephen R. McKelvey; Christopher P. Dabney; Dezene P.W. Huber
2012-01-01
Currently, techniques for managing western pine beetle, Dendroctonus brevicomis LeConte (Coleoptera: Curculionidae, Scolytinae), infestations are limited to tree removals (thinning) that reduce stand density and presumably host susceptibility, and/or the use of insecticides to protect individual trees. There continues to be significant interest in...
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
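The regression step can be illustrated with synthetic data: directionally averaged capture probabilities that decline exponentially with distance and level off at an equilibrium value, whose estimate is the intercept of a line fitted to the flat tail. All numbers below are invented for the sketch, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic directionally averaged capture probability P(x) over distance x
# from the release point: exponential decline onto an equilibrium P_e = 0.02,
# plus small measurement noise (all values illustrative).
x = np.arange(0.0, 50.0, 1.0)
P_e_true = 0.02
P = P_e_true + 0.2 * np.exp(-x / 5.0) + rng.normal(0.0, 0.001, x.size)

# Equilibrium phase: fit a line to the flat tail of the curve; a near-zero
# slope confirms equilibrium and the intercept estimates P_e.
equilibrium = x >= 30.0
slope, intercept = np.polyfit(x[equilibrium], P[equilibrium], 1)
```

The intercept recovers the equilibrium capture probability that the protocol then feeds into the population-size estimate.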
Geochemistry of thermal water from selected wells, Boise, Idaho
Mariner, R.H.; Young, H.W.; Parliman, D.J.; Evans, William C.
1989-01-01
Samples of thermal water from selected wells in the Boise area were analyzed for chemical composition; stable isotopes of hydrogen, oxygen, and dissolved carbon; radioactive carbon; and dissolved-gas concentrations. Chemically, the waters are virtually identical to those of the adjacent Idaho batholith. Isotopically, the thermal waters are more depleted in deuterium and oxygen-18 than coldwater springs in the presumed recharge area. Chemical and isotopic data indicate the presence of two separate geothermal systems. Radioactive carbon and dissolved helium concentrations are interpreted to indicate recharge during the Pleistocene. Hot water in or southeast of Boise probably recharged 20,000 to 30,000 years ago, and warm water 2.5 miles northwest of Boise probably recharged at least 15,000 years ago.
SPENDING TOO MUCH TIME AT THE GALACTIC BAR: CHAOTIC FANNING OF THE OPHIUCHUS STREAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price-Whelan, Adrian M.; Johnston, Kathryn V.; Sesar, Branimir
2016-06-20
The Ophiuchus stellar stream is peculiar: (1) its length is short given the age of its constituent stars, and (2) several probable member stars have dispersions in sky position and velocity that far exceed those seen within the stream. The stream’s proximity to the Galactic center suggests that its dynamical history is significantly influenced by the Galactic bar. We explore this hypothesis with models of stream formation along orbits consistent with Ophiuchus’ properties in a Milky Way potential model that includes a rotating bar. In all choices for the rotation parameters of the bar, orbits fit to the stream are strongly chaotic. Mock streams generated along these orbits qualitatively match the observed properties of the stream: because of chaos, stars stripped early generally form low-density, high-dispersion “fans” leaving only the most recently disrupted material detectable as a strong over-density. Our models predict that there should be a significant amount of low-surface-brightness tidal debris around the stream with a complex phase-space morphology. The existence of or lack of these features could provide interesting constraints on the Milky Way bar and would rule out formation scenarios for the stream. This is the first time that chaos has been used to explain the properties of a stellar stream and is the first demonstration of the dynamical importance of chaos in the Galactic halo. The existence of long, thin streams around the Milky Way, presumably formed along non- or weakly chaotic orbits, may represent only a subset of the total population of disrupted satellites.
NASA Astrophysics Data System (ADS)
Trekels, Hendrik; Driesen, Mario; Vanschoenwinkel, Bram
2017-11-01
Globally, moss associated invertebrates remain poorly studied and it is largely unknown to what extent their diversity is driven by local environmental conditions or the landscape context. Here, we investigated small scale drivers of invertebrate communities in a moss landscape in a temperate forest in Western Europe. By comparing replicate quadrats of 5 different moss species in a continuous moss landscape, we found that mosses differed in invertebrate density and community composition. Although, in general, richness was similar among moss species, some invertebrate taxa were significantly linked to certain moss species. Only moss biomass and not relative moisture content could explain differences in invertebrate densities among moss species. Second, we focused on invertebrate communities associated with the locally common moss species Kindbergia praelonga in isolated moss patches on dead tree trunks to look at effects of patch size, quality, heterogeneity and connectivity on invertebrate communities. Invertebrate richness was higher in patches under closed canopies than under more open canopies, presumably due to the higher input of leaf litter and/or lower evaporation. In addition, increased numbers of other moss species in the same patch seemed to promote invertebrate richness in K. praelonga, possibly due to mass effects. Since invertebrate richness was unaffected by patch size and isolation, dispersal was probably not limiting in this system with patches separated by tens of meters, or stochastic extinctions may be uncommon. Overall, we conclude that invertebrate composition in moss patches may not only depend on local patch conditions, in a particular moss species, but also on the presence of other moss species in the direct vicinity.
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Diamond burr debridement of 34 canine corneas with presumed corneal calcareous degeneration.
Nevile, Jessica C; Hurn, Simon D; Turner, Andrew G; Morton, John
2016-07-01
To describe the signalment, presence of systemic and/or ocular comorbidities, times to detected healing and probabilities of recurrence after diamond burr debridement (DBD) of eyes with presumed corneal calcareous degeneration and secondary ulceration and/or ocular pain. Twenty-six dogs with 42 eyes affected, 34 eyes treated with DBD. A case series was conducted using medical records from a private veterinary ophthalmology referral practice. Dogs were included if they had white or gray corneal opacity consistent with corneal calcareous degeneration with either erosive or superficial ulceration and/or ocular pain in at least one eye and had at least one such eye treated with DBD. DBD was performed with a battery-operated handheld motorized burr (The Alger Company, Inc. Lago Vista, TX, USA), and a bandage contact lens was placed in the majority of eyes (30/34). Eyes were considered healed when the cornea was fluorescein negative, and there were no signs of ocular pain. Patient data (signalment, recurrence) were extracted from medical records. Dogs were first re-examined 7-62 days after treatment (median: 13 days). All DBD-treated eyes healed within 62 days (% healed: 100%; one-sided 97.5% CI: 90-100%, median: 14 days), 82% of eyes (28/34) were healed at first re-examination (median: 13 days after treatment), and all were healed by their second examination (median: 24 days). Of the 34 treated eyes, 11 were lost to follow up; 11 of the remaining 23 eyes recurred. Estimated 1-year recurrence probability was 58% (95% CI: 35-83%). Seven dogs had systemic disease; 7 had a history of prior ocular disease or intraocular surgery. Diamond burr debridement is a safe and effective treatment for rapid resolution of superficial corneal ulceration and ocular pain secondary to presumed corneal calcareous degeneration in dogs. © 2015 American College of Veterinary Ophthalmologists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and p, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
NASA Astrophysics Data System (ADS)
Tulp, Ingrid; Craeymeersch, Johan; Leopold, Mardik; van Damme, Cindy; Fey, Frouke; Verdaat, Hans
2010-12-01
The razor clam Ensis directus was introduced to Europe presumably as larvae in ballast water around 1978. Starting in the German Bight it spread northward and southward along the continental coastline. Currently it is the most common shellfish species in the Dutch coastal zone, where it mainly occurs in the Voordelta and off the Wadden Sea islands. The mean density of E. directus in the Dutch coastal zone increased from around 2-5 individuals m⁻² in the late 1990s to around 12-19 individuals m⁻² from 2002 onwards. Diet studies show that E. directus makes up a significant proportion in the current diet of plaice, sole, dab, flounder and dragonet and in the diet of eider and common scoter. In recent years E. directus contributed 20-100% of the total wet weight in fish stomachs. The proportion E. directus in the diet increases with fish length. Based on stomach contents of oiled and beached birds and of faeces samples the recent frequency of occurrence is 85-90% in eider and 26% in common scoter. Also waders, gulls and corvids prey on E. directus but the contribution to the diet is still unquantified. Because of its great burying depth the species is not easily accessible. Fish either profit from massive die-offs that regularly occur, or they extract (probably only the smaller) individuals from the sediment. Sea ducks can extract E. directus from the sediment, while shorebirds and gulls feed on dying E. directus washing up on the shore. E. directus is possibly an important food item for fish and seabirds when they occur in high densities and in the right size classes. Since the availability depends greatly on massive die-offs, shell size, burying depth and water depth, it is probably not a very reliable food source. Judging from the role E. directus currently plays for the higher trophic levels, its introduction must have caused a major change in the food relations in its distribution area.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
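The continuous ranked probability score of a Gaussian-mixture PDF can be evaluated directly from its definition, CRPS(F, y_obs) = ∫ (F(y) − 1[y ≥ y_obs])² dy; the mixture parameters below are hypothetical, not DCMDN outputs.

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)

def mixture_cdf(y, w, mu, sig):
    # CDF of a 1-D Gaussian mixture evaluated at the points in array y
    z = (y[:, None] - mu[None, :]) / (np.sqrt(2.0) * sig[None, :])
    return (w[None, :] * 0.5 * (1.0 + _erf(z))).sum(axis=1)

def crps(w, mu, sig, obs, n=20_000):
    # Numerical CRPS: integrate (F(y) - H(y - obs))^2 on a grid wide enough
    # to cover the mixture tails and the observation.
    lo = min((mu - 8.0 * sig).min(), obs - 1.0)
    hi = max((mu + 8.0 * sig).max(), obs + 1.0)
    y = np.linspace(lo, hi, n)
    F = mixture_cdf(y, w, mu, sig)
    H = (y >= obs).astype(float)
    return float(np.sum((F - H) ** 2) * (y[1] - y[0]))

# Hypothetical two-component redshift PDF and an observed redshift
w = np.array([0.6, 0.4]); mu = np.array([0.10, 0.40]); sig = np.array([0.05, 0.10])
score = crps(w, mu, sig, obs=0.20)
```

Lower scores mean the predicted PDF concentrates probability near the observed value, which is what makes CRPS a sharper criterion than point-estimate errors.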
Andrew Youngblood; Kerry L. Metlen; Kent Coe
2006-01-01
In many fire-dependent forests in the United States, changes occurring in the last century have resulted in overstory structures, conifer densities, down woody structure and understory plant communities that deviate from those described historically. With these changes, many forests are presumed to be unsustainable. Broad-scale treatments are proposed to promote stand...
NASA Astrophysics Data System (ADS)
Bruinsma, Sean L.; Forbes, Jeffrey M.
2010-08-01
Densities derived from accelerometer measurements on the GRACE, CHAMP, and Air Force/SETA satellites near 490, 390, and 220 km, respectively, are used to elucidate global-scale characteristics of traveling atmospheric disturbances (TADs). Several characteristics elucidated in numerical simulations are confirmed in this study, namely: (1) propagation speeds increase from the lower thermosphere to the upper thermosphere; (2) propagation to the equator and even into the opposite hemisphere can occur; (3) greater attenuation of TADs occurs during daytime and at higher levels of solar activity (i.e., more wave activity during nighttime and solar minimum), presumably due to the greater influence of ion drag. In addition, we find that the occurrence of significant TAD activity emanating from the auroral regions does not reflect a clear relation with the level of planetary magnetic activity as measured by Kp. There is also evidence of waves originating in the tropics, presumably due to convective sources; to some extent this may contribute to the Kp and solar flux relationships noted above. Further elucidation of local time, season, and altitude dependences of TAD propagation characteristics may be forthcoming from density measurements from the GOCE and Swarm missions.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
Wanders, Johanna Olga Pauline; Bakker, Marije Fokje; Veldhuis, Wouter Bernard; Peeters, Petra Huberdina Maria; van Gils, Carla Henrica
2015-05-30
High weight and high percentage mammographic breast density are both breast cancer risk factors but are negatively correlated. Therefore, we wanted to obtain more insight into this apparent paradox. We investigated in a longitudinal study how weight change over menopause is related to changes in mammographic breast features. Five hundred ninety-one participants of the EPIC-NL cohort were divided into three groups according to their prospectively measured weight change over menopause: (1) weight loss (more than -3.0 %), (2) stable weight (between -3.0 % and +3.0 %), and (3) weight gain (more than 3.0 %). SPSS GLM univariate analysis was used to determine both the mean breast measure changes in, and the trend over, the weight change groups. Over a median period of 5 years, the mean changes in percent density in these groups were -5.0 % (95 % confidence interval (CI) -8.0; -2.1), -6.8 % (95 % CI -9.0; -4.5), and -10.2 % (95 % CI -12.5; -7.9), respectively (P-trend = 0.001). The mean changes in dense area were -16.7 cm(2) (95 % CI -20.1; -13.4), -16.4 cm(2) (95 % CI -18.9; -13.9), and -18.1 cm(2) (95 % CI -20.6; -15.5), respectively (P-trend = 0.437). Finally, the mean changes in nondense area were -6.1 cm(2) (95 % CI -11.9; -0.4), -0.6 cm(2) (95 % CI -4.9; 3.8), and 5.3 cm(2) (95 % CI 0.9; 9.8), respectively (P-trend < 0.001). Going through menopause is associated with a decrease in both percent density and dense area. Owing to an increase in the nondense tissue, the decrease in percent density is largest in women who gain weight. The decrease in dense area is not related to weight change. So the fact that both high percent density and high weight or weight gain are associated with high postmenopausal breast cancer risk can probably not be explained by an increase (or slower decrease) of dense area in women gaining weight compared with women losing weight or maintaining a stable weight. 
These results suggest that weight and dense area are presumably two independent postmenopausal breast cancer risk factors.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. 
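The core quantity above, the probability that the distance between two uncertain objects is at most a boundary value, can be approximated by brute force for intuition; the Gaussian uncertainty model and all numbers below are illustrative, and the paper's analytic computation replaces sampling like this.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_within(mu_a, cov_a, mu_b, cov_b, eps, n=200_000):
    # Monte-Carlo estimate of P(||A - B|| <= eps) for two objects whose
    # positions are uncertain, each modelled here as a 2-D Gaussian.
    a = rng.multivariate_normal(mu_a, cov_a, n)
    b = rng.multivariate_normal(mu_b, cov_b, n)
    return float(np.mean(np.linalg.norm(a - b, axis=1) <= eps))

# Two nearby uncertain objects and a boundary value eps (made-up values):
p = prob_within([0.0, 0.0], 0.01 * np.eye(2),
                [0.3, 0.0], 0.01 * np.eye(2), eps=0.5)
# The objects count as probability-neighbors when p exceeds a chosen threshold.
```

Replacing such sampling with a closed-form computation is precisely the accuracy and speed improvement the abstract attributes to PDBSCAN over FDBSCAN.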
Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shin, Hyun Kyung; Choi, Bongsik; Talkner, Peter; Lee, Eok Kyun
2014-12-07
Based on the generalized Langevin equation for the momentum of a Brownian particle a generalized asymptotic Einstein relation is derived. It agrees with the well-known Einstein relation in the case of normal diffusion but continues to hold for sub- and super-diffusive spreading of the Brownian particle's mean square displacement. The generalized asymptotic Einstein relation is used to analyze data obtained from molecular dynamics simulations of a two-dimensional soft disk fluid. We mainly concentrated on medium densities for which we found super-diffusive behavior of a tagged fluid particle. At higher densities a range of normal diffusion can be identified. The motion presumably changes to sub-diffusion for even higher densities.
NASA Astrophysics Data System (ADS)
Shin, Hyun Kyung; Choi, Bongsik; Talkner, Peter; Lee, Eok Kyun
2014-12-01
Based on the generalized Langevin equation for the momentum of a Brownian particle a generalized asymptotic Einstein relation is derived. It agrees with the well-known Einstein relation in the case of normal diffusion but continues to hold for sub- and super-diffusive spreading of the Brownian particle's mean square displacement. The generalized asymptotic Einstein relation is used to analyze data obtained from molecular dynamics simulations of a two-dimensional soft disk fluid. We mainly concentrated on medium densities for which we found super-diffusive behavior of a tagged fluid particle. At higher densities a range of normal diffusion can be identified. The motion presumably changes to sub-diffusion for even higher densities.
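The diffusion regimes distinguished above can be diagnosed from the scaling exponent of the mean square displacement, MSD(t) ∝ t^alpha, with alpha = 1 for normal diffusion and alpha below or above 1 for sub- or super-diffusion. A sketch for ordinary diffusion, with made-up ensemble sizes rather than the soft-disk simulation data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble of unbiased 2-D random walkers (normal diffusion by construction).
steps = rng.normal(0.0, 1.0, (2000, 500, 2))       # (walkers, time steps, xy)
traj = np.cumsum(steps, axis=1)

t = np.arange(1, 501)
msd = np.mean(np.sum(traj ** 2, axis=2), axis=0)   # ensemble-averaged MSD

# Scaling exponent alpha from a log-log fit: alpha = 1 signals normal
# diffusion; the tagged-particle data in the paper would give alpha > 1
# (super-diffusive) at medium densities and alpha < 1 at high densities.
alpha = np.polyfit(np.log(t), np.log(msd), 1)[0]
```

The same fit applied over restricted time windows is how one separates short-time ballistic, intermediate super-diffusive, and long-time regimes in practice.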
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
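For reference, the standard relations this construction builds on can be stated compactly, using the spectral decomposition of the density operator; these are textbook formulas, not results specific to the displaced squeezed thermal state.

```latex
% Bures line element between neighboring density operators,
% with \rho = \sum_j \lambda_j \lvert j\rangle\langle j\rvert
ds_B^2 \;=\; \frac{1}{2}\sum_{j,k}
  \frac{\lvert\langle j\rvert\, d\rho \,\lvert k\rangle\rvert^{2}}{\lambda_j+\lambda_k},
% Quantum Jeffreys prior: the normalized volume element of the Bures metric
p(\theta)\,\mathrm{d}\theta \;\propto\; \sqrt{\det g_B(\theta)}\,\mathrm{d}\theta .
```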
Vegetable oil fortified feeds in the nutrition of very low birthweight babies.
Vaidya, U V; Hegde, V M; Bhave, S A; Pandit, A N
1992-12-01
Two kinds of oils (i) Polyunsaturated fatty acids (PUFA) rich Safflower oil, and (ii) Medium chain triglyceride (MCT) rich Coconut oil were added to the feeds of 46 very low birthweight (VLBW) babies to see if such a supplementation is capable of enhancing their weight gain. Twenty-two well matched babies who received no fortification served as controls. The oil fortification raised the energy density of the feeds from approximately 67 kcal/dl to 79 kcal/dl. Feed volumes were restricted to a maximum of 200 ml/kg/day. The mean weight gain was highest and significantly higher than the controls in the Coconut oil group (19.47 +/- 8.67 g/day or 13.91 g/day). Increase in the triceps skinfold thickness and serum triglycerides were also correspondingly higher in this group. The lead in the weight gain in this group continued in the follow-up period (corrected age 3 months). As against this, the higher weight gain in the Safflower oil group (13.26 +/- 6.58 g/day) as compared to the controls (11.59 +/- 5.33 g/day) failed to reach statistical significance, probably because of increased steatorrhea (stool fat 4+ in 50% of the samples tested). The differences in the two oil groups are presumably because of better absorption of MCT rich coconut oil. However, individual variations in weight gain amongst the babies were wide so that some control babies had higher growth rates than oil fortified ones. The technique of oil fortification is fraught with dangers of intolerance, contamination and aspiration. Long term effects of such supplementation are largely unknown.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Massoudieh, A.; Dentz, M.; Le Borgne, T.
2017-12-01
In heterogeneous media, the velocity distribution and the spatial correlation structure of velocity for solute particles determine the breakthrough curves and how they evolve as one moves away from the solute source. The ability to predict such evolution can help relate the spatio-statistical hydraulic properties of the media to the transport behavior and travel time distributions. While commonly used non-local transport models such as anomalous dispersion and the classical continuous time random walk (CTRW) can reproduce breakthrough curves successfully by adjusting the model parameter values, they lack the ability to relate model parameters to the spatio-statistical properties of the media. This in turn limits the transferability of these models. In the research to be presented, we express the concentration or flux of solutes as a distribution over their velocity. We then derive an integrodifferential equation that governs the evolution of the particle distribution over velocity at given times and locations for a particle ensemble, based on a presumed velocity correlation structure and an ergodic cross-sectional velocity distribution. This way, the spatial evolution of breakthrough curves away from the source is predicted based on the cross-sectional velocity distribution and the connectivity, which is expressed by the velocity transition probability density. The transition probability is specified via a copula function that can help construct a joint distribution with a given correlation and given marginal velocities. Using this approach, we analyze how the breakthrough curves depend on the velocity distribution and correlation properties. The model shows how the solute transport behavior evolves from ballistic transport at small spatial scales to Fickian dispersion at large length scales relative to the velocity correlation length.
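The spatial Markov picture described here can be sketched numerically: successive velocities over fixed distance increments are coupled through a Gaussian copula (one convenient choice; the abstract leaves the copula family open), with a lognormal stand-in for the cross-sectional velocity distribution. All function and parameter names are illustrative.

```python
import numpy as np

def gaussian_copula_velocity_walk(n_particles, n_steps, dx, rho, seed=4):
    """Spatial Markov velocity model (sketch). Successive velocities over
    fixed steps of length dx are coupled by a Gaussian copula with latent
    correlation rho; the marginal is lognormal (an assumed stand-in for the
    ergodic cross-sectional velocity distribution). Returns the arrival
    times of all particles at the distance n_steps * dx."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_particles)     # latent normal scores, N(0, 1)
    t = np.zeros(n_particles)
    for _ in range(n_steps):
        v = np.exp(z)                        # lognormal marginal velocity
        t += dx / v                          # time to cross this increment
        # Gaussian-copula transition = AR(1) update of the latent score;
        # the stationary marginal stays N(0, 1), so velocities stay lognormal.
        z = rho * z + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_particles)
    return t

# Breakthrough (arrival-time) sample at 50 correlation steps from the source.
arrivals = gaussian_copula_velocity_walk(n_particles=2000, n_steps=50,
                                         dx=1.0, rho=0.8)
```

Sweeping `rho` toward 1 pushes the behavior toward ballistic transport, while `rho` near 0 decorrelates the steps and recovers Fickian-like spreading, mirroring the scale transition described in the abstract.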
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
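The two-step procedure (spectral coloring of white Gaussian noise, then a memoryless transform to the target amplitude distribution) can be sketched as follows; the PSD and marginal used here are placeholders, and, as the abstract notes, the nonlinear step slightly distorts the spectrum, which is why this is an engineering approximation.

```python
import math
import numpy as np

def simulate_field(n, psd, target_icdf, seed=0):
    """Generate an n-by-n field whose power spectrum follows `psd` (a function
    of radial spatial frequency) and whose amplitude distribution follows
    `target_icdf` (inverse CDF of the desired marginal). Names are illustrative."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))                    # white Gaussian sample set
    fx = np.fft.fftfreq(n)
    fxx, fyy = np.meshgrid(fx, fx)
    amp = np.sqrt(psd(np.hypot(fxx, fyy)))                 # spectral filter
    colored = np.fft.ifft2(np.fft.fft2(white) * amp).real  # colored Gaussian field
    colored = (colored - colored.mean()) / colored.std()
    # Memoryless transform: Gaussian CDF -> uniform -> target inverse CDF.
    u = 0.5 * (1.0 + np.vectorize(math.erf)(colored / math.sqrt(2.0)))
    return target_icdf(np.clip(u, 1e-12, 1.0 - 1e-12))

# Example: Lorentzian-like spectrum with an exponential amplitude marginal.
field = simulate_field(64,
                       psd=lambda f: 1.0 / (1.0 + (8.0 * f) ** 2),
                       target_icdf=lambda u: -np.log1p(-u))
```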
Groups of meteorite-producing meteoroids containing carbonaceous chondrite meteorites
NASA Astrophysics Data System (ADS)
Konovalova, N. A.; Ibrohimov, A.; Kalashnikova, T. M.
2017-09-01
Probable links between meteorites and meteorite-producing fireballs were considered. Group associations between meteorite-producing meteoroids and meteorites were determined for four carbonaceous chondrites (Murchison, Maribo, Sutter's Mill and Tagish Lake) and potentially meteorite-producing bolides on the basis of the similarity of their orbits. As a result, several sporadic, slow, meteorite-producing fireballs were found to be possible members of groups of the four studied carbonaceous chondrite meteorites. One can presume that at present the identified groups may still contain large meteorite-dropping bodies.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
Observations of mirror waves and plasma depletion layer upstream of Saturn's magnetopause
NASA Technical Reports Server (NTRS)
Violante, L.; Cattaneo, M. B. Bavassano; Moreno, G.; Richardson, J. D.
1995-01-01
The two inbound traversals of Saturn's magnetosheath by Voyagers 1 and 2 have been studied using plasma and magnetic field data. In a great portion of the subsolar magnetosheath, large-amplitude compressional waves are observed at low frequency (approximately 0.1 f(sub p)) in a high-beta plasma regime. The fluctuations of the magnetic field magnitude and ion density are anticorrelated, as are those of the magnetic and thermal pressures. The normals to the structures are almost orthogonal to the background field, and the Doppler ratio is on the average small. Even though the data do not allow the determination of the ion thermal anisotropy, the observations are consistent with values of T(sub perpendicular)/T(sub parallel) greater than 1, producing the onset of the mirror instability. All the above features indicate that the waves should most probably be identified with mirror modes. One of the two magnetopause crossings is of the high-shear type, and the above described waves are seen all the way to the magnetopause. The other crossing is of the low-shear type and, similarly to what has been observed at Earth, a plasma depletion occurs close to the magnetopause. In this layer, waves with smaller amplitude, presumably of the mirror mode, are present together with higher-frequency waves showing a transverse component.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Liyi; Makishima, Kazuo; Yagi, Masafumi
We report the detection of an X-ray absorption feature near the galaxy M86 in the Virgo cluster. The absorber has a column density of 2-3 × 10^20 cm^-2, and its position coincides with the peak of an intracluster H I cloud which was removed from the galaxy NGC 4388 presumably by ram pressure. These results indicate that the H I cloud is located in front of M86 along the line-of-sight, and suggest that the stripping was primarily created by an interaction between NGC 4388 and the hot plasmas of the Virgo cluster, not the M86 halo. By calculating an X-ray temperature map, we further detected an X-ray counterpart of the H I cloud up to ≈3' south of M86. It has a temperature of 0.89 keV and a mass of ∼4.5 × 10^8 M_☉, exceeding the estimated H I gas mass. The high hot-to-cold gas ratio in the cloud indicates a significant evaporation of the H I gas, probably by thermal conduction from the hotter cluster plasma with a sub-Spitzer rate.
Hybrid fs/ps CARS for Sooting and Particle-laden Flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.
2015-12-01
We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.
Computations of steady-state and transient premixed turbulent flames using pdf methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulek, T.; Lindstedt, R.P.
1996-03-01
Premixed propagating turbulent flames are modeled using a one-point, single time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model for scalar mixing of Valino and Dopazo, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.
Gerasimov, A V; Kostyuchenko, V P; Solovieva, A S; Olovnikov, A M
2014-10-01
We found that some morphological properties of the pineal gland and submandibular salivary gland of mice are significantly distinct at the new and full moon. We suppose that the differences are initiated by the displacements of the electron-dense concretions in the secretory vesicles of pinealocytes. This presumably occurs under the influence of the gravitational field, which periodically changes during different phases of the moon. It seems that the pinealocyte is both an endocrine and gravisensory cell. A periodic secretion of the pineal gland probably stimulates, in a lunaphasic mode, the neuroendocrine system that, in turn, periodically exerts influence on different organs of the body. The observed effect probably serves, within the lifelong clock of a brain, to control development and aging in time.
A 'new' Cromer-related high frequency antigen probably antithetical to WES.
Daniels, G L; Green, C A; Darr, F W; Anderson, H; Sistonen, P
1987-01-01
An antibody to a high frequency antigen, made in a WES+ Black antenatal patient (Wash.), failed to react with the red cells of a presumed WES+ homozygote and is, therefore, probably antithetical to anti-WES. Like anti-WES, it reacted with papain, ficin, trypsin or neuraminidase treated cells but not with alpha-chymotrypsin or pronase treated cells and was specifically inhibited by concentrated serum. It also reacted more strongly in titration with WES- cells than with WES+ cells. The antibody is Cromer-related as it failed to react with Inab phenotype (IFC-) cells and reacted only weakly with Dr(a-) cells. Wash. cells and those of the other possible WES+ homozygote are Cr(a+) Tc(a+b-c-) Dr(a+) IFC+ but reacted only very weakly with anti-Esa.
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. 
Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. ?? 2005 by the Ecological Society of America.
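The contrast between density-independent decline and density-dependent (Ricker-type) self-limitation can be illustrated with a minimal stochastic simulation; the parameter values below are invented for illustration and are not those of the disease or piping plover models discussed above.

```python
import math

import numpy as np

def quasi_extinction_prob(r, k_cap, n0, threshold, years=50, sigma=0.3,
                          reps=2000, density_dependent=True, seed=1):
    """Monte Carlo probability that abundance falls below `threshold` within
    `years`, under lognormal environmental noise. With density dependence the
    per-step growth is Ricker, r * (1 - N/K); without it, a constant r."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(reps):
        n = float(n0)
        for _ in range(years):
            growth = r * (1.0 - n / k_cap) if density_dependent else r
            n *= math.exp(growth + sigma * rng.standard_normal())
            if n < threshold:
                extinct += 1
                break
    return extinct / reps

# Declining, density-independent population vs. a stable Ricker population.
p_decline = quasi_extinction_prob(r=-0.3, k_cap=100, n0=100, threshold=10,
                                  density_dependent=False)
p_stable = quasi_extinction_prob(r=0.5, k_cap=100, n0=100, threshold=10,
                                 density_dependent=True)
```

Consistent with the qualitative claim above, the density-independent decline drives quasi-extinction risk toward certainty, while the self-limited population absorbs the same level of noise.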
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-11-25
On 17 December 1976 the Liberian tanker SANSINENA, moored at berth 46, Union Oil Terminal, Los Angeles Harbor, California, exploded and burned while taking on ballast and bunkers. The casualty resulted in six members of the SANSINENA's crew known dead, and 22 injured. Two crewmembers and one terminal security guard are missing and presumed dead. Also approximately 36 personal injuries were suffered by the general public. This report contains the U.S. Coast Guard Marine Board of Investigation report and the Action taken by the Commandant to determine the probable cause of the casualty and the recommendations to prevent recurrence. The Commandant concurred with the Marine Board that the probable cause of the casualty was the ignition of a hydrocarbon vapor cloud over the afterdeck of the SANSINENA. The source of ignition cannot be positively identified; however, it was most probably located in the vicinity of the midship deckhouse.
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
Weak Magnetic Fields in Two Herbig Ae Systems: The SB2 AK Sco and the Presumed Binary HD 95881
NASA Astrophysics Data System (ADS)
Järvinen, S. P.; Carroll, T. A.; Hubrig, S.; Ilyin, I.; Schöller, M.; Castelli, F.; Hummel, C. A.; Petr-Gotzens, M. G.; Korhonen, H.; Weigelt, G.; Pogodin, M. A.; Drake, N. A.
2018-05-01
We report the detection of weak mean longitudinal magnetic fields in the Herbig Ae double-lined spectroscopic binary AK Sco and in the presumed spectroscopic Herbig Ae binary HD 95881 using observations with the High Accuracy Radial velocity Planet Searcher polarimeter (HARPSpol) attached to the European Southern Observatory's (ESO's) 3.6 m telescope. Employing a multi-line singular value decomposition method, we detect a mean longitudinal magnetic field <B_z> = -83 ± 31 G in the secondary component of AK Sco on one occasion. For HD 95881, we measure <B_z> = -93 ± 25 G and <B_z> = 105 ± 29 G at two different observing epochs. For all the detections the false alarm probability is smaller than 10^-5. For the AK Sco system, we discover that the accretion-diagnostic Na I doublet lines and photospheric lines show intensity variations over the observing nights. The double-lined spectral appearance of HD 95881 is presented here for the first time.
Headings, V E
1980-08-01
The range of expression of homosexuality and its association with certain cultural, environmental, and genetic factors are most consistent with the concept of a multifactorial trait. Additionally, genetic heterogeneity in this phenotype (alternative mutants corresponding to a single phenotype) is highly probable. In certain nonhuman and presumably in human species the normal sexual development of the hypothalamus is guided by an appropriate exposure to androgen at a critical early stage, and this in turn presumably contributes to sociopsychologic sex development. Particularly instructive in this regard have been the monogenic experiments of nature in man--XY females with insensitivity to androgens, congenital adrenal hyperplasia, and male pseudohermaphrodites (5-alpha-reductase deficiency). Additionally, in the human, sociopsychologic sex also appears to be molded by sex assigned at birth and sex of rearing. Several of the intersexuality syndromes and psychoses are accompanied by increased homosexuality, but a majority of homosexuals are not in these categories. A limited number of family studies, including twins, tentatively suggests a heritable risk, at least in some families.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar deutsche mark future exchange, finding good agreement between theory and the observed data.
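A minimal sketch of the CTRW formalism described above, assuming (purely for illustration) an exponential pausing-time density and a Gaussian jump-size density; the paper instead infers these two auxiliary densities from market data.

```python
import numpy as np

def ctrw_terminal_values(t_max, mean_wait, jump_scale, n_paths=5000, seed=2):
    """Continuous-time random walk: waiting times between jumps are drawn
    from an exponential pausing-time density (mean `mean_wait`) and jump
    magnitudes from a zero-mean Gaussian density (scale `jump_scale`).
    Returns the walker positions (e.g. log-prices) at time t_max."""
    rng = np.random.default_rng(seed)
    finals = np.empty(n_paths)
    for i in range(n_paths):
        t, x = 0.0, 0.0
        while True:
            t += rng.exponential(mean_wait)          # pausing time
            if t > t_max:
                break
            x += jump_scale * rng.standard_normal()  # jump magnitude
        finals[i] = x
    return finals

# About 100 jumps per unit time: compound-Poisson log-price increments.
returns = ctrw_terminal_values(t_max=1.0, mean_wait=0.01, jump_scale=0.05)
```

The empirical distribution of `returns` can then be compared against the theoretical price distribution obtained from the two input densities, which is the kind of theory-versus-data comparison the abstract reports.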
Andrew Youngblood; Clinton S. Wright; Roger D. Ottmar; James D. McIver
2007-01-01
In many fire-prone forests in the United States, changes occurring in the last century have resulted in overstory structures, conifer densities, down woody structure, and fuel loads that deviate from those described historically. With these changes, forests are presumed to be unsustainable. Broad-scale treatments are proposed to reduce fuels and promote stand...
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
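A self-consistent multiplicative-update scheme of this general kind, in the spirit of Wang-Landau sampling, can be sketched for a toy system (a one-dimensional Ising ring, not the many-body landscapes of the paper); the flatness criterion and halving schedule below are common conventions, not necessarily the authors' exact scheme.

```python
import math
import random

def density_of_states_ising(L=8, lnf_final=1e-4, flatness=0.8, seed=3):
    """Estimate log g(E) for a 1D Ising ring of L spins with a
    Wang-Landau-style multiplicative update: each visit to energy E adds
    lnf to log g(E), and lnf is halved once the energy histogram is
    sufficiently flat."""
    rng = random.Random(seed)
    spins = [1] * L
    def energy(s):
        return -sum(s[i] * s[(i + 1) % L] for i in range(L))
    levels = list(range(-L, L + 1, 4))       # allowed energies on a ring (L even)
    lng = {e: 0.0 for e in levels}           # running estimate of log g(E)
    hist = {e: 0 for e in levels}
    e = energy(spins)
    lnf = 1.0
    while lnf > lnf_final:
        for _ in range(1000 * L):
            i = rng.randrange(L)
            spins[i] *= -1                   # propose a single spin flip
            e_new = energy(spins)
            # Accept with probability min(1, g(E_old) / g(E_new)).
            if rng.random() < math.exp(min(0.0, lng[e] - lng[e_new])):
                e = e_new
            else:
                spins[i] *= -1               # reject: undo the flip
            lng[e] += lnf
            hist[e] += 1
        counts = list(hist.values())
        if min(counts) > flatness * (sum(counts) / len(counts)):
            lnf /= 2.0                       # tighten the update factor
            hist = {k: 0 for k in hist}
    return lng
```

For this toy ring the exact degeneracies are known (g = 2, 56, 140, 56, 2 for E = -8, -4, 0, 4, 8 at L = 8), so the estimate can be checked directly; the resulting g(E) is exactly the density of states that would enter the partition function or, via basin volumes, the quench probabilities discussed above.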
Strong gravitational lensing statistics as a test of cosmogonic scenarios
NASA Technical Reports Server (NTRS)
Cen, Renyue; Gott, J. Richard, III; Ostriker, Jeremiah P.; Turner, Edwin L.
1994-01-01
Gravitational lensing statistics can provide a direct and powerful test of cosmic structure formation theories. Since lensing directly tests the magnitude of the nonlinear mass density fluctuations on lines of sight to distant objects, no issues of 'bias' (of mass fluctuations with respect to galaxy density fluctuations) exist here, although lensing observations provide their own ambiguities of interpretation. We develop numerical techniques for generating model density distributions with the very large spatial dynamic range required by lensing considerations and for identifying regions of the simulations capable of multiple image lensing in a conservative and computationally efficient way that should be accurate for splittings significantly larger than 3 arcsec. Applying these techniques to existing standard cold dark matter (CDM) (Omega = 1) and Primeval Baryon Isocurvature (PBI) (Omega = 0.2) simulations (normalized to the Cosmic Background Explorer Satellite (COBE) amplitude), we find that the CDM model predicts large splitting (greater than 8 arcsec) lensing events roughly an order of magnitude more frequently than the PBI model. Under the reasonable but idealized assumption that lensing structures can be modeled as singular isothermal spheres (SIS), the predictions can be directly compared to observations of lensing events in quasar samples. Several large splitting (Delta Theta greater than 8 arcsec) cases are predicted in the standard CDM model (the exact number being dependent on the treatment of amplification bias), whereas none is observed. In a formal sense, the comparison excludes the CDM model at high confidence (essentially for the same reason that CDM predicts excessive small-scale cosmic velocity dispersions). A very rough assessment of a low-density but flat CDM model (Omega = 0.3, Lambda/(3 H_0^2) = 0.7) indicates a far lower and probably acceptable level of lensing.
The PBI model is consistent with, but not strongly tested by, the available lensing data, and other open models would presumably do as well as PBI. These preliminary conclusions and the assumptions on which they are based can be tested and the analysis can be applied to other cosmogonic models by straightforward extension of the work presented here.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
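The reported detection probabilities suggest a simple saturating occupancy-style model; the sketch below is a hypothetical parameterization (the `sensitivity` constant is chosen so that one sample at one fish per kilometer gives roughly the 0.18 reported, and is not a fitted value from the study).

```python
import math

def detection_prob(fish_per_km, n_samples, sensitivity=0.2):
    """Hypothetical saturating detection model: a single eDNA sample detects
    the species with probability p = 1 - exp(-sensitivity * density), and n
    independent samples succeed with probability 1 - (1 - p)**n. The
    `sensitivity` value is illustrative, not a parameter from the study."""
    p_single = 1.0 - math.exp(-sensitivity * fish_per_km)
    return 1.0 - (1.0 - p_single) ** n_samples
```

In this toy calibration, at population-level densities (3 fish per 100 m, i.e. 30 per km) even a single sample detects with probability above 0.99, echoing the low-density versus high-density contrast reported above; the false-negative rate is simply one minus the returned value.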
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the form of the considered probability density, that is: Erlang, triangular and uniform probability densities, either separated from zero or not. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for murine B-cell lymphoma, showing mean square errors at a comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the dormant tumour steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
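The linear chain trick mentioned at the end replaces a distributed delay with an Erlang(k, a) kernel by a chain of k ordinary differential equations. A minimal sketch with a toy tumour equation (not the model's actual equations) follows; the stimulus function `f` is a hypothetical choice.

```python
import numpy as np

def erlang_chain_rhs(state, a, k, f):
    """Right-hand side after the linear chain trick: the distributed-delay
    signal z(t) = integral of the Erlang(k, a) kernel times f(x(t - s)) ds
    is replaced by a chain y_1 ... y_k, where y_k acts as the delayed
    stimulus. The x-equation is a toy tumour model, not the paper's system."""
    x, y = state[0], state[1:]
    dx = x * (1.0 - x) - y[-1] * x           # logistic growth minus delayed kill term
    dy = np.empty_like(y)
    dy[0] = a * (f(x) - y[0])                # head of the chain
    for i in range(1, k):
        dy[i] = a * (y[i - 1] - y[i])        # each stage relaxes to the previous
    return np.concatenate(([dx], dy))

def euler(rhs, state0, t_max, dt=1e-3, **kw):
    """Plain Euler stepping; adequate for a demonstration run."""
    state = np.asarray(state0, dtype=float)
    for _ in range(int(t_max / dt)):
        state = state + dt * rhs(state, **kw)
    return state

# Erlang(k=3, a=2) kernel, mean delay k/a = 1.5; hypothetical stimulus f.
final = euler(erlang_chain_rhs, [0.5, 0.0, 0.0, 0.0], t_max=50.0,
              a=2.0, k=3, f=lambda x: x / (1.0 + x))
```

The chain system is an ordinary differential equation, so it can be handed to any standard ODE solver, which is exactly the practical payoff of the trick noted in the abstract.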
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
Molecular Identification of Bacteria from Aseptically Loose Implants
Kobayashi, Naomi; Procop, Gary W.; Krebs, Viktor; Kobayashi, Hideo
2008-01-01
Polymerase chain reaction (PCR) assays have been used to detect bacteria adherent to failed orthopaedic implants, but some PCR assays have had problems with probable false-positive results. We used a combination of a Staphylococcus species-specific PCR and a universal PCR followed by DNA sequencing to identify bacteria on implants retrieved from 52 patients (92 implants) at revision arthroplasty. We addressed two questions in this study: (1) Can this method demonstrate the existence of bacterial DNA on presumed aseptically loose implants? and (2) What proportion of presumed aseptic or culture-negative implants was positive for bacterial DNA by PCR? Fourteen implants (15%) were believed infected, whereas 74 implants (85%) were believed aseptic. Each implant was sonicated and the resulting solution was submitted for dual real-time PCR assay and culture. All implants believed aseptically loose were culture-negative, but nine of the 74 (12%) had bacterial DNA by PCR; two (2.7%) were PCR-positive and also showed histologic findings suggestive of infection. Uniquely developed PCR and bacterial sequencing assays showed bacterial DNA on 12% of implants removed for presumed aseptic loosening. Additional studies are needed to determine the clinical importance of bacterial DNA detected by PCR but not by conventional culture. Level of Evidence: Level III, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:18438724
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Redundancy and reduction: Speakers manage syntactic information density
Florian Jaeger, T.
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
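The idea of fitting candidate probability density functions by maximum likelihood and selecting among them quantitatively can be sketched as follows; the candidate set (normal vs. exponential), the synthetic data, and the use of AIC are illustrative assumptions, not the paper's actual choices or its R code.

```python
import numpy as np

def fit_and_select(data):
    """Fit candidate probability density functions by maximum likelihood and
    compare them with AIC (lower is better). The candidate set here is an
    illustrative stand-in for the paper's candidate PDFs."""
    x = np.asarray(data, dtype=float)
    n = x.size

    # Normal: closed-form MLEs; at the MLE the exponent sums to -n/2
    mu, sigma = x.mean(), x.std()
    ll_norm = -0.5 * n * np.log(2.0 * np.pi * sigma**2) - 0.5 * n
    aic = {"normal": 2 * 2 - 2 * ll_norm}          # k = 2 parameters

    # Exponential (positive data only): MLE rate = 1 / sample mean
    lam = 1.0 / x.mean()
    ll_exp = n * np.log(lam) - lam * x.sum()
    aic["exponential"] = 2 * 1 - 2 * ll_exp        # k = 1 parameter
    return aic

rng = np.random.default_rng(1)
depths = rng.normal(loc=1.2, scale=0.2, size=500)  # synthetic 'depth' data
aics = fit_and_select(depths)
best = min(aics, key=aics.get)                     # lowest AIC wins
```

Because the synthetic depths are drawn from a normal distribution, the normal candidate attains the lower AIC; with field data the same machinery ranks whatever candidate family fits best.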
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Spatial and temporal patterns of coexistence between competing Aedes mosquitoes in urban Florida
Juliano, S. A.
2009-01-01
Understanding mechanisms fostering coexistence between invasive and resident species is important in predicting ecological, economic, or health impacts of invasive species. The mosquito Aedes aegypti coexists at some urban sites in the southeastern United States with the invasive Aedes albopictus, which is often superior in interspecific competition. We tested predictions for three hypotheses of species coexistence: seasonal condition-specific competition, aggregation among individual water-filled containers, and colonization-competition tradeoff across spatially partitioned habitat patches (cemeteries) that have high densities of containers. We measured spatial and temporal patterns of abundance for both species among water-filled resident cemetery vases and experimentally positioned standard cemetery vases and ovitraps in metropolitan Tampa, Florida. Consistent with the seasonal condition-specific competition hypothesis, abundances of both species in resident and standard cemetery vases were higher early in the wet season (June) versus late in the wet season (September), but the proportional increase of A. albopictus was greater than that of A. aegypti, presumably due to higher dry-season egg mortality and strong wet-season competitive superiority of larval A. albopictus. Spatial partitioning was not evident among cemeteries, a result inconsistent with the colonization-competition tradeoff hypothesis, but both species were highly independently aggregated among standard cemetery vases and ovitraps, which is consistent with the aggregation hypothesis. Densities of A. aegypti but not A. albopictus differed among land use categories, with A. aegypti more abundant in ovitraps in residential areas compared to industrial and commercial areas. Spatial partitioning among land use types probably results from effects of land use on conditions in both terrestrial and aquatic-container environments.
These results suggest that both temporal and spatial variation may contribute to local coexistence between these Aedes in urban areas. PMID:19263086
Lyme disease in Wisconsin: epidemiologic, clinical, serologic, and entomologic findings.
Davis, J P; Schell, W L; Amundson, T E; Godsey, M S; Spielman, A; Burgdorfer, W; Barbour, A G; LaVenture, M; Kaslow, R A
1984-01-01
In 1980-82, 80 individuals (71 Wisconsin residents) had confirmed Lyme disease (LD-c) reported; 39 additional patients had probable or possible LD. All cases of LD-c occurred during May-November; 73 percent occurred during June-July; 54 (68 percent) occurred in males. The mean age was 38.7 years (range, 7-77 years). Among LD-c patients, likely exposure to the presumed vector Ixodes dammini (ID) occurred in 22 different Wisconsin counties. Antibodies to the ID spirochete that causes LD occurred in 33 of 49 LD-c cases versus 0 of 18 ill controls (p less than .001) and in 13 of 26 LD-c cases treated with penicillin or tetracycline versus 16 of 19 LD-c cases not treated. Early antibiotic therapy appears to blunt the antibody response to the ID spirochete. Regional tick surveys conducted in Wisconsin each November in 1979-82 have demonstrated regions of greater density of ID. Using comparable tick collection methods in these surveys, increases were noted in the percentage of deer with ID from 24 percent (31/128) in 1979 to 38 percent (58/152) in 1981, in the standardized mean value of ID/deer from 1.0 in 1979 to 2.2 in 1981, in the percentage of ID among the total ticks collected from 13 percent in 1979 to 71 percent in 1981, and in the ratio of ID to Dermacentor albipictus ticks from 0.14 in 1979 to 2.44 in 1981. However, a reduction in the density of ID/deer was noted generally throughout Wisconsin in 1982 when compared to 1981. LD is widespread in Wisconsin, with ecologic and clinical features similar to those occurring along the eastern seaboard.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. Graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically using the integral form or from the probability density graph produced by the program.
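The probability density and expectation values described above can also be evaluated numerically, as a rough stand-in for the Delphi program; the ground-state form psi proportional to sech(x)**lam for V(x) = -lam*(lam+1)/2 * sech(x)**2 (with hbar = m = 1) is the textbook result for this potential, but the parameter value below is an illustrative assumption.

```python
import numpy as np

lam = 2.0                              # illustrative depth parameter
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Ground state of the modified Poschl-Teller potential
# V(x) = -lam*(lam + 1)/2 * sech(x)**2 with hbar = m = 1:
# psi_0(x) is proportional to sech(x)**lam.
psi = np.cosh(x) ** (-lam)
psi /= np.sqrt((psi**2).sum() * dx)    # normalise on the grid

density = psi**2                       # probability density |psi|^2
norm = density.sum() * dx              # should be ~1 after normalisation
x_mean = (x * density).sum() * dx      # <x> = 0 by symmetry of the potential
x2_mean = (x**2 * density).sum() * dx  # <x^2>, the spread of the bound state
```

This mirrors the paper's point: once the probability density is available, expectation values follow from the integral form (here a simple Riemann sum on the grid).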
Ocelot (Leopardus pardalis) Density in Central Amazonia.
Rocha, Daniel Gomes da; Sollmann, Rahel; Ramalho, Emiliano Esterci; Ilha, Renata; Tan, Cedric K W
2016-01-01
Ocelots (Leopardus pardalis) are presumed to be the most abundant of the wild cats throughout their distribution range and to play an important role in the dynamics of sympatric small-felid populations. However, ocelot ecological information is limited, particularly for the Amazon. We conducted three camera-trap surveys during three consecutive dry seasons to estimate ocelot density in Amanã Reserve, Central Amazonia, Brazil. We implemented a spatial capture-recapture (SCR) model that shared detection parameters among surveys. A total effort of 7020 camera-trap days resulted in 93 independent ocelot records. The estimate of ocelot density in Amanã Reserve (24.84 ± SE 6.27 ocelots per 100 km2) was lower than at other sites in the Amazon and also lower than that expected from a correlation of density with latitude and rainfall. We also discuss the importance of using common parameters for survey scenarios with low recapture rates. This is the first density estimate for ocelots in the Brazilian Amazon, which is an important stronghold for the species.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that cannot be fitted well by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
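A minimal EM iteration for a Wishart mixture of sample covariance matrices can be sketched as follows; for simplicity this uses real-valued Wishart statistics and a crude trace-based initial partition in place of the paper's complex Wishart model and GΓD initialization, so it is an illustration of the scheme rather than the authors' algorithm.

```python
import numpy as np

def wishart_mixture_em(C, n_looks, n_comp=2, n_iter=30):
    """EM sketch for a mixture of (real) Wishart-distributed sample covariance
    matrices C[i] (each q x q, from n_looks observations). The class
    log-likelihoods are computed up to terms that do not depend on the class,
    which is all the posterior (responsibility) computation needs."""
    m = C.shape[0]
    # crude initial partition: bin the matrix traces between their extremes
    tr = np.trace(C, axis1=1, axis2=2)
    edges = np.linspace(tr.min(), tr.max(), n_comp + 1)
    labels = np.clip(np.searchsorted(edges, tr) - 1, 0, n_comp - 1)
    r = np.eye(n_comp)[labels]                    # one-hot responsibilities
    for _ in range(n_iter):
        # M-step: class weights and class-mean covariance matrices
        w = r.sum(axis=0)
        pi = w / m
        sigma = np.einsum('ij,ikl->jkl', r, C) / w[:, None, None]
        # E-step: posterior class probabilities for each sample
        ll = np.empty((m, n_comp))
        for j in range(n_comp):
            inv = np.linalg.inv(sigma[j])
            _, logdet = np.linalg.slogdet(sigma[j])
            tr_term = np.einsum('kl,ilk->i', inv, C)   # tr(inv @ C[i])
            ll[:, j] = np.log(pi[j]) - 0.5 * n_looks * (logdet + tr_term)
        ll -= ll.max(axis=1, keepdims=True)       # stabilise the softmax
        r = np.exp(ll)
        r /= r.sum(axis=1, keepdims=True)
    return pi, sigma, r

rng = np.random.default_rng(1)
n_looks = 30
S_a = np.eye(2)                                   # two ground-truth classes
S_b = np.array([[4.0, 1.5], [1.5, 4.0]])
def sample_cov(S, n):
    X = rng.multivariate_normal(np.zeros(len(S)), S, size=n)
    return X.T @ X / n
C = np.array([sample_cov(S_a, n_looks) for _ in range(80)]
             + [sample_cov(S_b, n_looks) for _ in range(40)])
pi, sigma, r = wishart_mixture_em(C, n_looks)
```

On this synthetic ensemble the estimated mixing weights recover the 2:1 class proportions and the class-mean matrices approach the generating covariances.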
The Heuristic Value of p in Inductive Statistical Inference
Krueger, Joachim I.; Heck, Patrick R.
2017-01-01
Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as ‘the’ p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say. PMID:28649206
Enceladus: An Active Cryovolcanic Satellite
NASA Technical Reports Server (NTRS)
Spencer, J. R.; Barr, Amy C.; Esposito, L. W.; Helfenstein, P.; Ingersoll, A. P.; Jaumann, R.; McKay, C. P.; Nimmo, F.; Waite, J. H.
2009-01-01
Enceladus is one of the most remarkable satellites in the solar system, as revealed by Cassini's detection of active plumes erupting from warm fractures near its south pole. This discovery makes Enceladus the only icy satellite known to exhibit ongoing internally driven geological activity. The activity is presumably powered by tidal heating maintained by Enceladus's 2:1 mean-motion resonance with Dione, but many questions remain. For instance, it appears difficult or impossible to maintain the currently observed radiated power (probably at least 6 GW) in steady state. It is also not clear how Enceladus first entered its current self-maintaining warm and dissipative state; initial heating from non-tidal sources is probably required. There are also many unanswered questions about Enceladus's interior. The silicate fraction inferred from its density of 1.68 g per cubic centimeter is probably differentiated into a core, though we have only indirect evidence for differentiation. Above the core there is probably a global or regional water layer, inferred from several models of tidal heating, and an ice shell thick enough to support the 1 kilometer amplitude topography seen on Enceladus. It is possible that dissipation is largely localized beneath the south polar region. Enceladus's surface geology, ranging from moderately cratered terrain to the virtually crater-free active south polar region, is highly diverse, tectonically complex, and remarkably symmetrical about the rotation axis and the direction to Saturn. South polar activity is concentrated along the four tiger stripe fractures, which radiate heat at temperatures up to at least 167 K and are the source of multiple plumes ejecting 200 kilograms per second of H2O vapor along with significant N2 (or C2H4), CO2, CH4, NH3, and higher-mass hydrocarbons. The escaping gas maintains Saturn's neutral gas torus, and the plumes also eject a large number of micron-sized H2O ice grains that populate Saturn's E-ring.
The mechanism that powers the plumes is not well understood, and whether liquid water is involved remains a subject of active debate, though its presence seems likely. Enceladus provides a promising potential habitat for life in the outer solar system, and the active plumes offer a unique opportunity for direct sampling of that zone. Enceladus is thus a prime target for Cassini's continued exploration of the Saturn system, and will be a tempting target for future missions.
NASA Technical Reports Server (NTRS)
Kuiper, T. B. H.
1980-01-01
Evolutionary arguments are presented in favor of the existence of civilization on a galactic scale. Patterns of physical, chemical, biological, social and cultural evolution leading to increasing levels of complexity are pointed out and explained thermodynamically in terms of the maximization of free energy dissipation in the environment of the organized system. The possibility of the evolution of a global and then a galactic human civilization is considered, and probabilities that the galaxy is presently in its colonization state and that life could have evolved to its present state on earth are discussed. Fermi's paradox of the absence of extraterrestrials in light of the probability of their existence is noted, and a variety of possible explanations is indicated. Finally, it is argued that although mankind may be the first occurrence of intelligence in the galaxy, it is unjustified to presume that this is so.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one obtains posterior probabilities for the Lagrange multipliers and, finally, error bars on the resulting estimated density function.
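The classical maximum entropy method of moments reviewed here can be sketched numerically: on a grid, find the density p(x) proportional to exp(sum_k lambda_k x**k) whose moments match given targets, by gradient descent on the dual problem (the dual gradient is simply the moment mismatch). Matching a mean of 0 and second moment of 1 should recover the standard normal; the grid and step size below are illustrative.

```python
import numpy as np

def maxent_density(x, targets, n_iter=2000, lr=0.05):
    """Maximum entropy method of moments on a grid: solve for the Lagrange
    multipliers lam such that the density p(x) ~ exp(lam_1*x + lam_2*x**2 + ...)
    reproduces the target moments, via gradient descent on the dual."""
    targets = np.asarray(targets, dtype=float)
    lam = np.zeros(len(targets))
    dx = x[1] - x[0]
    powers = np.array([x ** (k + 1) for k in range(len(targets))])
    for _ in range(n_iter):
        logp = lam @ powers
        logp -= logp.max()                    # avoid overflow
        p = np.exp(logp)
        p /= p.sum() * dx                     # normalise on the grid
        moments = (powers * p).sum(axis=1) * dx
        lam += lr * (targets - moments)       # dual gradient step
    return p, lam

x = np.linspace(-6.0, 6.0, 1201)
# matching mean 0 and second moment 1 yields the standard normal density,
# for which lam = (0, -1/2)
p, lam = maxent_density(x, targets=[0.0, 1.0])
```

The recovered multipliers converge to (0, -1/2), the exact values for the standard normal; the Bayesian treatment described in the abstract would additionally attach posterior uncertainty to these multipliers.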
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) occurring at the entrance of the merging section of two roads (i.e. a junction). The simulation results show that the presence of non-cooperative drivers plays a central role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb raises Pac at low densities, whereas it improves road safety at high densities. The phase diagram of the system is also constructed.
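The underlying Nagel-Schreckenberg update can be sketched as follows; this minimal single-lane ring version omits the paper's two-road junction, bottleneck speed limit Vb, and accident criterion, and all parameter values are illustrative.

```python
import numpy as np

def nasch_step(pos, vel, L, vmax, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg cellular automaton on a
    single-lane ring of L cells. The braking rule caps each velocity at the
    gap to the car ahead, so cars never collide in this basic model."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L   # empty cells to the car ahead
    vel = np.minimum(vel + 1, vmax)           # 1. acceleration
    vel = np.minimum(vel, gaps)               # 2. braking
    slow = rng.random(len(vel)) < p_slow      # 3. random slowdown
    vel = np.maximum(vel - slow, 0)
    pos = (pos + vel) % L                     # 4. parallel movement
    return pos, vel

rng = np.random.default_rng(0)
L, n_cars, vmax, p_slow = 100, 25, 5, 0.3
pos = np.sort(rng.choice(L, size=n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L, vmax, p_slow, rng)
```

Accident studies like the one above typically relax rule 2 for a fraction of (non-cooperative) drivers and count the configurations in which a collision would occur.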
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. The results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
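The two ingredients of this approach, a least-cost ("ecological") distance and an encounter-probability model, can be sketched in Python. This is an illustrative toy, not the authors' likelihood implementation: the resistance grid, the half-normal parameter values, and the 4-neighbour step cost are all invented for the example.

```python
import heapq
import math

def least_cost_distance(res, start, goal):
    """Least-cost path distance between two cells of a resistance grid
    (Dijkstra with 4-neighbour moves; a step costs the mean resistance
    of the two cells it connects)."""
    rows, cols = len(res), len(res[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), math.inf):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (res[r][c] + res[nr][nc])
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

def encounter_prob(d, p0=0.5, sigma=2.0):
    """Half-normal encounter-probability model commonly used in SCR,
    evaluated at distance d (Euclidean or least-cost)."""
    return p0 * math.exp(-d * d / (2 * sigma * sigma))
```

Swapping the Euclidean trap-to-activity-center distance for the least-cost distance is the whole conceptual move: in a resistant landscape the same pair of points becomes "farther apart," and the encounter probability drops accordingly.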
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
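The zero-miss binomial logic behind the 29-flaw set can be sketched as follows (a Python illustration of the standard calculation, not NASA's qualification procedure): requiring pod**n <= 1 - confidence, i.e., that detecting all n flaws would be improbable if the true POD were below the target, yields n = 29 for 90% POD at 95% confidence.

```python
def min_flaws_for_pod_demo(pod=0.90, confidence=0.95):
    """Smallest n such that n detections in n trials rules out a true
    POD below `pod` at the given confidence (zero-miss binomial
    demonstration): require pod**n <= 1 - confidence."""
    alpha = 1.0 - confidence
    n = 1
    while pod ** n > alpha:
        n += 1
    return n

def prob_pass(true_pod, n):
    """Probability of passing a zero-miss demonstration (all n flaws
    detected) when the true POD equals `true_pod`."""
    return true_pod ** n
```

The second function makes the paper's PPD trade-off concrete: a procedure whose true POD barely meets 90% passes the 29-flaw demonstration less than 5% of the time, so practical demonstrations rely on the true POD at the demonstrated flaw size being well above 90%.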
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
Fractional Brownian motion with a reflecting wall
NASA Astrophysics Data System (ADS)
Wada, Alexander H. O.; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
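The sensitivity of computed densities to γ can be illustrated with a commonly used first-order wall-loss rate that combines a diffusion time with a surface-kinetics time (a Chantry-style estimate). This sketch is a generic illustration, not the global model of the paper; the exact form of the expression and all parameter values used in any example should be treated as assumptions.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def wall_loss_rate(gamma, D, Lambda, V, A, T, mass):
    """First-order wall-loss rate (1/s) for a neutral species with
    surface reaction probability gamma.  Combines a diffusion time
    Lambda**2 / D with a surface-kinetics time 2V(2 - gamma)/(A v gamma),
    where v is the mean thermal speed (Chantry-style estimate)."""
    v_mean = math.sqrt(8 * KB * T / (math.pi * mass))
    tau_diff = Lambda ** 2 / D          # time to diffuse to the wall
    tau_surf = 2 * V * (2 - gamma) / (A * v_mean * gamma)  # surface kinetics
    return 1.0 / (tau_diff + tau_surf)
```

For light, fast-diffusing species such as atomic H, tau_diff is small, so the computed loss rate (and hence the predicted density) is dominated by the γ-dependent surface term, which is the sensitivity the abstract highlights.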
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, typically in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with an increasing fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ reduces the effect of the braking probability Pb at low densities. Furthermore, the impact of multi-SLZ on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function: p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x), where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
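A logistic detection curve of the kind fitted in this study, relating sampling intensity and target density to detection probability, can be sketched as follows. The coefficients below are invented for illustration and are not the fitted values from the field experiments.

```python
import math

def detection_prob(effort, density, b0=-3.0, b_effort=0.8, b_density=1.5):
    """Logistic detection curve: probability of detecting at least one
    target as a function of sampling effort and target density.
    The coefficients are hypothetical, not fitted values."""
    z = b0 + b_effort * effort + b_density * density
    return 1.0 / (1.0 + math.exp(-z))

def effort_for_target(p_target, density, **kw):
    """Smallest whole-unit sampling effort whose predicted detection
    probability reaches p_target (capped at 1000 units)."""
    effort = 0
    while detection_prob(effort, density, **kw) < p_target and effort < 1000:
        effort += 1
    return effort
```

Inverting the curve this way is how such models guide surveillance design: it answers "how much searching is needed before absence data become trustworthy," and the required effort shrinks as target density rises.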
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
glass.
Pgha: Probability of a person being in the glass hazard area.
Phit: Probability of hit.
Phit(f): Probability of hit for fatality.
Phit(maji): Probability of hit for major injury.
Phit(mini): Probability of hit for minor injury.
Pi: Debris probability densities at the ES.
PMaj(pair): Individual...
combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality Phit(f), major injury...
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
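The abundance-adjustment step described here, dividing the number of individuals sampled by the predicted capture probability, can be sketched with a constant per-pass capture probability. This is a simplified illustration, not the authors' repeated-measures logistic regression model, and the numbers in any example are invented.

```python
def cumulative_capture_prob(p_pass, n_passes):
    """Probability that a fish is captured at least once in n
    electrofishing passes, assuming a constant per-pass capture
    probability (a simplifying assumption)."""
    return 1.0 - (1.0 - p_pass) ** n_passes

def abundance_estimate(total_catch, p_pass, n_passes):
    """Estimate abundance by dividing the total number captured by the
    cumulative capture probability."""
    return total_catch / cumulative_capture_prob(p_pass, n_passes)
```

For example, with a per-pass capture probability of 0.5, three passes give a cumulative capture probability of 0.875, so a total catch of 70 fish implies about 80 fish present; size-dependent capture probabilities, as found in this study, would replace the constant p_pass with a fitted function of fish length.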
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets, whereas for some of the datasets MLE failed to provide a result due to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples); differences in the modeling results were observed for small datasets affected by systematic biases.
A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
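Of the three estimators the tool implements, KDE is the simplest to sketch. The tool itself is written in R; the following is a language-neutral pure-Python illustration of a Gaussian kernel density estimate, with data and bandwidth invented for the example.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Kernel density estimate with a Gaussian kernel; returns the
    estimated density as a callable function of x."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density
```

In practice landslide areas span several orders of magnitude, so density estimation is usually performed on log-transformed areas; the rollover discussed above then appears as a mode in the estimated density below which small landslides are under-sampled.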
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed to be normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
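The single-integral form with trapezoidal evaluation can be sketched as follows: P(slip) = P(available < required) = ∫ f_req(x) F_avail(x) dx, where f_req is the density of the required friction and F_avail the distribution function of the available friction. Purely for illustration (the paper emphasizes that normality cannot be assumed), both frictions are taken as normal here so the result can be checked against the closed form Φ((μ_req − μ_avail)/√(σ_req² + σ_avail²)); all numerical values are invented.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def slip_probability(mu_req, sd_req, mu_avail, sd_avail, n=20000):
    """P(slip) = P(available < required), written as the single integral
    of f_req(x) * F_avail(x) and evaluated with the trapezoidal rule."""
    lo = min(mu_req, mu_avail) - 8.0 * max(sd_req, sd_avail)
    hi = max(mu_req, mu_avail) + 8.0 * max(sd_req, sd_avail)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal end-point weights
        total += w * normal_pdf(x, mu_req, sd_req) * normal_cdf(x, mu_avail, sd_avail)
    return total * h
```

Replacing normal_pdf and normal_cdf with any other density and distribution function gives the general method: the integrand only needs point evaluations, which is what makes the trapezoidal form convenient for arbitrary friction distributions.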
Hypotheses to explain the origin of species in Amazonia.
Haffer, J
2008-11-01
The main hypotheses proposed to explain barrier formation separating populations and causing the differentiation of species in Amazonia during the course of geological history are based on different factors, as follows: (1) Changes in the distribution of land and sea or in the landscape due to tectonic movements or sea level fluctuations (Paleogeography hypothesis), (2) the barrier effect of Amazonian rivers (River hypothesis), (3) a combination of the barrier effect of broad rivers and vegetational changes in northern and southern Amazonia (River-refuge hypothesis), (4) the isolation of humid rainforest blocks near areas of surface relief in the periphery of Amazonia separated by dry forests, savannas and other intermediate vegetation types during dry climatic periods of the Tertiary and Quaternary (Refuge hypothesis), (5) changes in canopy-density due to climatic reversals (Canopy-density hypothesis), (6) the isolation and speciation of animal populations in small montane habitat pockets around Amazonia due to climatic fluctuations without major vegetational changes (Museum hypothesis), (7) competitive species interactions and local species isolations in peripheral regions of Amazonia due to invasion and counterinvasion during cold/warm periods of the Pleistocene (Disturbance-vicariance hypothesis) and (8) parapatric speciation across steep environmental gradients without separation of the respective populations (Gradient hypothesis). Several of these hypotheses probably are relevant to a different degree for the speciation processes in different faunal groups or during different geological periods. The basic paleogeography model refers mainly to faunal differentiation during the Tertiary and in combination with the Refuge hypothesis.
Milankovitch cycles leading to global climatic-vegetational changes affected the biomes of the world not only during the Pleistocene but also during the Tertiary and earlier geological periods.
New geoscientific evidence for the effect of dry climatic periods in Amazonia supports the predictions of the Refuge hypothesis. The disturbance-vicariance hypothesis refers to the presumed effect of cold/warm climatic phases of the Pleistocene only and is of limited general relevance because most extant species originated earlier and probably through paleogeographic changes and the formation of ecological refuges during the Tertiary.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. We prove that this function can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of the paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
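The finite-sum representation is the paper's own result, but the pdf itself is easy to experiment with numerically. The sketch below (assuming SciPy is available) checks the standard Poisson-mixture series for the non-central χ² pdf against `scipy.stats.ncx2`; the truncation depth of 80 terms and the test point are arbitrary choices.

```python
import math
from scipy.stats import chi2, ncx2

def ncx2_pdf_series(x, df, nc, terms=80):
    """Non-central chi-squared pdf as a Poisson-weighted mixture of
    central chi-squared pdfs, truncated at `terms` terms."""
    lam = nc / 2.0
    return sum(
        math.exp(-lam) * lam**j / math.factorial(j) * chi2.pdf(x, df + 2 * j)
        for j in range(terms)
    )

x, df, nc = 3.0, 4, 2.5
print(ncx2_pdf_series(x, df, nc), ncx2.pdf(x, df, nc))
```

For a non-centrality parameter this small, the truncated series agrees with the closed-form SciPy implementation to near machine precision.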
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitats. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success.
We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
Size and DNA distributions of electrophoretically separated cultured human kidney cells
NASA Technical Reports Server (NTRS)
Kunze, M. E.; Plank, L. D.; Todd, P. W.
1985-01-01
Electrophoretic purification of cultured cells according to function presumes that the size or cell-cycle phase of a cell is not an overriding determinant of its electrophoretic velocity in an electrophoretic separator. The size distributions and DNA distributions of fractions of cells purified by density gradient electrophoresis were determined. No systematic dependence of electrophoretic migration upward in a density gradient column upon either size or DNA content was found. It was found that human leukemia cell populations, which are more uniform in function and are found in all phases of the cell cycle during exponential growth, separated on a vertical density gradient electrophoresis column according to their size, which is shown to be strictly cell-cycle dependent.
Atmosphere and ionosphere of Venus from the Mariner V S-band radio occultation measurement.
Kliore, A; Levy, G S; Cain, D L; Fjeldbo, G; Rasool, S I
1967-12-29
Measurements of the frequency, phase, and amplitude of the S-band radio signal of Mariner V as it passed behind Venus were used to obtain the effects of refraction in its atmosphere and ionosphere. Profiles of refractivity, temperature, pressure, and density in the neutral atmosphere, as well as electron density in the daytime ionosphere, are presented. A constant scale height was observed above the tropopause, and the temperature increased with an approximately linear lapse rate below the tropopause to the level at which signal was lost, presumably because heavy defocusing attenuation occurred as critical refraction was approached. An ionosphere having at least two maxima was observed at only 85 kilometers above the tropopause.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
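The very low detection probabilities reported for sparse infestations follow directly from binomial sampling. As a hedged illustration (the 1% per-tree detection probability is an invented round number, not a value from the study), the chance of flagging an infestation at least once among n independently sampled trees is:

```python
def detection_probability(p_tree, n_trees):
    """Probability that at least one of n sampled trees is scored as
    infested, if each sampled tree is independently detected as
    infested with probability p_tree."""
    return 1.0 - (1.0 - p_tree) ** n_trees

# Illustrative: a 1% per-tree detection chance at low beetle density.
for n in (10, 100, 300):
    print(n, round(detection_probability(0.01, n), 3))
```

Even hundreds of sampled trees leave a substantial chance of missing a low-density population, which is consistent with the study's conclusion about nongirdled-tree surveys.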
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
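The matrix (classical) part of the result can be sketched numerically with a Sinkhorn-style alternating iteration. This is an illustrative scheme under the assumption of an entrywise-positive matrix and positive probability vectors; it is not necessarily the construction used in the paper.

```python
import numpy as np

def scale_to_stochastic(A, p, q, iters=500):
    """Alternating diagonal scaling: find positive vectors u, v such that
    B = diag(u) @ A @ diag(v) is column-stochastic and maps p to q.
    A must be entrywise positive; p, q are positive probability vectors."""
    u = np.ones(len(q))
    for _ in range(iters):
        v = 1.0 / (u @ A)        # enforce column sums equal to 1
        u = q / (A @ (v * p))    # enforce B @ p = q
    return np.diag(u) @ A @ np.diag(v)

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 0.1     # entrywise positive test matrix
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])
B = scale_to_stochastic(A, p, q)
print(B.sum(axis=0))             # columns sum to ~1
print(B @ p)                     # maps p to ~q
```

Each half-step enforces one of the two constraints exactly; for positive matrices the alternation converges geometrically (the existence and uniqueness statements are what the paper proves via Hilbert-metric and fixed-point arguments).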
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
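A quick way to see what testing for a "close to lognormal" PDF involves: with synthetic densities drawn from a lognormal (the median and log-dispersion below are illustrative stand-ins, not the paper's fitted values), the log-transformed values should look Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for average line-of-sight densities (cm^-3):
# lognormal with illustrative median 0.03 and log-scale sigma 0.6.
n = rng.lognormal(mean=np.log(0.03), sigma=0.6, size=5000)

# A lognormal density PDF means log(n) is Gaussian:
stat, pvalue = stats.normaltest(np.log(n))
sigma_hat = np.log(n).std(ddof=1)
print(f"normality p-value: {pvalue:.3f}, fitted log-sigma: {sigma_hat:.3f}")
```

In practice one would apply the same normality check to the observed dispersion-measure and column-density samples, split by Galactic latitude as in the abstract.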
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT soundings were modeled as a rational probability density curve by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings under a normal distribution and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
Grauel, M. Katharina; Reddy-Alla, Suneel; Willmes, Claudia G.; Brockmann, Marisa M.; Trimbuch, Thorsten; Rosenmund, Tanja; Pangalos, Maria; Vardar, Gülçin; Stumpf, Alexander; Walter, Alexander M.; Rost, Benjamin R.; Eickholt, Britta J.; Haucke, Volker; Schmitz, Dietmar; Sigrist, Stephan J.; Rosenmund, Christian
2016-01-01
The tight spatial coupling of synaptic vesicles and voltage-gated Ca2+ channels (CaVs) ensures efficient action potential-triggered neurotransmitter release from presynaptic active zones (AZs). Rab-interacting molecule-binding proteins (RIM-BPs) interact with Ca2+ channels and via RIM with other components of the release machinery. Although human RIM-BPs have been implicated in autism spectrum disorders, little is known about the role of mammalian RIM-BPs in synaptic transmission. We investigated RIM-BP2–deficient murine hippocampal neurons in cultures and slices. Short-term facilitation is significantly enhanced in both model systems. Detailed analysis in culture revealed a reduction in initial release probability, which presumably underlies the increased short-term facilitation. Superresolution microscopy revealed an impairment in CaV2.1 clustering at AZs, which likely alters Ca2+ nanodomains at release sites and thereby affects release probability. Additional deletion of RIM-BP1 does not exacerbate the phenotype, indicating that RIM-BP2 is the dominating RIM-BP isoform at these synapses. PMID:27671655
A statistical analysis of flank eruptions on Etna volcano
NASA Astrophysics Data System (ADS)
Mulargia, Francesco; Tinti, Stefano; Boschi, Enzo
1985-02-01
A singularly complete record exists for the eruptive activity of Etna volcano. The time series of occurrence of flank eruptions in the period 1600-1980, in which the record is presumably complete, is found to follow a stationary Poisson process. A revision of the available data shows that eruption durations are rather well correlated with the estimates of the volume of lava flows. This implies that the magnitude of an eruption can be defined directly by its duration. Extreme value statistics are then applied to the time series, using duration as a dependent variable. The probability of occurrence of a very long (300 days) eruption is greater than 50% only in time intervals of the order of 50 years. The correlation found between duration and total output also allows estimation of the probability of occurrence of a major event which exceeds a given duration and total flow of lava. The composite probabilities do not differ considerably from the pure ones. Paralleling a well established application to seismic events, extreme value theory can be profitably used in volcanic risk estimates, provided that appropriate account is also taken of all other variables.
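For a stationary Poisson process of eruption onsets, the quoted 50%-in-roughly-50-years figure can be reproduced with a calculation simpler than the extreme-value analysis used in the paper: thin the process by the fraction of eruptions that are "long". The annual rate and the long-duration fraction below are assumed round numbers for illustration, not the paper's fitted values.

```python
import math

def prob_long_eruption(rate_per_year, p_long, years):
    """Probability of at least one 'long' eruption in a window of `years`,
    for a stationary Poisson process of eruption onsets with the given
    annual rate, where each eruption is long with probability p_long."""
    return 1.0 - math.exp(-rate_per_year * p_long * years)

# Illustrative (assumed) numbers: ~0.36 flank eruptions per year over
# 1600-1980, and a 4% chance an eruption lasts 300 days or more.
for t in (10, 50, 100):
    print(t, round(prob_long_eruption(0.36, 0.04, t), 3))
```

With these assumed inputs the probability first exceeds 50% for windows of about half a century, matching the order of magnitude stated in the abstract.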
Herlyn, Holger; Taraschewski, Horst
2017-04-01
Different conceptions exist regarding structure, function, and evolution of the muscles that move the acanthocephalan presoma, including the proboscis, i.e., the usually hooked hold-fast anchoring these endoparasites to the intestinal wall of their vertebrate definitive hosts. In order to clarify the unresolved issues, we carried out a light microscopic analysis of series of semi-thin sections and whole mounts representing the three traditional acanthocephalan classes: Archiacanthocephala (Macracanthorhynchus hirudinaceus), Eoacanthocephala (Paratenuisentis ambiguus, Tenuisentis niloticus), and Palaeacanthocephala (Acanthocephalus anguillae, Echinorhynchus truttae, Pomphorhynchus laevis, Corynosoma sp.). Combining our data with published light, transmission electron, and scanning electron microscopic data, we demonstrate that receptacle protrusor and proboscis receptacle in Archi- and Eoacanthocephala are homologous to the outer and inner wall of the proboscis receptacle in Palaeacanthocephala. Besides the proboscis receptacle and a "surrounding muscle," the last common ancestor of Acanthocephala presumably possessed a proboscis retractor, receptacle retractor, neck retractor (continuous with lemnisci compressors), and retinacula. These muscles most probably evolved in the acanthocephalan stem line. Moreover, the last common ancestor of Acanthocephala presumably possessed only a single layer of muscular cords under the presomal tegument while the metasomal body wall had circular and longitudinal strands. Two lateral receptacle flexors (also lateral receptacle protrusors), an apical muscle plate (surrounding one or two apical sensory organs), a midventral longitudinal muscle, and the differentiation of longitudinal body wall musculature at the base of the proboscis probably emerged within Archiacanthocephala. All muscles have a common organization principle: a peripheral layer of contractile filaments encloses the cytoplasm.
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
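A minimal Monte Carlo sketch of this model: fractional Gaussian noise generated by Cholesky factorization of its covariance (exact but O(N³); large-scale simulations like the paper's would use a faster generator), with the wall imposed by reflecting each position at the origin. The Hurst exponent, walker count, and trajectory length below are arbitrary.

```python
import numpy as np

def reflected_fbm(hurst, nsteps, nwalkers, rng):
    """Fractional Brownian motion with a reflecting wall at x = 0.
    Correlated fractional Gaussian noise is drawn via a Cholesky factor
    of its autocovariance; the wall reflects each crossing of zero."""
    k = np.arange(nsteps)
    # Autocovariance of unit-variance fractional Gaussian noise.
    gamma = 0.5 * ((k + 1.0) ** (2 * hurst)
                   - 2.0 * k ** (2 * hurst)
                   + np.abs(k - 1.0) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(nsteps))
    noise = rng.standard_normal((nwalkers, nsteps)) @ L.T
    x = np.zeros(nwalkers)
    traj = np.empty((nwalkers, nsteps))
    for t in range(nsteps):
        x = np.abs(x + noise[:, t])   # reflecting wall at the origin
        traj[:, t] = x
    return traj

rng = np.random.default_rng(1)
traj = reflected_fbm(hurst=0.75, nsteps=256, nwalkers=500, rng=rng)
print(traj.min(), (traj ** 2).mean(axis=0)[-1])
```

For this superdiffusive choice (H = 0.75, so α = 2H = 1.5) the mean-square displacement keeps growing while all positions stay non-negative; a histogram of late-time positions would show the accumulation at the barrier described in the abstract.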
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
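The modified Rician intensity distribution discussed above has the closed form p(I) = (1/Ic) exp(-(I + Is)/Ic) I0(2√(I·Is)/Ic), with Is the deterministic (point-source) intensity and Ic the speckle intensity. A small numerical sanity check, using the exponentially scaled Bessel function for stability (the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import i0e
from scipy.integrate import trapezoid

def modified_rician(I, Is, Ic):
    """Modified Rician pdf for speckle intensity I, with deterministic
    component Is and speckle component Ic. i0e(z) = exp(-z) * I0(z),
    so the exponentials are combined to avoid overflow."""
    z = 2.0 * np.sqrt(I * Is) / Ic
    return (1.0 / Ic) * np.exp(-(I + Is) / Ic + z) * i0e(z)

I = np.linspace(0.0, 200.0, 200001)
pdf = modified_rician(I, Is=5.0, Ic=2.0)
print(trapezoid(pdf, I))  # integrates to ~1 over the support
```

Plotting this pdf against a gamma density with matched moments makes the morphological difference exploited by the paper's discriminators visible.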
Large eddy simulation of forced ignition of an annular bluff-body burner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subramanian, V.; Domingo, P.; Vervisch, L.
2010-03-15
The optimization of the ignition process is a crucial issue in the design of many combustion systems. Large eddy simulation (LES) of a conical-shaped bluff-body turbulent nonpremixed burner has been performed to study the impact of spark location on ignition success. This burner was experimentally investigated by Ahmed et al. [Combust. Flame 151 (2007) 366-385]. The present work focuses on the case without swirl, for which detailed measurements are available. First, cold-flow measurements of velocities and mixture fractions are compared with their LES counterparts, to assess the prediction capabilities of the simulations in terms of flow and turbulent mixing. Time histories of velocities and mixture fractions are recorded at selected spots, to probe the resolved probability density function (pdf) of flow variables, in an attempt to reproduce, from the knowledge of LES-resolved instantaneous flow conditions, the experimentally observed reasons for success or failure of spark ignition. A flammability map is also constructed from the resolved mixture fraction pdf and compared with its experimental counterpart. LES of forced ignition is then performed using flamelet fully detailed tabulated chemistry combined with presumed pdfs. Various scenarios of flame kernel development are analyzed and correlated with typical flow conditions observed in this burner. The correlations between velocities and mixture fraction values at the sparking time and the success or failure of ignition are then further discussed and analyzed.
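The presumed-pdf construction behind such a flammability map can be sketched as follows: given a resolved mean and variance of mixture fraction, assume a beta pdf and integrate it between flammability limits. All numbers below are hypothetical placeholders, not values from the burner study.

```python
from scipy.stats import beta as beta_dist

def beta_params(mean, var):
    """Map a mean and variance on (0, 1) to beta shape parameters.
    Requires var < mean * (1 - mean)."""
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

def flammable_probability(z_mean, z_var, z_lean, z_rich):
    """Probability that the subgrid mixture fraction lies inside the
    flammability limits, under a presumed beta pdf."""
    a, b = beta_params(z_mean, z_var)
    return beta_dist.cdf(z_rich, a, b) - beta_dist.cdf(z_lean, a, b)

# Hypothetical SGS statistics and flammability limits:
print(flammable_probability(z_mean=0.06, z_var=0.002, z_lean=0.03, z_rich=0.10))
```

Evaluating this probability at every cell of a resolved mean/variance field yields a flammability map of the kind compared against experiment in the abstract; as the subgrid variance shrinks, the probability tends to 0 or 1 according to whether the mean sits inside the limits.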
Emerging tropical diseases in Australia. Part 3. Australian bat lyssavirus.
Moore, P R; Jansen, C C; Graham, G C; Smith, I L; Craig, S B
2010-12-01
Since its discovery in a juvenile black flying fox (Pteropus alecto) in 1996, Australian bat lyssavirus (ABLV) has become the cause of a potentially important emerging disease for health authorities in Australia, with two human deaths (one in 1996 and one in 1998) attributed to the virus in the north-eastern state of Queensland. In Australia, the virus has been isolated from all four species of flying fox found on the mainland (i.e. P. alecto, P. scapulatus, P. poliocephalus and P. conspicillatus) as well as a single species of insectivorous bat (Saccolaimus flaviventris). Australian bat lyssavirus belongs to the Lyssavirus genus and is closely related, genetically, to the type strain of Rabies virus (RABV). Clinically, patients infected with ABLV have displayed the 'classical' symptoms of rabies and a similar disease course. This similarity has led to the belief that the infection and dissemination of ABLV in the body follows the same pathways as those followed by RABV. Following the two ABLV-related deaths in Queensland, protocols based on the World Health Organization's guidelines for RABV prophylaxis were implemented and, presumably in consequence, no human infection with ABLV has been recorded since 1998. ABLV will, however, probably always have an important part to play in the health of Australians as the density of the human population in Australia and, consequently, the level of interaction between humans and flying foxes increase.
A comparison of LLDPE-based nanocomposites containing multi-walled carbon nanotubes and graphene
NASA Astrophysics Data System (ADS)
Vasileiou, Alexandros; Docoslis, Aristides; Kontopoulou, Marianna
2015-05-01
Composites of linear-low density polyethylene (LLDPE) with multi-walled carbon nanotubes (MWCNT) and thermally reduced graphene (TRGO) were produced by melt compounding. The composites were compatibilized by grafting aromatic pyridine groups onto the LLDPE backbone. The aromatic moieties established non-covalent π-π interactions with the carbon nanostructures, thus allowing for efficient dispersion without compromising their electrical properties. By using identical matrices, it was possible to investigate the effects of filler geometry on the electrical, mechanical and rheological properties of the composites. The 1-D nature and smaller surface area of the MWCNT facilitated their dispersion within the polymer matrix, whereas the graphene agglomerates appeared to break up through an erosion mechanism. The resulting mixture of aggregates and individual graphene platelets favored lower electrical and rheological percolation thresholds. However, the maximum electrical conductivity achieved in the TRGO/LLDPE composites was lower by about an order of magnitude compared to the MWCNT/LLDPE composites, probably due to residual oxygen in the graphene's structure. TRGO-based composites presented higher moduli at the same filler loadings, while elongations at break were comparable. All composites exhibited time-dependent rheological properties, indicative of their tendency to aggregate. A more pronounced increase in viscoelastic properties was noted in the composites containing TRGO, presumably due to the higher surface area of the graphene platelets and the presence of larger aggregates.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, M. B.; Lafferty, K. D.; van Oosterhout, C.; Cable, J.
2011-01-01
Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location.
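For a comparison of that kind, the KL divergence has a closed form once the filtered terminal-location density is moment-matched to a Gaussian. The sketch below uses synthetic stand-in samples; all names and numbers are invented for illustration.

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence KL(N(mu0, var0) || N(mu1, var1)) for 1-D Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Moment-match hypothetical posterior samples of the terminal location,
# then compare against normal densities centered on candidate termini.
rng = np.random.default_rng(7)
samples = rng.normal(3.0, 1.2, size=10000)   # stand-in filtered samples
mu0, var0 = samples.mean(), samples.var()
print(kl_gauss(mu0, var0, 3.0, 1.2 ** 2))    # near 0: densities agree
print(kl_gauss(mu0, var0, 8.0, 1.2 ** 2))    # large: densities disagree
```

A small divergence against the normal around the true terminus indicates the filter has localized the UAV; a large one indicates successful counter-detection mitigation.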
Korman, Josh; Yard, Mike
2017-01-01
Published in Fisheries Research. Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to a 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts, and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
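The core rule of such a simulation can be sketched in one dimension. This is a heavily simplified toy, not the study's model (which couples Brownian-dynamics ion transport to deposition in 3-D); the lattice size and both attachment probabilities below are invented.

```python
import random

def kmc_deposit(n_sites=200, n_attempts=5000, p_substrate=0.01, p_metal=0.5,
                seed=1):
    """1-D toy KMC: deposition attempts succeed with different probabilities
    on bare substrate (nucleation) vs. on existing metal (growth)."""
    random.seed(seed)
    height = [0] * n_sites
    for _ in range(n_attempts):
        i = random.randrange(n_sites)
        # Metal-on-metal attachment is far more likely than nucleation.
        p = p_substrate if height[i] == 0 else p_metal
        if random.random() < p:
            height[i] += 1
    # Count "islands" as contiguous runs of occupied sites.
    return sum(1 for i in range(n_sites)
               if height[i] > 0 and (i == 0 or height[i - 1] == 0))

print(kmc_deposit())
```

As in the paper's approach, the island density is an emergent outcome of the attachment probabilities rather than an input assumption.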
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
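The single-sensor Monte Carlo idea can be sketched as follows. The distributions, noise level, and logistic detector curve are placeholders, not the paper's calibrated inputs (which draw source level, beam pattern, and depth from the literature and use propagation modeling for transmission loss).

```python
import math
import random

def detection_probability(n=10_000, noise_level=60.0, seed=0):
    """Monte Carlo estimate of P(detect a click) for a single fixed sensor."""
    random.seed(seed)
    detected = 0
    for _ in range(n):
        source_level = random.gauss(200.0, 10.0)         # dB, illustrative
        transmission_loss = random.uniform(60.0, 160.0)  # dB, illustrative
        # Passive sonar equation: SNR = SL - TL - NL.
        snr = source_level - transmission_loss - noise_level
        # Hypothetical detector characterization: P(detect | SNR).
        p_detect = 1.0 / (1.0 + math.exp(-(snr - 10.0) / 3.0))
        if random.random() < p_detect:
            detected += 1
    return detected / n

p = detection_probability()
print(round(p, 3))
```

The averaged detection probability from such a simulation, combined with call rate and false-positive rate, feeds the standard cue-counting density estimator.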
A comparison between block and smooth modeling in finite element simulations of tDCS
Indahlastari, Aprinda; Sadleir, Rosalind J.
2018-01-01
Current density distributions in five selected structures, namely, anterior superior temporal gyrus (ASTG), hippocampus (HIP), inferior frontal gyrus (IFG), occipital lobe (OCC) and pre-central gyrus (PRC) were investigated as part of a comparison between electrostatic finite element models constructed directly from MRI-resolution data (block models), and smoothed tetrahedral finite element models (smooth models). Three electrode configurations were applied, mimicking different tDCS therapies. Smooth model simulations were found to require three times longer to complete. The percentage differences between mean and median current densities of each model type in arbitrarily chosen brain structures ranged from −33.33% to 48.08%. No clear relationship was found between structure volumes and current density differences between the two model types. Tissue regions near the electrodes showed the smallest percentage differences between block and smooth models. Therefore, block models may be adequate to predict current density values in cortical regions presumed targeted by tDCS. PMID:26737023
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
Critically Ill Children During the 2009–2010 Influenza Pandemic in the United States
Vaughn, Frances; Sullivan, Ryan; Rubinson, Lewis; Thompson, B. Taylor; Yoon, Grace; Smoot, Elizabeth; Rice, Todd W.; Loftis, Laura L.; Helfaer, Mark; Doctor, Allan; Paden, Matthew; Flori, Heidi; Babbitt, Christopher; Graciano, Ana Lia; Gedeit, Rainer; Sanders, Ronald C.; Giuliano, John S.; Zimmerman, Jerry; Uyeki, Timothy M.
2011-01-01
BACKGROUND: The 2009 pandemic influenza A (H1N1) (pH1N1) virus continues to circulate worldwide. Determining the roles of chronic conditions and bacterial coinfection in mortality is difficult because of the limited data for children with pH1N1-related critical illness. METHODS: We identified children (<21 years old) with confirmed or probable pH1N1 admitted to 35 US PICUs from April 15, 2009, through April 15, 2010. We collected data on demographics, baseline health, laboratory results, treatments, and outcomes. RESULTS: Of 838 children with pH1N1 admitted to a PICU, the median age was 6 years, 58% were male, 70% had ≥1 chronic health condition, and 88.2% received oseltamivir (5.8% started before PICU admission). Most patients had respiratory failure with 564 (67.3%) receiving mechanical ventilation; 162 (19.3%) received vasopressors, and 75 (8.9%) died. Overall, 71 (8.5%) of the patients had a presumed diagnosis of early (within 72 hours after PICU admission) Staphylococcus aureus coinfection of the lung with 48% methicillin-resistant S aureus (MRSA). In multivariable analyses, preexisting neurologic conditions or immunosuppression, encephalitis (1.7% of cases), myocarditis (1.4% of cases), early presumed MRSA lung coinfection, and female gender were mortality risk factors. Among 251 previously healthy children, only early presumed MRSA coinfection of the lung (relative risk: 8 [95% confidence interval: 3.1–20.6]; P < .0001) remained a mortality risk factor. CONCLUSIONS: Children with preexisting neurologic conditions and immune compromise were at increased risk of pH1N1-associated death after PICU admission. Secondary complications of pH1N1, including myocarditis, encephalitis, and clinical diagnosis of early presumed MRSA coinfection of the lung, were mortality risk factors. PMID:22065262
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems, statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on a time span ta between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay ta. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe different behavior of the MSD when ta ≪ t and when ta ≫ t.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials.  We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured.
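The tail-probability calculation behind figures like "10% of treatments have large effects" can be sketched with a weighted Gaussian kernel density. All effect sizes, weights, the bandwidth, and the threshold below are hypothetical; the study's adaptive-bandwidth weighting is not reproduced.

```python
import math

def weighted_kde_tail(effects, weights, threshold, bandwidth=0.2):
    """P(effect > threshold) under a weighted Gaussian KDE of observed effects."""
    total = sum(weights)
    tail = 0.0
    for x, w in zip(effects, weights):
        # Each kernel N(x, h^2) contributes w * (1 - Phi((t - x)/h)).
        z = (threshold - x) / bandwidth
        tail += w * 0.5 * math.erfc(z / math.sqrt(2.0))
    return tail / total

effects = [0.1, 0.2, 0.05, 0.5, 0.8, 0.15]  # hypothetical effect sizes
weights = [1.0, 1.0, 1.0, 0.5, 0.5, 1.0]    # hypothetical trial weights
p_large = weighted_kde_tail(effects, weights, threshold=0.6)
print(round(p_large, 3))
```

Raising the threshold shrinks the tail mass, which is why "very large" effects are rarer than "large" ones in the reported benchmarks.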
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from fibrotic liver, has been proposed. In this paper, the effect of non-speckle echo signals on tissue characteristics estimated with the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated once non-speckle signals were removed.
Laboratory-Tutorial Activities for Teaching Probability
ERIC Educational Resources Information Center
Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.
2006-01-01
We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…
Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA
Yarra, Allyson N.; Magoulick, Daniel D.
2018-01-01
Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than in 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
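For reference, the Życzkowski–Sommers result being rederived gives, for the top N × N sub-block of a Haar unitary from U(N + n) with n ≥ N, an eigenvalue joint density supported on the unit disk of (up to normalization, as a sketch of the cited result) the form

```latex
P(z_1,\dots,z_N) \;\propto\; \prod_{1 \le j < k \le N} |z_j - z_k|^2 \;\prod_{j=1}^{N} \bigl(1 - |z_j|^2\bigr)^{n-1}, \qquad |z_j| < 1 .
```

The Vandermonde factor is the usual eigenvalue repulsion, while the (1 − |z|²)^{n−1} factor pushes eigenvalues inside the disk, which is what connects this ensemble to the one-component plasma on the pseudosphere.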
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
Committor of elementary reactions on multistate systems
NASA Astrophysics Data System (ADS)
Király, Péter; Kiss, Dóra Judit; Tóth, Gergely
2018-04-01
In our study, we extend the committor concept on multi-minima systems, where more than one reaction may proceed, but the feasible data evaluation needs the projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize more elementary reaction committor functions or probability densities of reactive trajectories on a single plot that helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well on the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
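The flavor of the approach, a prior constrained by bounds and an expected value, updated by measured data, with the estimate taken as the mean of the resulting density, can be sketched for a single unknown parameter. This is not the MRE algorithm itself: the exponential prior (the maximum-entropy form under a mean constraint), the Gaussian likelihood, and all numbers are illustrative.

```python
import math

def posterior_mean(d, g=2.0, lower=0.0, upper=10.0, prior_mean=3.0,
                   sigma=0.5, n_grid=2001):
    """Grid-based posterior mean for a 1-D linear inverse problem d = g*m + noise."""
    beta = 1.0 / prior_mean              # rate of the exponential-family prior
    num = den = 0.0
    for i in range(n_grid):
        m = lower + (upper - lower) * i / (n_grid - 1)
        prior = math.exp(-beta * m)      # prior on [lower, upper] with mean ~3
        like = math.exp(-0.5 * ((d - g * m) / sigma) ** 2)
        w = prior * like
        num += m * w
        den += w
    return num / den

m_hat = posterior_mean(d=8.0)  # data generated by m near 4 (d = g*m = 8)
print(round(m_hat, 2))
```

As in the MRE method, the point estimate is the expected value of the derived density, so the prior bounds and prior mean pull the answer away from the pure least-squares solution when the data are weak.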
Reptile trade and the risk of exotic tick introductions into southern South American countries.
González-Acuña, D; Beldoménico, P M; Venzal, J M; Fabry, M; Keirans, J E; Guglielmone, A A
2005-01-01
Ticks exotic to the Neotropical region were found on Python regius imported into Argentina and Chile. All ticks (7 males and 3 females) were classified as Amblyomma latum Koch, 1844 (= Aponomma latum). Additionally, four lots comprising 18 males of the Argentinean tortoise tick, Amblyomma argentinae Neumann, 1904, were found on a terrestrial tortoise, Chelonoidis chilensis, and on three terrestrial tortoises (probably C. chilensis) imported to Uruguay, presumably from Argentina. These findings alert us to the risk of expanding the distribution of reptile parasites and their diseases into regions previously free of these parasites.
Free-living pathogens: life-history constraints and strain competition.
Caraco, Thomas; Wang, Ing-Nang
2008-02-07
Many pathogen life histories include a free-living stage, often with anatomical and physiological adaptations promoting persistence outside of host tissues. More durable particles presumably require that the pathogen metabolize more resources per particle. Therefore, we hypothesize functional dependencies, pleiotropic constraints, between the rate at which free-living particles decay outside of host tissues and other pathogen traits, including virulence, the probability of infecting a host upon contact, and pathogen reproduction within host tissues. Assuming that pathogen strains compete for hosts preemptively, we find patterns in trait dependencies predicting whether or not strain competition favors a highly persistent free-living stage.
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods.
Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
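The reported figure of 13 visits follows directly from the averaged detection probability: absence can be asserted with 95% confidence after n consecutive non-detections once (1 − p)^n drops below 0.05.

```python
import math

# Per-visit detection probability from the study above.
p = 0.207

# Smallest n with (1 - p)**n <= 0.05.
n = math.ceil(math.log(0.05) / math.log(1.0 - p))
print(n)  # → 13, matching the reported mean number of visits
```

The 95% CI of 9-20 visits then corresponds to propagating the standard error of p (0.033) through the same formula.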
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
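The per-voxel fusion step can be sketched in the Gaussian special case: if each conditional density (intensity-based and atlas-based) is approximated as a Gaussian, their product is again Gaussian with a precision-weighted mean. The numbers below are hypothetical, and the paper's densities need not be Gaussian.

```python
def fuse(mu_intensity, var_intensity, mu_atlas, var_atlas):
    """Posterior mean/variance from the product of two Gaussian densities."""
    w1, w2 = 1.0 / var_intensity, 1.0 / var_atlas   # precisions
    mean = (w1 * mu_intensity + w2 * mu_atlas) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# Intensity-based estimate says ~300 HU but is uncertain; the atlas location
# says ~800 HU with a tighter spread (toy numbers).
mean, var = fuse(300.0, 200.0 ** 2, 800.0, 100.0 ** 2)
print(round(mean))  # → 700, pulled toward the more precise (atlas) estimate
```

This is why the combined estimator outperforms the intensity-only approach: wherever one information source is ambiguous (e.g. bone vs. air on T1), the other dominates the posterior.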
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. 
The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
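The sampling-density effect described above can be illustrated with a toy Monte Carlo (not the authors' simulation; the patch fraction, densities, and the >125%/<75% thresholds below are chosen for illustration):

```python
import random

def survey_bias(n_samples, n_surveys=2000, seed=1):
    """Monte Carlo sketch: sample a patchy abundance field and count
    'survey availability events' (index >125% or <75% of truth)."""
    rng = random.Random(seed)
    # Hypothetical patchy domain: most cells near-empty, a few dense patches.
    domain = [50 if rng.random() < 0.05 else 1 for _ in range(10000)]
    truth = sum(domain) / len(domain)
    low = high = 0
    for _ in range(n_surveys):
        index = sum(rng.choice(domain) for _ in range(n_samples)) / n_samples
        if index < 0.75 * truth:
            low += 1
        elif index > 1.25 * truth:
            high += 1
    return low / n_surveys, high / n_surveys

p_low_small, p_high_small = survey_bias(4)    # few samples
p_low_large, p_high_large = survey_bias(40)   # many samples
```

With few samples, most surveys miss every patch and come in biased low, reproducing the differential bias described above; adding samples shrinks the total event probability.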
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
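The Bayesian alignment step can be sketched as follows (a toy illustration with a reduced four-letter alphabet and made-up probabilities, not the RESOLVE implementation):

```python
import math

# A 'probability matrix' gives, for each main-chain position, an estimated
# probability of each residue type. We score every alignment of a 3-residue
# segment against the sequence and keep the highest-posterior alignment.
sequence = "CADECA"

# Hypothetical P(residue | density) for a 3-position segment; rows sum to 1.
prob_matrix = [
    {"A": 0.7, "C": 0.1, "D": 0.1, "E": 0.1},   # position 1: looks like Ala
    {"A": 0.1, "C": 0.1, "D": 0.7, "E": 0.1},   # position 2: looks like Asp
    {"A": 0.1, "C": 0.1, "D": 0.1, "E": 0.7},   # position 3: looks like Glu
]

def alignment_posterior(prob_matrix, sequence):
    """Posterior over alignment offsets under a uniform prior (Bayes' rule)."""
    n, m = len(prob_matrix), len(sequence)
    loglikes = []
    for off in range(m - n + 1):
        ll = sum(math.log(prob_matrix[i][sequence[off + i]]) for i in range(n))
        loglikes.append(ll)
    z = max(loglikes)                     # stabilize the exponentials
    weights = [math.exp(ll - z) for ll in loglikes]
    total = sum(weights)
    return [w / total for w in weights]

post = alignment_posterior(prob_matrix, sequence)
best = max(range(len(post)), key=post.__getitem__)   # offset 1 -> "ADE"
```

A confidence threshold on the posterior (e.g. keep only matches above 0.9) corresponds to the "high-confidence matches are kept" step.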
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing the reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions fα,β(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities gα,β(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
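A quick numerical check of the limiting behaviour of the Havriliak-Negami frequency-domain pattern (parameter values chosen purely for illustration):

```python
# Havriliak-Negami pattern phi*(omega) = 1 / (1 + (i*omega*tau0)**alpha)**beta,
# with tau0 > 0. At low frequency it tends to 1; at high frequency it decays.
def hn(omega, tau0=1.0, alpha=0.5, beta=2.0):
    return 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

lo = abs(hn(1e-8))   # low-frequency limit -> 1
hi = abs(hn(1e8))    # high-frequency tail  -> 0, like omega**(-alpha*beta)
```

Setting β = 1 recovers the Cole-Cole pattern and α = 1 the Cole-Davidson pattern as special cases.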
The structure of particle cloud premixed flames
NASA Technical Reports Server (NTRS)
Seshadri, K.; Berlad, A. L.
1992-01-01
The structure of premixed flames propagating in combustible systems containing uniformly distributed volatile fuel particles in an oxidizing gas mixture is analyzed. This analysis is motivated by experiments conducted at NASA Lewis Research Center on the structure of flames propagating in combustible mixtures of lycopodium particles and air. Several interesting modes of flame propagation were observed in these experiments depending on the number density and the initial size of the fuel particle. The experimental results show that steady flame propagation occurs even if the initial equivalence ratio of the combustible mixture based on the gaseous fuel available in the particles, phi sub u, is substantially larger than unity. A model is developed to explain these experimental observations. In the model, it is presumed that the fuel particles vaporize first to yield a gaseous fuel of known chemical composition which then reacts with oxygen in a one-step overall process. The activation energy of the chemical reaction is presumed to be large. The activation energy characterizing the kinetics of vaporization is also presumed to be large. The equations governing the structure of the flame were integrated numerically. It is shown that the interplay of vaporization kinetics and oxidation process can result in steady flame propagation in combustible mixtures where the value of phi sub u is substantially larger than unity. This prediction is in agreement with experimental observations.
NASA Astrophysics Data System (ADS)
Fukuyoshi, Shuichi; Nakayoshi, Tomoki; Takahashi, Ohgi; Oda, Akifumi
2017-03-01
In order to elucidate why glutamic acid residues have lower racemisation reactivity than aspartic acid residues, we investigated the racemisation energy barrier of piperidinedione, the presumed intermediate of the isomerisation of L-Glu to D-Glu, by density functional theory calculations. In two-water-molecule-assisted racemisation, the activation barrier for keto-enol isomerisation was 28.1 kcal/mol. This result showed that the activation barrier for the racemisation of glutamic acid residues does not differ from that for the racemisation of aspartic acid residues. Thus, glutamic acid residues can possibly undergo the racemisation reaction if the cyclic intermediate exists stably.
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
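The weighting scheme can be sketched on a mock catalogue (all numbers below are illustrative assumptions, not the paper's data or code): galaxies are weighted by a Gaussian probability density in log number density, and the weights are used to estimate a physical property.

```python
import math, random

random.seed(0)
# Mock catalogue: log10 cumulative number density nd, and a log stellar mass
# that (by construction here) decreases with nd.
nds = [-4.0 + 0.5 * random.gauss(0, 1) for _ in range(5000)]
galaxies = [(nd, 11.0 + 0.8 * (-4.0 - nd)) for nd in nds]

def weighted_mean_mass(galaxies, nd_med, nd_sigma):
    """Mean log-mass using Gaussian weights in number-density space."""
    num = den = 0.0
    for nd, lm in galaxies:
        w = math.exp(-0.5 * ((nd - nd_med) / nd_sigma) ** 2)
        num += w * lm
        den += w
    return num / den

m_center = weighted_mean_mass(galaxies, -4.0, 0.3)   # population's density
m_shifted = weighted_mean_mass(galaxies, -3.5, 0.3)  # evolved median density
```

Because the weights follow the population's evolving and dispersing number density, the inferred property shifts accordingly rather than being fixed by a single bin.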
Radiative transition of hydrogen-like ions in quantum plasma
NASA Astrophysics Data System (ADS)
Hu, Hongwei; Chen, Zhanbin; Chen, Wencong
2016-12-01
At fusion plasma electron temperature and number density regimes of 1 × 10^3 to 1 × 10^7 K and 1 × 10^28 to 1 × 10^31 m^-3, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but transition probabilities within the same number density regime have the same order of magnitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the solution of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
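As a minimal illustration of why time-correlated input admits a closed-form target density, the sketch below simulates an Ornstein-Uhlenbeck input, whose stationary PDF is Gaussian with variance σ²/(2θ) — a toy stand-in for the correlated power input, not the authors' generator model:

```python
import math, random

# Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW (Euler-Maruyama).
# Its stationary density is N(0, sigma**2 / (2*theta)), a closed form that a
# PDF method can be checked against.
random.seed(42)
theta, sigma, dt = 1.0, 0.5, 0.01
x, samples = 0.0, []
for step in range(200000):
    x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    if step > 20000:                      # discard the transient
        samples.append(x)

mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / len(samples)
var_theory = sigma ** 2 / (2 * theta)    # = 0.125
```

The Monte Carlo moments converge to the stationary values, mirroring the Monte Carlo validation reported in the abstract.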
Gregoire, K P; Becker, J G
2012-09-01
Agricultural crop residues contain high amounts of biochemical energy as cellulose and lignin. A portion of this biomass could be sustainably harvested for conversion to bioenergy to help offset fossil fuel consumption. In this study, the potential for converting lignocellulosic biomass directly to electricity in a microbial fuel cell (MFC) was explored. Design elements of tubular air cathode MFCs and leach-bed bioreactors were integrated to develop a new solid-substrate MFC in which cellulose hydrolysis, fermentation, and anode respiration occurred in a single chamber. Electricity was produced continuously from untreated corncob pellets for >60 d. Addition of rumen fluid increased power production, presumably by providing growth factors to anode-respiring bacteria. Periodic exposure to oxygen also increased power production, presumably by limiting the diversion of electrons to methanogenesis. In the absence of methanogenesis, bioaugmentation with Geobacter metallireducens further improved MFC performance. Under these conditions, the maximum power density was 230 mW/m^3. Copyright © 2012 Elsevier Ltd. All rights reserved.
Epidemics in interconnected small-world networks.
Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong
2015-01-01
Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
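A minimal SIS simulation on two interconnected Watts-Strogatz networks can sketch the setup (all parameters are illustrative; this is not the authors' code):

```python
import random

def small_world(n, k, p_rewire, rng):
    """Watts-Strogatz ring: n nodes, k nearest neighbours, rewiring prob p."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            a, b = i, (i + j) % n
            if rng.random() < p_rewire:          # rewire this lattice edge
                b = rng.randrange(n)
                while b == a or (min(a, b), max(a, b)) in edges:
                    b = rng.randrange(n)
            edges.add((min(a, b), max(a, b)))
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj

def sis_two_layers(beta, mu=0.2, n=200, p_rewire=0.1, steps=200, seed=7):
    """SIS on two interconnected small-world layers (one interlink per node)."""
    rng = random.Random(seed)
    adj_a = small_world(n, 4, p_rewire, rng)
    adj_b = small_world(n, 4, p_rewire, rng)
    adj = {i: list(adj_a[i]) for i in range(n)}
    adj.update({n + i: [n + j for j in adj_b[i]] for i in range(n)})
    for i in range(n):                            # couple node i <-> n+i
        adj[i].append(n + i)
        adj[n + i].append(i)
    infected = set(rng.sample(range(2 * n), 2 * n // 10))
    for _ in range(steps):
        new = set()
        for v in infected:
            if rng.random() >= mu:                # fails to recover
                new.add(v)
            for u in adj[v]:                      # tries to infect neighbours
                if u not in infected and rng.random() < beta:
                    new.add(u)
        infected = new
    return len(infected) / (2 * n)                # steady-state infection density

rho_endemic = sis_two_layers(beta=0.8)      # well above the epidemic threshold
rho_subcritical = sis_two_layers(beta=0.01) # below threshold: epidemic dies out
```

Sweeping `p_rewire` and `beta` in such a sketch is one way to probe how rewiring shifts the threshold and the steady-state density.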
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Horn, J. E.; Harter, T.
2011-06-01
Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high density of septic systems, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we instead estimate the probability that a well pumps water containing septic system leachate. A detailed groundwater flow and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield, depending on aquifer properties and lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and for those that are harmful even at low concentrations.
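The spatial probability analysis can be caricatured with a Monte Carlo over random drainfield placements (all geometry below is hypothetical; a real capture zone would come from the groundwater model):

```python
import random

# Approximate the well's capture zone as a rectangle extending upgradient,
# drop square drainfields uniformly at random in the domain, and estimate the
# probability that at least one drainfield intersects the capture zone.
def intersection_probability(n_septics, domain=500.0, zone_len=200.0,
                             zone_width=20.0, field=10.0,
                             trials=4000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        overlap = False
        for _ in range(n_septics):
            x = rng.uniform(0, domain)   # drainfield centre
            y = rng.uniform(0, domain)
            # capture zone: 0 <= x <= zone_len, |y - domain/2| <= zone_width/2
            if (x - field / 2 < zone_len
                    and abs(y - domain / 2) < (zone_width + field) / 2):
                overlap = True
                break
        hits += overlap
    return hits / trials

p_sparse = intersection_probability(2)    # low septic density
p_dense = intersection_probability(20)    # high septic density
```

As in the study's conclusion, the intersection probability rises steeply with septic system density even though the regional mass balance would be identical per unit area.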
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered that could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
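The benefit of a prior-ordered sweep can be sketched for a discretized code-phase uncertainty (the cell count and prior width below are illustrative, not the report's values):

```python
import math

# Discretize the code-phase uncertainty into cells with a Gaussian a priori
# density, then compare the expected number of cells examined for (a) a plain
# end-to-end sweep and (b) a sweep ordered by decreasing prior probability.
n_cells = 201
center = n_cells // 2
prior = [math.exp(-0.5 * ((i - center) / 20.0) ** 2) for i in range(n_cells)]
total = sum(prior)
prior = [p / total for p in prior]

def expected_search(order, prior):
    """E[cells examined] = sum over ranks of rank * P(true cell at that rank)."""
    return sum((rank + 1) * prior[cell] for rank, cell in enumerate(order))

uniform_order = list(range(n_cells))                            # left to right
ranked_order = sorted(range(n_cells), key=lambda i: -prior[i])  # most probable first

e_uniform = expected_search(uniform_order, prior)
e_ranked = expected_search(ranked_order, prior)
savings = 1 - e_ranked / e_uniform
```

Ordering cells by prior probability concentrates the search near the distribution's mode, which is the mechanism behind the acquisition-time reduction reported above.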
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal... Noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image. With an expression... the image is modeled as a spatial linear filter driven by white noise (see Fig. 1, "Model for image..."). If the probability density function for the white noise is known...
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single-channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
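Fitting a log-normal in this way amounts to taking the mean and standard deviation of log E; the sketch below recovers the most-probable energy and the multiplicative 1σ dispersion from mock energies that merely mimic the quoted numbers (0.5 × 10^51 erg, factor of 3):

```python
import math, random

# Mock explosion energies drawn from a log-normal with median 0.5e51 erg and a
# 1-sigma dispersion factor of 3 (illustrative data, not the paper's sample).
random.seed(5)
energies = [0.5e51 * math.exp(random.gauss(0, math.log(3))) for _ in range(5000)]

logs = [math.log(e) for e in energies]
mu = sum(logs) / len(logs)
var = sum((x - mu) ** 2 for x in logs) / len(logs)

e_peak = math.exp(mu)                  # most-probable (median) energy
dispersion = math.exp(math.sqrt(var))  # 1-sigma multiplicative factor
```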
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Predictions of malaria vector distribution in Belize based on multispectral satellite data
NASA Technical Reports Server (NTRS)
Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.
1996-01-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation
Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.
1998-01-01
We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.
Interstellar scattering of the Vela pulsar
NASA Technical Reports Server (NTRS)
Backer, D. C.
1974-01-01
The frequency dependence of the parameters of interstellar scattering between 837 and 8085 MHz for the Vela pulsar is consistent with thin-screen models of strong scattering. The magnitudes of the parameters indicate anomalous turbulence along the path when they are compared with results for other pulsars with comparable column densities of free electrons in the line of sight. This anomaly is presumably due to the Gum Nebula. The decorrelation frequency, appropriately defined, is related to the pulse broadening time by 2π, as predicted theoretically.
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties that arise from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proven a powerful tool for prognostic problems affected by uncertainties. However, most studies adopt the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on fatigue tests of attachment lugs, an important type of joint component in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
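The mixture importance density can be sketched for a scalar toy model (simplified exponential crack growth and made-up noise levels, not the paper's lug model or its exact algorithm):

```python
import math, random

random.seed(11)

def gauss_pdf(x, mu, s):
    """Gaussian probability density N(x; mu, s)."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def mixture_pf(measurements, n_particles=500, q_trans=0.05, q_meas=0.1,
               mix=0.5, growth=0.01):
    """Particle filter for a scalar 'crack length' x_k = x_{k-1}(1+growth)+w_k,
    whose importance density mixes the transition density with a
    measurement-centred Gaussian, guiding particles toward the new datum."""
    particles = [1.0 + random.gauss(0, 0.05) for _ in range(n_particles)]
    estimates = []
    for y in measurements:
        states, weights = [], []
        for xp in particles:
            pred = xp * (1 + growth)
            if random.random() < mix:            # transition component
                x = pred + random.gauss(0, q_trans)
            else:                                # measurement-guided component
                x = y + random.gauss(0, q_meas)
            lik = gauss_pdf(y, x, q_meas)        # p(y | x)
            trans = gauss_pdf(x, pred, q_trans)  # p(x | x_prev)
            prop = mix * trans + (1 - mix) * gauss_pdf(x, y, q_meas)
            states.append(x)
            weights.append(lik * trans / prop)   # importance weight
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, states)))
        particles = random.choices(states, weights=weights, k=n_particles)
    return estimates

truth = [1.01 ** k for k in range(1, 31)]            # mock crack growth
meas = [t + random.gauss(0, 0.1) for t in truth]     # noisy measurements
est = mixture_pf(meas)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
```

Because half the particles are drawn near the measurement, fewer particles end up with negligible weight than under a pure transition proposal, which is the degeneracy argument made above.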
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
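A kernel estimator with an automatic choice of scaling factor can be sketched as follows (Silverman's rule of thumb, one standard answer to the scaling-factor question; not necessarily the algorithm proposed in this work):

```python
import math, random

random.seed(2)
sample = [random.gauss(0, 1) for _ in range(400)]   # mock random sample

def silverman_bandwidth(xs):
    """Rule-of-thumb scaling factor h = 1.06 * sd * n**(-1/5)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return 1.06 * sd * n ** (-1 / 5)

def kde(x, xs, h):
    """Gaussian-kernel density estimate at point x."""
    c = 1 / (len(xs) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)

h = silverman_bandwidth(sample)
f0 = kde(0.0, sample, h)   # true N(0,1) density at 0 is about 0.3989
```

Cross-validation or an interactive choice, as discussed in the abstract, replaces the rule of thumb when the sample is far from Gaussian.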
Metabolism of cholesteryl esters of rat very low density lipoproteins.
Faergeman, O; Havel, R J
1975-06-01
Rat very low density lipoproteins (d < 1.006), biologically labeled in esterified and free cholesterol, were obtained from serum 6 h after intravenous injection of particulate [3H]cholesterol. When injected into recipient animals, the esterified cholesterol was cleared from plasma with a half-life of 5 min. After 15 min, 71% of the injected esterified [3H]cholesterol had been taken up by the liver, where it was rapidly hydrolyzed. After 60 min only 3.3% of the amount injected had been transferred, via lipoproteins of intermediate density, to the low density lipoproteins of plasma (d 1.019-1.063). Both uptake in the liver and transfer to low density lipoproteins occurred without change in the distribution of 3H among the various cholesteryl esters. 3H appearing in esterified cholesterol of high density lipoproteins (d > 1.063) was derived from esterification, presumably by lecithin:cholesterol acyltransferase, of simultaneously injected free [3H]cholesterol. The content of free [3H]cholesterol in the very low density lipoproteins used for injection could be reduced substantially by incubation with erythrocytes. This procedure, however, increased the rate of clearance of the lipoproteins after injection into recipient rats. These studies show that hepatic removal is the major catabolic pathway for cholesteryl esters of rat very low density lipoproteins and that transfer to low density lipoproteins occurs to only a minor extent.
NASA Astrophysics Data System (ADS)
Richmond, Orien Manu Wright
The secretive California Black Rail (Laterallus jamaicensis coturniculus) has a disjunct and poorly understood distribution. After a new population was discovered in Yuba County in 1994, we conducted call playback surveys from 1994-2006 in the Sierra foothills and Sacramento Valley region to determine the distribution and residency of Black Rails, estimate densities, and obtain estimates of site occupancy and detection probability. We found Black Rails at 164 small, widely scattered marshes distributed along the lower western slopes of the Sierra Nevada foothills, from just northeast of Chico (Butte County) to Rocklin (Placer County). Marshes were surrounded by a matrix of unsuitable habitat, creating a patchy or metapopulation structure. We observed Black Rails nesting and present evidence that they are year-round residents. Assuming perfect detectability we estimated a lower-bound mean Black Rail density of 1.78 rails ha-1, and assuming a detection probability of 0.5 we estimated a mean density of 3.55 rails ha-1. We tested whether the presence of the larger Virginia Rail (Rallus limicola) affects probabilities of detection or occupancy of the smaller California Black Rail in small freshwater marshes that range in size from 0.013 to 13.99 ha. We hypothesized that Black Rail occupancy should be lower in small marshes when Virginia Rails are present than when they are absent, because resources are presumably more limited and interference competition should increase. We found that Black Rail detection probability was unaffected by the detection of Virginia Rails, while, surprisingly, Black and Virginia Rail occupancy were positively associated even in small marshes. The average probability of Black Rail occupancy was higher when Virginia Rails were present (0.74 +/- 0.053) than when they were absent (0.36 +/- 0.069), and for both species occupancy increased with marsh size.
We assessed the impact of winter (November-May) cattle grazing on occupancy of California Black Rails inhabiting a network of freshwater marshes in the northern Sierra Nevada foothills of California. As marsh birds are difficult to detect, we collected repeated presence/absence data via call playback surveys and used the "random changes in occupancy" parameterization of a multi-season occupancy model to examine relationships between occupancy and covariates, while accounting for detection probability. Wetland vegetation cover was significantly lower at winter-grazed sites than at ungrazed sites during the grazing season in 2007 but not in 2008. Winter grazing had little effect on Black Rail occupancy at irrigated marshes. However, at non-irrigated marshes fed by natural springs and streams, winter-grazed sites had lower occupancy than ungrazed sites, especially at larger marsh sizes (>0.5 ha). Black Rail occupancy was positively associated with marsh area, irrigation as a water source and summer cover, and negatively associated with isolation. We evaluate the performance of nine topographic features (aspect, downslope flow distance to streams, elevation, horizontal distance to sinks, horizontal distance to streams, plan curvature, profile curvature, slope and topographic wetness index) on freshwater wetland classification accuracy in the Sierra foothills of California. To evaluate object-based classification accuracy we test both within-image and between-image predictions using six different classification schemes (naive Bayes, the C4.5 decision tree classifier, k-nearest neighbors, boosted logistic regression, random forest, and a support vector machine classifier) in the classification software package Weka 3.6.2. Adding topographic features had mostly positive effects on classification accuracy for within-image tests, but mostly negative effects on accuracy for between-image tests. 
The topographic wetness index was the most beneficial topographic feature in both the within-image and between-image tests for distinguishing wetland objects from other "green" objects (irrigated pasture and woodland) and shadows. Our results suggest that there is a benefit to using a more complex index of topography than simple measures such as elevation for the goal of mapping small palustrine emergent wetlands, but this benefit, for the most part, has poor transferability when applied between image sections. (Abstract shortened by UMI.)
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different- density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal and the Monte Carlo method is used to solve the stochastic equations allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function. In general, the likelihood function is predefined using a known pdf; here, it is constructed in a new and different manner, using the considered system itself.
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivore men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipid/lipoprotein and the probability of coronary heart disease.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
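The histogram construction described in this abstract can be sketched in a few lines. The following fragment is illustrative only (the sample count, bin count, and unit-amplitude test waveform are assumptions, not values from the proposal): it samples one full cycle of a sinusoid and sorts the samples by frequency of occurrence into a normalized histogram.

```python
import math
from collections import Counter

def histogram_pdf(samples, n_bins=16):
    """Sort samples into equal-width amplitude bins and normalize the
    occurrence counts into a discrete PDF estimate."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0
    counts = Counter(min(int((s - lo) / width), n_bins - 1) for s in samples)
    return [counts.get(i, 0) / len(samples) for i in range(n_bins)]

# One full cycle of a unit sinusoid (at least one half cycle, as required).
samples = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
pdf = histogram_pdf(samples)
```

Because a sinusoid dwells longest near its extremes, the resulting histogram is edge-heavy (the arcsine shape), giving the waveform a distinguishable PDF signature of the kind such a scheme could rely on.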
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that accounts for the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases. When the vehicle braking probability is low, emergency braking is easily triggered, resulting in large fluctuations of the saturated flow; the saturated flow decreases slightly as the pedestrian acceleration crossing probability increases; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitant behavior of pedestrians deciding whether to back up; the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and falling to approximately zero once the probability exceeds 0.5. The simulations show that frequent crossing behavior has an immense influence on vehicle flow: as the pedestrian generation probability increases, the vehicle flow decreases and rapidly enters a seriously congested state.
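For context, the baseline NaSch update rules (acceleration, gap-based braking, random slowdown, parallel movement) that the proposed model extends can be sketched as follows. This is a minimal single-lane ring-road version without the pedestrian-conflict rules of the abstract, and all parameter values are illustrative.

```python
import random

def nasch_step(pos, vel, L, v_max=5, p_brake=0.3, rng=random):
    """One parallel Nagel-Schreckenberg update on a ring road of L cells.
    pos: car positions in cyclic order; vel: matching speeds."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L   # empty cells ahead
        v = min(vel[i] + 1, v_max)                  # 1. accelerate
        v = min(v, gap)                             # 2. brake to avoid collision
        if v > 0 and rng.random() < p_brake:        # 3. random slowdown
            v -= 1
        new_vel.append(v)
    new_pos = [(p + v) % L for p, v in zip(pos, new_vel)]  # 4. parallel move
    return new_pos, new_vel

# Illustrative run: 20 cars on a 100-cell ring (density 0.2).
rng = random.Random(1)
L, n_cars = 100, 20
pos = sorted(rng.sample(range(L), n_cars))
vel = [0] * n_cars
for _ in range(200):                     # relax toward steady state
    pos, vel = nasch_step(pos, vel, L, rng=rng)
flow = sum(vel) / L                      # cars passing a fixed point per step
```

Sweeping the density and recording `flow` reproduces the "increasing-saturating-decreasing" fundamental diagram that the abstract's extended model also exhibits.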
Acrocentric chromosome associations in man.
Jacobs, P A; Mayer, M; Morton, N E
1976-01-01
Heterogeneity among chromosomes was found to be a highly significant source of variation for association proportions, while culture, slide, and observer were negligible sources of variation for association proportions although important for numbers of associations. The consequences of these results for tests of group differences are discussed. It seems evident that each pair of acrocentric chromosomes has its own characteristic probability of entering into association. This is presumably a combination of the probability for each individual member of the pair, a proposition easily tested utilizing acrocentric chromosomes carrying polymorphisms which allow each member of the pair to be individually recognized. A mathematical theory for pairwise satellite association was developed and shown to fit observations on banded chromosomes. While we found very significant heterogeneity among individuals in the frequency with which different chromosomes entered into associations, there was no significant evidence for preferential association between any particular chromosomes, either heterologous or homologous. This finding in our material of apparently random associations between different chromosomes is contrary to claims made by other investigators and should be tested on other material. No correlation was found between the phenotype of the chromosome, as judged by cytogenetic polymorphisms, and its probability of association. PMID:795295
Population Ecology of Nitrifiers in a Stream Receiving Geothermal Inputs of Ammonium
Cooper, A. Bryce
1983-01-01
The distribution, activity, and generic diversity of nitrifying bacteria in a stream receiving geothermal inputs of ammonium were studied. The high estimated rates of benthic nitrate flux (33 to 75 mg of N · m−2 · h−1) were a result of the activity of nitrifiers located in the sediment. Nitrifying potentials and ammonium oxidizer most probable numbers in the sediments were at least one order of magnitude higher than those in the waters. Nitrifiers in the oxygenated surface (0 to 2 cm) sediments were limited by suboptimal temperature, pH, and substrate level. Nitrifiers in deep (nonsurface) oxygenated sediments did not contribute significantly to the changes measured in the levels of inorganic nitrogen species in the overlying waters and presumably derived their ammonium supply from ammonification within the sediment. Ammonium-oxidizing isolates obtained by a most-probable number nonenrichment procedure were species of either Nitrosospira or Nitrosomonas, whereas all those obtained by an enrichment procedure (i.e., selective culture) were Nitrosomonas spp. The efficiency of the most-probable-number method for enumerating ammonium oxidizers was calculated to be between 0.05 and 2.0%, suggesting that measurements of nitrifying potentials provide a better estimate of nitrifying populations. PMID:16346261
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is simply distributed equally over the part of the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
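A minimal sketch of the proposed representation for a single bounded variable: the Gaussian mass below the bound is assigned to a delta at the bound rather than redistributed, so the probability of exactly hitting the bound (e.g. zero sea-ice concentration) is strictly positive. The numerical values below are illustrative assumptions, not taken from the abstract.

```python
import math

def phi(z):    # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):    # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_gaussian_with_delta(mu, sigma, lower=0.0):
    """Represent the state pdf as a Gaussian truncated at `lower`, with the
    clipped mass concentrated in a delta at the truncation point instead of
    being spread over the allowed range."""
    a = (lower - mu) / sigma
    mass_at_bound = Phi(a)                                 # P(x == lower) > 0
    mean_above = mu + sigma * phi(a) / (1.0 - Phi(a))      # E[x | x > lower]
    mean = mass_at_bound * lower + (1.0 - mass_at_bound) * mean_above
    return mass_at_bound, mean

# Illustrative numbers: a sea-ice-like concentration with mean 0.2, sd 0.3.
p_zero, mean = truncated_gaussian_with_delta(mu=0.2, sigma=0.3, lower=0.0)
```

Unlike pdf truncation with equal redistribution, this representation keeps a finite probability mass exactly at the bound, which is the feature the abstract identifies as physically essential.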
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. The richness of this class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
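The linearity result follows from beta-binomial conjugacy: with a Beta(a, b) prior on the rate and k jumps observed in n opportunities, the posterior mean (the MMSE estimate) is (a + k)/(a + b + n), which is affine, hence linear, in the observation k. A small sketch verifying this against a direct grid computation (all numeric values are illustrative):

```python
def posterior_mean(a, b, n, k):
    """MMSE estimate (posterior mean) of a rate with Beta(a, b) prior after
    observing k jumps in n opportunities: (a + k) / (a + b + n)."""
    return (a + k) / (a + b + n)

def posterior_mean_grid(a, b, n, k, steps=20000):
    """Direct grid evaluation of E[p | k] from the unnormalized posterior
    density p^(a-1+k) * (1-p)^(b-1+n-k), as an independent check."""
    num = den = 0.0
    for i in range(1, steps):
        p = i / steps
        w = p ** (a - 1 + k) * (1.0 - p) ** (b - 1 + n - k)
        num += p * w
        den += w
    return num / den

exact = posterior_mean(2, 3, 10, 4)        # (2 + 4) / (2 + 3 + 10)
approx = posterior_mean_grid(2, 3, 10, 4)
```

Because the estimate is (a + k)/(a + b + n), successive increments in k change it by the constant 1/(a + b + n), which is exactly the linearity the two abstracts report.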
Information Density and Syntactic Repetition.
Temperley, David; Gildea, Daniel
2015-11-01
In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, R.L.
1963-03-01
Twenty-four hours after 1000-r x irradiation of a small area of the skin of rabbits, the animals were injected with a suspension of staphylococci subcutaneously in the irradiated area and in a nonirradiated control area. Localized lesions were most marked in nonirradiated areas, and in some animals no visible lesions were noted in irradiated areas. The mild inflammatory lesions in irradiated areas corroborate the view that irradiation is anti-inflammatory. The reason for the mild lesions must be the reduced localization of staphylococci in irradiated areas and their escape from those areas. It is suggested that this antilocalizing property of irradiation is probably the basis for the anti-inflammatory action. When the concentration of organisms injected in the irradiated area was high (10 billion), septicemia and death of the rabbits occurred, presumably as a result of the antilocalizing property of irradiation, which permitted the escape of the organisms from the injected area. In animals injected in irradiated areas with either 0.5 or 5 billion organisms, no septicemia was observed, presumably because of the high natural immunity of rabbits to staphylococci. (TCO)
The risks and returns of stock investment in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-03-01
The risks and returns of stock investment are discussed via numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. By analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risk of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risk of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those in other published studies, and good agreement is found.
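As a heavily simplified stand-in for the escape-time computation (plain geometric Brownian motion rather than the modified Heston model with delay, and with purely illustrative parameters), a Monte Carlo mean first-passage time can be sketched as:

```python
import math
import random

def mean_escape_time(s0=1.0, barrier=0.8, mu=0.05, sigma=0.3,
                     dt=0.01, horizon=5.0, n_paths=200, seed=42):
    """Monte Carlo mean first-passage time of a stock price below a barrier,
    censored at `horizon` if a path never escapes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s, t = s0, 0.0
        while s > barrier and t < horizon:
            # Euler step of geometric Brownian motion in log space.
            s *= math.exp((mu - 0.5 * sigma * sigma) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            t += dt
        total += t
    return total / n_paths

tau = mean_escape_time()
```

In the paper's setting the same escape-time statistic is computed for the delayed Heston dynamics, where stochastic volatility and the delay term produce the optimal-delay effects summarized above.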
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
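The simultaneous diagonalization used for the two-class case can be computed with a Cholesky factorization followed by a symmetric eigendecomposition. A sketch with hypothetical 2x2 class covariances (the matrices are illustrative, not from the paper):

```python
import numpy as np

def simultaneous_diagonalizer(A, B):
    """Return W with W.T @ A @ W = I and W.T @ B @ W diagonal, for symmetric
    positive-definite class covariance matrices A and B."""
    Lc = np.linalg.cholesky(A)          # A = Lc @ Lc.T
    Li = np.linalg.inv(Lc)
    M = Li @ B @ Li.T                   # B whitened by A, still symmetric
    _, V = np.linalg.eigh(M)            # orthogonal eigenvectors of M
    return Li.T @ V

# Hypothetical 2x2 class covariances (illustrative values only).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.3], [0.3, 3.0]])
W = simultaneous_diagonalizer(A, B)
WA = W.T @ A @ W    # identity
WB = W.T @ B @ W    # diagonal
```

Since W.T @ A @ W = I and W.T @ B @ W is diagonal, both class covariances become diagonal in the transformed coordinates, which is what reduces the computation for the two-class mixed-pixel case.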
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems is presented, using output probability density estimation. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square root PDF along the space direction, which yields a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then developed to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian, and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight, and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even though they may not be. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Wagner, Tyler; Deweber, Jefferson T.; Detar, Jason; Kristine, David; Sweka, John A.
2014-01-01
Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
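For context, a basic constant-p removal estimator (not the hierarchical Bayesian model the authors fitted) can be obtained by brute-force maximum likelihood over abundance N and per-pass capture probability p; the catch data below are hypothetical.

```python
import math

def removal_mle(catches, n_max=1000):
    """Brute-force maximum-likelihood estimate of abundance N and capture
    probability p from multi-pass removal catches, assuming a constant
    per-pass capture probability."""
    T = sum(catches)
    best_N, best_p, best_ll = T, 0.5, -math.inf
    for step in range(1, 100):
        p = step / 100.0
        q = [p * (1.0 - p) ** i for i in range(len(catches))]  # pass-capture probs
        log_miss = math.log(1.0 - sum(q))                      # P(never caught)
        for N in range(T, n_max + 1):
            # Multinomial log-likelihood, dropping terms constant in N and p.
            ll = (math.lgamma(N + 1) - math.lgamma(N - T + 1)
                  + sum(c * math.log(qi) for c, qi in zip(catches, q))
                  + (N - T) * log_miss)
            if ll > best_ll:
                best_N, best_p, best_ll = N, p, ll
    return best_N, best_p

# Hypothetical three-pass catches that roughly halve each pass.
N_hat, p_hat = removal_mle([50, 25, 13])
```

With catches that roughly halve each pass, the estimate lands near N of about 100 with p of about 0.5, matching the depletion intuition; the abstract's point is that when p varies little across sites and years, a single-pass CPUE index can stand in for this multi-pass machinery.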
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Emergence flux declines disproportionately to larval density along a stream metals gradient
Schmidt, Travis S.; Kraus, Johanna M.; Walters, David M.; Wanty, Richard B.
2013-01-01
Effects of contaminants on adult aquatic insect emergence are less well understood than effects on insect larvae. We compared responses of larval density and adult emergence along a metal contamination gradient. Nonlinear threshold responses were generally observed for larvae and emergers. Larval densities decreased significantly at low metal concentrations but precipitously at concentrations of metal mixtures above aquatic life criteria (Cumulative Criterion Accumulation Ratio (CCAR) ≥ 1). In contrast, adult emergence declined precipitously at low metal concentrations (CCAR ≤ 1), followed by a modest decline above this threshold. Adult emergence was a more sensitive indicator of the effect of low metals concentrations on aquatic insect communities compared to larvae, presumably because emergence is limited by a combination of larval survival and other factors limiting successful emergence. Thus effects of exposure to larvae are not manifest until later in life (during metamorphosis and emergence). This loss in emergence reduces prey subsidies to riparian communities at concentrations considered safe for aquatic life. Our results also challenge the widely held assumption that adult emergence is a constant proportion of larval densities in all streams.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Bremer, J. E.; Harter, T.
2012-08-01
Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
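The link between septic-system density and the chance of a well pumping leachate can be illustrated with a far simpler calculation than the paper's flow-and-transport model: if drainfields are idealised as points of a homogeneous Poisson process, the probability that at least one falls inside a well's source area has a closed form. The source area and lot density below are hypothetical values for illustration:

```python
import math

def capture_probability(source_area_m2, systems_per_km2):
    """P(at least one drainfield falls inside the well's source area),
    treating drainfield locations as a homogeneous Poisson process."""
    expected = (systems_per_km2 / 1e6) * source_area_m2  # mean count in area
    return 1.0 - math.exp(-expected)

# e.g. a 2000 m^2 source area amid half-acre lots (~500 systems/km^2)
p = capture_probability(2000.0, 500.0)
```

Even this crude sketch shows the qualitative result: at small-lot densities the expected number of drainfields in the source area approaches one, so the intersection probability climbs above 60%.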
The identification of liquid ethane in Titan's Ontario Lacus
Brown, R.H.; Soderblom, L.A.; Soderblom, J.M.; Clark, R.N.; Jaumann, R.; Barnes, J.W.; Sotin, Christophe; Buratti, B.; Baines, K.H.; Nicholson, P.D.
2008-01-01
Titan was once thought to have global oceans of light hydrocarbons on its surface, but after 40 close flybys of Titan by the Cassini spacecraft, it has become clear that no such oceans exist. There are, however, features similar to terrestrial lakes and seas, and widespread evidence for fluvial erosion, presumably driven by precipitation of liquid methane from Titan's dense, nitrogen-dominated atmosphere. Here we report infrared spectroscopic data, obtained by the Visual and Infrared Mapping Spectrometer (VIMS) on board the Cassini spacecraft, that strongly indicate that ethane, probably in liquid solution with methane, nitrogen and other low-molecular-mass hydrocarbons, is contained within Titan's Ontario Lacus.
Free-living pathogens: life-history constraints and strain competition
Caraco, Thomas; Wang, Ing-Nang
2008-01-01
Many pathogen life histories include a free-living stage, often with anatomical and physiological adaptations promoting persistence outside of host tissues. More durable particles presumably require that the pathogen metabolize more resources per particle. Therefore, we hypothesize functional dependencies, pleiotropic constraints, between the rate at which free-living particles decay outside of host tissues and other pathogen traits, including virulence, the probability of infecting a host upon contact, and pathogen reproduction within host tissues. Assuming that pathogen strains compete for hosts preemptively, we find patterns in trait dependencies predicting whether or not strain competition favors a highly persistent free-living stage. PMID:18062992
Amino acid codes in mitochondria as possible clues to primitive codes
NASA Technical Reports Server (NTRS)
Jukes, T. H.
1981-01-01
Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, presumably caused by a mutation in a tRNA gene.
NASA Astrophysics Data System (ADS)
Stephanik, Brian Michael
This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target
Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji
2009-01-01
In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, to construct large-scale WSNs at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. From these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
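A common closed-form approximation for this kind of density-probability relationship (not necessarily the expression derived by the authors) treats sensor locations as a homogeneous Poisson process, giving the chance that at least k sensors lie within sensing range of the target; the density and radius below are hypothetical:

```python
import math

def tracking_probability(density_per_m2, radius_m, k):
    """P(at least k sensors lie within sensing range of the target),
    with sensor positions modelled as a homogeneous Poisson process."""
    mu = density_per_m2 * math.pi * radius_m**2   # mean sensors in range
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))
```

Requiring more simultaneous monitors (larger k) lowers the tracking probability at fixed density, so meeting an application's k-point surveillance demand fixes a minimum deployment density.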
Methicillin-resistant Staphylococcus aureus aortitis in a cardiac transplant patient.
Lubin, Jeffrey S
2009-11-01
A 57-year-old heart transplant patient presented to the Emergency Department with mild epigastric pain, nausea, and vomiting for two days. Aside from a recent hospitalization for replacement of his hemodialysis catheter, he had otherwise not been ill. He was afebrile, slightly hypertensive, and slightly tachycardic with mild tenderness over the left upper quadrant, but no guarding, rebound tenderness, or masses. His WBC count was elevated at 16.1 (normal: 3.8-10.6). Computed tomography of the abdomen showed an area of low attenuation surrounding the aorta, surrounded more peripherally by an area of higher density. He went urgently to the operating room for a presumed contained rupture of the thoracic aorta. During the operation the surgeons noted inflammatory changes, rather than rupture, and resected and replaced the affected section. Cultures from a peri-aortic swab grew methicillin-resistant Staphylococcus aureus. Among complications of cardiac transplantation, aortic involvement can be a source of significant morbidity and mortality. Primary bacterial aortitis is, however, a rare event, with an incidence of less than 3% among all patients. The presentation of these infections may be subtle, making diagnosis difficult and requiring a high index of suspicion. CT is the initial imaging technique of choice. Therapy frequently involves surgery in addition to broad-spectrum antibiotics. This patient's infection most likely originated from an infected dialysis catheter, the one that had just been replaced, and was probably kept from becoming more symptomatic by the administration of vancomycin during the previous admission.
Kearney, Sean P; Guildenbecher, Daniel R
2016-06-20
We apply ultrafast pure-rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning, aluminized ammonium-perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity and scattering from hot metal particles as large as several hundred micrometers in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of femtosecond/picosecond (fs/ps) laser pulses improves CARS detection by providing time-gated elimination of strong nonresonant background interference. Single-laser-shot fs/ps CARS spectra were acquired from the burning propellant plume, with picosecond probe-pulse delays of 0 and 16 ps from the femtosecond pump and Stokes pulses. At zero delay, nonresonant background overwhelms the Raman-resonant spectroscopic features. Time-delayed probing results in the acquisition of background-free spectra that were successfully fit for temperature and relative oxygen content. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements with the CARS measurement volume positioned within 3 mm or less of the burning propellant surface. The results show that ultrafast CARS is a potentially enabling technology for probing harsh, particle-laden flame environments.
NASA Astrophysics Data System (ADS)
Nikolaevich Lipatnikov, Andrei; Nishiki, Shinnosuke; Hasegawa, Tatsuya
2015-05-01
The linear relation between the mean rate of product creation and the mean scalar dissipation rate, derived in the seminal paper by K.N.C. Bray ['The interaction between turbulence and combustion', Proceedings of the Combustion Institute, Vol. 17 (1979), pp. 223-233], is the cornerstone for models of premixed turbulent combustion that deal with the dissipation rate in order to close the reaction rate. In the present work, this linear relation is straightforwardly validated by analysing data computed earlier in the 3D Direct Numerical Simulation (DNS) of three statistically stationary, 1D, planar turbulent flames associated with the flamelet regime of premixed combustion. Although the linear relation does not hold at the leading and trailing edges of the mean flame brush, such a result is expected within the framework of Bray's theory. However, the present DNS yields substantially larger (smaller) values of the input parameter cm (or K2 = 1/(2cm - 1)) appearing in the studied linear relation, when compared to the commonly used value of cm = 0.7 (or K2 = 2.5). To gain further insight into the issue and into the possible dependence of cm on mixture composition, the DNS data are combined with the results of numerical simulations of stationary, 1D, planar laminar methane-air flames with complex chemistry, with the results being reported in terms of differently defined combustion progress variables c, i.e. the normalised temperature, density, or mole fraction of CH4, O2, CO2 or H2O. Such a study indicates that cm depends both on the definition of c and on the equivalence ratio. Nevertheless, K2 and cm can be estimated by processing the results of simulations of counterpart laminar premixed flames. Similar conclusions were also drawn without the DNS data, by invoking a presumed beta probability density function to evaluate cm for the differently defined c's and various equivalence ratios.
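Evaluating cm from a presumed beta PDF amounts to a simple quadrature, cm = ∫ c w(c) P(c) dc / ∫ w(c) P(c) dc, with the beta shape parameters fixed by the mean and variance of c. In the sketch below the reaction-rate shape w(c) is a hypothetical stand-in for the laminar-flame profiles used in the paper:

```python
import numpy as np
from math import gamma

def cm_from_beta(rate, c_mean, c_var, n=4000):
    """c_m = int c w(c) P(c) dc / int w(c) P(c) dc with a presumed beta
    PDF P(c) parameterised by the mean and variance of c."""
    g = c_mean * (1.0 - c_mean) / c_var - 1.0   # requires c_var < c_mean*(1-c_mean)
    a, b = c_mean * g, (1.0 - c_mean) * g       # beta shape parameters
    c = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = c**(a - 1) * (1 - c)**(b - 1) * gamma(a + b) / (gamma(a) * gamma(b))
    w = rate(c) * pdf
    return np.trapz(c * w, c) / np.trapz(w, c)

# hypothetical rate shape peaking toward the burnt side of the flame
cm = cm_from_beta(lambda c: c**2 * (1.0 - c), c_mean=0.5, c_var=0.05)
```

Changing the rate shape or the definition of c shifts cm, which is the qualitative point of the paper's sensitivity study.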
Reciprocal inhibition between motor neurons of the tibialis anterior and triceps surae in humans.
Yavuz, Utku Ş; Negro, Francesco; Diedrichs, Robin; Farina, Dario
2018-05-01
Motor neurons innervating antagonist muscles receive reciprocal inhibitory afferent inputs to facilitate the joint movement in the two directions. The present study investigates the mutual transmission of reciprocal inhibitory afferent inputs between the tibialis anterior (TA) and triceps surae (soleus and medial gastrocnemius) motor units. We assessed this mutual mechanism in large populations of motor units for building a statistical distribution of the inhibition amplitudes during standardized input to the motor neuron pools to minimize the effect of modulatory pathways. Single motor unit activities were identified using high-density surface electromyography (HDsEMG) recorded from the TA, soleus (Sol), and medial gastrocnemius (GM) muscles during isometric dorsi- and plantarflexion. Reciprocal inhibition on the antagonist muscle was elicited by electrical stimulation of the tibial (TN) or common peroneal nerves (CPN). The probability density distributions of reflex strength for each muscle were estimated to examine the strength of mutual transmission of reciprocal inhibitory input. The results showed that the strength of reciprocal inhibition in the TA motor units was fourfold greater than for the GM and the Sol motor units. This suggests an asymmetric transmission of reciprocal inhibition between ankle extensor and flexor muscles. This asymmetry cannot be explained by differences in motor unit type composition between the investigated muscles since we sampled low-threshold motor units in all cases. Therefore, the differences observed for the strength of inhibition are presumably due to a differential reciprocal spindle afferent input and the relative contribution of nonreciprocal inhibitory pathways. NEW & NOTEWORTHY We investigated the mutual transmission of reciprocal inhibition in large samples of motor units using a standardized input (electrical stimulation) to the motor neurons. The results demonstrated that the disynaptic reciprocal inhibition exerted between ankle flexor and extensor muscles is asymmetric. The functional implication of asymmetric transmission may be associated with the neural strategies of postural control.
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms of real variables do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
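The compositional idea can be illustrated with the centred log-ratio (clr) transform, a standard tool for compositional data (the paper's cokriging system is more elaborate, but the same property holds): any linear combination in transformed space back-transforms to a valid discrete PDF, which is exactly why estimates outside (0,1) cannot occur. The class probabilities below are hypothetical:

```python
import numpy as np

def clr(p):
    """Centred log-ratio transform of a discrete probability vector."""
    return np.log(p / np.exp(np.mean(np.log(p))))

def clr_inv(z):
    """Back-transform: components are always positive and sum to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical discrete PDFs of nitrate-concentration classes at two sites
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.4, 0.4, 0.2])
# a linear combination in clr space (equal weights standing in for the
# cokriging weights) back-transforms to a valid probability vector
p_est = clr_inv(0.5 * clr(p1) + 0.5 * clr(p2))
```

Indicator cokriging interpolates the probabilities directly and can therefore produce negative values or class probabilities that fail to sum to one; the log-ratio route avoids both by construction.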
Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki
2014-01-01
The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as effectiveness of internal exposure relative to the external exposure to γ-rays) is occasionally believed to be much greater than unity due to insufficient discussions on the difference of their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses in subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but their maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099
Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2002-01-01
A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...
Properties of Traffic Risk Coefficient
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu
2009-10-01
We use the model with the consideration of the traffic interruption probability (Physica A 387(2008)6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability will reduce the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic security.
Probability mass first flush evaluation for combined sewer discharges.
Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong
2010-01-01
The Korean government has put a lot of effort into constructing sanitation facilities for controlling non-point source pollution, of which the first flush phenomenon is a prime example. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the storms gauged during the last two years, using probability density functions of rainfall volume, showed that all the gauged storms were valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 denoted similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
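The MFFn statistic itself is straightforward to compute from a hydrograph and pollutograph: it is the fraction of pollutant mass carried by the first n% of runoff volume, divided by n/100. A minimal sketch on a synthetic storm (constant flow, exponentially decaying concentration; all values hypothetical):

```python
import numpy as np

def mass_first_flush(times, flow, conc, frac=0.2):
    """MFF ratio: pollutant mass fraction carried by the first `frac`
    of runoff volume, divided by `frac` (MFF > 1 indicates first flush)."""
    dt = np.gradient(times)
    vol = np.cumsum(flow * dt)            # cumulative runoff volume
    mass = np.cumsum(flow * conc * dt)    # cumulative pollutant mass
    m_at_frac = np.interp(frac, vol / vol[-1], mass / mass[-1])
    return m_at_frac / frac

# synthetic storm: constant flow, concentration decaying as exp(-t)
t = np.linspace(0.0, 10.0, 1001)
mff20 = mass_first_flush(t, np.ones_like(t), np.exp(-t))
```

For this synthetic event the first 20% of the volume carries well over 80% of the mass (MFF20 around 4), a much stronger first flush than the roughly 40%-in-20% observed at BE-1.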
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
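Chantry's expression can be inverted in closed form: from τ0 = 2l0(2 − α)/(vα) it follows that α = 4l0/(vτ0 + 2l0). A minimal sketch, where the chamber dimension, lifetime, and gas temperature are hypothetical illustration values:

```python
import math

def mean_speed(mass_amu, T=300.0):
    """Mean thermal speed sqrt(8 k T / (pi m)) in m/s."""
    kB, amu = 1.380649e-23, 1.66053907e-27
    return math.sqrt(8.0 * kB * T / (math.pi * mass_amu * amu))

def sticking_probability(tau0, l0, v):
    """Invert Chantry's zero-pressure lifetime
    tau0 = 2*l0*(2 - alpha)/(v*alpha) for the sticking probability alpha."""
    return 4.0 * l0 / (v * tau0 + 2.0 * l0)

# e.g. Cu atoms (63.5 amu) in a chamber with volume/surface ratio l0 = 5 cm
alpha_cu = sticking_probability(tau0=1e-3, l0=0.05, v=mean_speed(63.5))
```

Because α depends on τ0 only through the product vτ0, equal sticking probabilities imply lifetime ratios equal to the inverse mean-speed ratios, which is the comparison made in the abstract.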
The regolith history of 14307. [lunar breccia
NASA Technical Reports Server (NTRS)
Bernatowicz, T.; Hohenberg, C. M.; Morgan, C. J.; Podosek, F. A.; Drozd, R. J.; Lugmair, G.
1977-01-01
Noble gas and trace element analyses of matrix and a clast from breccia 14307 are reported. This sample was exposed to a large neutron fluence, as seen by an elevated Sm-150/Sm-149 ratio and by noble gases, particularly Xe-136 from neutron fission of U-235. Strong constraints on the exposure history result from combined consideration of Sm-150, Xe-136, and spallation noble gases. Both clast and matrix were irradiated for about 1 AE under substantial shielding beginning at least 2 AE ago, probably more than 3 AE ago. The manifestations of soil exposure seen in the matrix - solar wind gases, glass formation, etc. - thus must have been acquired in an ancient epoch. The matrix has had a longer exposure to cosmic rays than the clast, presumably during its prebrecciation history as a soil. Brecciation probably occurred more than 1 AE ago, perhaps more than 3 AE ago, but at least 0.4 AE after the formation of the matrix constituents.
NASA Technical Reports Server (NTRS)
Hayatsu, R.; Matsuoka, S.; Anders, E.; Scott, R. G.; Studier, M. H.
1977-01-01
Degradation techniques, including pyrolysis, depolymerization, and oxidation, were used to study the insoluble polymer from the Murchison C2 chondrite. Oxidation with Cr2O7(2-) or O2/UV led to the identification of 15 aromatic ring systems. Of 11 aliphatic acids identified, three dicarboxylic acids presumably came from hydroaromatic portions of the polymer, whereas eight monocarboxylic acids probably derive from bridging groups or ring substituents. Depolymerization with CF3COO4 yielded some of the same ring systems, as well as alkanes (C1 through C8) and alkenes (C2 through C8), alkyl (C1 through C5) benzenes and naphthalenes, and methyl- or dimethyl -indene, -indane, -phenol, -pyrrole, and -pyridine. All these compounds were detected below 200 C, and are therefore probably indigenous constituents. The properties of the meteoritic polymer were compared with the properties of a synthetic polymer produced by the Fischer-Tropsch reaction. It is suggested that the meteoritic polymer was also produced by surface catalysis.
The difficulty of ultraviolet emission from supernovae
NASA Technical Reports Server (NTRS)
Colgate, S. A.
1971-01-01
There are certain conceptual difficulties in the theory of the generation of the ultraviolet radiation presumed to drive the optical fluorescence mechanism of supernova light emission and to ionize a nebula as large as the Gum nebula. Requirements concerning the energy distribution of the ultraviolet photons are: 1) The energy of the greater part of the photons must be sufficient to cause both helium fluorescence and hydrogen ionization. 2) If the photons are emitted in an approximate black body spectrum, the fraction of energy emitted in the optical must be no more than what is already observed. Ultraviolet black body emission depends primarily on the energy source. The probability that the wide mixture of elements present in the interstellar medium and supernova ejecta results in an emission localized in a limited region, with less than 0.001 of the emission in the visible, for either ionization or fluorescence ultraviolet, is remote. Therefore transparent emission must be excluded as unlikely, and black body or at least quasi-black-body emission is more probable.
Statistical analysis of dislocations and dislocation boundaries from EBSD data.
Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N
2017-08-01
Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The misorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were classified as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described with a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained through EBSD data, as reported in this paper. This opens the route for determining the IDB and GNB probability density distribution functions separately from EBSD data, with an increased statistical relevance as compared to TEM data. The method is applied on deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
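The two-component Rayleigh description is easy to reproduce numerically; the mixture weight and scale parameters below are hypothetical, not fitted values from the paper:

```python
import numpy as np

def rayleigh_mixture_pdf(x, w, s1, s2):
    """Probability density as a weighted sum of two Rayleigh
    distributions (one term for IDBs, one for GNBs)."""
    ray = lambda x, s: (x / s**2) * np.exp(-x**2 / (2.0 * s**2))
    return w * ray(x, s1) + (1.0 - w) * ray(x, s2)

# evaluate on a disorientation(-gradient) grid with hypothetical parameters
x = np.linspace(0.0, 30.0, 3001)
pdf = rayleigh_mixture_pdf(x, w=0.6, s1=0.5, s2=2.0)
```

Fitting w, s1, and s2 to a measured distribution (e.g. with a least-squares routine) is what separates the IDB and GNB populations.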
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results, demonstrating that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
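As a point of reference for what KDESOINN extends, the snippet below implements plain batch kernel density estimation with Gaussian kernels; KDESOINN would replace the raw samples with SOINN prototype nodes learned online. The data and bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_kde(samples, bandwidth):
    """Plain (batch) kernel density estimate: one Gaussian kernel per
    sample, averaged. KDESOINN instead places kernels on SOINN prototype
    nodes, so the estimator can be updated online from a data stream."""
    def pdf(x):
        x = np.atleast_1d(x)
        z = (x[:, None] - samples[None, :]) / bandwidth
        k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
        return k.mean(axis=1) / bandwidth
    return pdf

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=2000)
pdf = gaussian_kde(data, bandwidth=0.3)
print(pdf(0.0))  # should be near the true N(0,1) peak, 1/sqrt(2*pi) ~ 0.399
```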
Self-Supervised Dynamical Systems
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolution of the probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, governed by Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image.
Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
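A minimal numerical sketch of the coupling described above, under strong simplifying assumptions: an ensemble of Langevin "particles" is driven by a supervising force built from a prescribed attractor density plus a term depending on a functional of the ensemble's own density (its mean), which is what makes the density evolution nonlinear. All parameter values and the specific force forms are illustrative, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Motor dynamics: Langevin equation. Mental dynamics: the evolving ensemble
# density, fed back through two "supervising" forces:
#   f_attract  = D * d/dx log rho*(x), which makes the prescribed Gaussian
#                N(mu_star, sig_star^2) a stationary density, and
#   f_feedback = a density-functional term depending on the ensemble mean.
n, dt, steps = 5000, 0.01, 2000
D = 0.5                       # diffusion constant (Langevin noise strength)
mu_star, sig_star = 2.0, 0.7  # prescribed attractor density N(mu*, sig*^2)
k_self = 1.0                  # strength of feedback on the ensemble mean

x = rng.normal(-3.0, 1.0, size=n)   # uncertainty in initial conditions
for _ in range(steps):
    f_attract = -D * (x - mu_star) / sig_star**2
    f_feedback = -k_self * (x - x.mean())   # depends on the density itself
    x += (f_attract + f_feedback) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n)

print(f"ensemble mean={x.mean():.2f}, std={x.std():.2f}")
```

After relaxation the ensemble mean sits at the prescribed attractor mean; the feedback term sharpens the stationary spread relative to the attractor width, a simple instance of density-dependent (nonlinear) evolution.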
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two stationary Gaussian random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.
Quantum mechanical probability current as electromagnetic 4-current from topological EM fields
NASA Astrophysics Data System (ADS)
van der Mark, Martin B.
2015-09-01
Starting from a complex 4-potential A = α dβ we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl(1,3) as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double-field solution that was overlooked previously. A more general null-vector condition is found, and wave functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. 
Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
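The abundance figures quoted above follow directly from the densities and the 7,908 km2 study area; a short check (the full-model total prints 530 rather than the quoted 529 because the reported density of 6.7 is itself rounded):

```python
# Converting the reported mountain-lion densities (per 100 km^2) into
# abundance over the 7,908 km^2 Blackfoot drainage.
area_km2 = 7908

def abundance(density_per_100km2):
    return density_per_100km2 / 100 * area_km2

for model, dens in [("distance only", 3.7), ("full model", 6.7)]:
    print(f"{model}: {abundance(dens):.0f} lions")
```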
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Liver histology during Mipomersen therapy for severe hypercholesterolemia.
Hashemi, Nikroo; Odze, Robert D; McGowan, Mary P; Santos, Raul D; Stroes, Erik S G; Cohen, David E
2014-01-01
Mipomersen is an antisense oligonucleotide that inhibits apolipoprotein B synthesis and lowers plasma low-density lipoprotein cholesterol even in the absence of low-density lipoprotein receptor function, presumably from inhibition of hepatic production of triglyceride-rich very low-density lipoprotein particles. By virtue of this mechanism, mipomersen therapy commonly results in the development of hepatic steatosis. Because this is frequently accompanied by alanine aminotransferase elevations, concern has arisen that mipomersen could promote the development of steatohepatitis, which could in turn lead to fibrosis and cirrhosis over time. The objective of this study was to assess the liver biopsy findings in patients treated with mipomersen. We describe 7 patients who underwent liver biopsy during the mipomersen clinical development programs. Liver biopsies were reviewed by a single, blinded pathologist. The histopathological features were characterized by simple steatosis, without significant inflammation or fibrosis. These findings suggest that hepatic steatosis resulting from mipomersen is distinct from nonalcoholic steatohepatitis. Copyright © 2014 National Lipid Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced response of the structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
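For orientation, the snippet below simulates a stationary scalar process with the original spectral representation (OSR), i.e. with the full set of independent random phases that the paper's random-function approach would reduce to two elementary random variables. The band-limited target spectrum is an illustrative assumption, not the turbulence spectrum used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-sided target power spectral density G(w): band-limited white noise.
# With amplitudes A_k = sqrt(2 G(w_k) dw), the process
#   X(t) = sum_k A_k cos(w_k t + phi_k)
# has variance sum_k A_k^2 / 2 = integral of G(w) dw = G0 * wc.
G0, wc = 1.0, 4.0 * np.pi
N = 512
w = (np.arange(N) + 0.5) * (wc / N)      # frequency grid
dw = wc / N
amp = np.sqrt(2 * G0 * dw)

t = np.linspace(0, 50, 4000)
phi = rng.uniform(0, 2 * np.pi, size=N)  # OSR: N independent random phases
x = (amp * np.cos(np.outer(t, w) + phi)).sum(axis=1)

target_var = G0 * wc
print(f"sample var={x.var():.2f}, target={target_var:.2f}")
```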
Causal illusions in children when the outcome is frequent
2017-01-01
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability: people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (the outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome-density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. The two scenarios differed only in the probability of the outcome, which could be either high or low. Children judged the relation between the two events to be stronger in the high outcome-probability setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294
NASA Astrophysics Data System (ADS)
Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi
2014-07-01
{110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure has large variation among classes or subclasses, which is mainly related to the crystallographic direction of the constituting aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions for the twin planes. The former is probably characteristic in the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in the microstructures with preferred orientation of the c-axis, and the twin density is mainly correlated to the crystal size in the microstructures.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
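The maximum-entropy assignment described above can be sketched for the simplest case: discrete outcomes with a mean-value constraint, which yields Boltzmann-like (Maxwell-Boltzmann-type) weights. The energy levels and target mean below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy probabilities over discrete outcomes (energy levels),
# constrained to a given mean energy. The entropy maximum subject to
# sum(p) = 1 and sum(p*E) = mean_target is p_i ~ exp(-lam * E_i), with the
# Lagrange multiplier lam fixed by the mean constraint.
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mean_target = 1.2

def mean_energy(lam):
    p = np.exp(-lam * E)
    p /= p.sum()
    return (p * E).sum()

# mean_energy is monotone decreasing in lam, so a bracketing root-finder
# recovers the multiplier enforcing the constraint.
lam = brentq(lambda l: mean_energy(l) - mean_target, -10, 10)
p = np.exp(-lam * E)
p /= p.sum()
print("probabilities:", np.round(p, 3), "mean:", round((p * E).sum(), 3))
```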
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is through the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
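A minimal sketch of the idea of a discrete mass approximation, assuming a standard normal density and equal-probability strata with each stratum's mass placed at its conditional mean; the paper's receiver-specific construction is not reproduced here.

```python
import numpy as np
from scipy import stats

# Stratified discrete-mass approximation of a continuous density: split a
# standard normal into m equal-probability strata and place mass 1/m at
# each stratum's conditional mean.
m = 8
edges = stats.norm.ppf(np.linspace(0, 1, m + 1))   # -inf, ..., +inf
# Conditional mean of N(0,1) on (a, b] is (phi(a) - phi(b)) / (Phi(b) - Phi(a)),
# and here each Phi(b) - Phi(a) = 1/m.
phi = stats.norm.pdf(edges)                        # pdf(+-inf) evaluates to 0
points = (phi[:-1] - phi[1:]) * m
masses = np.full(m, 1.0 / m)

mean = (masses * points).sum()
var = (masses * points**2).sum()
print(f"discrete mean={mean:.3f}, variance={var:.3f} (true: 0, 1)")
```

The discrete mean matches exactly (the sum telescopes), while the discrete variance falls slightly below 1 because within-stratum spread is discarded; increasing m recovers it.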
Spacecraft Collision Avoidance
NASA Astrophysics Data System (ADS)
Bussy-Virat, Charles
The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from current epoch to closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive modeling of the dynamics of spacecraft in orbit that includes all the perturbing forces that a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial position and velocity of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by the solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. 
The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims at providing tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
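The covariance-perturbation step can be sketched as a plain Monte Carlo estimate of the collision probability at closest approach. The nominal miss vector, combined covariance, and hard-body radius below are made-up illustrative values; SpOCK additionally propagates each perturbed state, with atmospheric-density uncertainty, to the time of closest approach rather than sampling at that time directly.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo collision probability: sample the relative position of the
# two objects at closest approach from the combined covariance and count
# miss distances below the combined hard-body radius.
n = 200_000
rel_nominal = np.array([120.0, 0.0, 0.0])   # nominal miss vector [m]
cov = np.diag([80.0, 40.0, 20.0])**2        # combined position covariance [m^2]
hard_body_radius = 20.0                     # combined hard-body radius [m]

samples = rng.multivariate_normal(rel_nominal, cov, size=n)
miss = np.linalg.norm(samples, axis=1)
p_collision = (miss < hard_body_radius).mean()
print(f"P(collision) ~ {p_collision:.4f}")
```

Repeating this with the covariance inflated by a density-uncertainty model is one way to expose how strongly the collision probability depends on the drag uncertainty discussed above.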
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur in non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.
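A minimal Euler-Maruyama sketch of an active Brownian particle with active speed fluctuations parallel to the heading, under illustrative parameters (not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Generic active Brownian particle in 2D: constant propulsion v0, active
# fluctuations of the speed along the heading, and angular diffusion of
# the heading direction. All parameter values are illustrative.
n, dt, steps = 10_000, 0.01, 2000
v0, gamma = 1.0, 1.0          # propulsion speed and speed relaxation rate
D_active, D_phi = 0.2, 0.5    # active (parallel) and angular noise strengths

s = np.full(n, v0)            # speed along the heading
phi = rng.uniform(0, 2 * np.pi, n)
for _ in range(steps):
    s += -gamma * (s - v0) * dt + np.sqrt(2 * D_active * dt) * rng.normal(size=n)
    phi += np.sqrt(2 * D_phi * dt) * rng.normal(size=n)

vx, vy = s * np.cos(phi), s * np.sin(phi)
print(f"mean speed={np.abs(s).mean():.2f}, Var(vx)={vx.var():.2f}")
```

With isotropic headings the Cartesian velocity components concentrate near zero even though the speed stays near v0, which is the kind of speed/velocity distinction the analysis above makes precise.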
Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, 
H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; 
Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; 
Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-04-17
We report a measurement of the top-quark mass M_t in the dilepton decay channel t tbar → b l'^+ nu_l' bbar l^- nubar_l. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb^-1 of p pbar collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7(stat) ± 2.9(syst) GeV/c^2.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
Since model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework is needed in evaluation work to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The rule underlying this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x⃗, given prior information on these parameters and a likelihood that gives the probability density of observing a data set given x⃗. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation that is solved numerically (minimization of a cost function, i.e. the generalized least-squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems: they avoid the approximations present in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (BMC in the rest of the manuscript) over the whole energy range, from the thermal through the resonance to the continuum region, for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains are presented.
The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the chosen probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluations, as well as to multigroup cross-section data assimilation, are presented.
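The posterior ∝ prior × likelihood rule described in this abstract can be sketched as a simple importance-sampling Bayesian Monte Carlo. Everything here is an assumption for illustration, not the paper's model: a single hypothetical parameter theta, a linear observable, and Gaussian prior and likelihood (chosen so the exact posterior is known and the BMC estimate can be checked against it).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "experiment": one observable depending on a single model
# parameter theta (a stand-in for e.g. a resonance parameter).
def model(theta):
    return 2.0 * theta

y_obs, sigma_obs = 3.1, 0.2                    # assumed measurement and uncertainty
theta_prior_mean, theta_prior_sd = 1.0, 0.5    # assumed Gaussian prior

# Bayesian Monte Carlo: sample the prior, weight each sample by its
# likelihood (pdf(posterior) ~ pdf(prior) x likelihood), then estimate
# posterior moments from the weighted samples.
theta = rng.normal(theta_prior_mean, theta_prior_sd, size=200_000)
log_like = -0.5 * ((y_obs - model(theta)) / sigma_obs) ** 2
w = np.exp(log_like - log_like.max())          # unnormalized importance weights
w /= w.sum()

post_mean = np.sum(w * theta)
post_sd = np.sqrt(np.sum(w * (theta - post_mean) ** 2))
print(post_mean, post_sd)
```

Because this toy case is linear-Gaussian, GLS would give the same answer; the BMC route matters precisely when the priors or likelihoods are non-Gaussian or the model is nonlinear.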
Density PDFs of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2012-09-01
The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
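The turbulent origin argued for here rests on a standard multiplicative picture: if a density field is built up by many independent multiplicative compressions and rarefactions, log-density becomes Gaussian and density becomes lognormal. A toy sketch (all numbers are made up, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy multiplicative model: each sightline's density is an initial value n0
# multiplied by many small independent random factors, as in turbulent
# compression. The product of many factors gives a lognormal density.
n0, n_steps, n_los = 0.1, 200, 100_000         # cm^-3, compressions, sightlines
log_factors = rng.normal(0.0, 0.05, size=(n_los, n_steps))
density = n0 * np.exp(log_factors.sum(axis=1))  # product of multiplicative steps

# If density is lognormal, log(density) is Gaussian, so its skewness is ~0.
log_n = np.log(density)
skew = np.mean(((log_n - log_n.mean()) / log_n.std()) ** 3)
print(round(skew, 2))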
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
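For the random (non-clumped) case the simulation this abstract describes has a simple analytical counterpart: completely random items make per-core counts Poisson, so bias, precision, and detection probability can be sketched directly. The densities, core area, and sample sizes below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Randomly scattered benthic items: the count in one core of area a (m^2)
# at density D (items/m^2) is Poisson with mean D * a.
true_density = 500           # items per m^2 (assumed)
core_area_m2 = 10 / 1e4      # 10 cm^2 core sampler (assumed)
n_cores, n_surveys = 30, 2000

counts = rng.poisson(true_density * core_area_m2, size=(n_surveys, n_cores))
est_density = counts.mean(axis=1) / core_area_m2   # each survey's estimate

bias = est_density.mean() - true_density           # ~0 for random items
cv = est_density.std() / true_density              # survey-to-survey precision
p_detect = 1 - np.exp(-true_density * core_area_m2)  # P(>= 1 item per core)
print(round(bias, 1), round(cv, 3), round(p_detect, 3))
```

Clumped distributions would replace the Poisson counts with an over-dispersed (e.g. negative binomial) model, lowering detection probability for the same mean density, which matches the pattern the abstract reports.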
Exploration of thermal counterflow in He II using particle tracking velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mastracci, Brian; Guo, Wei
Flow visualization using particle image velocimetry (PIV) and particularly particle tracking velocimetry (PTV) has been applied to thermal counterflow in He II for nearly two decades now, but the results remain difficult to interpret because tracer particle motion can be influenced by both the normal fluid and superfluid components of He II as well as the quantized vortex tangle. For instance, in one early experiment it was observed (using PTV) that tracer particles move at the normal fluid velocity v_n, while in another it was observed (using PIV) that particles move at v_n/2. Besides the different visualization methods, the range of applied heat flux investigated by these experiments differed by an order of magnitude. To resolve this apparent discrepancy and explore the statistics of particle motion in thermal counterflow, we apply the PTV method to a wide range of heat flux at a number of different fluid temperatures. In our analysis, we introduce a scheme for analyzing the velocity of particles presumably moving with the normal fluid separately from those presumably influenced by the quantized vortex tangle. Our results show that for lower heat flux there are two distinct peaks in the streamwise particle velocity probability density function (PDF), with one centered at the normal fluid velocity v_n (named G2 for convenience) while the other is centered near v_n/2 (G1). For higher heat flux there is a single peak centered near v_n/2 (G3). Using our separation scheme, we show quantitatively that there is no size difference between the particles contributing to G1 and G2. We also show that nonclassical features of the transverse particle velocity PDF arise entirely from G1, while the corresponding PDF for G2 exhibits the classical Gaussian form. The G2 transverse velocity fluctuation, backed up by second sound attenuation in decaying counterflow, suggests that large-scale turbulence in the normal fluid is absent from the two-peak region.
We offer a brief discussion of the physical mechanisms that may be responsible for our observations, revealing that G1 velocity fluctuations may be linked to fluctuations of quantized vortex line velocity, and suggest a number of numerical simulations that may reveal the underlying physics in detail.
NASA Astrophysics Data System (ADS)
Benda, L. E.
2009-12-01
Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks, where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply, which is typically skewed, reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed, which should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates.
Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarified and typically confined to research endeavors, it has real world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion, sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and of the evaluated probability of each type of event, matrices of input parameters for numerical simulation have been constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps give, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of about 10 km around the volcanoes, including Naples. The results provide constraints for emergency plans in the Neapolitan area.
Coulomb Impurity Potential RbCl Quantum Pseudodot Qubit
NASA Astrophysics Data System (ADS)
Ma, Xin-Jun; Qi, Bin; Xiao, Jing-Lin
2015-08-01
By employing a variational method of Pekar type, we study the eigenenergies and the corresponding eigenfunctions of the ground and first-excited states of an electron strongly coupled to LO phonons in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. This QPD system may be used as a two-level quantum qubit. Expressions for the electron's probability density as a function of time and the coordinates, and for the oscillation period as a function of the Coulombic impurity potential and the polaron radius, have been derived. The results indicate ① that the probability density of the electron oscillates in the QPD with a certain period, ② that due to the presence of the asymmetrical potential in the z direction of the RbCl QPD, the electron probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a two-dimensional symmetric structure in the xy plane of the QPD, and ③ that the oscillation period is a decreasing function of the Coulombic impurity potential, whereas it is an increasing function of the polaron radius.
NASA Technical Reports Server (NTRS)
Pfaff, R.; Rowland, D.; Klenzing, J.; Freudenreich, H.; Bromund, K.; Liebrecht, C.; Roddy, P.; Hunton, D.
2009-01-01
DC electric field observations and associated plasma drifts gathered with the Vector Electric Field Investigation on the Air Force Communication/Navigation Outage Forecasting System (C/NOFS) satellite typically reveal considerable variation at large scales (approximately 100s of km), in both daytime and nighttime cases, with enhanced structures usually confined to the nightside. Although such electric field structures are typically associated with plasma density depletions and structures, as observed by the Planar Langmuir Probe on C/NOFS, what is surprising is the number of cases in which large amplitude, structured DC electric fields are observed without a significant plasma density counterpart structure, including their appearance at times when the ambient plasma density appears relatively quiescent. We investigate the relationship of such structured DC electric fields and the ambient plasma density in the C/NOFS satellite measurements observed thus far, taking into account both plasma density depletions and enhancements. We investigate the mapping of the electric fields along magnetic field lines from distant altitudes and latitudes to locations where the density structures, which presumably formed the original seat of the electric fields, are no longer discernible in the observations. In some cases, the electric field structures and spectral characteristics appear to mimic those associated with equatorial spread-F processes, providing important clues to their origins. We examine altitude, seasonal, and longitudinal effects in an effort to establish the origin of such structured DC electric fields observed both with, and without, associated plasma density gradients.
Does probability of occurrence relate to population dynamics?
Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.
2014-01-01
Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, was generally positively correlated with Pocc.
Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions
2009-03-01
[Fragmentary extract from thesis front matter (AFIT/GE/ENG/09-23): defines D, the random variable governing the distribution of dither values, and p_D(t), its probability density function, and notes the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.]
Summary of intrinsic and extrinsic factors affecting detection probability of marsh birds
Conway, C.J.; Gibbs, J.P.
2011-01-01
Many species of marsh birds (rails, bitterns, grebes, etc.) rely exclusively on emergent marsh vegetation for all phases of their life cycle, and many organizations have become concerned about the status and persistence of this group of birds. Yet, marsh birds are notoriously difficult to monitor due to their secretive habits. We synthesized the published and unpublished literature and summarized the factors that influence detection probability of secretive marsh birds in North America. Marsh birds are more likely to respond to conspecific than heterospecific calls, and the seasonal peak in vocalization probability varies among co-existing species. The effectiveness of morning versus evening surveys varies among species and locations. Vocalization probability appears to be positively correlated with density in breeding Virginia Rails (Rallus limicola), Soras (Porzana carolina), and Clapper Rails (Rallus longirostris). Movement of birds toward the broadcast source creates biases when using count data from call-broadcast surveys to estimate population density. Ambient temperature, wind speed, cloud cover, and moon phase affected detection probability in some, but not all, studies. Better estimates of detection probability are needed. We provide recommendations that would help improve future marsh bird survey efforts and a list of 14 priority information and research needs that represent gaps in our current knowledge where future resources are best directed. © Society of Wetland Scientists 2011.
Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-04-01
A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes jam-avoiding drive into account. Each site contains either a car moving up, a car moving right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward motion of up cars. The jamming transition to the high-density jamming phase occurs at a higher car density than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit p_ja = 1, a new jamming transition is found to occur from the low-density synchronized-shifting phase to the high-density moving phase with increasing car density. In the synchronized-shifting phase, up cars do not move upward but shift right in synchrony with the motion of right cars. We show that jam-avoiding drive has an important effect on the dynamical jamming transition.
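The update rule described here (up cars, right cars, and a sideways jam-avoiding shift with probability p_ja) can be sketched as a minimal BML-type cellular automaton. This is a simplified illustration, not the paper's exact model: the lattice size, density, update order, and p_ja below are arbitrary choices, and cars are updated sequentially from a snapshot rather than strictly in parallel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Periodic L x L lattice: 0 = empty, 1 = up-moving car, 2 = right-moving car.
L, density, p_ja, steps = 30, 0.2, 0.5, 50
grid = rng.choice([0, 1, 2], size=(L, L), p=[1 - density, density / 2, density / 2])

def step(grid):
    moved = 0
    # Right-moving cars hop first (along axis 1, periodic boundaries).
    for i, j in np.argwhere(grid == 2):
        tj = (j + 1) % L
        if grid[i, tj] == 0:
            grid[i, tj], grid[i, j] = 2, 0
            moved += 1
    # Up-moving cars hop next; a blocked up car tries a jam-avoiding
    # sideways shift to the right with probability p_ja.
    for i, j in np.argwhere(grid == 1):
        ti = (i - 1) % L                      # "up" = decreasing row index
        if grid[ti, j] == 0:
            grid[ti, j], grid[i, j] = 1, 0
            moved += 1
        elif rng.random() < p_ja and grid[i, (j + 1) % L] == 0:
            grid[i, (j + 1) % L], grid[i, j] = 1, 0   # shift right, no forward move
    return moved

n_cars = np.count_nonzero(grid)
flow = [step(grid) / n_cars for _ in range(steps)]
print(round(np.mean(flow[-10:]), 3))   # mean fraction of cars moving forward
```

Sweeping `density` upward in such a sketch is how one would look for the low-density free-flow, intermediate, and high-density jammed regimes the abstract describes.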
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
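The departure from Gaussian statistics at low scatterer density can be illustrated with a standard compound model (an assumption for illustration, not the paper's analysis): K-distributed intensity arises when fully developed speckle (exponential intensity) is modulated by a gamma-distributed mean, with the gamma shape parameter alpha playing the role of the effective number of scatterers per resolution volume.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian-statistics limit: fully developed speckle, exponential intensity.
# K distribution: the same speckle compounded with a gamma-distributed mean
# (shape alpha, unit mean), mimicking few scatterers per resolution volume.
n, alpha = 1_000_000, 1.5                       # alpha is an assumed value
gauss_intensity = rng.exponential(1.0, n)
k_intensity = rng.gamma(alpha, 1.0 / alpha, n) * rng.exponential(1.0, n)

# Speckle contrast C = std/mean of intensity: 1 for Gaussian statistics,
# sqrt(1 + 2/alpha) for the K distribution, i.e. heavier tails at low density.
c_gauss = gauss_intensity.std() / gauss_intensity.mean()
c_k = k_intensity.std() / k_intensity.mean()
print(round(c_gauss, 2), round(c_k, 2))
```

The contrast excess over 1 is one simple summary of the non-Gaussian statistics; fitting the full amplitude or intensity histogram to the K form is the analogue of the PDF metrics discussed in the abstract.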
Single-molecule stochastic times in a reversible bimolecular reaction
NASA Astrophysics Data System (ADS)
Keller, Peter; Valleriani, Angelo
2012-08-01
In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic treatment and in the approximate treatment.
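The key point, that copy-number fluctuations in B make the waiting-time density non-exponential, can be sketched numerically. The rate constant and copy-number distribution below are made-up illustrative values, not the paper's derivation: a tagged A molecule reacts at total rate k·n_B, and averaging over a random n_B gives a mixture of exponentials.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tagged A molecule reacting with one of n_B molecules of B at rate k * n_B.
# If n_B fluctuates (small numbers), the waiting time is a mixture of
# exponentials; mass action would use a single exponential at the mean rate.
k, n_samples = 1.0, 500_000
n_B = rng.poisson(2.0, n_samples) + 1            # fluctuating copy number >= 1
t_fluct = rng.exponential(1.0 / (k * n_B))       # mixture of exponentials
t_mass_action = rng.exponential(1.0 / (k * n_B.mean()), n_samples)

# A mixture of exponentials is over-dispersed: its coefficient of variation
# exceeds 1, the value for a pure exponential.
cv_fluct = t_fluct.std() / t_fluct.mean()
cv_ma = t_mass_action.std() / t_mass_action.mean()
print(round(cv_fluct, 2), round(cv_ma, 2))
```

The excess of the coefficient of variation over 1 is exactly the low-copy-number signature the abstract describes; in the limit of many B molecules the mixture collapses back to the mass-action exponential.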
Probability density function approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1994-01-01
The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density function of the species mass fractions and enthalpy is obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable for the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. 
However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
Does fragmentation of Urtica habitats affect phytophagous and predatory insects differentially?
Zabel, Jörg; Tscharntke, Teja
1998-09-01
Effects of habitat fragmentation on the insect community of stinging nettle (Urtica dioica L.) were studied, using 32 natural nettle patches of different area and degree of isolation in an agricultural landscape. Habitat fragmentation reduced the species richness of Heteroptera, Auchenorrhyncha, and Coleoptera, and the abundance of populations. Habitat isolation and area reduction did not affect all insect species equally. Monophagous herbivores had a higher probability of absence from small patches than all (monophagous and polyphagous) herbivore species, and the percentage of monophagous herbivores increased with habitat area. Abundance and population variability of species were negatively correlated and could both be used as a predictor of the percentage of occupied habitats. Species richness of herbivores correlated (positively) with habitat area, while species richness of predators correlated (negatively) with habitat isolation. In logistic regressions, the probability of absence of monophagous herbivores from habitat patches could only be explained by habitat area (in 4 out of 10 species) and predator absence probability only by habitat isolation (in 3 out of 14 species). Presumably because of the instability of higher-trophic-level populations and dispersal limitation, predators were more affected by habitat isolation than herbivores, while they did not differ from herbivore populations with respect to abundance or variability. Thus increasing habitat connectivity in the agricultural landscape should primarily promote predator populations.
NASA Astrophysics Data System (ADS)
Abong`O, B. O.; Momba, M. N. B.; Rodda, N.
The current study explored the health risk of E. coli O157:H7 to diarrhoeic confirmed and non-confirmed HIV/AIDS patients due to presumed ingestion of water, meat products and vegetables ostensibly contaminated with E. coli O157:H7. Strains of E. coli O157:H7 were isolated by enrichment culture and on Cefixime-Tellurite Sorbitol MacConkey agar. Average counts of presumptive E. coli O157 were used for dose-response assessment. The probability of infection for confirmed and non-confirmed HIV/AIDS patients was 20% and 27% from meat and meat products, 21% and 15% from vegetables, and 100% due to ingestion of 1500 mL person-1 day-1 of water. Drinking water had a higher probability of transmitting E. coli O157:H7 infection than meat and meat products or vegetables. The probability of E. coli O157:H7 infection was higher for confirmed HIV/AIDS patients than for non-confirmed patients. Water and foods consumed by HIV/AIDS patients should be free of microbial contaminants, and these waters and foods should also be investigated for other enteric pathogens to establish their safety.
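Dose-response assessments of this kind are commonly built on a single-hit model in which each ingested organism independently initiates infection with probability r, giving P_inf(dose) = 1 - exp(-r·dose). The function and all parameter values below are illustrative assumptions, not the study's fitted model or data.

```python
import math

# Single-hit exponential dose-response model, a common QMRA form:
# each ingested organism independently causes infection with probability r.
def p_infection(conc_per_ml, ml_per_day, r):
    dose = conc_per_ml * ml_per_day        # mean daily ingested dose
    return 1.0 - math.exp(-r * dose)

# Hypothetical exposure: 1500 mL/day of water at 0.01 organisms/mL, r = 0.5.
p = p_infection(0.01, 1500.0, 0.5)
print(round(p, 4))
```

At such doses the probability saturates near 1, which is why daily ingestion of contaminated drinking water dominates the other exposure routes in the abstract.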
NASA Astrophysics Data System (ADS)
Perrin, Jérôme; Takeda, Yoshihiko; Hirano, Naoto; Takeuchi, Yoshiaki; Matsuda, Akihisa
1989-03-01
The deposition rate of hydrogenated amorphous silicon films in SiH4 glow-discharge is drastically enhanced upon addition of B2H6 when the gas-phase concentration exceeds 10^-4. This cannot be attributed to gas-phase reactions and must be interpreted as an increase of the sticking probability of the dominant SiH3 radical. However, the total surface loss probability (β) of SiH3, which includes both sticking (s) and recombination (γ), increases only above 10^-2 B2H6 concentration, which reveals that between 10^-4 and 10^-2 the ratio s/β increases. A precursor-state model is proposed in which SiH3 first physisorbs on the H-covered surface and migrates until it recombines, or chemisorbs on a free dangling-bond site. At a typical deposition temperature of 200°C, the only mechanism of creation of dangling bonds in the absence of B2H6 is precisely the recombination of SiH3 as SiH4 by H abstraction, which limits the sticking probability to a fraction of β. This restriction is overcome with the help of hydroboron radicals, presumably BH3, which catalyze H2 desorption.
Geweke, Jan; Shirhatti, Pranav R; Rahinov, Igor; Bartels, Christof; Wodtke, Alec M
2016-08-07
In this work we seek to examine the nature of collisional energy transfer between HCl and Au(111) for nonreactive scattering events that sample geometries near the transition state for dissociative adsorption, by varying both the vibrational and translational energy of the incident HCl molecules in the range near the dissociation barrier. Specifically, we report absolute vibrational excitation probabilities for HCl(v = 0 → 1) and HCl(v = 1 → 2) scattering from clean Au(111) as a function of surface temperature and incidence translational energy. The HCl(v = 2 → 3) channel could not be observed, presumably due to the onset of dissociation. The excitation probabilities can be decomposed into adiabatic and nonadiabatic contributions. We find that both contributions strongly increase with incidence vibrational state, by factors of 24 and 9, respectively. This suggests that V-T as well as V-EHP coupling can be enhanced near the transition state for dissociative adsorption at a metal surface. We also show that previously reported HCl(v = 0 → 1) excitation probabilities [Q. Ran et al., Phys. Rev. Lett. 98, 237601 (2007)], 50 times smaller than those reported here, were influenced by erroneous assignment of spectroscopic lines used in the data analysis.
Large Scale Data Analysis and Knowledge Extraction in Communication Data
2017-03-31
For this purpose, a novel method called the "Correlation Density Rank" was developed, which finds the probability density distribution of related frequent events and derives the community tree from the network. [See: "Community Structure in Dynamic Social Networks using the Correlation Density Rank," 2014 ASE BigData/SocialCom/Cybersecurity Conference, Stanford]
Hydrogen and Sulfur from Hydrogen Sulfide. 5. Anodic Oxidation of Sulfur on Activated Glassy Carbon
1988-12-05
Electrolyses of H2S can probably be carried out at high rates with modest cell voltages in the range 1-1.5 V. Anodic oxidation of H2S from solutions of NaSH in aqueous NaOH was achieved using suitably activated glassy carbon anodes. Passivation was avoided by using a basic solvent at 85°C; using an H2S-saturated 6 M NaOH solution, they conducted electrolyses for extended periods at high current densities.
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl
Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. Key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
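As a point of reference for the low-rank iterative solver described above, a naive dense solve of the generalized Lyapunov equation A X Eᵀ + E X Aᵀ + B Bᵀ = 0 can be written via Kronecker vectorization. This sketch (the function name and test matrices are ours, not the paper's) costs O(n⁶) and is feasible only for tiny systems, which is exactly why the paper's low-rank approach is needed.

```python
import numpy as np

def solve_generalized_lyapunov(A, E, B):
    """Dense solve of A X E^T + E X A^T + B B^T = 0 by vectorization.

    Uses vec(M X N^T) = (N kron M) vec(X) with column-major vec, so the
    equation becomes (E kron A + A kron E) vec(X) = -vec(B B^T).
    """
    n = A.shape[0]
    K = np.kron(E, A) + np.kron(A, E)
    rhs = -(B @ B.T).reshape(n * n, order="F")
    return np.linalg.solve(K, rhs).reshape((n, n), order="F")

# Demo on a small stable pencil (A near -I, E near I).
rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
X = solve_generalized_lyapunov(A, E, B)
residual = np.linalg.norm(A @ X @ E.T + E @ X @ A.T + B @ B.T)
```

The unique solution is symmetric, which the residual check below confirms numerically.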
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
NASA Technical Reports Server (NTRS)
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
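The paper's results are exact; for intuition, a crude Monte Carlo of the open-boundary TASEP (a stand-in simulation, with hypothetical rates α = β = 0.6 placing the chain in the maximal-current phase) illustrates the two observables of the joint distribution, bulk density and current:

```python
import random

def tasep(L=50, alpha=0.6, beta=0.6, sweeps=10000, burn=2000, seed=1):
    """Random-sequential open-boundary TASEP.

    Entry at site 0 with rate alpha, exit from site L-1 with rate beta,
    unit bulk hopping. Returns (mean density, mean current at mid bond).
    """
    rng = random.Random(seed)
    tau = [0] * L                  # occupation numbers
    mid = L // 2                   # bond whose current we record
    hops, dens = 0, 0.0
    for sweep in range(sweeps):
        for _ in range(L + 1):     # one sweep = L+1 random move attempts
            i = rng.randrange(-1, L)
            if i == -1:            # injection attempt at the left boundary
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1
            elif i == L - 1:       # extraction attempt at the right boundary
                if tau[i] == 1 and rng.random() < beta:
                    tau[i] = 0
            elif tau[i] == 1 and tau[i + 1] == 0:   # bulk hop i -> i+1
                tau[i], tau[i + 1] = 0, 1
                if i == mid and sweep >= burn:
                    hops += 1
        if sweep >= burn:
            dens += sum(tau) / L
    n = sweeps - burn
    return dens / n, hops / n
```

For α = β > 1/2 the known stationary values in the thermodynamic limit are density 1/2 and current 1/4, which the simulation reproduces to within finite-size and sampling error.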
The identification of liquid ethane in Titan's Ontario Lacus
Brown, R.H.; Soderblom, L.A.; Soderblom, J.M.; Clark, R.N.; Jaumann, R.; Barnes, J.W.; Sotin, Christophe; Buratti, B.; Baines, K.H.; Nicholson, P.D.
2008-01-01
Titan was once thought to have global oceans of light hydrocarbons on its surface, but after 40 close flybys of Titan by the Cassini spacecraft, it has become clear that no such oceans exist. There are, however, features similar to terrestrial lakes and seas, and widespread evidence for fluvial erosion, presumably driven by precipitation of liquid methane from Titan's dense, nitrogen-dominated atmosphere. Here we report infrared spectroscopic data, obtained by the Visual and Infrared Mapping Spectrometer (VIMS) on board the Cassini spacecraft, that strongly indicate that ethane, probably in liquid solution with methane, nitrogen and other low-molecular-mass hydrocarbons, is contained within Titan's Ontario Lacus. ©2008 Macmillan Publishers Limited. All rights reserved.
Darwin, artificial selection, and poverty.
Sanchez, Luis
2010-03-01
This paper argues that the processes of evolutionary selection are becoming increasingly artificial, a trend that goes against the belief in a purely natural selection process claimed by Darwin's natural selection theory. Artificial selection is mentioned by Darwin, but it was ignored by Social Darwinists, and it is all but absent in neo-Darwinian thinking. This omission results in an underestimation of probable impacts of artificial selection upon assumed evolutionary processes, and has implications for the ideological uses of Darwin's language, particularly in relation to poverty and other social inequalities. The influence of artificial selection on genotypic and phenotypic adaptations arguably represents a substantial shift in the presumed path of evolution, a shift laden with both biological and political implications.
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
A Balanced Approach to Adaptive Probability Density Estimation.
Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy
2017-01-01
Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
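BADE itself is not reproduced here; as a hedged illustration of point-wise adaptive smoothing, the classic Abramson square-root-law adaptive kernel estimator widens kernels where a pilot estimate says the sample is sparse:

```python
import math
import random

def adaptive_kde(data, grid, h0=None):
    """Abramson adaptive-bandwidth KDE (illustrative stand-in for BADE).

    A fixed-bandwidth pilot estimate sets a per-sample bandwidth
    h_i = h0 * (pilot(x_i) / g)^(-1/2), with g the geometric mean of
    the pilot values, so kernels widen in sparse regions.
    """
    n = len(data)
    if h0 is None:                       # Silverman's rule for the pilot
        mean = sum(data) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
        h0 = 1.06 * sd * n ** -0.2
    def gauss(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    pilot = [sum(gauss((x - y) / h0) for y in data) / (n * h0) for x in data]
    g = math.exp(sum(math.log(p) for p in pilot) / n)
    h = [h0 * (p / g) ** -0.5 for p in pilot]
    return [sum(gauss((x - y) / hi) / hi for y, hi in zip(data, h)) / n
            for x in grid]

# Demo: 400 draws from a standard normal, density evaluated on [-8, 8].
rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(400)]
grid = [-8.0 + 0.05 * k for k in range(321)]
density = adaptive_kde(data, grid)
```

The resulting estimate is nonnegative and integrates to approximately one, the basic sanity checks for any density estimator.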
Adaptive detection of a noise signal according to the Neyman-Pearson criterion
NASA Astrophysics Data System (ADS)
Padiryakov, Y. A.
1985-03-01
Optimum detection according to the Neyman-Pearson criterion is considered in the case of a random Gaussian noise signal, stationary during measurement, and a stationary random Gaussian background interference. Detection is based on two samples whose statistics are characterized by estimates of their spectral densities, it being a priori known that sample A from the signal channel is either the sum of signal and interference or interference alone, and that sample B from the reference interference channel is an interference with the same spectral density as that of the interference in sample A under both hypotheses. The probability of correct detection is maximized on average, first in the 2N-dimensional space of signal spectral density and interference spectral density readings, by fixing the probability of false alarm at each point so as to stabilize it at a constant level against variation of the interference spectral density. Deterministic decision rules are established. The algorithm is then reduced to equivalent detection in the N-dimensional space of the ratio of sample A readings to sample B readings.
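The constant-false-alarm idea can be illustrated in a much-simplified scalar form (the paper works with spectral-density estimates in N-dimensional sample spaces): fix the threshold at the (1 - P_FA) quantile of the interference-only statistic, then measure the resulting detection probability. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, p_fa_target = 1.0, 0.05          # hypothetical interference level

# Fix the threshold at the (1 - P_FA) quantile of an interference-only
# ("reference channel") sample, holding the false-alarm rate constant
# even if sigma is not known in advance.
reference = rng.normal(0.0, sigma, 200_000)
threshold = np.quantile(reference, 1.0 - p_fa_target)

# Realized false-alarm rate on fresh interference, and detection
# probability for a signal-plus-interference sample (assumed mean 2.0).
p_fa = np.mean(rng.normal(0.0, sigma, 200_000) > threshold)
p_d = np.mean(rng.normal(2.0, sigma, 200_000) > threshold)
```

The empirical false-alarm rate stays pinned near the target while the detection probability depends on the signal strength, which is the essence of the Neyman-Pearson setup.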
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency-domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
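The ZMNL step can be sketched as follows: a standard-normal sample is pushed through Φ and then through the inverse CDF of the target marginal (here exponential, chosen for illustration). Note this controls only the marginal pdf; the spectral correction discussed in the paper is omitted.

```python
import math
import random

def zmnl_exponential(x, scale=1.0):
    """Zero-memory nonlinear (ZMNL) transform: maps a standard-normal value
    to an exponential(scale) marginal via u = Phi(x), y = -scale*ln(1-u)."""
    u = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return -scale * math.log(max(1.0 - u, 1e-300))   # guard against log(0)

# Demo: transform a white Gaussian series sample by sample (no memory);
# the marginal becomes exponential (mean = scale = 1, positive skew).
rng = random.Random(7)
gauss_series = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
exp_series = [zmnl_exponential(x) for x in gauss_series]
m = sum(exp_series) / len(exp_series)
third_moment = sum((y - m) ** 3 for y in exp_series) / len(exp_series)
```

Because the mapping is monotone and memoryless, the marginal is matched exactly in distribution while temporal ordering is preserved.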
Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.
2011-01-01
Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall, and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
NASA Astrophysics Data System (ADS)
Paganoni, Matteo; Al Harthi, Amena; Morad, Daniel; Morad, Sadoon; Ceriani, Andrea; Mansurbeg, Howri; Al Suwaidi, Aisha; Al-Aasm, Ihsan S.; Ehrenberg, Stephen N.; Sirat, Manhal
2016-04-01
Bed-parallel stylolites are a widespread diagenetic feature in Lower Cretaceous limestone reservoirs, Abu Dhabi, United Arab Emirates (UAE). Diagenetic calcite, dolomite, kaolin and small amounts of pyrite, fluorite, anhydrite and sphalerite occur along and in the vicinity of the stylolites. Petrographic observations, negative δ18O (VPDB) values, fluid inclusion microthermometry, and enrichment in 87Sr suggest that these cements precipitated from hot basinal brines, which migrated along the stylolites and genetically related microfractures (tension gashes). Fluid migration was presumably related to lateral tectonic compression events associated with foreland basin formation. The low solubility of Al3+ in formation waters suggests that kaolin precipitation was linked to derivation of organic acids during organic matter maturation, probably in siliciclastic source rocks. The mass released by stylolitization was presumably re-precipitated as macro- and microcrystalline calcite cement in the host limestones. The flanks of the oilfield (water zone) display more frequent and higher-amplitude stylolites, lower porosity and permeability, higher homogenization temperatures and more radiogenic composition of carbonates compared to the crest (oil zone). This indicates that oil emplacement retards diagenesis. This study demonstrates that stylolitization plays a crucial role in fluid flow and diagenesis of carbonate reservoirs during basin evolution.
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics, in terms of nominal values and error bars, of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward-model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data-likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty.
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
2016-10-15
Assessing environmental DNA detection in controlled lentic systems.
Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin
2014-01-01
Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m3). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that the per-1-L-sample probability of eDNA detection was 4.86 times more likely for every 2.53 fish/m3 (1 SD) increase in fish density and 1.67 times less likely for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column for detecting eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability for the presence/absence of its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m3 or 1.75 g/m3).
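The sample-size figures quoted above follow from the independence approximation 1 - (1 - p)^n ≥ target. A minimal sketch (the per-litre detection probabilities passed in below are hypothetical round numbers, not the study's estimates):

```python
import math

def samples_for_detection(p_per_sample, target=0.95):
    """Smallest n with 1 - (1 - p)^n >= target, assuming independent
    1-L water samples with equal per-sample detection probability p."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_sample))

# Hypothetical per-litre detection probabilities, high to low density:
n_high = samples_for_detection(0.5)    # a few litres suffice
n_low = samples_for_detection(0.05)    # tens of litres needed
```

This is why the required sampling effort grows so quickly as the per-sample detection probability falls: the n that achieves a fixed target scales as 1/p for small p.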
NASA Astrophysics Data System (ADS)
Valageas, P.
2000-02-01
In this article we present an analytical calculation of the probability distribution of the magnification of distant sources due to weak gravitational lensing from non-linear scales. We use a realistic description of the non-linear density field, which has already been compared with numerical simulations of structure formation within hierarchical scenarios. Then, we can directly express the probability distribution P(μ) of the magnification in terms of the probability distribution of the density contrast realized on non-linear scales (typical of galaxies) where the local slope of the initial linear power spectrum is n = -2. We recover the behaviour seen by numerical simulations: P(μ) peaks at a value slightly smaller than the mean ⟨μ⟩ = 1 and it shows an extended large-μ tail (as described in another article, our predictions also show a good quantitative agreement with results from N-body simulations for a finite smoothing angle). Then, we study the effects of weak lensing on the derivation of the cosmological parameters from SNeIa. We show that the inaccuracy introduced by weak lensing is not negligible: ΔΩ_m ≳ 0.3 for two observations at z_s = 0.5 and z_s = 1. However, observations can unambiguously discriminate between Ω_m = 0.3 and Ω_m = 1. Moreover, in the case of a low-density universe one can clearly distinguish an open model from a flat cosmology (besides, the error decreases as the number of observed SNeIa increases). Since distant sources are more likely to be "demagnified", the most probable value of the observed density parameter Ω_m is slightly smaller than its actual value. On the other hand, one may obtain some valuable information on the properties of the underlying non-linear density field from the measure of weak-lensing distortions.
NASA Astrophysics Data System (ADS)
Kahn, Amanda S.; Ruhl, Henry A.; Smith, Kenneth L.
2012-12-01
Density and average size of two species of abyssal sponges were analyzed at Station M (∼4100 m depth) over an 18-year time-series (1989-2006) using camera sled transects. Both sponge taxa share a similar plate-like morphology despite being within different families, and both showed similar variations in density and average body size over time, suggesting that the same factors may control the demographics of both species. Peaks in significant cross correlations between increases in particulate organic carbon flux and corresponding increases in sponge density occurred with a time lag of 13 months. Sponge density also fluctuated with changes in two climate indices: the NOI with a time lag of 18 months and NPGO with a time lag of 15 months. The results support previous suggestions that increased particulate organic carbon flux may induce recruitment or regeneration in deep-sea sponges. It is unknown whether the appearance of young individuals results from recruitment, regeneration, or both, but the population responses to seasonal and inter-annual changes in food supply demonstrate that sponge populations are dynamic and are capable of responding to inter-annual changes despite being sessile and presumably slow-growing.
Density of American black bears in New Mexico
Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.; Liley, Stewart
2018-01-01
Considering advances in noninvasive genetic sampling and spatially explicit capture–recapture (SECR) models, the New Mexico Department of Game and Fish sought to update their density estimates for American black bear (Ursus americanus) populations in New Mexico, USA, to aid in setting sustainable harvest limits. We estimated black bear density in the Sangre de Cristo, Sandia, and Sacramento Mountains, New Mexico, 2012–2014. We collected hair samples from black bears using hair traps and bear rubs and used a sex marker and a suite of microsatellite loci to individually genotype hair samples. We then estimated density in a SECR framework using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We sampled the populations using 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 male, 358 female) individuals. Our density estimates varied from 16.5 bears/100 km2 (95% CI = 11.6–23.5) in the southern Sacramento Mountains to 25.7 bears/100 km2 (95% CI = 13.2–50.1) in the Sandia Mountains. Overall, detection probability at the activity center (g0) was low across all study areas and ranged from 0.00001 to 0.02. The low values of g0 were primarily a result of half of all hair samples for which genotypes were attempted failing to produce a complete genotype. We speculate that the low success we had genotyping hair samples was due to exceedingly high levels of ultraviolet (UV) radiation that degraded the DNA in the hair. Despite sampling difficulties, we were able to produce density estimates with levels of precision comparable to those estimated for black bears elsewhere in the United States.
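In SECR, the decline of detection probability with distance from the activity centre is typically modelled with a half-normal function g(d) = g0·exp(-d²/2σ²). A minimal sketch using the abstract's upper g0 estimate and an assumed spatial scale σ (the 2000 m value is ours, not from the study):

```python
import math

def halfnormal_detection(d, g0, sigma):
    """Half-normal SECR detection function: per-occasion probability that
    a detector at distance d from an activity centre records the animal."""
    return g0 * math.exp(-(d * d) / (2.0 * sigma * sigma))

# g0 = 0.02 is the abstract's upper estimate; sigma = 2000 m is assumed.
p_at_centre = halfnormal_detection(0.0, 0.02, 2000.0)
p_far = halfnormal_detection(6000.0, 0.02, 2000.0)
```

Detection probability is maximal at the activity centre and decays smoothly with distance, which is the heterogeneity the SECR model parameterizes via g0 and σ.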
NASA Astrophysics Data System (ADS)
Helble, Tyler Adam
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. A knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single, fixed, omnidirectional sensors as well as the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values, and presented as call densities (calls per square kilometer per unit time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise.
The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this thesis develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.
Timescales of isotropic and anisotropic cluster collapse
NASA Astrophysics Data System (ADS)
Bartelmann, M.; Ehlers, J.; Schneider, P.
1993-12-01
From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.
Probabilistic cluster labeling of imagery data
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1980-01-01
The problem of obtaining the probabilities of class labels for the clusters using spectral and spatial information from a given set of labeled patterns and their neighbors is considered. A relationship is developed between class-conditional and cluster-conditional densities in terms of the probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.
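The paper's fixed-point schemes are more elaborate, but the core Bayes-rule relationship between class labels and clusters can be sketched as follows; the count matrix and all values are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical counts: rows = classes, columns = clusters; entry (i, j) is the
# number of labeled patterns of class i that fell into cluster j (values invented).
counts = np.array([[30.0,  5.0,  1.0],
                   [ 4.0, 25.0,  6.0],
                   [ 1.0,  3.0, 20.0]])

# P(cluster j | class i): normalize each row of the count matrix.
p_cluster_given_class = counts / counts.sum(axis=1, keepdims=True)

# Class priors estimated from the labeled sample.
priors = counts.sum(axis=1) / counts.sum()

# Bayes' rule: P(class i | cluster j) is proportional to
# P(cluster j | class i) * P(class i), normalized over classes.
joint = p_cluster_given_class * priors[:, None]
p_class_given_cluster = joint / joint.sum(axis=0, keepdims=True)

print(p_class_given_cluster.round(3))
```

Each column of the result is a probability distribution over class labels for one cluster, the quantity the paper's iteration schemes refine with spatial information.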
New finding that might explain why the skin wrinkles more on various parts of the face.
Tamatsu, Yuichi; Tsukahara, Kazue; Sugawara, Yasushi; Shimada, Kazuyuki
2015-09-01
The mechanism of formation of facial wrinkles has not been fully clarified due to the existence of many distinct influential factors. To clarify the relationship between facial wrinkles and structures in the skin, especially sebaceous glands, image analysis was performed on the forehead and lateral canthus regions of cadaveric skin specimens; 58 male and female donated cadavers (age range at death 20s-90s) were included in the study. Specimens were obtained from the forehead and lateral canthus regions after measuring wrinkle depth. Tissue slices were then prepared to observe the sebaceous glands, and gland density was measured and analyzed in relation to wrinkle depth, retinacula cutis density, dermal thickness, and degree of solar elastosis. A correlation was found between sebaceous gland density and wrinkle depth in forehead specimens with a lower retinacula cutis density. Wrinkles were shallower in specimens with a higher sebaceous gland density. However, no such correlation was found in lateral canthus wrinkles, presumably due to the lack of sebaceous glands in that region. In addition, specimens with a higher sebaceous gland density tended to have a thicker dermis and/or less solar elastosis. Sebaceous gland density seems to be one of the multiple factors that prevent wrinkle deepening, which may explain why wrinkles are deeper in the lateral canthus area than in the forehead. Functional studies will elucidate the mechanism of wrinkle formation in the future. © 2015 Wiley Periodicals, Inc.
Morley, B J; Garner, L L
1990-06-11
Sodium-dependent, high-affinity choline uptake (HACU) and the density of alpha-bungarotoxin (BuTX) receptor-binding sites were measured in the hippocampus following the intraventricular infusion of ethylcholine aziridinium ion (AF64A), a neurotoxin that competes with choline at high-affinity choline transport sites and may result in the degeneration of cholinergic axons. Eight days after the infusion of AF64A into the lateral ventricles (2.5 nmol/side), HACU was depleted by 60% in the hippocampus of experimental animals in comparison with controls, but the density of BuTX-binding sites was not altered. The administration of 15 mg/ml of choline chloride in the drinking water increased the density of BuTX-binding sites, as previously reported by this laboratory. The administration of AF64A did not prevent the effect of exogenous choline on the density of binding sites, nor did choline treatment alter the effect of AF64A on HACU. These data indicate that the density of BuTX-binding sites in the hippocampus is not altered following a substantial decrease in HACU and presumed degeneration of cholinergic axons. Since the effect of exogenous choline was not prevented by AF64A treatment, the data are interpreted to support the hypothesis that the increase in the density of BuTX-binding sites following dietary choline supplementation is attributable to a direct effect of choline on receptor sites.
Anastasiadis, Anastasios; Onal, Bulent; Modi, Pranjal; Turna, Burak; Duvdevani, Mordechai; Timoney, Anthony; Wolf, J Stuart; De La Rosette, Jean
2013-12-01
This study aimed to explore the relationship between stone density and outcomes of percutaneous nephrolithotomy (PCNL) using the Clinical Research Office of the Endourological Society (CROES) PCNL Global Study database. Patients undergoing PCNL treatment were assigned to a low stone density [LSD, ≤ 1000 Hounsfield units (HU)] or high stone density (HSD, > 1000 HU) group based on the radiological density of the primary renal stone. Preoperative characteristics and outcomes were compared in the two groups. Retreatment for residual stones was more frequent in the LSD group. The overall stone-free rate achieved was higher in the HSD group (79.3% vs 74.8%, p = 0.113). By univariate regression analysis, the probability of achieving a stone-free outcome peaked at approximately 1250 HU. Densities below or above this value were associated with lower treatment success, particularly at very low HU values. With increasing radiological stone density, operating time decreased to a minimum at approximately 1000 HU, then increased with further increases in stone density. Multivariate non-linear regression analysis showed a similar relationship between the probability of a stone-free outcome and stone density. Higher treatment success rates were found with low stone burden, pelvic stone location and use of pneumatic lithotripsy. Very low and high stone densities are associated with lower rates of treatment success and longer operating time in PCNL. Preoperative assessment of stone density may help in the selection of treatment modality for patients with renal stones.
The propagator of stochastic electrodynamics
NASA Astrophysics Data System (ADS)
Cavalleri, G.
1981-01-01
The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density proportional to ω^3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new, since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics on this problem, stochastic electrodynamics requires no renormalization.
Winter habitat selection of mule deer before and during development of a natural gas field
Sawyer, H.; Nielson, R.M.; Lindzey, F.; McDonald, L.L.
2006-01-01
Increased levels of natural gas exploration, development, and production across the Intermountain West have created a variety of concerns for mule deer (Odocoileus hemionus) populations, including direct habitat loss to road and well-pad construction and indirect habitat losses that may occur if deer use declines near roads or well pads. We examined winter habitat selection patterns of adult female mule deer before and during the first 3 years of development in a natural gas field in western Wyoming. We used global positioning system (GPS) locations collected from a sample of adult female mule deer to model relative frequency or probability of use as a function of habitat variables. Model coefficients and predictive maps suggested mule deer were less likely to occupy areas in close proximity to well pads than those farther away. Changes in habitat selection appeared to be immediate (i.e., year 1 of development), and no evidence of well-pad acclimation occurred through the course of the study; rather, mule deer selected areas farther from well pads as development progressed. Lower predicted probabilities of use within 2.7 to 3.7 km of well pads suggested indirect habitat losses may be substantially larger than direct habitat losses. Additionally, some areas classified as high probability of use by mule deer before gas field development changed to areas of low use following development, and others originally classified as low probability of use were used more frequently as the field developed. If areas with high probability of use before development were those preferred by the deer, observed shifts in their distribution as development progressed were toward less-preferred and presumably less-suitable habitats.
Dual Approach To Superquantile Estimation And Applications To Density Fitting
2016-06-01
incorporate additional constraints to improve the fidelity of density estimates in tail regions. We limit our investigation to data with heavy tails, where risk quantification is typically the most difficult. Demonstrations are provided in the form of samples of various heavy-tailed distributions. Subject terms: probability density estimation, epi-splines, optimization, risk quantification.
NASA Astrophysics Data System (ADS)
Ruan, Shaohong; Swaminathan, Nedunchezhian; Darbyshire, Oliver
2014-03-01
This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach with interest in both flame lift-off height and flame brush structure. First, flamelet models used to capture contributions from premixed and non-premixed modes of the partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated using the DNS data. The statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of Z-c correlation and the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for non-premixed combustion contributions. The results clearly show that both Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to get good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well for various axial positions. It seems that flame stabilisation is influenced by both premixed and non-premixed combustion modes and by their mutual interaction.
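A copula construction of the kind referred to above can be sketched numerically. Here a Gaussian copula couples two beta marginals standing in for Z and c, with the dependence imposed by rank reordering; the correlation coefficient and the beta shape parameters are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
rho = -0.5                                    # assumed Z-c correlation

# Correlated standard normals define the Gaussian copula.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
g = L @ rng.standard_normal((2, n))

# Independent beta marginals (shape parameters invented); the copula dependence
# is then imposed by reordering each sample to follow the ranks of the normals.
z_ind = rng.beta(2.0, 5.0, n)                 # mixture fraction Z
c_ind = rng.beta(1.5, 3.0, n)                 # progress variable c
Z = np.sort(z_ind)[np.argsort(np.argsort(g[0]))]
c = np.sort(c_ind)[np.argsort(np.argsort(g[1]))]

print(f"sample correlation of (Z, c): {np.corrcoef(Z, c)[0, 1]:.2f}")
```

The reordering leaves each marginal exactly as sampled while transferring the normals' rank dependence, which is the defining property of a copula model.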
Boudko, D Y; Switzer-Dunlap, M; Hadfield, M G
1999-01-05
Two sensory-cell types, subepithelial sensory cells (SSCs) and intraepithelial sensory cells (ISCs), were identified in the anterior sensory organs (ASO: pairs of rhinophores and oral tentacles, and the anterior field formed by the oral plate and cephalic shield) of the nudibranch Phestilla sibogae after filling through anterior nerves with the neuronal tracers biocytin and Lucifer Yellow. A third type of sensory cells, with subepithelial somata and tufts of stiff-cilia (TSCs, presumably rheoreceptors), was identified after uptake of the mitochondrial dye DASPEI. Each sensory-cell type has a specific spatial distribution in the ASO. The highest density of ISCs is in the oral tentacles (approximately 1,200/mm2), SSCs in the middle parts of the rhinophores (>4,000/mm2), and TSCs in the tips of cephalic tentacles (100/mm2). These morphologic data, together with electrophysiologic evidence for greater chemical sensitivity of the rhinophores than the oral tentacles (Murphy and Hadfield [1997] Comp. Biochem. Physiol. 118A:727-735; Boudko et al. [1997] Soc. Neurosci. Abstr. 23:1787), led us to conclude that the two pairs of chemosensory tentacles serve different chemosensory functions in P. sibogae; i.e., ISCs and the oral tentacles serve contact- or short-distance chemoreception, and SSCs and the rhinophores function for long-distance chemoreception or olfaction. If this is true, then the ISC subsystem probably represents an earlier stage in the evolution and adaptations of gastropod chemosensory biology, whereas among the opisthobranchs, the SSC subsystem evolved with the rhinophores from ancestral cephalaspidean opisthobranchs.
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
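The alignment step described above, matching mass centers and principal axes, can be sketched with second-order image moments; the synthetic test image and all parameter values here are invented:

```python
import numpy as np

def alignment_params(img):
    """Estimate the translation (mass center) and in-plane rotation
    (principal-axis angle) used to register a single-particle image."""
    ys, xs = np.indices(img.shape)
    m = img.sum()
    cy, cx = (ys * img).sum() / m, (xs * img).sum() / m        # mass center
    mu20 = ((xs - cx) ** 2 * img).sum() / m                    # central moments
    mu02 = ((ys - cy) ** 2 * img).sum() / m
    mu11 = ((xs - cx) * (ys - cy) * img).sum() / m
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)            # principal axis
    return (cy, cx), theta

# Synthetic elongated blob rotated by 30 degrees, plus Gaussian background noise.
rng = np.random.default_rng(0)
ys, xs = np.indices((64, 64))
a = np.deg2rad(30)
u = (xs - 32) * np.cos(a) + (ys - 32) * np.sin(a)
v = -(xs - 32) * np.sin(a) + (ys - 32) * np.cos(a)
img = np.exp(-(u ** 2 / 200 + v ** 2 / 20))
noisy = np.clip(img + 0.01 * rng.standard_normal(img.shape), 0, None)

(cy, cx), theta = alignment_params(noisy)
print(round(cx, 1), round(cy, 1), round(np.rad2deg(theta), 1))
```

Noise perturbs the recovered centroid and angle slightly, which is exactly the misalignment whose mean and variance the paper derives.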
Mode switching in volcanic seismicity: El Hierro 2011-2013
NASA Astrophysics Data System (ADS)
Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.
2016-05-01
The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic change in an inferred underlying process.
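The window-stacking idea can be illustrated on a synthetic catalog. This sketch assumes the standard Aki-Utsu maximum-likelihood b-value estimator; the catalog, the two b-value phases, and the window sizes are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mags(b, n, mc=1.0):
    # Gutenberg-Richter magnitudes above a completeness cutoff mc are
    # exponentially distributed with rate beta = b * ln(10).
    return mc + rng.exponential(1.0 / (b * np.log(10)), n)

def b_aki(mags, mc=1.0):
    # Aki-Utsu maximum-likelihood b-value estimator.
    return np.log10(np.e) / (mags.mean() - mc)

# Two-phase synthetic catalog: b switches from 1.0 to 2.0 halfway through.
catalog = np.concatenate([sample_mags(1.0, 3000), sample_mags(2.0, 3000)])

# Randomly chosen windows; b estimated in each; estimates stacked into a PDF.
estimates = []
for _ in range(2000):
    i = rng.integers(0, len(catalog) - 500)
    w = rng.integers(200, 500)
    estimates.append(b_aki(catalog[i:i + w]))
estimates = np.array(estimates)

hist, edges = np.histogram(estimates, bins=60, density=True)
print(f"b estimates span {estimates.min():.2f}-{estimates.max():.2f}")
```

The stacked histogram shows two modes near b = 1 and b = 2, the kind of multimodal net PDF the abstract describes, without any subjective choice of phase boundaries.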
NASA Astrophysics Data System (ADS)
Dufty, J. W.
1984-09-01
Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.
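The Green-Kubo construction referred to above, a diffusion coefficient obtained as the time integral of a velocity autocorrelation function, can be sketched with an Ornstein-Uhlenbeck velocity process standing in for the tagged particle's velocity in its local rest frame; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ornstein-Uhlenbeck velocity: dv = -gamma*v*dt + sigma*dW (parameters invented).
gamma, sigma, dt, n = 1.0, 1.0, 0.01, 200_000
v = np.empty(n)
v[0] = 0.0
noise = rng.standard_normal(n)
for i in range(1, n):
    v[i] = v[i-1] - gamma * v[i-1] * dt + sigma * np.sqrt(dt) * noise[i]

# Velocity autocorrelation function C(t) and its Green-Kubo time integral
# D = integral of C(t) dt; for this OU process the exact value is
# sigma^2 / (2 * gamma^2).
max_lag = 1000
acf = np.array([np.mean(v[:n - l] * v[l:]) for l in range(max_lag)])
D = dt * acf.sum()

print(f"D estimate {D:.3f}, exact {sigma**2 / (2 * gamma**2):.3f}")
```

The paper's nonequilibrium generalization replaces this equilibrium autocorrelation with one evaluated in the sheared steady state, but the integral structure is the same.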
Nonequilibrium dynamics of a pure dry friction model subjected to colored noise
NASA Astrophysics Data System (ADS)
Geffert, Paul M.; Just, Wolfram
2017-06-01
We investigate the impact of noise on a two-dimensional simple paradigmatic piecewise-smooth dynamical system. For that purpose, we consider the motion of a particle subjected to dry friction and colored noise. The finite correlation time of the noise provides an additional dimension in phase space, causes a nontrivial probability current, and establishes a proper nonequilibrium regime. Furthermore, the setup allows for the study of stick-slip phenomena, which show up as a singular component in the stationary probability density. Analytic insight can be provided by application of the unified colored noise approximation, developed by Jung and Hänggi [Phys. Rev. A 35, 4464(R) (1987), 10.1103/PhysRevA.35.4464]. The analysis of probability currents and of power spectral densities underpins the observed stick-slip transition, which is related with a critical value of the noise correlation time.
Stevens, C.H.; Stone, P.
2005-01-01
An imbricate system of north-trending, east-directed thrust faults of late Early Permian to middle Early Triassic (most likely Late Permian) age forms a belt in east-central California extending from the Mount Morrison roof pendant in the eastern Sierra Nevada to Death Valley. Six major thrust faults, typically with a spacing of 15-20 km, original dips probably of 25-35°, and stratigraphic throws of 2-5 km, compose this structural belt, which we call the Sierra Nevada-Death Valley thrust system. These thrusts presumably merge into a décollement at depth, perhaps at the contact with crystalline basement, the position of which is unknown. We interpret the deformation that produced these thrusts to have been related to the initiation of convergent plate motion along a southeast-trending continental margin segment probably formed by Pennsylvanian transform truncation. This deformation apparently represents a period of tectonic transition to full-scale convergence and arc magmatism along the continental margin beginning in the Late Triassic in central California. © 2005 Elsevier B.V. All rights reserved.
Multipartite entanglement characterization of a quantum phase transition
NASA Astrophysics Data System (ADS)
Costantini, G.; Facchi, P.; Florio, G.; Pascazio, S.
2007-07-01
A probability density characterization of multipartite entanglement is tested on the one-dimensional quantum Ising model in a transverse field. The average and second moment of the probability distribution are numerically shown to be good indicators of the quantum phase transition. We comment on multipartite entanglement generation at a quantum phase transition.
ERIC Educational Resources Information Center
Nair, Vishnu K. K.; Biedermann, Britta; Nickels, Lyndsey
2017-01-01
Purpose: Previous research has shown that the language-learning mechanism is affected by bilingualism resulting in a novel word learning advantage for bilingual speakers. However, less is known about the factors that might influence this advantage. This article reports an investigation of 2 factors: phonotactic probability and phonological…
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We apply Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
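The discretize-then-regularize step can be sketched as follows. A generic Gaussian smoothing kernel stands in for the actual v sin I projection kernel, and all widths, noise levels, and the regularization parameter are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretized Fredholm problem y = K f: a Gaussian smoothing kernel K stands in
# for the projection kernel (all values invented).
x = np.linspace(0.0, 1.0, 100)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)

f_true = np.exp(-0.5 * ((x - 0.4) / 0.08) ** 2)          # "true" distribution
y = K @ f_true + 0.01 * rng.standard_normal(x.size)      # noisy observations

# Tikhonov-regularized inversion: minimize ||K f - y||^2 + lam * ||f||^2,
# solved via the normal equations (K^T K + lam I) f = K^T y.
lam = 1e-3
f_hat = np.linalg.solve(K.T @ K + lam * np.eye(x.size), K.T @ y)

err_tik = np.linalg.norm(f_hat - f_true)
err_naive = np.linalg.norm(np.linalg.solve(K + 1e-12 * np.eye(x.size), y) - f_true)
print(f"Tikhonov error {err_tik:.2f} vs unregularized error {err_naive:.2e}")
```

The unregularized inverse amplifies the noise catastrophically, which is what "ill-posed" means in practice; the penalty term trades a small bias for stability.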
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
Geology and Composition of Pluto and Charon from New Horizons
NASA Astrophysics Data System (ADS)
Spencer, John R.; Stern, S. Alan; Moore, Jeffrey M.; Grundy, W. M.; McKinnon, William B.; Cruikshank, Dale P.; Weaver, Harold A.; Olkin, Catherine B.; Young, Leslie; Ennico, Kimberly; New Horizons Geology/Geophysics and Composition Theme Teams
2016-10-01
Data gathered by New Horizons during its July 2015 flyby have revolutionized our understanding of the geology and surface composition of Pluto and Charon. While much of Pluto's ice shell is ancient and rigid, as evinced by locally high crater densities and deep graben, much of the surface has been reworked, up to the present day, by a bewildering variety of geological processes. These include deposition and erosion of kilometers of mantle material, sublimation, apparent cryovolcanism, chaotic breakup of the crust to form rugged mountains, erosion and creation of channel networks by probable glacial action, and active glaciation. Pluto's anti-Charon hemisphere is dominated by a 1000-km-wide field of actively convecting nitrogen and other ices, informally called Sputnik Planum, occupying a large depression of probable impact origin. Color and composition are highly varied, dominated by dark red tholins and N2, CH4, and CO ices, with H2O ice bedrock also exposed in many places. Apart from Sputnik Planum, color and composition are strongly correlated with latitude, showing the importance of insolation in controlling ice distribution. Charon shows pervasive extensional tectonism and locally extensive cryovolcanic resurfacing, both dating from early in solar system history. Its color and surface composition, dominated by H2O ice plus NH3 hydrate, is remarkably uniform apart from a thin deposit of dark red material near the north pole which may be due to cold-trapping and radiolysis of hydrocarbons escaping from Pluto. Neither Pluto nor Charon is likely to have experienced tidal heating during the period when observable landforms were created. Charon's surface shows resurfacing comparable in extent and age to many Saturnian and Uranian satellites such as Dione or Ariel, suggesting that observed activity on these satellites may not necessarily be tidally driven. 
Pluto demonstrates that resurfacing on small volatile-rich icy bodies can be powered for at least 4.5 Ga by ongoing radiogenic and residual early heat alone, though the fact that Triton shows much more pervasive resurfacing than Pluto provides some evidence that Triton, unlike Pluto, has access to an additional heat source, presumably tidal.
Yackulic, Charles B.; Reid, Janice; Davis, Raymond; Hines, James E.; Nichols, James D.; Forsman, Eric
2012-01-01
In this paper, we modify dynamic occupancy models developed for detection-nondetection data to allow for the dependence of local vital rates on neighborhood occupancy, where neighborhood is defined very flexibly. Such dependence of occupancy dynamics on the status of a relevant neighborhood is pervasive, yet frequently ignored. Our framework permits joint inference about the importance of neighborhood effects and habitat covariates in determining colonization and extinction rates. Our specific motivation is the recent expansion of the Barred Owl (Strix varia) in western Oregon, USA, over the period 1990-2010. Because the focal period was one of dramatic range expansion and local population increase, the use of models that incorporate regional occupancy (sources of colonists) as determinants of dynamic rate parameters is especially appropriate. We began our analysis of 21 years of Barred Owl presence/nondetection data in the Tyee Density Study Area (TDSA) by testing a suite of six models that varied only in the covariates included in the modeling of detection probability. We then tested whether models that used regional occupancy as a covariate for colonization and extinction outperformed models with constant or year-specific colonization or extinction rates. Finally we tested whether habitat covariates improved the AIC of our models, focusing on which habitat covariates performed best, and whether the signs of habitat effects are consistent with a priori hypotheses. We conclude that all covariates used to model detection probability lead to improved AIC, that regional occupancy influences colonization and extinction rates, and that habitat plays an important role in determining extinction and colonization rates. As occupancy increases from low levels toward equilibrium, colonization increases and extinction decreases, presumably because there are more and more dispersing juveniles. 
While both rates are affected, colonization increases more than extinction decreases. Colonization is higher and extinction is lower in survey polygons with more riparian forest. The effects of riparian forest on extinction rates are greater than on colonization rates. Model results have implications for management of the invading Barred Owl, both through habitat alteration and removal.
Systematic Onset of Periodic Patterns in Random Disk Packings
NASA Astrophysics Data System (ADS)
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.
2018-04-01
We report evidence of a surprising systematic onset of periodic patterns in very tall piles of disks deposited randomly between rigid walls. Independently of the pile width, periodic structures are always observed in monodisperse deposits containing up to 10^7 disks. The probability density function of the lengths of disordered transient phases that precede the onset of periodicity displays an approximately exponential tail. These disordered transients may become very large when the channel width grows without bound. For narrow channels, the probability density of finding periodic patterns of a given period displays a series of discrete peaks, which, however, are washed out completely when the channel width grows.
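The exponential-tail statistics of the transient lengths can be illustrated on synthetic data; the scale value is invented. For an exponential distribution the maximum-likelihood scale estimate is simply the sample mean, and the empirical log-survival function is linear in the length:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic transient lengths with an exponential tail (scale invented),
# standing in for the disordered phases that precede the onset of periodicity.
scale = 5000.0
lengths = rng.exponential(scale, size=20_000)

# MLE of the exponential scale is the sample mean.
scale_hat = lengths.mean()

# Empirical survival function; its logarithm should be linear with slope -1/scale.
s = np.sort(lengths)
survival = 1.0 - np.arange(1, s.size + 1) / s.size
mask = survival > 1e-3            # avoid log(0) at the extreme tail
slope = np.polyfit(s[mask], np.log(survival[mask]), 1)[0]

print(f"scale estimate {scale_hat:.0f}; log-survival slope {slope:.2e}")
```

A straight log-survival plot of this kind is the usual diagnostic for the "approximately exponential tail" reported in the abstract.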
NASA Technical Reports Server (NTRS)
Wood, B. J.; Ablow, C. M.; Wise, H.
1973-01-01
For a number of candidate materials of construction for the dual air density explorer satellites, the rate of oxygen atom loss by adsorption, surface reaction, and recombination was determined as a function of surface and temperature. Plain aluminum and anodized aluminum surfaces exhibit a collisional atom-loss probability alpha of order 0.01 in the temperature range 140-360 K, and an initial sticking probability. For SiO-coated aluminum in the same temperature range, alpha and So are of order 0.001. Atom loss on gold is relatively rapid (alpha of order 0.01). The So for gold varies between 0.25 and unity in the temperature range 360-140 K.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x,y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x,y) describes the statistical behavior of the system very well. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
NASA Technical Reports Server (NTRS)
Wilson, Lonnie A.
1987-01-01
Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle compared with the IFM receiver. Functional mathematical models of the Bragg-cell receiver are derived and presented in this report, together with theoretical analysis and digital computer signal-processing results. Probability density function analyses are performed for the output frequency. The probability density function distributions are observed to depart from the assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.
An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Spergel, David N.
1990-01-01
The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
NASA Astrophysics Data System (ADS)
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is growing interest in the stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and, consequently, to analyse its impacts on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. A numerical method is adopted to solve this equation, with a special measure taken so that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single-machine infinite-bus power system. The numerical analysis gives the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
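The check of an assumed stationary PDF against Monte Carlo simulation can be illustrated with a minimal sketch. The model below is a linearized single-machine infinite-bus swing equation with illustrative parameter values (not those of the paper); for this linear SDE the stationary FPK solution is Gaussian, so an Euler-Maruyama run can be compared with the analytic variances.

```python
import numpy as np

def simulate_smib(m=1.0, d=0.5, k=1.0, sigma=0.2, dt=0.01,
                  n_steps=200_000, seed=0):
    """Euler-Maruyama simulation of a linearized single-machine infinite-bus
    swing equation driven by Gaussian white noise:
        d(delta) = omega dt
        m d(omega) = (-d*omega - k*delta) dt + sigma dW
    Returns the (delta, omega) trajectory with the first 10% discarded as burn-in."""
    rng = np.random.default_rng(seed)
    dw = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments
    delta, omega = 0.0, 0.0
    out = np.empty((n_steps, 2))
    for i in range(n_steps):
        delta += omega * dt
        omega += (-d * omega - k * delta) / m * dt + sigma / m * dw[i]
        out[i] = delta, omega
    return out[n_steps // 10:]

# For this linear SDE the stationary (FPK) solution is Gaussian with
# Var(omega) = sigma**2 / (2*d*m) and Var(delta) = sigma**2 / (2*d*k),
# so the Monte Carlo variances can be checked against the analytic values.
```

With the defaults above both analytic variances equal 0.04, which the simulated trajectory reproduces to within sampling error.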
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) plays an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate the probability density distribution of each feature for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which was officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is effective at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-based method is effective for protein single-model quality assessment and useful for protein structure prediction. Qprob is freely available as a web server at: http://calla.rnet.missouri.edu/qprob/.
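The density-estimation step can be sketched with a Gaussian kernel density estimate of feature-error values, one plausible way to turn per-feature absolute errors into a probability density. The bandwidth rule and function name below are illustrative assumptions, not Qprob's actual implementation.

```python
import numpy as np

def gaussian_kde_pdf(train_errors, x, bandwidth=None):
    """Gaussian kernel density estimate of feature-error values, evaluated
    at x; Silverman's rule of thumb sets the bandwidth if none is given."""
    train_errors = np.asarray(train_errors, float)
    n = train_errors.size
    if bandwidth is None:
        bandwidth = 1.06 * train_errors.std() * n ** (-1 / 5)
    z = (np.asarray(x, float)[..., None] - train_errors) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=-1) / (n * bandwidth * np.sqrt(2 * np.pi))
```

Densities estimated this way for each feature can then be combined into a single quality score for a model.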
NASA Astrophysics Data System (ADS)
Ponte, Aurélien L.; Klein, Patrice; Dunphy, Michael; Le Gentil, Sylvie
2017-03-01
The performance of a tentative method that disentangles the contribution of a low-mode internal tide to sea level from that of balanced mesoscale eddies is examined using an idealized high-resolution numerical simulation. This disentanglement is essential for properly estimating, from sea level, the ocean circulation related to balanced motions. The method relies on an independent observation of sea surface water density, whose variations are (1) dominated by the balanced dynamics and (2) correlated with variations of potential vorticity at depth for the chosen regime of surface-intensified turbulence. The surface density therefore leads, via potential vorticity inversion, to an estimate of the balanced contribution to sea level fluctuations. The difference between the instantaneous sea level (presumably observed with altimetry) and the balanced estimate compares moderately well with the contribution from the low-mode tide. Application to realistic configurations remains to be tested. These results are intended to motivate further development of methods for reconstructing ocean dynamics based on potential vorticity arguments. In that context, they are particularly relevant for the upcoming wide-swath high-resolution altimetric missions (SWOT).
USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.
Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J
2018-03-15
The most interesting events in a radiological monitoring network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which are then no longer Gaussian. In this work the probability of having a dose more than 2 standard deviations above the mean is proposed as a surveillance quantity for higher doses. This probability is estimated by using Hermite polynomials to reconstruct the PDF. The result is that the probability is ~6 ± 1%, much larger than the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
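Reconstructing a skewed PDF from Hermite polynomials can be sketched with a truncated Gram-Charlier A series, a standard choice for this kind of correction; whether the paper uses exactly this truncation is an assumption. The tail probability follows from the identity that the integral of phi(x)·He_n(x) from a to infinity equals phi(a)·He_{n-1}(a).

```python
import math

def tail_prob_gram_charlier(skew, ex_kurt, a=2.0):
    """P(Z > a) for a standardized variable whose PDF is reconstructed by a
    Gram-Charlier (Hermite) series truncated after the skewness and
    excess-kurtosis corrections."""
    phi_a = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)   # standard normal PDF at a
    gauss_tail = 0.5 * math.erfc(a / math.sqrt(2))          # Gaussian P(Z > a)
    he2 = a * a - 1          # probabilists' Hermite polynomials at x = a
    he3 = a ** 3 - 3 * a
    return gauss_tail + phi_a * (skew / 6 * he2 + ex_kurt / 24 * he3)
```

With zero skewness and excess kurtosis this recovers the Gaussian ~2.3% tail; positive skewness inflates the tail, qualitatively consistent with the ~6% reported.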
NASA Astrophysics Data System (ADS)
Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.
2015-08-01
We consider a method for constructing a suboptimal filter based on approximating the a posteriori probability density of a multidimensional Markov process by Pearson distributions. The proposed method can be used efficiently for approximating asymmetric densities, densities with excess kurtosis, and finite-support densities.
On the Origin of the High Column Density Turnover in the HI Column Density Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erkal, Denis; Gnedin, Nickolay Y.; Kravtsov, Andrey V.
We study the high column density regime of the HI column density distribution function and argue that there are two distinct features: a turnover at NHI ~ 10^21 cm^-2 which is present at both z=0 and z ~ 3, and a lack of systems above NHI ~ 10^22 cm^-2 at z=0. Using observations of the column density distribution, we argue that the HI-H2 transition does not cause the turnover at NHI ~ 10^21 cm^-2, but can plausibly explain the turnover at NHI > 10^22 cm^-2. We compute the HI column density distribution of individual galaxies in the THINGS sample and show that the turnover column density depends only weakly on metallicity. Furthermore, we show that the column density distribution of galaxies, corrected for inclination, is insensitive to the resolution of the HI map or to averaging in radial shells. Our results indicate that the similarity of HI column density distributions at z=3 and z=0 is due to the similarity of the maximum HI surface densities of high-z and low-z disks, set presumably by universal processes that shape properties of the gaseous disks of galaxies. Using fully cosmological simulations, we explore other candidate physical mechanisms that could produce a turnover in the column density distribution. We show that while turbulence within GMCs cannot affect the DLA column density distribution, stellar feedback can affect it significantly if the feedback is sufficiently effective in removing gas from the central 2-3 kpc of high-redshift galaxies. Finally, we argue that it is meaningful to compare column densities averaged over ~ kpc scales with those estimated from quasar spectra which probe sub-pc scales due to the steep power spectrum of HI column density fluctuations observed in nearby galaxies.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Improving experimental phases for strong reflections prior to density modification
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...
2013-09-20
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Global warming precipitation accumulation increases above the current-climate cutoff scale
Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.
2017-01-01
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff. PMID:28115693
Global warming precipitation accumulation increases above the current-climate cutoff scale
NASA Astrophysics Data System (ADS)
Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.
2017-02-01
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
Global warming precipitation accumulation increases above the current-climate cutoff scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
Global warming precipitation accumulation increases above the current-climate cutoff scale.
Neelin, J David; Sahany, Sandeep; Stechmann, Samuel N; Bernstein, Diana N
2017-02-07
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
Global warming precipitation accumulation increases above the current-climate cutoff scale
Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; ...
2017-01-23
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
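The role of the cutoff described in the accumulation abstracts above can be illustrated with a hedged sketch: a power-law density with an exponential cutoff, p(s) proportional to s**(-tau) * exp(-s/s_c). The exponent and cutoff values below are illustrative choices, not fitted values from the study.

```python
import numpy as np

def accum_pdf(s, tau=1.5, s_cutoff=1.0, s_min=1e-3):
    """Probability density of accumulation size s: a power-law range with an
    exponential cutoff, p(s) ~ s**(-tau) * exp(-s/s_cutoff), normalized
    numerically on [s_min, 50*s_cutoff] by the trapezoid rule."""
    grid = np.linspace(s_min, 50 * s_cutoff, 200_000)
    f = grid ** -tau * np.exp(-grid / s_cutoff)
    norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
    s = np.asarray(s, float)
    return s ** -tau * np.exp(-s / s_cutoff) / norm

# Extending the cutoff from 1.0 to 1.3 (a ~30% increase) multiplies the
# density of events of size 5 by roughly exp(5 - 5/1.3), i.e. severalfold,
# illustrating how modest cutoff extensions yield large increases in the
# probability of the largest accumulations.
```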
Li, Ye; Yu, Lin; Zhang, Yixin
2017-05-29
Applying angular spectrum theory, we derive the expression of a new Hermite-Gaussian (HG) vortex beam. Based on this beam, we establish a model of the received probability density of orbital angular momentum (OAM) modes for propagation through an anisotropic turbulent ocean. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal OAM modes and crosstalk OAM modes of the HG vortex beam. The results show that the influence of anisotropic oceanic turbulence on the received probability of signal OAM modes is smaller than that of isotropic oceanic turbulence under the same conditions, and the effect of salinity fluctuation on the received probability of the signal OAM modes is larger than that of temperature fluctuation. In the regime of strong dissipation of kinetic energy per unit mass of fluid and weak dissipation rate of temperature variance, the effects of turbulence on the received probability of signal OAM modes can be decreased by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source plane. In long-distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam in resisting degradation by oceanic turbulence.
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can approximate Bayes-optimal boundaries, and a K-nearest-neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
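The PNN decision rule described above, kernel density estimates of the class-conditional likelihoods combined with priors via Bayes' rule, can be sketched as follows. The Gaussian kernel width and the frequency-based priors are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pnn_posteriors(train_x, train_y, x, sigma=0.5):
    """PNN-style classification: each class-conditional likelihood is a
    Parzen (Gaussian kernel) density estimate over that class's training
    samples; priors are the class frequencies; returns the classes and
    their posterior probabilities at the test point x."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        pts = train_x[train_y == c]
        d2 = ((x - pts) ** 2).sum(axis=-1)            # squared distances to class samples
        likelihood = np.exp(-d2 / (2 * sigma ** 2)).mean()
        prior = np.mean(train_y == c)
        scores.append(prior * likelihood)
    scores = np.array(scores)
    return classes, scores / scores.sum()
```

The minimum-error decision is then classes[np.argmax(posteriors)], i.e. the class with the largest a posteriori probability.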
Seasonal and Local Characteristics of Lightning Outages of Power Distribution Lines in Hokuriku Area
NASA Astrophysics Data System (ADS)
Sugimoto, Hitoshi; Shimasaki, Katsuhiko
The proportion of lightning outages among all outages on Japanese 6.6 kV distribution lines is high, at approximately 20 percent, so lightning protection is very important for the supply reliability of 6.6 kV lines. It is effective for lightning performance to apply countermeasures first in the areas where the largest numbers of lightning outages occur. Winter lightning occurs in the Hokuriku area, so it is also important to understand the seasonal characteristics of the lightning outages. In summer, 70 percent of the lightning outages on distribution lines in the Hokuriku area were due to sparkover, such as power wire breakages and failures of pole-mounted transformers. In winter, however, almost half of the lightning-damaged equipment items were failed surge arresters. The number of lightning outages per lightning stroke detected by the lightning location system (LLS) in winter was 4.4 times larger than that in summer. The authors estimated the occurrence of lightning outages from the lightning stroke density, the 50% value of lightning current, and the installation rates of lightning protection equipment and overhead ground wire by multiple regression analysis. The estimated results suggest local differences in the lightning outages.
28 CFR 104.43 - Determination of presumed economic loss for decedents.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Determination of presumed economic loss... of presumed economic loss for decedents. In reaching presumed determinations for economic loss for...-time outside the home, economic loss may be determined with reference to replacement services and...
28 CFR 104.43 - Determination of presumed economic loss for decedents.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Determination of presumed economic loss... of presumed economic loss for decedents. In reaching presumed determinations for economic loss for...-time outside the home, economic loss may be determined with reference to replacement services and...
28 CFR 104.43 - Determination of presumed economic loss for decedents.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Determination of presumed economic loss... of presumed economic loss for decedents. In reaching presumed determinations for economic loss for...-time outside the home, economic loss may be determined with reference to replacement services and...
Spence, Emma Suzuki; Beck, Jeffrey L; Gregory, Andrew J
2017-01-01
Greater sage-grouse (Centrocercus urophasianus) occupy sagebrush (Artemisia spp.) habitats in 11 western states and 2 Canadian provinces. In September 2015, the U.S. Fish and Wildlife Service announced that the listing status for sage-grouse had changed from warranted but precluded to not warranted. The primary reason cited for this change of status was that the enactment of new regulatory mechanisms was sufficient to protect sage-grouse populations. One such plan is the 2008 Wyoming Sage Grouse Executive Order (SGEO), enacted by Governor Freudenthal. The SGEO identifies "Core Areas" that are to be protected by keeping them relatively free from further energy development and limiting other forms of anthropogenic disturbance near active sage-grouse leks. Using the Wyoming Game and Fish Department's sage-grouse lek count database and the Wyoming Oil and Gas Conservation Commission database of oil and gas well locations, we investigated the effectiveness of Wyoming's Core Areas, specifically: 1) how well Core Areas encompass the distribution of sage-grouse in Wyoming, 2) whether Core Area leks have a reduced probability of lek collapse, and 3) what edge effects, if any, intensification of oil and gas development adjacent to Core Areas may be having on Core Area populations. Core Areas contained 77% of male sage-grouse attending leks and 64% of active leks. Using Bayesian binomial probability analysis, we found an average 10.9% probability of lek collapse in Core Areas and an average 20.4% probability of lek collapse outside Core Areas. Using linear regression, we found that development density outside Core Areas was related to the probability of lek collapse inside Core Areas. Specifically, the probability of collapse among leks >4.83 km inside Core Area boundaries was significantly related to well density within 1.61 km (1 mi) and 4.83 km (3 mi) outside of Core Area boundaries.
Collectively, these data suggest that the Wyoming Core Area Strategy has benefited sage-grouse and sage-grouse habitat conservation; however, additional guidelines limiting development densities adjacent to Core Areas may be necessary to effectively protect Core Area populations.
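The Bayesian binomial comparison of collapse probabilities can be sketched with conjugate Beta posteriors. The lek counts below are hypothetical numbers chosen only to approximate the reported 10.9% and 20.4% rates, not the actual Wyoming data.

```python
import numpy as np

def prob_rate_lower(k1, n1, k2, n2, n_draws=100_000, seed=0):
    """Monte Carlo probability that the collapse rate in group 1 (e.g. leks
    inside Core Areas) is lower than in group 2, using independent
    Beta(1 + k, 1 + n - k) posteriors from uniform priors on each rate."""
    rng = np.random.default_rng(seed)
    p1 = rng.beta(1 + k1, 1 + n1 - k1, n_draws)   # posterior draws, group 1
    p2 = rng.beta(1 + k2, 1 + n2 - k2, n_draws)   # posterior draws, group 2
    return (p1 < p2).mean()
```

With illustrative counts of 11 collapses in 100 leks inside versus 20 in 98 outside, the posterior probability that the inside rate is lower is well above 0.9.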
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
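A sketch of the posterior described above, following the form of Knuth's published log-posterior for the uniform bin-width, piecewise-constant density model; treat the exact constant terms as an assumption rather than the optBINS source itself.

```python
import numpy as np
from math import lgamma, log

def optbins_log_posterior(data, m):
    """Relative log posterior for a uniform bin-width histogram with m bins,
    in the form of Knuth's piecewise-constant density model (multinomial
    likelihood, non-informative prior over bin heights)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * log(m) + lgamma(m / 2) - m * lgamma(0.5)
            - lgamma(n + m / 2) + sum(lgamma(c + 0.5) for c in counts))

def optimal_bins(data, max_bins=50):
    """Bin count that maximizes the posterior."""
    return max(range(1, max_bins + 1),
               key=lambda m: optbins_log_posterior(data, m))
```

The n*log(m) term rewards finer binning while the gamma-function terms penalize complexity, which is the Occam's-razor balance described in the abstract.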
Electromigration Mechanism of Failure in Flip-Chip Solder Joints Based on Discrete Void Formation.
Chang, Yuan-Wei; Cheng, Yin; Helfen, Lukas; Xu, Feng; Tian, Tian; Scheel, Mario; Di Michiel, Marco; Chen, Chih; Tu, King-Ning; Baumbach, Tilo
2017-12-20
In this investigation, SnAgCu and SN100C solders were electromigration (EM) tested, and the 3D laminography imaging technique was employed for in-situ observation of microstructure evolution during testing. We found that discrete voids nucleate, grow and coalesce along the intermetallic compound/solder interface during EM testing. A systematic analysis yields quantitative information on the number, volume, and growth rate of voids, and on the EM parameter DZ*. We observe that fast intrinsic diffusion in SnAgCu solder causes void growth and coalescence, while in the SN100C solder this coalescence was not significant. To deduce the current density distribution, finite-element models were constructed on the basis of the laminography images. The discrete voids do not change the global current density distribution, but they induce local current crowding around the voids; this local current crowding enhances the lateral void growth and coalescence. The correlation between the current density and the probability of void formation indicates that a threshold current density exists for the activation of void formation: there is a significant increase in the probability of void formation when the current density exceeds half of the maximum value.
HD 38452 - J. R. Hind's star that changed colour
NASA Technical Reports Server (NTRS)
Warner, Brian; Sneden, Christopher
1988-01-01
In 1851, John Russell Hind announced that a star previously observed by him to be very red had become bluish white in color. It is shown that this star, HD 38451, is a ninth-magnitude shell star which presumably was ejecting a shell when Hind first observed it. From high-dispersion coude spectra, low-dispersion IUE spectra, and ground-based photometry, HD 38451 is found to be a normal A2IV shell star. Its current value of E(B-V), about 0.14, is probably caused by interstellar rather than circumstellar reddening. A problem remains in reconciling the large amount of reddening present when Hind first observed the star with the evidently small diminution in its visual brightness at that time.
Bright Stuff on Ceres = Sulfates and Carbonates on CI Chondrites
NASA Technical Reports Server (NTRS)
Zolensky, Michael; Chan, Queenie H. S.; Gounelle, Matthieu; Fries, Marc
2016-01-01
Recent reports of the DAWN spacecraft's observations of the surface of Ceres indicate that there are bright areas, which can be explained by large amounts of Mg sulfate hexahydrate (MgSO4•6(H2O)), although the identification appears tenuous. There are preliminary indications that water is being evolved from these bright areas, and some have inferred that these might be sites of contemporary hydro-volcanism. A heat source for such modern activity is not obvious, given the small size of Ceres, the lack of any tidal forces from nearby giant planets, and its probable age and presumed bulk composition. We contend that laboratory observations of chondritic materials shed light on the nature of the bright spots on Ceres.
NASA Technical Reports Server (NTRS)
Nehru, C. E.; Warner, R. D.; Keil, K.; Taylor, G. J.
1978-01-01
Rake samples 72559 and 78527 are annealed rocks of ANT-suite mineralogy and bulk composition. The rocks were presumably derived from ancient lunar highland ANT rocks of cumulate origin. Sample 72559 is polymict and its precursors were anorthositic-troctolitic in composition. Sample 78527 is monomict and of noritic derivation. The precursors were brecciated due to impact processes; 72559 shows evidence of some impact melting. The samples were thermally metamorphosed, forming rocks with granoblastic matrix textures. Coexisting matrix pyroxenes indicate equilibration temperatures of 950-1000 C for both rocks. Accessory opaque oxide minerals in the rocks show rather wide compositional variations. These probably primarily reflect compositional ranges inherited from the precursor(s), with little intergranular equilibration among them during metamorphism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled as a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation, owing to the statistical complexity of finding a radiated power probability density function.
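The bit-state probabilities of a two-state Markov chain, as used above to model the incident data signal, can be sketched in a few lines. The transition probabilities below are illustrative assumptions, not values from the report:

```python
import numpy as np

# Illustrative two-state Markov chain for a data signal's bit states (0 and 1).
# Transition probabilities are assumed for the sketch, not taken from the report.
p01 = 0.3  # P(next bit = 1 | current bit = 0)
p10 = 0.2  # P(next bit = 0 | current bit = 1)

# Transition matrix: rows are the current state, columns the next state.
P = np.array([[1 - p01, p01],
              [p10, 1 - p10]])

# Stationary bit-state probabilities solve pi = pi P with pi summing to 1.
# For a two-state chain there is a closed form:
pi0 = p10 / (p01 + p10)  # long-run fraction of 0 bits
pi1 = p01 / (p01 + p10)  # long-run fraction of 1 bits

# Cross-check: the stationary distribution is the eigenvector of P^T
# belonging to eigenvalue 1, normalized to sum to one.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
```

With these assumed transition probabilities the chain spends 40% of the time in bit state 0 and 60% in bit state 1; the closed form and the eigenvector computation agree.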
Predation and nutrients drive population declines in breeding waders.
Møller, Anders Pape; Thorup, Ole; Laursen, Karsten
2018-04-20
Allee effects are defined as a decline in per capita fitness at low population density. We hypothesized that predation reduces population size of breeding waders and thereby the efficiency of predator deterrence, while total nitrogen through its effects on primary and secondary productivity increases population size. Therefore, nest predation could have negative consequences for population size because nest failure generally results in breeding dispersal and hence reduced local population density. To test these predictions, we recorded nest predation in five species of waders for 4,745 nests during 1987-2015 at the nature reserve Tipperne, Denmark. Predation rates were generally negatively related to conspecific and heterospecific population density, but positively related to overall population density of the entire wader community. Nest predation and population density were related to ground water level, management (grazing and mowing), and nutrients. High nest predation with a time lag of one year resulted in low overall breeding population density, while high nutrient levels resulted in higher population density. These two factors accounted for 86% of the variance in population size, presumably due to effects of nest predation on emigration, while nutrient levels increased the level of vegetation cover and the abundance of food in the surrounding brackish water. These findings are consistent with the hypothesis that predation may reduce population density through negative density dependence, while total nitrogen at adjacent shallow water may increase population size. Nest predation rates were reduced by high ground water level in March, grazing by cattle and mowing that affected access to and susceptibility of nests to predators. These effects can be managed to benefit breeding waders. © 2018 by the Ecological Society of America.
Zák, J; Kapitola, J; Povýsil, C
2003-01-01
The authors address whether bone histological structure (described by the histomorphometric parameters trabecular bone volume and trabecular thickness) can be inferred from bone density, ash weight, or even the weight of the animal (rat). Both tibias of each of 30 intact male rats, 90 days old, were processed. The left tibia was used to determine histomorphometric parameters of undecalcified bone tissue specimens by automatic image analysis. The right tibia was used to determine bone density, using Archimedes' principle. Values of bone density, ash weight, ash weight related to bone volume, and animal weight were correlated with the histomorphometric parameters (trabecular bone volume, trabecular thickness) using Pearson's correlation test. One might presume a relation between data describing bone mass at the histological level (trabecular bone of the tibia) and data describing the mass of the whole bone or even of the animal, but no statistically significant correlation was found. The reason may lie in variations of trabecular density within the tibial marrow. Because trabecular bone density is higher in the metaphyseal and epiphyseal regions, histomorphometric analysis of trabecular bone is preferentially done in these areas. It is possible that this irregularity of trabecular density in the tibia was the source of deviations that influenced the correlations. The values of bone density, ash weight, and animal weight do not predict trabecular bone volume, and vice versa: static histomorphometric parameters of trabecular bone do not reflect bone density, ash weight, or animal weight.
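The Pearson correlation step described above can be sketched as follows; the data here are synthetic stand-ins generated independently (to mimic a null result), not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # one measurement per rat, as in the study

# Synthetic stand-ins for the measured variables (illustrative only):
bone_density = rng.normal(loc=1.9, scale=0.1, size=n)        # e.g. g/cm^3
trabecular_volume = rng.normal(loc=25.0, scale=4.0, size=n)  # e.g. % of tissue area

# Pearson's correlation coefficient between the two variables.
r = np.corrcoef(bone_density, trabecular_volume)[0, 1]

# For n = 30 (df = 28), the critical |r| for two-tailed significance at
# alpha = 0.05 is about 0.361; smaller |r| values, like those expected for
# independent data, would not reach significance.
```

This is only a sketch of the statistical machinery; the study's conclusion rests on its actual bone measurements, not on simulated data.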
Incompressible variable-density turbulence in an external acceleration field
Gat, Ilana; Matheou, Georgios; Chung, Daniel; ...
2017-08-24
Dynamics and mixing of a variable-density turbulent flow subject to an externally imposed acceleration field in the zero-Mach-number limit are studied in a series of direct numerical simulations. The flow configuration studied consists of alternating slabs of high- and low-density fluid in a triply periodic domain. Density ratios in the range 1.05 ≤ R ≡ ρ1/ρ2 ≤ 10 are investigated. The flow produces temporally evolving shear layers. A perpendicular density-pressure gradient is maintained in the mean as the flow evolves, with multi-scale baroclinic torques generated in the turbulent flow that ensues. For all density ratios studied, the simulations attain Reynolds numbers at the beginning of the fully developed turbulence regime. An empirical relation for the convection velocity predicts the observed entrainment-ratio and dominant mixed-fluid composition statistics. Two mixing-layer temporal evolution regimes are identified: an initial diffusion-dominated regime with a growth rate ∼ t^(1/2), followed by a turbulence-dominated regime with a growth rate ∼ t^3. In the turbulent regime, composition probability density functions within the shear layers exhibit a slightly tilted ('non-marching') hump, corresponding to the most probable mole fraction. In conclusion, the shear layers preferentially entrain low-density fluid by volume at all density ratios, which is reflected in the mixed-fluid composition.
Toossi, Hanieh; Del Cid-Pellitero, Esther; Jones, Barbara E
2017-01-01
We have examined whether GABAergic neurons in the mesencephalic reticular formation (RFMes), which are believed to inhibit the neurons in the pons that generate paradoxical sleep (PS or REMS), are subject to homeostatic regulation under conditions of sleep deprivation (SD) by enforced waking during the day in mice. Using immunofluorescence, we investigated first, by staining for c-Fos, whether GABAergic RFMes neurons are active during SD and then, by staining for receptors, whether their activity is associated with homeostatic changes in GABAA or acetylcholine muscarinic type 2 (AChM2) receptors (Rs), which evoke inhibition. We found that a significantly greater proportion of the GABAergic neurons were positively stained for c-Fos after SD (∼27%) as compared to sleep control (SC; ∼1%) and sleep recovery (SR; ∼6%), suggesting that they were more active during waking with SD and less active or inactive during sleep with SC and SR. The density of GABAARs and AChM2Rs on the plasma membrane of the GABAergic neurons was significantly increased after SD and restored to control levels after SR. We conclude that the density of these receptors is increased on RFMes GABAergic neurons during presumed enhanced activity with SD and restored to control levels during presumed lesser activity or inactivity with SR. Such increases in GABAAR and AChM2R with sleep deficits would be associated with increased susceptibility of the wake-active GABAergic neurons to inhibition from GABAergic and cholinergic sleep-active neurons, thus permitting the onset of sleep and PS with muscle atonia.
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, for better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and display the result of the diagnosis process. Bayes' theorem, rediscovered and refined by Laplace, rests on a basic idea: use the known prior probability and conditional probability parameters to calculate the corresponding posterior probability, and then use the posterior probability to infer and make decisions. Bayesian methods combine existing knowledge (prior probabilities) with additional knowledge derived from new data (the likelihood function). The initial symptoms of hepatitis include malaise, fever, and headache; we therefore computed the probability of hepatitis given the presence of malaise, fever, and headache. The results revealed that Bayesian theory successfully identified the existence of hepatitis disease.
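The posterior calculation described above can be sketched as follows; all prior and conditional probabilities here are illustrative assumptions, not values from the paper:

```python
# Minimal Bayes-theorem sketch for the hepatitis example. All probabilities
# below are illustrative assumptions, not values from the paper.
p_hep = 0.01  # assumed prior P(hepatitis)

# Assumed likelihoods of each symptom given disease / no disease:
p_sym_given_hep = {"malaise": 0.80, "fever": 0.70, "headache": 0.60}
p_sym_given_not = {"malaise": 0.20, "fever": 0.10, "headache": 0.30}

# Naive-Bayes style: treat the symptoms as conditionally independent
# and accumulate prior times likelihood for each hypothesis.
like_hep = p_hep
like_not = 1.0 - p_hep
for s in ("malaise", "fever", "headache"):
    like_hep *= p_sym_given_hep[s]
    like_not *= p_sym_given_not[s]

# Bayes' theorem: posterior P(hepatitis | malaise, fever, headache).
posterior = like_hep / (like_hep + like_not)
```

With these assumed numbers the posterior works out to about 0.36: observing all three symptoms raises a 1% prior substantially, but not to certainty, which is exactly the kind of graded inference the abstract describes.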
Sean A. Parks; Marc-Andre Parisien; Carol Miller
2011-01-01
We examined the scale-dependent relationship between spatial fire likelihood or burn probability (BP) and some key environmental controls in the southern Sierra Nevada, California, USA. Continuous BP estimates were generated using a fire simulation model. The correspondence between BP (dependent variable) and elevation, ignition density, fuels and aspect was evaluated...
ERIC Educational Resources Information Center
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkanen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick,…
Quantum illumination for enhanced detection of Rayleigh-fading targets
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver, which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase, fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801], whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation, outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(Mκ̄NS/NB) when Mκ̄NS/NB ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, NS ≪ 1 its brightness, κ̄ the target return's average intensity, and NB the background light's brightness.
28 CFR 104.43 - Determination of presumed economic loss for decedents.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Determination of presumed economic loss... Determination of presumed economic loss for decedents. In reaching presumed determinations for economic loss for... prior earned income, or who worked only part time outside the home, economic loss may be determined with...
28 CFR 104.43 - Determination of presumed economic loss for decedents.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Determination of presumed economic loss... Determination of presumed economic loss for decedents. In reaching presumed determinations for economic loss for... prior earned income, or who worked only part time outside the home, economic loss may be determined with...
ERP effects and perceived exclusion in the Cyberball paradigm: Correlates of expectancy violation?
Weschke, Sarah; Niedeggen, Michael
2015-10-22
A virtual ball-tossing game called Cyberball has allowed the identification of neural structures involved in the processing of social exclusion by using neurocognitive methods. However, there is still an ongoing debate about whether the structures involved are pain- or exclusion-specific or part of a broader network. In electrophysiological Cyberball studies we have shown that the P3b component is sensitive to exclusion manipulations, possibly modulated by the probability of ball possession by the participant (event "self") or the presumed co-players (event "other"). Since it is known from oddball studies that the P3b is not only modulated by the objective probability of an event, but also by subjective expectancy, we independently manipulated the probability of the events "self" and "other" and the expectancy for these events. Questionnaire data indicate that social need threat is only induced when the expectancy for involvement in the ball-tossing game is violated. Similarly, the P3b amplitude of both "self" and "other" events was a correlate of expectancy violation. We conclude that both the subjective report of exclusion and the P3b effect induced in the Cyberball paradigm are primarily based on a cognitive process sensitive to expectancy violations, and that the P3b is not related to the activation of an exclusion-specific neural alarm system. Copyright © 2015 Elsevier B.V. All rights reserved.
The diversity effect in diagnostic reasoning.
Rebitschek, Felix G; Krems, Josef F; Jahn, Georg
2016-07-01
Diagnostic reasoning draws on knowledge about effects and their potential causes. The causal-diversity effect in diagnostic reasoning normatively depends on the distribution of effects in causal structures, and thus, a psychological diversity effect could indicate whether causally structured knowledge is used in evaluating the probability of a diagnosis, if the effect were to covary with manipulations of causal structures. In four experiments, participants dealt with a quasi-medical scenario presenting symptom sets (effects) that consistently suggested a specified diagnosis (cause). The probability that the diagnosis was correct had to be rated for two opposed symptom sets that differed with regard to the symptoms' positions (proximal or diverse) in the causal structure that was initially acquired. The causal structure linking the diagnosis to the symptoms and the base rate of the diagnosis were manipulated to explore whether the diagnosis was rated as more probable for diverse than for proximal symptoms when alternative causations were more plausible (e.g., because of a lower base rate of the diagnosis in question). The results replicated the causal diversity effect in diagnostic reasoning across these conditions, but no consistent effects of structure and base rate variations were observed. Diversity effects computed in causal Bayesian networks are presented, illustrating the consequences of the structure manipulations and corroborating that a diversity effect across the different experimental manipulations is normatively justified. The observed diversity effects presumably resulted from shortcut reasoning about the possibilities of alternative causation.
Dynamic Responses in a Plant-Insect System to Fertilization by Cormorant Feces
Kolb, Gundula; Hambäck, Peter A.
2015-01-01
Theoretical arguments suggest that increased plant productivity may not only increase consumer densities but also their fluctuations. While increased consumer densities are commonly observed in fertilization experiments, experiments are seldom performed at a spatial and temporal scale where effects on population fluctuations may be observed. In this study we used a natural gradient in soil fertility caused by cormorant nesting. Cormorants feed on fish but defecate on their nesting islands. On these islands we studied soil nutrient availability, plant nutrient content and the density of Galerucella beetles, the main herbivores feeding on Lythrum salicaria. In a common garden experiment, we followed larval development on fertilized plants and estimated larval stoichiometry. Soil nutrient availability varied among islands, and several cormorant islands had very high N and P soil content. Plant nutrient content, however, did not vary among islands, and there was no correlation between soil and plant nutrient contents. Beetle densities increased with plant nutrient content in the field study. However, temporal fluctuations in beetle density either showed no response to fertilization or decreased (at high P). In the common garden experiment, we found limited responses in either larval survival or pupal weights to fertilization. A possible mechanism for the limited effect of fertilization on density fluctuations may be that the distribution of L. salicaria on nesting islands was restricted to sites with a lower N and P content, presumably because high N loads are toxic. PMID:26463193
Poor horse traders: large mammals trade survival for reproduction during the process of feralization
Grange, Sophie; Duncan, Patrick; Gaillard, Jean-Michel
2009-01-01
We investigated density dependence in the demographic parameters of a population of Camargue horses (Equus caballus), individually monitored and unmanaged for eight years. We also analysed the contributions of individual demographic parameters to changes in the population growth rates. The decrease in resources caused a loss of body condition. Adult male survival was not affected, but the survival of foals and adult females decreased with increasing density. Prime-aged females maintained high reproductive performance at high density, and their survival decreased. The higher survival of adult males compared with females at high density presumably results from higher investment in reproduction by mares. The high fecundity in prime-aged females, even at high density, may result from artificial selection for high reproductive performance, which is known to have occurred in all the major domestic ungulates. Other studies suggest that feral ungulates, including cattle and sheep, respond to increases in density differently from wild ungulates, as these horses do, by trading adult survival for reproduction. As a consequence, populations of feral animals should oscillate more strongly than their wild counterparts, since they should be both more invasive (as they breed faster) and more sensitive to harsh environmental conditions (as the population growth rate of long-lived species is consistently more sensitive to a given proportional change in adult survival than to the same change in any other vital rate). If this principle proves to be general, it has important implications for the management of populations of feral ungulates. PMID:19324787
Sahin, Ozlem; Ziaei, Alireza
2014-07-01
This study was designed to investigate whether the anti-inflammatory and antiproliferative activity of oral and intravitreal methotrexate (MTX) suppresses intraocular inflammation in patients with presumed latent syphilitic uveitis and presumed tuberculosis-related uveitis. This was an interventional prospective study including three cases of presumed latent syphilitic uveitis treated with intravenous penicillin and oral MTX, and two cases of presumed tuberculosis-related uveitis treated with standard antituberculosis therapy and intravitreal MTX injections. Treatment efficacy in all cases was assessed by best-corrected visual acuity, fundus fluorescein angiography, and optical coherence tomography. Four eyes of 3 patients with presumed latent syphilitic uveitis had improved best-corrected visual acuity, suppression of intraocular inflammation, and resolution of cystoid macular edema within 6 months of oral MTX therapy. No recurrence of intraocular inflammation was observed during a follow-up period of 6 to 18 months after cessation of MTX. Two eyes of two patients with presumed tuberculosis-related uveitis showed improved best-corrected visual acuity, suppression of intraocular inflammation, and resolution of cystoid macular edema after intravitreal injections of MTX. No recurrence of intraocular inflammation was observed during a follow-up period of 6 to 8 months after cessation of antituberculous therapy. For the first time in the treatment of presumed latent syphilitic uveitis and presumed tuberculosis-related uveitis, we believe that MTX might have an adjunctive role in suppressing intraocular inflammation, reducing uveitic macular edema, and preventing recurrences of the diseases.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.
2013-11-01
A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.
Bronte, Charles R.; Evrard, Lori M.; Brown, William P.; Mayo, Kathleen R.; Edwards, Andrew J.
1998-01-01
Ruffe (Gymnocephalus cernuus) have been implicated in density declines of native species through egg predation and competition for food in some European waters where they were introduced. Density estimates for ruffe and principal native fishes in the St. Louis River estuary (western Lake Superior) were developed for 1989 to 1996 to measure changes in the fish community in response to an unintentional introduction of ruffe. During the study, ruffe density increased and the densities of several native species decreased. To compare the reductions of native stocks with the natural population dynamics of the same species, data from Chequamegon Bay, Lake Superior (an area with very few ruffe), where there was a 24-year record of density, were used. Using these data, short- and long-term variations in catch and correlations among species within years were compared, and species-specific distributions of observed trends in abundance of native fishes in Chequamegon Bay, indexed by the slopes of densities across years, were developed. From these distributions and the observed trend-line slopes from the St. Louis River, the probabilities of measuring negative change of the magnitude observed in the St. Louis River were estimated. Compared with trends in Chequamegon Bay, there was a high probability of obtaining the negative slopes measured for most species, which suggests that natural population dynamics, rather than interactions with ruffe, could explain the declines. Variable recruitment, which was not related to ruffe density, and associated density-dependent changes in mortality were likely responsible for the density declines of native species.
Gravity anomaly and density structure of the San Andreas fault zone
NASA Astrophysics Data System (ADS)
Wang, Chi-Yuen; Rui, Feng; Zhengsheng, Yao; Xingjue, Shi
1986-01-01
A densely spaced gravity survey across the San Andreas fault zone was conducted near Bear Valley, about 180 km south of San Francisco, along a cross-section where a detailed seismic reflection profile was previously made by McEvilly (1981). With Feng and McEvilly's velocity structure (1983) of the fault zone at this cross-section as a constraint, the density structure of the fault zone is obtained through inversion of the gravity data by a method used by Parker (1973) and Oldenburg (1974). Although the resulting density picture cannot be unique, it is better constrained and contains more detailed information about the structure of the fault than was previously possible. The most striking feature of the resulting density structure is a deeply seated tongue of low-density material within the fault zone, probably representing a wedge of fault gouge between the two moving plates, which projects from the surface to the base of the seismogenic zone. From reasonable assumptions concerning the density of the solid grains and the state of saturation of the fault zone, the average porosity of this low-density fault gouge is estimated as about 12%. Stress-induced cracks are not expected to create so much porosity under the pressures in the deep fault zone. Large-scale removal of fault-zone material by hydrothermal alteration, dissolution, and subsequent fluid transport may have occurred to produce this pronounced density deficiency. In addition, a broad, funnel-shaped belt of low density appears about the upper part of the fault zone, which probably represents a belt of extensively shattered wall rocks.
Impact of presumed consent for organ donation on donation rates: a systematic review
Rithalia, Amber; Suekarran, Sara; Myers, Lindsey; Sowden, Amanda
2009-01-01
Objectives To examine the impact of a system of presumed consent for organ donation on donation rates and to review data on attitudes towards presumed consent. Design Systematic review. Data sources Studies retrieved by online searches to January 2008 of Medline, Medline In-Process, Embase, CINAHL, PsycINFO, HMIC, PAIS International, and OpenSIGLE. Studies reviewed Five studies comparing donation rates before and after the introduction of legislation for presumed consent (before and after studies); eight studies comparing donation rates in countries with and without presumed consent systems (between country comparisons); 13 surveys of public and professional attitudes to presumed consent. Results The five before and after studies represented three countries: all reported an increase in donation rates after the introduction of presumed consent, but there was little investigation of any other changes taking place concurrently with the change in legislation. In the four best quality between country comparisons, presumed consent law or practice was associated with increased organ donation—increases of 25-30%, 21-26%, 2.7 more donors per million population, and 6.14 more donors per million population in the four studies. Other factors found to be important in at least one study were mortality from road traffic accidents and cerebrovascular causes, transplant capacity, gross domestic product per capita, health expenditure per capita, religion (Catholicism), education, public access to information, and a common law legal system. Eight surveys of attitudes to presumed consent were of the UK public. These surveys varied in the level of support for presumed consent, with surveys conducted before 2000 reporting the lowest levels of support (28-57%). The most recent survey, in 2007, reported that 64% of respondents supported a change to presumed consent. Conclusion Presumed consent alone is unlikely to explain the variation in organ donation rates between countries. 
Legislation, availability of donors, organisation and infrastructure of the transplantation service, wealth and investment in health care, and public attitudes to and awareness of organ donation may all play a part, but their relative importance is unclear. Recent UK surveys show support for presumed consent, though with variation in results that may reflect differences in survey methods. PMID:19147479
Hill, Jason M.; Diefenbach, Duane R.
2014-01-01
Organisms can be affected by processes in the surrounding landscape outside the boundary of habitat areas and by local vegetation characteristics. There is substantial interest in understanding how these processes affect populations of grassland birds, which have experienced substantial population declines. Much of our knowledge regarding patterns of occupancy and density stems from prairie systems, whereas relatively little is known regarding how occurrence and abundance of grassland birds vary in reclaimed surface mine grasslands. Using distance sampling and single-season occupancy models, we investigated how the occupancy probability of Grasshopper (Ammodramus savannarum) and Henslow's Sparrows (A. henslowii) on 61 surface mine grasslands (1591 ha) in Pennsylvania changed from 2002 through 2011 in response to landscape, grassland, and local vegetation characteristics. A subset (n = 23; 784 ha) of those grasslands was surveyed in 2002, and we estimated changes in sparrow density and vegetation across 10 years. Grasshopper and Henslow's Sparrow populations declined 72% and 49%, respectively, from 2002 to 2011, whereas overall woody vegetation density increased 2.6-fold. Henslow's Sparrows avoided grasslands with perimeter–area ratios ≥0.141 km/ha and woody shrub densities ≥0.04 shrubs/m². Both species occupied grasslands ≤13 ha, but occupancy probability declined with increasing grassland perimeter–area ratio and woody shrub density. Grassland size, proximity to nearest neighboring grassland (mean = 0.2 km), and surrounding landscape composition at 0.5, 1.5, and 3.0 km were not parsimonious predictors of occupancy probability for either species. Our results suggest that reclaimed surface mine grasslands, without management intervention, are ephemeral habitats for Grasshopper and Henslow's Sparrows. 
Given the forecasted decline in surface coal production for Pennsylvania, it is likely that both species will continue to decline in our study region for the foreseeable future.
Salisbury, Margaret L; Xia, Meng; Murray, Susan; Bartholmai, Brian J; Kazerooni, Ella A; Meldrum, Catherine A; Martinez, Fernando J; Flaherty, Kevin R
2016-09-01
Idiopathic pulmonary fibrosis (IPF) can be diagnosed confidently and non-invasively when clinical and computed tomography (CT) criteria are met. Many patients do not meet these criteria due to absence of CT honeycombing. We investigated predictors of IPF and combinations allowing accurate diagnosis in individuals without honeycombing. We utilized prospectively collected clinical and CT data from patients enrolled in the Lung Tissue Research Consortium. Included patients had no honeycombing, no connective tissue disease, underwent diagnostic lung biopsy, and had a CT pattern consistent with fibrosing ILD (n = 200). Logistic regression identified clinical and CT variables predictive of IPF. The probability of IPF was assessed at various cut-points of important clinical and CT variables. A multivariable model adjusted for age and gender found increasingly extensive reticular densities (OR 2.93, 95% CI 1.55-5.56, p = 0.001) predicted IPF, while increasing ground glass densities predicted a diagnosis other than IPF (OR 0.55, 95% CI 0.34-0.89, p = 0.02). The model-based probability of IPF was 80% or greater in patients with age at least 60 years and extent of reticular density one-third or more of total lung volume; for patients meeting or exceeding these clinical thresholds the specificity for IPF is 96% (95% CI 91-100%) with 21 of 134 (16%) biopsies avoided. In patients with suspected fibrotic ILD and absence of CT honeycombing, extent of reticular and ground glass densities predict a diagnosis of IPF. The probability of IPF exceeds 80% in subjects over age 60 years with one-third of total lung having reticular densities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Model Description for the SOCRATES Contamination Code
1988-10-21
Illustrations listed include: a schematic representation of the major elements of the Shuttle contamination problem; a diagram of the dependence of atmospherically scattered molecules on ambient number density for the 200, 250, and 300 km runs; and a plot of the chi-square probability density function. Quantities are scaled with respect to the far-field ambient number density, nD, which leaves only the cross-section scaling factor to be determined.
Weibull crack density coefficient for polydimensional stress states
NASA Technical Reports Server (NTRS)
Gross, Bernard; Gyekenyesi, John P.
1989-01-01
A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume and surface flaw induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software involves computing the Weibull polydimensional stress state crack density coefficient from uniaxial stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.
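The two-parameter Weibull machinery named above can be sketched in a few lines; the modulus and characteristic strength values below are illustrative assumptions, not values from the cited code.

```python
import math

def weibull_pdf(s, m, s0):
    """Two-parameter Weibull probability density at failure stress s,
    with Weibull modulus m and characteristic strength s0."""
    return (m / s0) * (s / s0) ** (m - 1) * math.exp(-((s / s0) ** m))

def failure_probability(s, m, s0):
    """Cumulative probability of fracture at applied stress s."""
    return 1.0 - math.exp(-((s / s0) ** m))

# Hypothetical ceramic: modulus 10, characteristic strength 300 MPa.
p = failure_probability(300.0, m=10.0, s0=300.0)
```

At the characteristic strength s0, the cumulative failure probability is 1 - 1/e (about 0.632) regardless of the modulus, a convenient sanity check on any implementation.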
Theory and analysis of statistical discriminant techniques as applied to remote sensing data
NASA Technical Reports Server (NTRS)
Odell, P. L.
1973-01-01
Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.
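For two Gaussian classes, the Bayes discriminant with unequal covariance matrices reduces to comparing prior-weighted log-likelihoods (a quadratic discriminant). The sketch below uses invented two-band "spectral" classes purely for illustration; it is not the reported procedure's code.

```python
import math

def gauss2_logpdf(x, mean, cov):
    """Log-density of a 2-D Gaussian, with explicit 2x2 matrix algebra."""
    dx = (x[0] - mean[0], x[1] - mean[1])
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = ((cov[1][1] / det, -cov[0][1] / det),
           (-cov[1][0] / det, cov[0][0] / det))
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return -0.5 * (q + math.log((2.0 * math.pi) ** 2 * det))

def bayes_classify(x, classes):
    """Assign x to the class maximizing log prior + log likelihood."""
    best = max(classes,
               key=lambda c: math.log(c["prior"]) + gauss2_logpdf(x, c["mean"], c["cov"]))
    return best["name"]

# Hypothetical two-band reflectance classes with unequal covariances.
classes = [
    {"name": "water", "prior": 0.5, "mean": (0.1, 0.2),
     "cov": ((0.01, 0.0), (0.0, 0.01))},
    {"name": "vegetation", "prior": 0.5, "mean": (0.6, 0.8),
     "cov": ((0.04, 0.01), (0.01, 0.09))},
]
label = bayes_classify((0.12, 0.22), classes)  # pixel near the "water" mean
```

Because each class keeps its own covariance, the decision boundary is quadratic rather than linear, which is exactly the situation the abstract's "even when the covariance matrices are unequal" clause refers to.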
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
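The core line-transect calculation can be sketched with a half-normal detection function g(x) = exp(-x²/2σ²), a standard choice in this literature; the simulated survey below (line length, animal density, σ) is entirely hypothetical.

```python
import math
import random

def halfnormal_density_estimate(distances, total_line_length):
    """Line-transect density estimate with a half-normal detection function.
    The MLE of sigma^2 is the mean squared detection distance; the effective
    strip half-width is mu = sigma * sqrt(pi/2); density = n / (2 L mu)."""
    n = len(distances)
    sigma_hat = math.sqrt(sum(x * x for x in distances) / n)
    mu = sigma_hat * math.sqrt(math.pi / 2.0)
    return n / (2.0 * total_line_length * mu)

# Simulate a survey: true density D animals per unit area along a line of
# length L; detection probability decays with perpendicular distance.
random.seed(1)
sigma, L, D, w = 0.1, 1000.0, 5.0, 0.5
dists = []
for _ in range(int(D * L * 2 * w)):          # animals inside the strip
    x = random.uniform(0.0, w)               # perpendicular distance
    if random.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
        dists.append(x)
D_hat = halfnormal_density_estimate(dists, L)  # should be close to D
```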
Adaptive Dynamics, Control, and Extinction in Networked Populations
2015-07-09
From the pre-history of paths that go extinct across different network geometries, a density function is constructed, and a clear local structure emerges in the density plots of Fig. 3b. The IAMM is used to compute the most probable path, which is then compared to the prehistory of extinction events on stochastic networks.
Constructing the AdS dual of a Fermi liquid: AdS black holes with Dirac hair
NASA Astrophysics Data System (ADS)
Čubrović, Mihailo; Zaanen, Jan; Schalm, Koenraad
2011-10-01
We provide evidence that the holographic dual to a strongly coupled charged Fermi liquid has a non-zero fermion density in the bulk. We show that the pole-strength of the stable quasiparticle characterizing the Fermi surface is encoded in the AdS probability density of a single normalizable fermion wavefunction in AdS. Recalling Migdal's theorem, which relates the pole strength to the Fermi-Dirac characteristic discontinuity in the number density at ω_F, we conclude that the AdS dual of a Fermi liquid is described by occupied on-shell fermionic modes in AdS. Encoding the occupied levels in the total spatially averaged probability density of the fermion field directly, we show that an AdS Reissner-Nordström black hole in a theory with charged fermions has a critical temperature, at which the system undergoes a first-order transition to a black hole with a non-vanishing profile for the bulk fermion field. Thermodynamics and spectral analysis support that the solution with non-zero AdS fermion-profile is the preferred ground state at low temperatures.
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
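A minimal discrete-time sketch of such a filtered Poisson process, with the purely additive noise variant, might look as follows; the pulse shape (one-sided exponential), the rates, and the amplitude distribution are illustrative assumptions.

```python
import math
import random

random.seed(7)
dt, T = 0.01, 200.0
rate, tau_d = 10.0, 1.0            # pulse arrival rate and decay time
n_steps = int(T / dt)
signal = [0.0] * n_steps

# Superpose one-sided exponential pulses at Poisson arrival times,
# with exponentially distributed amplitudes (mean <A> = 1).
t = 0.0
while True:
    t += random.expovariate(rate)
    if t >= T:
        break
    k0 = int(t / dt)
    A = random.expovariate(1.0)
    for k in range(k0, n_steps):
        decay = math.exp(-(k - k0) * dt / tau_d)
        if decay < 1e-6:
            break
        signal[k] += A * decay

# Purely additive noise term on top of the filtered Poisson process.
noisy = [s + random.gauss(0.0, 0.1) for s in signal]

mean_est = sum(signal) / n_steps   # theory: gamma * <A>, gamma = rate * tau_d
```

For exponential pulses the stationary mean is γ⟨A⟩ with intermittency parameter γ = rate × τd, which the empirical mean above should approximate.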
NASA Astrophysics Data System (ADS)
Donkov, Sava; Stefanov, Ivan Z.
2018-03-01
We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.
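The logarithmic special case mentioned above can be illustrated with a continuous-time random walk whose waiting-time density has the heavy tail ψ(t) = 1/(t ln²t) for t > e. This is a sketch of the traditional logarithmic model only, not of the inverse Mittag-Leffler generalization studied in the paper; sample sizes and observation times are illustrative.

```python
import math
import random

def sample_waiting_time():
    """Inverse-CDF sample from psi(t) = 1/(t ln^2 t), t > e:
    F(t) = 1 - 1/ln(t)  =>  t = exp(1 / (1 - u))."""
    u = random.random()
    expo = 1.0 / (1.0 - u)
    return math.exp(expo) if expo < 500.0 else float("inf")

def msd(t_obs, n_walkers=2000):
    """Mean squared displacement of unit-step CTRW walkers at time t_obs."""
    total = 0.0
    for _ in range(n_walkers):
        t, x = 0.0, 0
        while True:
            w = sample_waiting_time()
            if t + w > t_obs:
                break
            t += w
            x += random.choice((-1, 1))
        total += x * x
    return total / n_walkers

random.seed(3)
m1, m2 = msd(1e2), msd(1e4)   # MSD grows roughly like ln(t): very slow
```

A hundredfold increase in time only roughly doubles the MSD here, far slower than the square-root-of-time growth of ordinary diffusion.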
Effect of density feedback on the two-route traffic scenario with bottleneck
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua
2016-12-01
In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities, αc1 and αc2. When the vehicle entry probability α ≤ αc1, four different states, i.e. the free-flow state, transition state, maximum-current state and congestion state, are identified in the system, corresponding to three critical reference densities. In the interval αc1 < α < αc2, however, the free-flow and transition states disappear, and when α ≥ αc2 only the congestion state remains. According to these results, a traffic control center can adjust the reference density so that the system is in the maximum-current state. In this case, the capacity of the traffic system reaches its maximum, so that drivers can make full use of the roads. We hope that these results can provide good advice for alleviating traffic jams and be useful to traffic control centers designing advanced traveller information systems.
Yu, Tao; Lin, Maohua; Wu, Bo; Wang, Jintian; Tsai, Chi-Tay
2018-05-16
On the basis of the framework of cubic gauche nitrogen (cg-N), six one-eighth methanetriyl group (>CH-) substitutes and fifteen one-fourth >CH- substitutes were optimized using first-principle calculations based on density functional theory (DFT). Both the one-eighth and one-fourth substitutes keep the gauche structures, with the simple formulas CHN₇ and CHN₃, respectively. The most thermodynamically stable gauche CHN₇ and CHN₃ are P2₁ qtg-C₂H₂N₁₄ I and P2₁ qtg-C₄H₄N₁₂ III, respectively. No probability density of C-C single bonds and high probability densities of C-N-C structures were found in the two substitutes. Although gauche CHN₇ and CHN₃ lose energy density in contrast to cg-N, they gain kinetic stability and combustion temperature (Tc). Thus, they are more feasible than cg-N, and more effective than traditional rocket fuels. Copyright © 2018 Elsevier Inc. All rights reserved.
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
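The distinction between an estimated and a true remaining-useful-life (RUL) PDF can be made concrete by propagating a filtered state posterior to a failure threshold and treating the resulting first-passage times as the estimated RUL PDF. The linear degradation model and all numbers below are hypothetical, not taken from the article.

```python
import random

random.seed(42)
# Hypothetical filtered state at prediction time: health indicator and
# degradation rate, each with a Gaussian posterior from the filter.
x_mean, x_std = 8.0, 0.3     # current health indicator
r_mean, r_std = 0.5, 0.05    # degradation per cycle
threshold = 2.0              # failure when the indicator drops below this

# Monte Carlo: sample the posterior and project each trajectory to the
# threshold; the sample of crossing times approximates the *estimated*
# RUL PDF (the true RUL PDF would require the true, unknown state).
ruls = []
for _ in range(5000):
    x = random.gauss(x_mean, x_std)
    r = max(random.gauss(r_mean, r_std), 1e-6)  # guard against r <= 0
    ruls.append((x - threshold) / r)            # first passage, linear decay

rul_mean = sum(ruls) / len(ruls)
```

A decision-maker would consume the whole sample (e.g., a lower percentile of `ruls`), not just the mean, which is precisely where the interpretation of the PDF matters.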
Vertical Deformation of Late Quaternary Features Across Port-au-Prince Bay, Haiti
NASA Astrophysics Data System (ADS)
Cormier, M.; McHugh, C. M.; Gulick, S. P.; Braudy, N.; Davis, M. B.; Diebold, J. B.; Dieudonne, N.; Douilly, R.; Hornbach, M. J.; Johnson, H. E.; Mishkin, K.; Seeber, L.; Sorlien, C. C.; Steckler, M. S.; Symithe, S. J.; Templeton, J.
2010-12-01
As part of a project that investigated the underwater impacts of the January 12, 2010 earthquake in Haiti, we surveyed offshore structures that may have been activated during that earthquake or that might become activated in future earthquakes. Part of that survey focused on the shallow shelf area that extends north of the segment of the Enriquillo-Plantain Garden fault that just ruptured. This area is occupied by an elongated depression, 25 km long, 10 km wide, and 140 m deep. The NW-SE axis of that shallow basin is sub-parallel to that of the NW-SE anticlines that bound Port-au-Prince Bay. The shallow basin is also rimmed by a carbonate platform that is 5-10 km wide and ~30 m deep. New multibeam bathymetric and sidescan sonar data collected across that platform highlight a series of circular dissolution structures 1-2 km across and ~80 m deep. We interpret that morphology to indicate antecedent karst topography that developed during previous glacial maxima. According to that scenario, the shallow basin off Port-au-Prince would have been isolated from the Caribbean Sea by the continuous platform, and would probably have been occupied by a lagoon. Indeed, a few high-resolution chirp profiles image what may be a paleoshoreline at about 80 m depth, buried beneath a 5-8 m thick, acoustically transparent, presumably Holocene layer. Preliminary analysis indicates that the basin floor and the base of the presumably Holocene layer are perfectly horizontal in the center of the basin, but tilted down to the south at its northern edge. The presumed paleoshoreline is also shallower to the north of the basin. We propose that this tilt is driven by contraction across the NW-SE fold-and-thrust belt that runs across Hispaniola. This hypothesis remains to be tested with a more thorough geophysical and coring survey in Port-au-Prince Bay.
NASA Astrophysics Data System (ADS)
Hale, Stephen Roy
Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to nearest Fir Sapling cover type. To solve the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. Topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression; and a Digital Elevation Model was used to estimate elevation. Distance to nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation using Bicknell's Thrush data not used in model calibration revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, with which exists a concomitant decrease in land area. 
Greater land area of lower density habitats may account for more total individuals and reproductive output than higher density less abundant land area. Efforts to conserve areas of highest individual density under the assumption that density reflects habitat quality could target the smallest fraction of the total population.
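A logistic habitat model of the general form described above can be sketched as follows; the coefficients are hypothetical and chosen only so that their signs match the reported pattern (presence probability rising with elevation, falling with vegetation height and with distance to the nearest Fir Sapling cover).

```python
import math

def presence_probability(veg_height_m, elevation_m, dist_fir_sapling_m,
                         coef=(-3.0, -0.8, 0.004, -0.002)):
    """Logistic model for the probability of detecting a singing male.
    coef = (intercept, vegetation-height, elevation, distance-to-fir);
    all coefficient values are hypothetical illustrations."""
    b0, b_h, b_e, b_d = coef
    z = b0 + b_h * veg_height_m + b_e * elevation_m + b_d * dist_fir_sapling_m
    return 1.0 / (1.0 + math.exp(-z))

p_high = presence_probability(2.0, 1300.0, 50.0)    # stunted high-elevation fir
p_low = presence_probability(12.0, 700.0, 400.0)    # tall lowland forest
```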
20 CFR 219.24 - Evidence of presumed death.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Evidence of presumed death. 219.24 Section... EVIDENCE REQUIRED FOR PAYMENT Evidence of Age and Death § 219.24 Evidence of presumed death. When a person cannot be proven dead but evidence of death is needed, the Board may presume he or she died at a certain...
Calculation of density of states for modeling photoemission using method of moments
NASA Astrophysics Data System (ADS)
Finkenstadt, Daniel; Lambrakos, Samuel G.; Jensen, Kevin L.; Shabaev, Andrew; Moody, Nathan A.
2017-09-01
Modeling photoemission using the Moments Approach (akin to Spicer's "Three Step Model") is often presumed to follow simple models for the prediction of two critical properties of photocathodes: the yield or "Quantum Efficiency" (QE), and the intrinsic spreading of the beam or "emittance" ɛnrms. The simple models, however, tend to obscure properties of electrons in materials, the understanding of which is necessary for a proper prediction of a semiconductor or metal's QE and ɛnrms. This structure is characterized by localized resonance features as well as a universal trend at high energy. Presented in this study is a prototype analysis concerning the density of states (DOS) factor D(E) for copper in bulk, to replace the simple three-dimensional form D(E) = (m/π²ħ³)√(2mE) currently used in the Moments Approach. This analysis demonstrates that excited state spectra of atoms, molecules and solids based on density-functional theory can be adapted as useful information for practical applications, as well as providing theoretical interpretation of density-of-states structure, e.g., qualitatively good descriptions of optical transitions in matter, in addition to DFT's utility in providing the optical constants and material parameters also required in the Moments Approach.
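The simple three-dimensional DOS referred to above is the free-electron result D(E) = (m/π²ħ³)√(2mE); in natural units (m = ħ = 1, chosen here purely for illustration) it evaluates as below, with the √E scaling implying that quadrupling the energy doubles the density of states.

```python
import math

def free_electron_dos(E, m=1.0, hbar=1.0):
    """Three-dimensional free-electron density of states:
    D(E) = (m / (pi^2 hbar^3)) * sqrt(2 m E)."""
    return (m / (math.pi ** 2 * hbar ** 3)) * math.sqrt(2.0 * m * E)

ratio = free_electron_dos(4.0) / free_electron_dos(1.0)  # sqrt scaling: 2
```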
Determinism and probability in the development of the cell theory.
Duchesneau, François
2012-09-01
A return to Claude Bernard's original use of the concept of 'determinism' displays the fact that natural laws were presumed to rule over all natural processes. In a more restricted sense, the term boiled down to a mere presupposition of constant determinant causes for those processes, leaving aside any particular ontological principle, even stochastic. The history of the cell theory until around 1900 was dominated by a twofold conception of determinant causes. Along a reductionist trend, cells' structures and processes were supposed to be accounted for through their analysis into detailed partial mechanisms. But a more holistic approach tended to subsume those analytic means and the mechanisms involved under a program of global functional determinations. When mitotic and meiotic sequences in nuclear replication were being unveiled and neo-Mendelian genetics was being grafted onto cytology and embryology, a conception of strict determinism at the nuclear level, principally represented by Wilhelm Roux and August Weismann, would seem to rule unilaterally over the mosaic interpretation of the cleavage of blastomeres. But, as shown by E.B. Wilson, in developmental processes there occur contingent outcomes of cell division which observations and experiments reveal. This induces the need to admit 'epigenetic' determinants and relativize the presumed 'preformation' of the developmental phases by making room for an emergent order which the accidental circumstances of gene replication would trigger. Copyright © 2012 Elsevier Ltd. All rights reserved.
Longitudinal decrease in blood oxygenation level dependent response in cerebral amyloid angiopathy.
Switzer, Aaron R; McCreary, Cheryl; Batool, Saima; Stafford, Randall B; Frayne, Richard; Goodyear, Bradley G; Smith, Eric E
2016-01-01
Lower blood oxygenation level dependent (BOLD) signal changes in response to a visual stimulus in functional magnetic resonance imaging (fMRI) have been observed in cross-sectional studies of cerebral amyloid angiopathy (CAA), and are presumed to reflect impaired vascular reactivity. We used fMRI to detect a longitudinal change in BOLD responses to a visual stimulus in CAA, and to determine any correlations between these changes and other established biomarkers of CAA progression. Data were acquired from 22 patients diagnosed with probable CAA (using the Boston Criteria) and 16 healthy controls at baseline and one year. BOLD data were generated from the 200 most active voxels of the primary visual cortex during the fMRI visual stimulus (passively viewing an alternating checkerboard pattern). In general, BOLD amplitudes were lower at one year compared to baseline in patients with CAA (p = 0.01) but were unchanged in controls (p = 0.18). The longitudinal difference in BOLD amplitudes was significantly lower in CAA compared to controls (p < 0.001). White matter hyperintensity (WMH) volumes and number of cerebral microbleeds, both presumed to reflect CAA-mediated vascular injury, increased over time in CAA (p = 0.007 and p = 0.001, respectively). Longitudinal increases in WMH (rs = 0.04, p = 0.86) or cerebral microbleeds (rs = -0.18, p = 0.45) were not associated with the longitudinal decrease in BOLD amplitudes.
Alho, Kimmo; Vorobyev, Victor A; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Starchenko, Maria G; Tervaniemi, Mari; Näätänen, Risto
2006-02-23
Regional cerebral blood flow was measured with positron emission tomography (PET) in 10 healthy male volunteers. They heard two binaurally delivered concurrent stories, one spoken by a male voice and the other by a female voice. A third story was presented at the same time as a text running on a screen. The subjects were instructed to attend silently to one of the stories at a time. In an additional resting condition, no stories were delivered. PET data showed that in comparison with the reading condition, the brain activity in the speech-listening conditions was enhanced bilaterally in the anterior superior temporal sulcus including cortical areas that have been reported to be specifically sensitive to human voice. Previous studies on attention to non-linguistic sounds and visual objects, in turn, showed prefrontal activations that are presumably related to attentional control functions. However, comparisons of the present speech-listening and reading conditions with each other or with the resting condition indicated no prefrontal activity, except for an activation in the inferior frontal cortex that was presumably associated with semantic and syntactic processing of the attended story. Thus, speech listening, as well as reading, even in a distracting environment appears to depend less on the prefrontal control functions than do other types of attention-demanding tasks, probably because selective attention to speech and written text are over-learned actions rehearsed daily.
Takyar, Varun; Nath, Anand; Beri, Andrea; Gharib, Ahmed M; Rotman, Yaron
2017-09-01
Healthy volunteers are crucial for biomedical research. Inadvertent inclusion of subjects with nonalcoholic fatty liver disease (NAFLD) as controls can compromise study validity and subject safety. Given the rising prevalence of NAFLD in the general population, we sought to identify its prevalence and potential impact in volunteers for clinical trials. We conducted a cross-sectional study of subjects who were classified as healthy volunteers between 2011 and 2015 and had no known liver disease. Subjects were classified as presumed NAFLD (pNF; alanine aminotransferase [ALT] level ≥ 20 for women or ≥ 31 for men and body mass index [BMI] > 25 kg/m²), healthy non-NAFLD controls (normal ALT and BMI), or indeterminate. A total of 3160 subjects participated as healthy volunteers in 149 clinical trials (1-29 trials per subject); 1732 of these subjects (55%) had a BMI > 25 kg/m² and 1382 (44%) had abnormal ALT. pNF was present in 881 subjects (27.9%), and these subjects were older than healthy control subjects and had higher triglycerides, low-density lipoprotein cholesterol, and HbA1c and lower high-density lipoprotein cholesterol (P < 0.001 for all). The 149 trials included 101 non-interventional, 33 interventional, and 15 vaccine trials. The impact on study validity of recruiting NAFLD subjects as controls was estimated as likely, probable, and unlikely in 10, 41, and 98 trials, respectively. The proportion of pNF subjects (28%-29%) did not differ by impact. Only 14% of trials used both BMI and ALT for screening. ALT cutoffs for screening were based on local reference values. Grade 3-4 ALT elevations during the study period were rare but more common in pNF subjects than in healthy control subjects (4 versus 1). NAFLD is common and often overlooked in volunteers for clinical trials, despite its potential impact on subject safety and validity of study findings. Increased awareness of NAFLD prevalence and stricter ALT cutoffs may ameliorate this problem. 
(Hepatology 2017;66:825-833). Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Regional geologic framework off northeastern United States
Schlee, J.; Behrendt, John C.; Grow, J.A.; Robb, James M.; Mattick, R.; Taylor, P.T.; Lawson, B.J.
1976-01-01
Six multichannel seismic-reflection profiles taken across the Atlantic continental margin off the northeastern United States show an excess of 14 km of presumed Mesozoic and younger sedimentary rocks in the Baltimore Canyon trough and 8 km in the Georges Bank basin. Beneath the continental rise, the sedimentary prism thickness exceeds 7 km south of New Jersey and Maryland, and it is 4.5 km thick south of Georges Bank. Stratigraphically, the continental slope--outer edge of the continental shelf is a transition zone of high-velocity sedimentary rock, probably carbonate, that covers deeply subsided basement. Acoustically, the sedimentary sequence beneath the shelf is divided into three units which are correlated speculatively with the Cenozoic, the Cretaceous, and the Jurassic-Triassic sections. These units thicken offshore, and some have increased seismic velocities farther offshore. The uppermost unit thickens from a fraction of a kilometer to slightly more than a kilometer in a seaward direction, and velocity values range from 1.7 to 2.2 km/sec. The middle unit thickens from a fraction of a kilometer to as much as 5 km (northern Baltimore Canyon trough), and seismic velocity ranges from 2.2 to 5.4 km/sec. The lowest unit thickens to a maximum of 9 km (northern Baltimore Canyon), and velocities span the 3.9 to 5.9-km/sec interval. The spatial separation of magnetic and gravity anomalies on line 2 (New Jersey) suggests that in the Baltimore Canyon region the magnetic-slope anomaly is due to edge effects and that the previously reported free-air and isostatic gravity anomalies over the outer shelf may be due in part to a lateral increase in sediment density (velocity) near the shelf edge. The East Coast magnetic anomaly and the free-air gravity high both coincide over the outer shelf edge on line 1 (Georges Bank) but are offset by 20 km from the ridge on the reflection profile. 
Because the magnetic-slope-anomaly wavelength is nearly 50 km, a deep source is likely. In part, the positive free-air gravity anomaly likewise may represent the significant lateral density increase within the sedimentary section toward the outer edge of the shelf.
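The unit thicknesses quoted above follow from the basic depth-conversion relation between interval velocity and two-way travel time. A minimal sketch of that arithmetic (illustrative numbers only, not the authors' processing workflow):

```python
# Convert seismic two-way travel time to layer thickness using interval velocity.
# One-way path length is half the two-way time, so thickness = v * twt / 2.
def layer_thickness_km(interval_velocity_km_s: float, two_way_time_s: float) -> float:
    """Thickness of a layer from its interval velocity and two-way travel time."""
    return interval_velocity_km_s * two_way_time_s / 2.0

# Hypothetical example: a unit with a 5.4 km/sec interval velocity and about
# 1.85 s of two-way time corresponds to roughly 5 km of section, comparable
# to the middle unit's maximum in the northern Baltimore Canyon trough.
print(layer_thickness_km(5.4, 1.85))
```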
Lewis, Jesse S; Logan, Kenneth A; Alldredge, Mat W; Bailey, Larissa L; VandeWoude, Sue; Crooks, Kevin R
2015-10-01
Urbanization is a primary driver of landscape conversion, with far-reaching effects on landscape pattern and process, particularly related to the population characteristics of animals. Urbanization can alter animal movement and habitat quality, both of which can influence population abundance and persistence. We evaluated three important population characteristics (population density, site occupancy, and species detection probability) of a medium-sized and a large carnivore across varying levels of urbanization. Specifically, we studied bobcat and puma populations across wildland, exurban development, and wildland-urban interface (WUI) sampling grids to test hypotheses evaluating how urbanization affects wild felid populations and their prey. Exurban development appeared to have a greater impact on felid populations than did habitat adjacent to a major urban area (i.e., WUI); estimates of population density for both bobcats and pumas were lower in areas of exurban development compared to wildland areas, whereas population density was similar between WUI and wildland habitat. Bobcats and pumas were less likely to be detected in habitat as the amount of human disturbance associated with residential development increased at a site, which was potentially related to reduced habitat quality resulting from urbanization. However, occupancy of both felids was similar between grids in both study areas, indicating that this population metric was less sensitive than density. At the scale of the sampling grid, detection probability for bobcats in urbanized habitat was greater than in wildland areas, potentially due to restrictive movement corridors and funneling of animal movements in landscapes influenced by urbanization. Occupancy of important felid prey (cottontail rabbits and mule deer) was similar across levels of urbanization, although elk occupancy was lower in urbanized areas. 
Our study indicates that the conservation of medium- and large-sized felids in urbanizing landscapes will likely be most successful if large areas of wildland habitat are maintained, even in close proximity to urban areas, and wildland habitat is not converted to low-density residential development.
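The distinction drawn above between occupancy and detection probability can be sketched with the standard relation for repeated surveys: with per-survey detection probability p and K surveys, an occupied site is detected at least once with probability 1 - (1 - p)^K, so raw detection data understate true occupancy. The numbers below are hypothetical, and this is a back-of-envelope illustration, not the formal occupancy models fitted in the study:

```python
# Why naive occupancy (fraction of sites with >= 1 detection) underestimates
# true occupancy psi when per-survey detection probability p is below 1.
def prob_detected_at_least_once(p: float, k: int) -> float:
    """Probability an occupied site is detected in at least one of k surveys."""
    return 1.0 - (1.0 - p) ** k

def expected_naive_occupancy(psi: float, p: float, k: int) -> float:
    """Expected fraction of sites with detections: psi scaled by detectability."""
    return psi * prob_detected_at_least_once(p, k)

# Hypothetical values: true occupancy 0.6, detection 0.3 per survey, 5 surveys.
p_star = prob_detected_at_least_once(0.3, 5)          # ~0.832
print(expected_naive_occupancy(0.6, 0.3, 5))          # ~0.499, below psi = 0.6
```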
Surface Impact Simulations of Helium Nanodroplets
2015-06-30
mechanical delocalization of the individual helium atoms in the droplet and the quantum statistical effects that accompany the interchange of identical...incorporates the effects of atomic delocalization by treating individual atoms as smeared-out probability distributions that move along classical...probability density distributions to give effective interatomic potential energy curves that have zero-point averaging effects built into them [25]
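The idea of "smeared-out probability distributions" yielding effective potentials with zero-point averaging built in can be sketched by Gaussian-averaging a pair potential over the interatomic separation. The code below uses a Lennard-Jones potential in reduced units as an arbitrary stand-in, not the report's actual He-He potential or smearing width; it shows how averaging shallows the well:

```python
import math

def lennard_jones(r, epsilon=1.0, sigma_lj=1.0):
    """Bare Lennard-Jones pair potential in reduced units (minimum of -epsilon)."""
    return 4.0 * epsilon * ((sigma_lj / r) ** 12 - (sigma_lj / r) ** 6)

def smeared_potential(r, spread, n=2001):
    """Average V over a Gaussian of std-dev `spread` in separation, centered at r.

    A crude stand-in for zero-point averaging: the delocalized atom samples a
    range of separations, so the effective well is shallower than the bare one.
    """
    lo, hi = r - 4.0 * spread, r + 4.0 * spread
    dx = (hi - lo) / (n - 1)
    xs = [lo + dx * i for i in range(n)]
    weights = [math.exp(-0.5 * ((x - r) / spread) ** 2) for x in xs]
    vals = [w * lennard_jones(max(x, 1e-6)) for x, w in zip(xs, weights)]
    return sum(vals) / sum(weights)

r_min = 2 ** (1 / 6)                       # bare minimum location
print(lennard_jones(r_min))                # bare well depth: -1.0
print(smeared_potential(r_min, 0.05))      # shallower (greater than -1.0)
```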
Time Neutron Technique for UXO Discrimination
2010-12-01
...mixture of TNT and RDX; C-4: Composition 4 military plastic explosive; CFD: Constant Fraction Discriminator; cps: counts per second; CsI: inorganic...; PDFs: Probability Density Functions; PET: Positron Emission Tomography; Pfa: Probability of False Alarm; PFTNA: Pulsed Fast/Thermal Neutron Analysis; PMTs: ...the ordnance type (rocket, mortar, projectile, etc.) and what filler material it contains (inert or empty), practice, HE, illumination, chemical (i.e...
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.
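The structure of such a prediction can be sketched numerically: with a flat prior over Λ, the probability of observing a given value is proportional to an anthropic/measure weight, and suppression at large Λ shows up as vanishing tail mass. The weight function below is an arbitrary toy suppression, not the paper's scale-factor-cutoff measure:

```python
import math

# Toy anthropic weighting with a flat prior: P(Lambda) is proportional to
# prior(Lambda) * weight(Lambda). With a flat prior, the posterior is just the
# normalized weight. The Gaussian suppression scale of 5 (in units of the
# observed Lambda) is an assumption chosen purely for illustration.
def posterior(lams, weight):
    """Normalize weight(Lambda) over a grid of Lambda values (flat prior)."""
    w = [weight(l) for l in lams]
    total = sum(w)
    return [x / total for x in w]

lams = [i * 0.5 for i in range(1, 61)]           # Lambda / Lambda_observed grid
weight = lambda l: math.exp(-(l / 5.0) ** 2)     # toy suppression (assumption)
p = posterior(lams, weight)

# Probability mass above 10x the observed value is strongly suppressed:
mass_above_10 = sum(pi for lam, pi in zip(lams, p) if lam > 10)
print(mass_above_10)
```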