Sample records for statistical density operators

  1. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is a description of the calculation of the shape and scale parameters of the two-parameter Weibull distribution using least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers, for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, and the techniques for calculating the Batdorf flaw-density constants are also described.

  2. Evolution of probability densities in stochastic coupled map lattices

    NASA Astrophysics Data System (ADS)

    Losson, Jérôme; Mackey, Michael C.

    1995-08-01

    This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusivelike interelement couplings are examined in light of these theoretical results.
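
The evolution of phase-space densities under a stochastically perturbed map can be illustrated with a Monte Carlo surrogate for the stochastic Perron-Frobenius operator. This toy sketch (a single noisy tent map; all names are ours) is not the paper's coupled-lattice formalism, but it shows densities relaxing toward the invariant density:

```python
import random

def tent(x):
    """Tent map on [0, 1]; its invariant density is uniform."""
    return 1.0 - abs(1.0 - 2.0 * x)

def evolve_density(n_points=20000, n_steps=100, noise=0.01, bins=10, seed=1):
    """Evolve an ensemble of points under the tent map with small
    additive Gaussian noise (mod 1) and return a histogram estimate
    of the phase-space density -- a Monte Carlo stand-in for
    iterating the stochastic Perron-Frobenius operator."""
    rng = random.Random(seed)
    pts = [rng.random() for _ in range(n_points)]
    for _ in range(n_steps):
        pts = [(tent(x) + rng.gauss(0.0, noise)) % 1.0 for x in pts]
    hist = [0] * bins
    for x in pts:
        hist[min(int(x * bins), bins - 1)] += 1
    return [h / n_points for h in hist]
```

For diffusively coupled lattices the same ensemble bookkeeping applies per site, and it is there that the density limit cycles discussed in the abstract can appear.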

  3. Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems

    NASA Astrophysics Data System (ADS)

    Favorite, Jeffrey

    2017-09-01

    The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.
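
The differential-operator idea — reconstructing an entire density parameter study from the unperturbed response plus its first two derivatives — can be illustrated with an analytic toy shielding response. This is a hedged sketch of the Taylor reconstruction step only, not MCNP's PERT implementation; the slab-transmission model and all names are assumptions:

```python
import math

def taylor_response(r0, c1, c2, drho):
    """Second-order Taylor (differential-operator) estimate of a
    response after a density perturbation drho, given the unperturbed
    response r0 and the first/second derivative coefficients c1, c2
    (the quantities a PERT-style calculation would tally)."""
    return r0 + c1 * drho + 0.5 * c2 * drho ** 2

# Toy shielding response: slab transmission T(rho) = exp(-sigma_m * rho * t)
# with mass attenuation coefficient sigma_m and thickness t (assumed values).
sigma_m, t, rho0 = 0.2, 5.0, 1.0
exact = lambda rho: math.exp(-sigma_m * rho * t)
c1 = -sigma_m * t * exact(rho0)          # dT/drho at rho0
c2 = (sigma_m * t) ** 2 * exact(rho0)    # d2T/drho2 at rho0
```

With the two coefficients in hand, every point of the parameter study is a cheap polynomial evaluation rather than a new transport run, which is the economy the abstract's "two PERT cards" exploit.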

  4. The vertical pattern of microwave radiation around BTS (Base Transceiver Station) antennae in Hashtgerd township.

    PubMed

    Nasseri, Simin; Monazzam, Mohammadreza; Beheshti, Meisam; Zare, Sajad; Mahvi, Amirhosein

    2013-12-20

Along with technological development, new environmental pollutants interfere with the environment and human life. One of these pollutants is the electromagnetic field. This study determines the vertical microwave radiation pattern of different types of Base Transceiver Station (BTS) antennae in Hashtgerd city, the capital of Savojbolagh County, Alborz Province, Iran. Basic data, including the geographical location of the BTS antennae in the city, brand, operator type, and installation height, were collected from the radio communication office, and the measurements were then carried out according to IEEE STD 95.1 using the SPECTRAN 4060. Statistical analyses were carried out in SPSS16 using the Kolmogorov-Smirnov test and multiple regression. Results indicated that for both operators, Irancell and Hamrah-e-Aval (First Operator), the power density rose with an increase in measurement height or a decrease in the vertical distance from the broadcasting antenna. With a mixed-model test, a statistically significant relationship was observed between measurement height and average power density for both operators: power density increased with measuring height. The study showed that installing antennae in crowded areas requires more care because of the higher radiation emission. More rigid surfaces and mobile users are two important factors that can increase wave density in crowded areas and hence raise public microwave exposure.

  5. The vertical pattern of microwave radiation around BTS (Base Transceiver Station) antennae in Hashtgerd township

    PubMed Central

    2013-01-01

Along with technological development, new environmental pollutants interfere with the environment and human life. One of these pollutants is the electromagnetic field. This study determines the vertical microwave radiation pattern of different types of Base Transceiver Station (BTS) antennae in Hashtgerd city, the capital of Savojbolagh County, Alborz Province, Iran. Basic data, including the geographical location of the BTS antennae in the city, brand, operator type, and installation height, were collected from the radio communication office, and the measurements were then carried out according to IEEE STD 95.1 using the SPECTRAN 4060. Statistical analyses were carried out in SPSS16 using the Kolmogorov-Smirnov test and multiple regression. Results indicated that for both operators, Irancell and Hamrah-e-Aval (First Operator), the power density rose with an increase in measurement height or a decrease in the vertical distance from the broadcasting antenna. With a mixed-model test, a statistically significant relationship was observed between measurement height and average power density for both operators: power density increased with measuring height. The study showed that installing antennae in crowded areas requires more care because of the higher radiation emission. More rigid surfaces and mobile users are two important factors that can increase wave density in crowded areas and hence raise public microwave exposure. PMID:24359870

  6. Twenty-five years of maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Kapur, J. N.

    1983-04-01

The strengths and weaknesses of the maximum entropy principle (MEP) are examined, and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of the MEP for characterizing statistical distributions and its role in statistical inference, nonlinear spectral analysis, transportation models, population density models, and models for brand-switching in marketing and vote-switching in elections are discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  7. A torque-measuring micromotor provides operator independent measurements marking four different density areas in maxillae.

    PubMed

    Di Stefano, Danilo Alessio; Arosio, Paolo; Piattelli, Adriano; Perrotti, Vittoria; Iezzi, Giovanna

    2015-02-01

Bone density at the implant placement site is a key factor in obtaining primary stability of the fixture, which, in turn, is a prognostic factor for osseointegration and the long-term success of an implant-supported rehabilitation. Recently, an implant motor with a bone density measurement probe has been introduced. The aim of the present study was to test the objectivity of the bone density values registered by the implant motor regardless of the operator performing the measurements. A total of 3704 bone density measurements, performed by means of the implant motor, were registered by 39 operators at different implant sites during routine activity. Bone density measurements were grouped according to their distribution across the jaws. Specifically, four different areas were distinguished: a pre-antral (between teeth from the first right maxillary premolar to the first left maxillary premolar) and a sub-antral (more distal) zone in the maxilla, and an interforaminal (between and including teeth from the first left mandibular premolar to the first right mandibular premolar) and a retroforaminal (more distal) zone in the mandible. A statistical comparison was performed to check the inter-operator variability of the collected data. The device produced consistent and operator-independent bone density values at each tooth position, showing reliable bone density measurement. The implant motor proved to be a helpful tool for properly planning implant placement and loading irrespective of the operator using it.

  8. A Statistical Study of Eiscat Electron and Ion Temperature Measurements In The E-region

    NASA Astrophysics Data System (ADS)

    Hussey, G.; Haldoupis, C.; Schlegel, K.; Bösinger, T.

Motivated by the large EISCAT database, which covers over 15 years of common programme operation, and by previous statistical work with EISCAT data (e.g., C. Haldoupis, K. Schlegel, and G. Hussey, Auroral E-region electron density gradients measured with EISCAT, Ann. Geophysicae, 18, 1172-1181, 2000), a detailed statistical analysis of EISCAT electron and ion temperature measurements has been undertaken. This study was specifically concerned with the statistical dependence of heating events on other ambient parameters such as the electric field and electron density. The results showed previously reported dependences, such as the electron temperature being directly correlated with the ambient electric field and inversely related to the electron density. However, these correlations were found to be also dependent upon altitude. There was also evidence of the so-called "Schlegel effect" (K. Schlegel, Reduced effective recombination coefficient in the disturbed polar E-region, J. Atmos. Terr. Phys., 44, 183-185, 1982); that is, the heated electron gas leads to increases in electron density through a reduction in the recombination rate. This paper will present the statistical heating results and attempt to offer physical explanations and interpretations of the findings.

  9. Nowcasting Cloud Fields for U.S. Air Force Special Operations

    DTIC Science & Technology

    2017-03-01

application of Bayes’ Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods... reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past... Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning

  10. The Physics and Operation of Ultra-Submicron Length Semiconductor Devices.

    DTIC Science & Technology

    1994-05-01

...300 meV heterostructure diode at T=300 K with Fermi statistics and flat band conditions. In all of the calculations with a heterostructure barrier, once... Figure 8. Self-consistent T=300 K calculation with Fermi statistics showing the density and donor...

  11. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  12. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  13. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  14. Physical properties of wild mango fruit and nut

    NASA Astrophysics Data System (ADS)

    Ehiem, J.; Simonyan, K.

    2012-02-01

Physical properties of two wild mango varieties were studied at 81.9 and 24.5% moisture (w.b.) for the fruits and nuts, respectively. The shape and size of the fruits of the two varieties are the same, while those of the nuts differ at P = 0.05. The mass, density, and bulk density of the fruits are statistically different at P = 0.05, but the volume is the same. The shape, size, volume, and bulk density of the nuts are statistically the same at P = 0.05. The nuts of both varieties are also the same at P = 0.05 in terms of mass and density. The packing factors of both the fruits and nuts of the two varieties are the same at 0.95. The relevant data obtained for the two varieties would be useful for the design and development of machines and equipment for processing and handling operations.

  15. Parametrically coupled fermionic oscillators: Correlation functions and phase-space description

    NASA Astrophysics Data System (ADS)

    Ghosh, Arnab

    2015-01-01

A fermionic analog of a parametric amplifier is used to describe the joint quantum state of two interacting fermionic modes. Based on a two-mode generalization of the time-dependent density operator, the time evolution of the fermionic density operator is determined in terms of its two-mode Wigner and P functions. It is shown that the equation of motion of the Wigner function corresponds to a fermionic analog of Liouville's equation. The equilibrium density operator for fermionic fields developed by Cahill and Glauber is thus extended to a dynamical context to show that the mathematical structures of both the correlation functions and the weight factors closely resemble their bosonic counterparts. It is shown that the fermionic correlation functions are marked by a characteristic upper bound due to Fermi statistics, which can be verified in the matter-wave counterpart of photon down-conversion experiments.

  16. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.
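
For context on the counting statistics involved, the standard negative binomial photon-count distribution for M-mode thermal (speckle) light is easy to evaluate; this is a related textbook density given here for illustration, not the paper's generalized distribution for a coherent-plus-diffuse target:

```python
import math

def nb_pmf(n, nbar, m):
    """Photon-count probability for M-mode thermal (speckle) light:
    a negative binomial with mean nbar and m speckle/temporal modes.
    m = 1 gives the Bose-Einstein (fully developed speckle) limit,
    and m -> infinity approaches Poisson statistics."""
    lg = math.lgamma(n + m) - math.lgamma(n + 1) - math.lgamma(m)
    return math.exp(lg
                    + m * math.log(m / (m + nbar))
                    + n * math.log(nbar / (m + nbar)))
```

The paper's point is that partially developed speckle (a specular component riding on the diffuse field) deforms this density, which in turn changes the single- and multiple-pulse detection probabilities.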

  17. Thermionic emission from monolayer graphene, sheath formation and its feasibility towards thermionic converters

    NASA Astrophysics Data System (ADS)

    Misra, Shikha; Upadhyay Kahaly, M.; Mishra, S. K.

    2017-02-01

A formalism describing thermionic emission from a single-layer graphene sheet operating at a finite temperature, and the consequent formation of a thermionic sheath in its proximity, has been established. The formulation takes account of the two-dimensional density-of-states configuration, Fermi-Dirac (F-D) statistics of the electron energy distribution, Fowler's treatment of electron emission, and Poisson's equation. The thermionic current estimate based on the present analysis is found to be in reasonably good agreement with experimental observations (Zhu et al., Nano Res. 07, 1 (2014)). The analysis has further been simplified for the case where the F-D statistics of the electron energy distribution converge to a Maxwellian distribution. Using this formulation, the steady-state sheath features, viz., the spatial dependence of the surface potential and the electron density structure in the thermionic sheath, are derived and illustrated graphically for graphene parameters; the electron density in the sheath is seen to diminish within a few tens of Debye lengths. By utilizing a graphene-based cathode in configuring a thermionic converter (TC), an appropriate operating regime for achieving efficient energy conversion has been identified. A TC configured with a graphene-based cathode (operating at ~1200 K, work function 4.74 V) along with a metallic anode (operating at ~400 K, work function 2.0 V) is predicted to convert ~56% of the input thermal flux into electrical energy, corresponding to approximately 84% of the Carnot efficiency.

  18. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  19. Ethnic Density Effects on Physical Morbidity, Mortality, and Health Behaviors: A Systematic Review of the Literature

    PubMed Central

    Shaw, Richard; Nazroo, James; Stafford, Mai; Albor, Christo; Atkin, Karl; Kiernan, Kathleen; Wilkinson, Richard; Pickett, Kate

    2012-01-01

    It has been suggested that people in racial/ethnic minority groups are healthier when they live in areas with a higher concentration of people from their own ethnic group, a so-called ethnic density effect. Ethnic density effects are still contested, and the pathways by which ethnic density operates are poorly understood. The aim of this study was to systematically review the literature examining the ethnic density effect on physical health, mortality, and health behaviors. Most studies report a null association between ethnic density and health. Protective ethnic density effects are more common than adverse associations, particularly for health behaviors and among Hispanic people. Limitations of the literature include inadequate adjustment for area deprivation and limited statistical power across ethnic density measures and study samples. PMID:23078507

  20. Statistical characterization of portal images and noise from portal imaging systems.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge

    2013-06-01

    In this paper, we consider the statistical characteristics of the so-called portal images, which are acquired prior to the radiotherapy treatment, as well as the noise that present the portal imaging systems, in order to analyze whether the well-known noise and image features in other image modalities, such as natural image, can be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, like the characteristics of the natural image and the white noise. Finally, we discuss the implication of the results obtained in several noise reduction methods that operate in the wavelet domain.

  1. Parametric control in coupled fermionic oscillators

    NASA Astrophysics Data System (ADS)

    Ghosh, Arnab

    2014-10-01

    A simple model of parametric coupling between two fermionic oscillators is considered. Statistical properties, in particular the mean and variance of quanta for a single mode, are described by means of a time-dependent reduced density operator for the system and the associated P function. The density operator for fermionic fields as introduced by Cahill and Glauber [K. E. Cahill and R. J. Glauber, Phys. Rev. A 59, 1538 (1999), 10.1103/PhysRevA.59.1538] thus can be shown to provide a quantum mechanical description of the fields closely resembling their bosonic counterpart. In doing so, special emphasis is given to population trapping, and quantum control over the states of the system.

  2. Implementation and Research on the Operational Use of the Mesoscale Prediction Model COAMPS in Poland

    DTIC Science & Technology

    2007-09-30

COAMPS model. Bogumil Jakubiak, University of Warsaw – participated in the EGU General Assembly, Vienna, Austria, 15-20 April 2007, giving one oral and two...conditional forecast (background) error probability density function using an ensemble of the model forecast to generate background error statistics...COAMPS system on ICM machines at Warsaw University for the purpose of providing operational support to the general public using the ICM meteorological

  3. Hypoxic areas, density-dependence and food limitation drive the body condition of a heavily exploited marine fish predator

    PubMed Central

    Käll, Filip; Hansson, Martin; Baranova, Tatjana; Karlsson, Olle; Lundström, Karl; Neuenfeldt, Stefan; Hjelm, Joakim

    2016-01-01

Investigating the factors regulating fish condition is crucial in ecology and the management of exploited fish populations. The body condition of cod (Gadus morhua) in the Baltic Sea has dramatically decreased during the past two decades, with large implications for the fishery relying on this resource. Here, we statistically investigated the potential drivers of Baltic cod condition during the past 40 years using newly compiled fishery-independent biological data and hydrological observations. We identified a combination of different factors operating before and after the ecological regime shift that occurred in the Baltic Sea in the early 1990s. The changes in cod condition were related to feeding opportunities, driven either by density dependence or food limitation, throughout the period investigated, and to the fivefold increase in the extent of hypoxic areas in the most recent 20 years. Hypoxic areas can act on cod condition through different mechanisms related directly to species physiology, or indirectly to behaviour and trophic interactions. Our analyses found statistical evidence for an effect of the hypoxia-induced habitat compression on cod condition, possibly operating via crowding and density-dependent processes. These results furnish novel insights into the population dynamics of Baltic Sea cod that can aid the management of this currently threatened population. PMID:27853557

  4. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  5. Stochastic Optimally Tuned Range-Separated Hybrid Density Functional Theory.

    PubMed

    Neuhauser, Daniel; Rabani, Eran; Cytter, Yael; Baer, Roi

    2016-05-19

    We develop a stochastic formulation of the optimally tuned range-separated hybrid density functional theory that enables significant reduction of the computational effort and scaling of the nonlocal exchange operator at the price of introducing a controllable statistical error. Our method is based on stochastic representations of the Coulomb convolution integral and of the generalized Kohn-Sham density matrix. The computational cost of the approach is similar to that of usual Kohn-Sham density functional theory, yet it provides a much more accurate description of the quasiparticle energies for the frontier orbitals. This is illustrated for a series of silicon nanocrystals up to sizes exceeding 3000 electrons. Comparison with the stochastic GW many-body perturbation technique indicates excellent agreement for the fundamental band gap energies, good agreement for the band edge quasiparticle excitations, and very low statistical errors in the total energy for large systems. The present approach has a major advantage over one-shot GW by providing a self-consistent Hamiltonian that is central for additional postprocessing, for example, in the stochastic Bethe-Salpeter approach.
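
The key trick the abstract describes — trading a deterministic operator evaluation for an average over random vectors, at the price of a controllable statistical error — is illustrated below by a Hutchinson stochastic trace estimate. This is a generic analogue of the idea, not the authors' exchange-operator or density-matrix implementation; all names are ours.

```python
import random

def hutchinson_trace(matvec, dim, n_samples=2000, seed=3):
    """Stochastic (Hutchinson) trace estimate: for random vectors z
    with independent +/-1 entries, E[z^T A z] = tr(A). Only
    matrix-vector products are needed, never the full matrix --
    the same economy exploited by stochastic electronic-structure
    methods. The statistical error shrinks as 1/sqrt(n_samples)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        az = matvec(z)
        acc += sum(zi * ai for zi, ai in zip(z, az))
    return acc / n_samples
```

In the stochastic DFT setting the averaged objects are the Coulomb convolution and the generalized Kohn-Sham density matrix rather than a bare trace, but the error-versus-samples trade-off is the same.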

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.

  7. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution.

    PubMed

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-07

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  8. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution

    NASA Astrophysics Data System (ADS)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-01

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  9. SEDIDAT: A BASIC program for the collection and statistical analysis of particle settling velocity data

    NASA Astrophysics Data System (ADS)

    Wright, Robyn; Thornberg, Steven M.

    SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is understood easily by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi, and Phi data.
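    The Chi and Phi transforms mentioned above are base-2 logarithmic unit conversions; a minimal sketch, assuming Chi is defined analogously to Phi as the negative base-2 log of settling velocity in cm/s (Phi is the standard negative base-2 log of grain diameter in mm):

```python
import math

def phi_from_diameter(d_mm):
    """Grain size in Phi units: phi = -log2(d), with d in millimetres."""
    return -math.log2(d_mm)

def chi_from_velocity(w_cm_s):
    """Settling velocity in Chi units, taken here (assumption) as the
    base-2 log transform analogous to Phi: chi = -log2(w), w in cm/s."""
    return -math.log2(w_cm_s)

print(phi_from_diameter(0.25))  # medium sand, 0.25 mm -> 2.0 phi
print(chi_from_velocity(2.0))   # 2 cm/s -> -1.0 chi
```

    Moment statistics and cumulative weight-percent tabulations then operate on these transformed units.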

  10. Particle-sampling statistics in laser anemometers: Sample-and-hold systems and saturable systems

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1983-01-01

    The effect of the data-processing system on the particle statistics obtained with laser anemometry of flows containing suspended particles is examined. Attention is given to the sample and hold processor, a pseudo-analog device which retains the last measurement until a new measurement is made, followed by time-averaging of the data. The second system considered features a dead time, i.e., a saturable system with a significant reset time with storage in a data buffer. It is noted that the saturable system operates independently of the particle arrival rate. The probabilities of a particle arrival in a given time period are calculated for both processing systems. It is shown that the system outputs are dependent on the mean particle flow rate, the flow correlation time, and the flow statistics, indicating that the particle density affects both systems. The results are significant for instances of good correlation between the particle density and velocity, such as occurs near the edge of a jet.
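    The sample-and-hold bias is easy to reproduce: holding the last measurement until the next arrival makes time-averaging weight each velocity by its holding interval. A toy sketch with hypothetical arrival times and velocities:

```python
def sample_and_hold_average(arrival_times, velocities, t_end):
    """Time-average of a sample-and-hold output: each measured velocity
    is held (and therefore weighted) until the next particle arrives."""
    total = 0.0
    for i, (t, v) in enumerate(zip(arrival_times, velocities)):
        t_next = arrival_times[i + 1] if i + 1 < len(arrival_times) else t_end
        total += v * (t_next - t)
    return total / (t_end - arrival_times[0])

# v=1 held for 1 s, v=3 held for 3 s -> time average 2.5, not the
# arithmetic mean 2.0: long holding intervals dominate the average.
print(sample_and_hold_average([0.0, 1.0], [1.0, 3.0], 4.0))  # 2.5
```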

  11. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Technical Reports Server (NTRS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-01-01

    Statistical and neural-network methods have been applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME. Anomalies are detected from the amplitudes of peaks at the fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods prove feasible for detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank, and is applicable to on-line operation. The neural-network method likewise needs enough data to train the networks, but its testing procedure can be utilized at any time so long as the characteristics of the components remain unchanged.
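    The detection feature, the amplitude of a PSD peak near a given frequency, can be sketched with a plain periodogram (numpy only; the frequencies and tolerance below are illustrative, not from the paper):

```python
import numpy as np

def psd_peak_amplitude(x, fs, f_target, tol=1.0):
    """Largest periodogram value within +/- tol Hz of a target frequency,
    e.g. the fundamental (synchronous) frequency of a turbopump."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    band = (freqs >= f_target - tol) & (freqs <= f_target + tol)
    return psd[band].max()

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                 # fundamental at 50 Hz
print(psd_peak_amplitude(x, fs, 50.0) >
      100 * psd_peak_amplitude(x, fs, 200.0))  # True: energy sits at 50 Hz
```

    An anomaly detector would compare such peak amplitudes against the statistical distribution built from nominal firings.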

  12. Physiochemical properties and reproducibility of air-based sodium tetradecyl sulphate foam using the Tessari method.

    PubMed

    Watkins, Mike R; Oliver, Richard J

    2017-07-01

    Objectives The objectives were to examine the density, bubble size distribution and durability of sodium tetradecyl sulphate foam and the consistency of foam production by a number of different operators using the Tessari method. Methods 1% and 3% sodium tetradecyl sulphate sclerosant foam was produced by an experienced operator and a group of inexperienced operators using either a 1:3 or 1:4 liquid:air ratio and the Tessari method. The foam density, bubble size distribution and foam durability were measured on freshly prepared foam from each operator. Results The foam density measurements were similar for each of the 1:3 preparations and for each of the 1:4 preparations, and were not affected by the sclerosant concentration. The bubble sizes for all preparations were very small immediately after preparation but progressively coalesced to become a micro-foam (<250 µm) after the first 30 s up until 2 min. Both the 1% and 3% solution foams developed liquid more rapidly when made in a 1:3 ratio (37 s) than in a 1:4 ratio (45 s), but all combinations took similar times to reach 0.4 ml liquid formation. For all the experiments, there was no statistically significant difference between operators. Conclusions The Tessari method of foam production for sodium tetradecyl sulphate sclerosant is consistent and reproducible even when the foam is made by inexperienced operators. The best quality foam, with micro bubbles, should be used within the first minute after production.
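    The density finding matches an ideal-mixing estimate: the liquid carries essentially all of the mass, so foam density is set by the liquid:air ratio and not by the sclerosant concentration. A back-of-envelope sketch, assuming a liquid density of about 1 g/ml for dilute sclerosant:

```python
def expected_foam_density(liquid_parts, air_parts, liquid_density=1.0):
    """Ideal foam density in g/ml from the liquid:air mixing ratio,
    assuming the liquid carries all the mass (air mass negligible)."""
    return liquid_density * liquid_parts / (liquid_parts + air_parts)

print(expected_foam_density(1, 3))  # 1:3 ratio -> 0.25 g/ml
print(expected_foam_density(1, 4))  # 1:4 ratio -> 0.2 g/ml
```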

  13. Quantum formalism for classical statistics

    NASA Astrophysics Data System (ADS)

    Wetterich, C.

    2018-06-01

    In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, so the possible solutions satisfy the superposition principle. Operators are associated with local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Other features characteristic of quantum mechanics, such as complex structure, change of basis or symmetry transformations, can also be found in classical statistics once formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is then not a matter of principle, but rather of practical simplicity.
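    The transfer-matrix picture can be illustrated with the simplest example, the 1D Ising chain (a minimal sketch, not the paper's general construction): repeatedly applying the transfer matrix to a boundary vector transports the boundary information into the bulk, where it decays at a rate set by the ratio of the two transfer-matrix eigenvalues.

```python
import numpy as np

beta_J = 0.5   # nearest-neighbour coupling / temperature (illustrative)
T = np.array([[np.exp(beta_J), np.exp(-beta_J)],
              [np.exp(-beta_J), np.exp(beta_J)]])  # 1D Ising transfer matrix

q = np.array([1.0, 0.0])   # boundary information: the edge spin fixed "up"
for _ in range(20):        # transport the information 20 sites into the bulk
    q = T @ q
    q /= q.sum()           # normalize so q stays a probability vector

print(q)   # close to [0.5, 0.5]: the boundary information has decayed
```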

  14. Statistical process control applied to mechanized peanut sowing as a function of soil texture.

    PubMed

    Zerbato, Cristiano; Furlani, Carlos Eduardo Angeli; Ormond, Antonio Tassio Santana; Gírio, Lucas Augusto da Silva; Carneiro, Franciele Morlin; da Silva, Rouverson Pereira

    2017-01-01

    The successful establishment of agricultural crops depends on sowing quality, machinery performance, soil type and conditions, among other factors. This study evaluates the operational quality of mechanized peanut sowing in three soil types (sand, silt, and clay) with variable moisture contents. The experiment was conducted in three locations in the state of São Paulo, Brazil. The track-sampling scheme was used for 80 sampling locations of each soil type. Descriptive statistics and statistical process control (SPC) were used to evaluate the quality indicators of mechanized peanut sowing. The variables had normal distributions and were stable from the viewpoint of SPC. The best performance for peanut sowing density, normal spacing, and the initial seedling growing stand was found for clayey soil followed by sandy soil and then silty soil. Sandy or clayey soils displayed similar results regarding sowing depth, which was deeper than in the silty soil. Overall, the texture and the moisture of clayey soil provided the best operational performance for mechanized peanut sowing.
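    The stability check can be sketched with a Shewhart individuals chart, one common SPC tool (the study's exact chart type and limits are not specified here, and the data below are hypothetical):

```python
import statistics

def control_limits(samples):
    """Shewhart-style limits: mean +/- 3 standard deviations (a common
    SPC rule; the study's exact chart construction may differ)."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - 3 * sd, mean, mean + 3 * sd

def is_stable(samples):
    """A process is 'stable' here if no point falls outside the limits."""
    lcl, _, ucl = control_limits(samples)
    return all(lcl <= x <= ucl for x in samples)

# Hypothetical sowing-depth measurements (cm) along a planter track
depths = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
print(is_stable(depths))  # True: every point lies within the 3-sigma limits
```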

  15. Statistical process control applied to mechanized peanut sowing as a function of soil texture

    PubMed Central

    Furlani, Carlos Eduardo Angeli; da Silva, Rouverson Pereira

    2017-01-01

    The successful establishment of agricultural crops depends on sowing quality, machinery performance, soil type and conditions, among other factors. This study evaluates the operational quality of mechanized peanut sowing in three soil types (sand, silt, and clay) with variable moisture contents. The experiment was conducted in three locations in the state of São Paulo, Brazil. The track-sampling scheme was used for 80 sampling locations of each soil type. Descriptive statistics and statistical process control (SPC) were used to evaluate the quality indicators of mechanized peanut sowing. The variables had normal distributions and were stable from the viewpoint of SPC. The best performance for peanut sowing density, normal spacing, and the initial seedling growing stand was found for clayey soil followed by sandy soil and then silty soil. Sandy or clayey soils displayed similar results regarding sowing depth, which was deeper than in the silty soil. Overall, the texture and the moisture of clayey soil provided the best operational performance for mechanized peanut sowing. PMID:28742095

  16. Treatment of automotive industry oily wastewater by electrocoagulation: statistical optimization of the operational parameters.

    PubMed

    GilPavas, Edison; Molina-Tirado, Kevin; Gómez-García, Miguel Angel

    2009-01-01

    An electrocoagulation process was used for the treatment of oily wastewater generated from an automotive industry in Medellín (Colombia). An electrochemical cell consisting of four parallel electrodes (Fe and Al) in bipolar configuration was implemented. A multifactorial experimental design was used for evaluating the influence of several parameters, including the type and arrangement of electrodes, pH, and current density. Oil and grease removal was defined as the response variable for the statistical analysis. Additionally, the BOD(5), COD, and TOC were monitored during the treatment process. According to the results, at the optimum parameter values (current density = 4.3 mA/cm(2), distance between electrodes = 1.5 cm, Fe as anode, and pH = 12) it was possible to reach ca. 95% oil and grease removal, with COD removal and mineralization of 87.4% and 70.6%, respectively. A final biodegradability (BOD(5)/COD) of 0.54 was reached.

  17. Inference of missing data and chemical model parameters using experimental statistics

    NASA Astrophysics Data System (ADS)

    Casey, Tiernan; Najm, Habib

    2017-11-01

    A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Labs is a multimission lab managed and operated by Nat. Technology and Eng'g Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell Intl, for the US DOE NCSA under contract DE-NA-0003525.
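    The data-exploration loop described, proposing hypothetical data sets and accepting those that agree with the reported statistics, is the rejection flavor of approximate Bayesian computation. A toy sketch (the flat prior, noise model, and all numbers are assumptions for illustration):

```python
import random

def abc_rejection(reported_mean, simulate, n_draws=5000, tol=0.05):
    """Rejection-ABC sketch: draw a parameter from a flat prior (an
    assumption), simulate a hypothetical 50-point data set, and accept
    the draw if its sample mean matches the reported statistic."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(0.0, 2.0)            # prior (assumed flat)
        sim = [simulate(theta) for _ in range(50)]  # hypothetical data set
        if abs(sum(sim) / len(sim) - reported_mean) < tol:
            accepted.append(theta)
    return accepted

random.seed(1)
# Toy "experiment": measurements ~ Normal(theta, 0.1); only the mean
# (1.0) was reported, and we recover a posterior over theta from it.
posterior = abc_rejection(1.0, lambda th: random.gauss(th, 0.1))
post_mean = sum(posterior) / len(posterior)
```

    The accepted draws approximate a posterior density over the parameter, analogous to the joint density on data and parameters described above.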

  18. The effect of retained intramedullary nails on tibial bone mineral density.

    PubMed

    Allen, J C; Lindsey, R W; Hipp, J A; Gugala, Z; Rianon, N; LeBlanc, A

    2008-07-01

    Intramedullary nailing has become a standard treatment for adult tibial shaft fractures. Retained intramedullary nails have been associated with stress shielding, although their long-term effect on decreasing tibial bone mineral density is currently unclear. The purpose of this study was to determine if retained tibial intramedullary nails decrease tibial bone mineral density in patients with successfully treated fractures. Patients treated with statically locked intramedullary nails for isolated, unilateral tibial shaft fractures were studied. Inclusion required that the fracture had healed radiographically and that the patient had returned to the pre-injury activity level. Data on patient demographics, fracture type, surgical technique, implant, and post-operative functional status were tabulated. Dual energy X-ray absorptiometry was used to measure bone mineral density in selected regions of the affected tibia and the contralateral intact tibia. Image reconstruction software was employed to ensure symmetry of the studied regions. Twenty patients (mean age 43; range 22-77 years) were studied at a mean of 29 months (range 5-60 months) following intramedullary nailing. There was a statistically significant reduction of mean bone mineral density in tibiae with retained intramedullary nails (1.02 g/cm(2) versus 1.06 g/cm(2); P=0.04). A significantly greater decrease in bone mineral density was detected in the reamed versus non-reamed tibiae (-7% versus +6%, respectively; P<0.05). The present study demonstrates a small, but statistically significant overall bone mineral density decrease in healed tibiae with retained nails. Intramedullary reaming appears to be a factor potentiating the reduction of tibial bone mineral density in long-term nail retention.

  19. Remotely operable compact instruments for measuring atmospheric CO2 and CH4 column densities at surface monitoring sites

    NASA Astrophysics Data System (ADS)

    Kobayashi, N.; Inoue, G.; Kawasaki, M.; Yoshioka, H.; Minomura, M.; Murata, I.; Nagahama, T.; Matsumi, Y.; Tanaka, T.; Morino, I.; Ibuki, T.

    2010-08-01

    Remotely operable compact instruments for measuring atmospheric CO2 and CH4 column densities were developed in two independent systems: one utilizing a grating-based desktop optical spectrum analyzer (OSA) with sufficient resolution to resolve rotational lines of CO2 and CH4 in the regions of 1565-1585 and 1674-1682 nm, respectively; the other an application of an optical fiber Fabry-Perot interferometer (FFPI) to obtain the CO2 column density. Direct sunlight was collimated via a small telescope installed on a portable sun tracker and then transmitted through an optical fiber into the OSA or the FFPI for optical analysis. The near infrared spectra of the OSA were retrieved by a least squares spectral fitting algorithm. The CO2 and CH4 column densities deduced were in excellent agreement with those measured by a high-resolution Fourier transform spectrometer. The rovibronic lines in the wavelength region of 1570-1575 nm were analyzed by the FFPI. The I0 and I values in the Beer-Lambert law equation used to obtain the CO2 column density were deduced by modulating the temperature of the FFPI, which yielded the CO2 column with a statistical error of less than 0.2% for a six-hour measurement.
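    The Beer-Lambert inversion at the heart of the FFPI retrieval is a one-liner: with I = I0·exp(-σN), the column density is N = ln(I0/I)/σ. A sketch with purely illustrative numbers (the cross-section and absorption depth below are assumptions, not values from the paper):

```python
import math

def column_density(I0, I, sigma):
    """Beer-Lambert law I = I0 * exp(-sigma * N), inverted for the
    column density along the line of sight: N = ln(I0 / I) / sigma."""
    return math.log(I0 / I) / sigma

# Illustrative numbers: a 7% absorption dip and an assumed effective
# absorption cross-section in cm^2 per molecule.
sigma = 1.8e-23
N = column_density(1.0, 0.93, sigma)
print(N)  # on the order of 4e21 molecules per cm^2
```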

  20. A statistical model to estimate the local vulnerability to severe weather

    NASA Astrophysics Data System (ADS)

    Pardowitz, Tobias

    2018-06-01

    We present a spatial analysis of weather-related fire brigade operations in Berlin. By comparing operation occurrences to insured losses for a set of severe weather events we demonstrate the representativeness and usefulness of such data in the analysis of weather impacts on local scales. We investigate factors influencing the local rate of operation occurrence. While the rate depends on multiple factors, data for many of which are not available, we focus on publicly available quantities. These include topographic features, land use information based on satellite data and information on urban structure based on data from the OpenStreetMap project. After identifying suitable predictors, such as housing coverage or the local density of the road network, we set up a statistical model to predict the average occurrence frequency of local fire brigade operations. Such a model can be used to determine potential hotspots for weather impacts even in areas or cities where no systematic records are available, and can thus serve as a basis for a broad range of tools and applications in emergency management and planning.

  1. Ground-water quality and effects of poultry confined animal feeding operations on shallow ground water, upper Shoal Creek basin, Southwest Missouri, 2000

    USGS Publications Warehouse

    Mugel, Douglas N.

    2002-01-01

    Forty-seven wells and 8 springs were sampled in May, October, and November 2000 in the upper Shoal Creek Basin, southwest Missouri, to determine if nutrient concentrations and fecal bacteria densities are increasing in the shallow aquifer as a result of poultry confined animal feeding operations (CAFOs). Most of the land use in the basin is agricultural, with cattle and hay production dominating; the number of poultry CAFOs has increased in recent years. Poultry waste (litter) is used as a source of nutrients on pasture land as much as several miles away from poultry barns. Most wells in the sample network were classified as "P" wells, which were open only or mostly to the Springfield Plateau aquifer and where poultry litter was applied to a substantial acreage within 0.5 mile of the well both in spring 2000 and in several previous years; and "Ag" wells, which were open only or mostly to the Springfield Plateau aquifer and which had limited or no association with poultry CAFOs. Water-quality data from wells and springs were grouped for statistical purposes as P1, Ag1, and Sp1 (May 2000 samples) and P2, Ag2, and Sp2 (October or November 2000 samples). The results of this study do not indicate that poultry CAFOs are affecting the shallow ground water in the upper Shoal Creek Basin with respect to nutrient concentrations and fecal bacteria densities. Statistical tests do not indicate that P wells sampled in spring 2000 have statistically larger concentrations of nitrite plus nitrate or fecal indicator bacteria densities than Ag wells sampled during the same time, at a 95-percent confidence level. Instead, the Ag wells had statistically larger concentrations of nitrite plus nitrate and fecal coliform bacteria densities than the P wells. The results of this study do not indicate seasonal variations from spring 2000 to fall 2000 in the concentrations of nutrients or fecal indicator bacteria densities from well samples.
Statistical tests do not indicate statistically significant differences at a 95-percent confidence level for nitrite plus nitrate concentrations or fecal indicator bacteria densities between either P wells sampled in spring and fall 2000, or Ag wells sampled in spring and fall 2000. However, analysis of samples from springs shows that fecal streptococcus bacteria densities were statistically smaller in fall 2000 than in spring 2000 at a 95-percent confidence level. Nitrite plus nitrate concentrations in spring 2000 samples ranged from less than the detection level [0.02 mg/L (milligram per liter) as nitrogen] to 18 mg/L as nitrogen. Seven samples from three wells had nitrite plus nitrate concentrations at or larger than the maximum contaminant level (MCL) of 10 mg/L as nitrogen. The median nitrite plus nitrate concentrations were 0.28 mg/L as nitrogen for P1 samples, 4.6 mg/L as nitrogen for Ag1 samples, and 3.9 mg/L as nitrogen for Sp1 samples. Fecal coliform bacteria were detected in 1 of 25 P1 samples and 5 of 15 Ag1 samples. Escherichia coli (E. coli) bacteria were detected in 3 of 24 P1 samples and 1 of 13 Ag1 samples. Fecal streptococcus bacteria were detected in 8 of 25 P1 samples and 6 of 15 Ag1 samples. Bacteria densities in samples from wells ranged from less than 1 to 81 col/100 mL (colonies per 100 milliliters) of fecal coliform, less than 1 to 140 col/100 mL of E. coli, and less than 1 to 130 col/100 mL of fecal streptococcus. Fecal indicator bacteria densities in samples from springs were substantially larger than in samples from wells. In Sp1 samples, bacteria densities ranged from 12 to 3,300 col/100 mL of fecal coliform, 40 to 2,700 col/100 mL of E. coli, and 42 to 3,100 col/100 mL of fecal streptococcus.

  2. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology, and failure patterns. Statistical analyses and applications, such as goodness-of-fit tests, maximum likelihood estimation, and derivation of confidence intervals for the probability density function parameters, were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
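    Extracting a failure distribution of the kind described can be illustrated with a two-parameter Weibull fit. This sketch uses median-rank least squares on synthetic failure times (one common estimation route alongside maximum likelihood; it is not the study's own algorithm):

```python
import math
import random

def weibull_lsq(times):
    """Two-parameter Weibull fit by least squares on the linearized CDF
    (median-rank regression): ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)."""
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(sorted(times), start=1):
        f = (i - 0.3) / (n + 0.4)  # Benard's median-rank approximation
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    eta = math.exp(mx - my / beta)
    return beta, eta  # shape (Weibull modulus) and scale (characteristic life)

random.seed(7)
# Synthetic failure times: Weibull(shape=2, scale=100) via the inverse CDF
data = [100.0 * (-math.log(1.0 - random.random())) ** 0.5 for _ in range(200)]
beta, eta = weibull_lsq(data)  # estimates near 2 and 100
```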

  3. Photothermal effect of infrared lasers on ex vivo lamb brain tissues

    NASA Astrophysics Data System (ADS)

    Özgürün, Baturay; Gülsoy, Murat

    2018-02-01

    Here, the most suitable infrared laser for a neurosurgery operation is suggested from among 1940-nm thulium fiber, 1470-nm diode, 1070-nm ytterbium fiber and 980-nm diode lasers. Cortical and subcortical ex vivo lamb brain tissues are exposed to laser light under combinations of laser parameters such as output power, energy density, operation mode (continuous and pulsed-modulated) and operation time. In this way, the greatest ablation efficiency associated with the best neurosurgical laser type can be defined. The research can be divided into two parts: pre-dosimetry and dosimetry studies. The former is used to determine safe operation zones for the dosimetry study by defining coagulation and carbonization onset times for each of the brain tissues. The latter is the main part of this research, and both tissues are exposed to laser irradiation with various energy density levels associated with the output power and operation time. In addition, photothermal effects are compared for the two laser operation modes, and coagulation and ablation diameters are then measured under a light microscope to calculate the ablation efficiency. Consequently, results are compared graphically and statistically, and it is found that thulium and 1470-nm diode lasers can be utilized as subcortical and cortical tissue ablator devices, respectively.

  4. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
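    The ensemble in question is easy to sample numerically: a reduced state of a Haar-random bipartite pure state is G·G† for a normalized complex Gaussian matrix G. A short numpy sketch of the difference-matrix spectrum and the trace distance (dimensions are illustrative):

```python
import numpy as np

def reduced_random_state(d, d_env, rng):
    """Reduced density matrix of a Haar-random bipartite pure state:
    for a normalized complex Gaussian d x d_env matrix G, the partial
    trace over the d_env-dimensional environment is rho = G G^dagger."""
    g = rng.normal(size=(d, d_env)) + 1j * rng.normal(size=(d, d_env))
    g /= np.linalg.norm(g)        # normalize the pure state
    return g @ g.conj().T

rng = np.random.default_rng(0)
rho = reduced_random_state(4, 4, rng)
sigma = reduced_random_state(4, 4, rng)

eigs = np.linalg.eigvalsh(rho - sigma)     # spectrum of the difference matrix
trace_distance = 0.5 * np.abs(eigs).sum()  # one of the distance measures used
print(0.0 <= trace_distance <= 1.0)        # True: spectrum is traceless,
                                           # so the trace distance is in [0, 1]
```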

  5. Entropy: Thermodynamic definition and quantum expression

    NASA Astrophysics Data System (ADS)

    Gyftopoulos, Elias P.; Çubukçu, Erol

    1997-04-01

    Numerous expressions exist in the scientific literature purporting to represent entropy. Are they all acceptable? To answer this question, we review the thermodynamic definition of entropy, and establish eight criteria that must be satisfied by it. The definition and criteria are obtained by using solely the general, nonstatistical statements of the first and second laws presented in Thermodynamics: Foundations and Applications [Elias P. Gyftopoulos and Gian Paolo Beretta (Macmillan, New York, 1991)]. We apply the eight criteria to each of the entropy expressions proposed in the literature and find that only the relation S = -k Tr(ρ ln ρ) satisfies all the criteria, provided that the density operator ρ corresponds to a homogeneous ensemble of identical systems, identically prepared. Homogeneous ensemble means that every member of the ensemble is described by the same density operator ρ as any other member, that is, the ensemble is not a statistical mixture of projectors (wave functions).
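    The surviving expression is straightforward to evaluate numerically from the spectrum of the density operator; a short sketch with k set to 1:

```python
import numpy as np

def von_neumann_entropy(rho, k=1.0):
    """S = -k Tr(rho ln rho), evaluated from the eigenvalues of rho;
    zero eigenvalues contribute nothing (x ln x -> 0 as x -> 0)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                # drop numerically-zero eigenvalues
    return float(-k * np.sum(p * np.log(p))) + 0.0  # +0.0 maps -0.0 to 0.0

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto one state: S = 0
mixed = np.eye(2) / 2.0                    # maximally mixed qubit: S = k ln 2
print(von_neumann_entropy(pure))   # 0.0
print(von_neumann_entropy(mixed))  # ~0.693 (= ln 2)
```

    The distinction drawn in the abstract matters here: the same ρ can arise from many statistical mixtures of projectors, but the entropy is a function of ρ alone.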

  6. Flow dynamic environment data base development for the SSME

    NASA Technical Reports Server (NTRS)

    Sundaram, C. V.

    1985-01-01

    The fluid flow-induced vibration of the Space Shuttle main engine (SSME) components are being studied with a view to correlating the frequency characteristics of the pressure fluctuations in a rocket engine to its operating conditions and geometry. An overview of the data base development for SSME test firing results and the interactive computer software used to access, retrieve, and plot or print the results selectively for given thrust levels, engine numbers, etc., is presented. The various statistical methods available in the computer code for data analysis are discussed. Plots of test data, nondimensionalized using parameters such as fluid flow velocities, densities, and pressures, are presented. Results are compared with those available in the literature. Correlations between the resonant peaks observed at higher frequencies in power spectral density plots with pump geometry and operating conditions are discussed. An overview of the status of the investigation is presented and future directions are discussed.

  7. Infrared thermography for wood density estimation

    NASA Astrophysics Data System (ADS)

    López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis

    2018-03-01

    Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of the surface temperature without making contact using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber to be determined by IRT. The cooling of various wood samples has been registered, and a statistical procedure that enables one to quantitatively estimate the density of timber has been designed. This procedure represents a new method to physically characterize this material.

  8. Accessible Information Without Disturbing Partially Known Quantum States on a von Neumann Algebra

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui

    2018-04-01

    This paper addresses the problem of how much information we can extract without disturbing a statistical experiment, which is a family of partially known normal states on a von Neumann algebra. We define the classical part of a statistical experiment as the restriction of the equivalent minimal sufficient statistical experiment to the center of the outcome space, which, in the case of density operators on a Hilbert space, corresponds to the classical probability distributions appearing in the maximal decomposition by Koashi and Imoto [Phys. Rev. A 66, 022318 (2002)]. We show that we can access by a Schwarz or completely positive channel at most the classical part of a statistical experiment if we do not disturb the states. We apply this result to the broadcasting problem of a statistical experiment. We also show that the classical part of the direct product of statistical experiments is the direct product of the classical parts of the statistical experiments. The proof of the latter result is based on the theorem that the direct product of minimal sufficient statistical experiments is also minimal sufficient.

  9. A Theory of Density Layering in Stratified Turbulence using Statistical State Dynamics

    NASA Astrophysics Data System (ADS)

    Fitzgerald, J.; Farrell, B.

    2016-12-01

    Stably stratified turbulent fluids commonly develop density structures that are layered in the vertical direction (e.g., Manucharyan et al., 2015). Within layers, density is approximately constant and stratification is weak. Between layers, density varies rapidly and stratification is strong. A common explanation for the existence of layers invokes the negative diffusion mechanism of Phillips (1972) & Posmentier (1977). The physical principle underlying this mechanism is that the flux-gradient relationship connecting the turbulent fluxes of buoyancy to the background stratification must have the special property of weakening fluxes with strengthening gradient. Under these conditions, the evolution of the stratification is governed by a negative diffusion problem which gives rise to spontaneous layer formation. In previous work on stratified layering, this flux-gradient property is often assumed (e.g., Posmentier, 1977) or drawn from phenomenological models of turbulence (e.g., Balmforth et al., 1998). In this work we develop the theoretical underpinnings of layer formation by applying stochastic turbulence modeling and statistical state dynamics (SSD) to predict the flux-gradient relation and analyze layer formation directly from the equations of motion. We show that for stochastically-forced homogeneous 2D Boussinesq turbulence, the flux-gradient relation can be obtained analytically and indicates that the fluxes always strengthen with stratification. The Phillips mechanism thus does not operate in this maximally simplified scenario. However, when the problem is augmented to include a large scale background shear, we show that the flux-gradient relationship is modified so that the fluxes weaken with stratification. Sheared and stratified 2D Boussinesq turbulence thus spontaneously forms density layers through the Phillips mechanism. 
Using SSD (Farrell & Ioannou 2003), we obtain a closed, deterministic dynamics for the stratification and the statistical turbulent state. We show that density layers form as a linear instability of the sheared turbulence, associated with a supercritical bifurcation. We further show that SSD predicts the nonlinear equilibration and maintenance of the layers, and captures the phenomena of layer growth and mergers (Radko, 2007).

  10. Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.

    PubMed

    Venturi, D; Karniadakis, G E

    2014-06-08

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.

  11. How measurement reversal could erroneously suggest the capability to discriminate the preparation basis of a quantum ensemble

    NASA Astrophysics Data System (ADS)

    Goyal, Sandeep K.; Singh, Rajeev; Ghosh, Sibasish

    2016-01-01

    Mixed states of a quantum system, represented by density operators, can be decomposed as a statistical mixture of pure states in a number of ways, where each decomposition can be viewed as a different preparation recipe. However, the fact that the density matrix contains full information about the ensemble makes it impossible to estimate the preparation basis for the quantum system. Here we present a measurement scheme to (seemingly) improve the performance of unsharp measurements. We argue that in some situations this scheme is capable of providing statistics from a single copy of the quantum system, thus making it possible to perform state tomography from a single copy. One of the by-products of the scheme is a way to distinguish between different preparation methods used to prepare the state of the quantum system. However, our numerical simulations disagree with our intuitive predictions. We show that a counterintuitive property of a biased classical random walk is responsible for the proposed mechanism not working.

  12. Convolutionless Nakajima–Zwanzig equations for stochastic analysis in nonlinear dynamical systems

    PubMed Central

    Venturi, D.; Karniadakis, G. E.

    2014-01-01

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima–Zwanzig–Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection–reaction problems. PMID:24910519

  13. The effect of cathode felt geometries on electrochemical characteristics of sodium sulfur (NaS) cells: Planar vs. tubular

    NASA Astrophysics Data System (ADS)

    Kim, Goun; Park, Yoon-Cheol; Lee, Younki; Cho, Namung; Kim, Chang-Soo; Jung, Keeyoung

    2016-09-01

    Two sodium sulfur (NaS) cells, one with a planar design and the other with a tubular design, were subjected to discharge-charge cycles in order to investigate the effect of cathode felt geometries on the electrochemical characteristics of NaS cells. Their discharge-charge behaviors over 200 cycles were evaluated at an operating temperature of 350 °C with current densities of 100 mA cm⁻² for discharge and 80 mA cm⁻² for charge. The results showed that the deviation from theoretical open circuit voltage changes of the planar cell was smaller than that of the tubular cell, implying a potential reduction in specific power loss during operation. To understand this effect, a three-dimensional, statistically representative matrix for a cathode felt was generated using experimentally measured data. The area-specific fiber number density in the outer region of a tubular cathode felt turns out to be smaller than that of a planar felt, resulting in larger voltage drops via retarded convection of the cathode melt during cell operation.

  14. Influences of operational parameters on phosphorus removal in batch and continuous electrocoagulation process performance.

    PubMed

    Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao

    2017-11-01

    Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, considered individually and in combination: initial phosphorus concentration, wastewater conductivity, current density, and electrolysis time. The results revealed excellent phosphorus removal (72.7-100%) for both processes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m³ of treated wastewater. However, the removal efficiency of phosphorus in the continuous EC operation mode was better than that in batch mode within the scope of the study. Additionally, the rate and efficiency of phosphorus removal strongly depended on operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on the experimental data, a statistical model was also established and verified using response surface methodology (RSM) (multiple-factor optimization) to provide further insights and accurately describe the interactive relationships between the process variables, thus optimizing EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting the performance of the EC process and explaining the influence of the process variables.

  15. Φ⁴ kinks: Statistical mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, S.

    1995-12-31

    Some recent investigations of the thermal equilibrium properties of kinks in a 1+1-dimensional, classical φ⁴ field theory are reviewed. The distribution function, kink density, correlation function, and certain thermodynamic quantities were studied both theoretically and via large scale simulations. A simple double Gaussian variational approach within the transfer operator formalism was shown to give good results in the intermediate temperature range where the dilute gas theory is known to fail.

  16. The supersymmetric method in random matrix theory and applications to QCD

    NASA Astrophysics Data System (ADS)

    Verbaarschot, Jacobus

    2004-12-01

    The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction to the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing, which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the second-to-last lecture is recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the non-Hermiticity of the Dirac operator, the usual supersymmetric method has not been successful in this case. 
However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.

  17. Random matrix theory for transition strengths: Applications and open questions

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2017-12-01

    Embedded random matrix ensembles are generic models for describing statistical properties of finite isolated interacting quantum many-particle systems. A finite quantum system, induced by a transition operator, makes transitions from its states to the states of the same system or to those of another system. Examples are electromagnetic transitions (then the initial and final systems are the same), nuclear beta and double beta decay (then the initial and final systems are different) and so on. Using embedded ensembles (EE), there are efforts to derive a good statistical theory for transition strengths. With m fermions (or bosons) in N mean-field single particle levels and interacting via two-body forces, we have, with GOE embedding, the so-called EGOE(1+2). Now, the transition strength density (transition strength multiplied by the density of states at the initial and final energies) is a convolution of the density generated by the mean-field one-body part with a bivariate spreading function due to the two-body interaction. Using the embedding U(N) algebra, it is established, for a variety of transition operators, that the spreading function, for sufficiently strong interactions, is close to a bivariate Gaussian. Also, as the interaction strength increases, the spreading function exhibits a transition from bivariate Breit-Wigner to bivariate Gaussian form. In appropriate limits, this EE theory reduces to the polynomial theory of Draayer, French and Wong on one hand and to the theory due to Flambaum and Izrailev for one-body transition operators on the other. Using spin-cutoff factors for projecting angular momentum, the theory is applied to nuclear matrix elements for neutrinoless double beta decay (NDBD). In this paper we will describe: (i) various developments in the EE theory for transition strengths; (ii) results for nuclear matrix elements for 130Te and 136Xe NDBD; (iii) important open questions in the current form of the EE theory.

  18. Evaluation of methods to estimate lake herring spawner abundance in Lake Superior

    USGS Publications Warehouse

    Yule, D.L.; Stockwell, J.D.; Cholwek, G.A.; Evrard, L.M.; Schram, S.; Seider, M.; Symbal, M.

    2006-01-01

    Historically, commercial fishers harvested Lake Superior lake herring Coregonus artedi for their flesh, but recently operators have targeted lake herring for roe. Because no surveys have estimated spawning female abundance, direct estimates of fishing mortality are lacking. The primary objective of this study was to determine the feasibility of using acoustic techniques in combination with midwater trawling to estimate spawning female lake herring densities in a Lake Superior statistical grid (i.e., a 10′ latitude × 10′ longitude area over which annual commercial harvest statistics are compiled). Midwater trawling showed that mature female lake herring were largely pelagic during the night in late November, accounting for 94.5% of all fish caught exceeding 250 mm total length. When calculating acoustic estimates of mature female lake herring, we excluded backscattering from smaller pelagic fishes like immature lake herring and rainbow smelt Osmerus mordax by applying an empirically derived threshold of −35.6 dB. We estimated the average density of mature females in statistical grid 1409 at 13.3 fish/ha and the total number of spawning females at 227,600 (95% confidence interval = 172,500–282,700). Using information on mature female densities, size structure, and fecundity, we estimate that females deposited 3.027 billion (10⁹) eggs in grid 1409 (95% confidence interval = 2.356–3.778 billion). The relative estimation error of the mature female density estimate derived using a geostatistical model-based approach was low (12.3%), suggesting that the employed method was robust. Fishing mortality rates of all mature females and their eggs were estimated at 2.3% and 3.8%, respectively. The techniques described for enumerating spawning female lake herring could be used to develop a more accurate stock–recruitment model for Lake Superior lake herring.

  19. Statistical modeling of Earth's plasmasphere

    NASA Astrophysics Data System (ADS)

    Veibell, Victoir

    The behavior of plasma near Earth's geosynchronous orbit is of vital importance to both satellite operators and magnetosphere modelers because it also has a significant influence on energy transport, ion composition, and induced currents. The system is highly complex in both time and space, making the forecasting of extreme space weather events difficult. This dissertation examines the behavior and statistical properties of plasma mass density near geosynchronous orbit by using both linear and nonlinear models, as well as epoch analyses, in an attempt to better understand the physical processes that precipitate and drive its variations. It is shown that while equatorial mass density does vary significantly on an hourly timescale when a drop in the disturbance storm time index (Dst) was observed, it does not vary significantly between the day of a Dst event onset and the day immediately following. It is also shown that increases in equatorial mass density were not, on average, preceded or followed by any significant change in the examined solar wind or geomagnetic variables, including Dst, despite prior results that considered a few selected events and found a notable influence. It is verified that equatorial mass density and solar activity, via the F10.7 index, have a strong correlation, which is stronger over longer timescales such as 27 days than over an hourly timescale. It is then shown that this connection seems to affect the behavior of equatorial mass density most during periods of strong solar activity, leading to large mass density reactions to Dst drops for high values of F10.7. It is also shown that equatorial mass density behaves differently before and after events based on the value of F10.7 at the onset of an equatorial mass density event or a Dst event, and that a southward interplanetary magnetic field at onset leads to slowed mass density growth after event onset. 
These behavioral differences provide insight into how solar and geomagnetic conditions impact mass density at geosynchronous orbit, enabling operators to better anticipate the response to space weather events and magnetosphere models to include mass density effects in magnetosphere simulations. It is shown that it is possible to classify an equatorial mass density event onset as being distinct from the three hours preceding it, indicating that there are distinguishing characteristics of the solar wind and geomagnetic conditions surrounding an event. It has also been shown that, given four days of solar and geomagnetic conditions, an event can be forecasted a day in advance with reasonable accuracy, but also with a number of false positives. These false positives have similarly distributed values to the true positives, though, indicating that more data are needed to distinguish impending events.

  20. Periodic orbit spectrum in terms of Ruelle-Pollicott resonances

    NASA Astrophysics Data System (ADS)

    Leboeuf, P.

    2004-02-01

    Fully chaotic Hamiltonian systems possess an infinite number of classical solutions which are periodic, e.g., a trajectory “p” returns to its initial conditions after some fixed time τp. Our aim is to investigate the spectrum {τ1,τ2,…} of periods of the periodic orbits. An explicit formula for the density ρ(τ)=∑pδ(τ-τp) is derived in terms of the eigenvalues of the classical evolution operator. The density is naturally decomposed into a smooth part plus an interferent sum over oscillatory terms. The frequencies of the oscillatory terms are given by the imaginary part of the complex eigenvalues (Ruelle-Pollicott resonances). For large periods, corrections to the well-known exponential growth of the smooth part of the density are obtained. An alternative formula for ρ(τ) in terms of the zeros and poles of the Ruelle ζ function is also discussed. The results are illustrated with the geodesic motion in billiards of constant negative curvature. Connections with the statistical properties of the corresponding quantum eigenvalues, random-matrix theory, and discrete maps are also considered. In particular, a random-matrix conjecture is proposed for the eigenvalues of the classical evolution operator of chaotic billiards.

  1. Reliable enumeration of malaria parasites in thick blood films using digital image analysis.

    PubMed

    Frean, John A

    2009-09-23

    Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. 
The requirements, namely a digital microscope camera, a personal computer, and good-quality staining of slides, should be reasonably easy for most laboratories to meet.
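    The core of the counting process described above, labeling connected foreground blobs in a thresholded image and keeping only those whose area falls inside a size window derived from the parasite size distribution, can be sketched as follows. The image, size window, and blob layout are illustrative assumptions, not data from the study.

```python
# Size-filtered particle counting: label 4-connected foreground components
# in a binary (thresholded) image and count those with parasite-like area.

def count_particles(img, min_size, max_size):
    """Count 4-connected components with area in [min_size, max_size]."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                # flood fill to measure this component's area
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_size <= area <= max_size:
                    count += 1
    return count

# Synthetic "thick film": three parasite-sized blobs, one single-pixel
# noise speck, and one oversized stain that the size window rejects.
img = [
    [1, 1, 0, 0, 0, 1, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 0, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1],
]
assert count_particles(img, 3, 9) == 3   # speck and stain excluded
```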

  2. Nanoscale temperature mapping in operating microelectronic devices

    DOE PAGES

    Mecklenburg, Matthew; Hubbard, William A.; White, E. R.; ...

    2015-02-05

    We report that modern microelectronic devices have nanoscale features that dissipate power nonuniformly, but fundamental physical limits frustrate efforts to detect the resulting temperature gradients. Contact thermometers disturb the temperature of a small system, while radiation thermometers struggle to beat the diffraction limit. Exploiting the same physics as Fahrenheit’s glass-bulb thermometer, we mapped the thermal expansion of Joule-heated, 80-nanometer-thick aluminum wires by precisely measuring changes in density. With a scanning transmission electron microscope (STEM) and electron energy loss spectroscopy (EELS), we quantified the local density via the energy of aluminum’s bulk plasmon. Rescaling density to temperature yields maps with a statistical precision of 3 K Hz⁻¹/², an accuracy of 10%, and nanometer-scale resolution. Lastly, many common metals and semiconductors have sufficiently sharp plasmon resonances to serve as their own thermometers.

  3. Algebraic aspects of the driven dynamics in the density operator and correlation functions calculation for multi-level open quantum systems

    NASA Astrophysics Data System (ADS)

    Bogolubov, Nikolai N.; Soldatov, Andrey V.

    2017-12-01

    Exact and approximate master equations were derived by the projection operator method for the reduced statistical operator of a multi-level quantum system with a finite number N of quantum eigenstates interacting with arbitrary external classical fields and a dissipative environment simultaneously. It was shown that the structure of these equations can be simplified significantly if the free Hamiltonian driven dynamics of an arbitrary quantum multi-level system under the influence of the external driving fields, as well as its Markovian and non-Markovian evolution stipulated by the interaction with the environment, are described in terms of the SU(N) algebra representation. As a consequence, efficient numerical methods can be developed and employed to analyze these master equations for real problems in various fields of theoretical and applied physics. It was also shown that literally the same master equations hold not only for the reduced density operator but also for arbitrary nonequilibrium multi-time correlation functions, under the only assumption that the system and the environment are uncorrelated at some initial moment of time. A calculational scheme was proposed to account for these lost correlations in a regular perturbative way, thus providing additional computable terms to the corresponding master equations for the correlation functions.
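    The SU(N) representation the abstract relies on can be illustrated in the simplest case, N = 2, where any density operator expands over the identity and the three SU(2) generators (the Pauli matrices) as ρ = (I + v·σ)/2 with |v| ≤ 1. This is a generic textbook fact, not code from the paper.

```python
import numpy as np

# SU(2) (Bloch) representation of a qubit density operator:
#   rho = (I + v . sigma) / 2, with expansion coefficients v_i = Tr(rho sigma_i).

sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_z
]

def rho_from_bloch(v):
    """Density operator from a Bloch vector v (three reals, |v| <= 1)."""
    return 0.5 * (np.eye(2, dtype=complex)
                  + sum(vi * s for vi, s in zip(v, sigma)))

def bloch_from_rho(rho):
    """Recover the SU(2) expansion coefficients v_i = Tr(rho sigma_i)."""
    return [np.trace(rho @ s).real for s in sigma]

v = [0.3, -0.4, 0.5]                 # |v| < 1 => a valid mixed state
rho = rho_from_bloch(v)
assert abs(np.trace(rho) - 1) < 1e-12             # unit trace
assert np.allclose(rho, rho.conj().T)             # Hermitian
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positive semidefinite
assert np.allclose(bloch_from_rho(rho), v)        # coefficients recovered
```

For general N the same bookkeeping runs over the N² − 1 SU(N) generators, which is what makes the master equations tractable numerically.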

  4. Greenland annual accumulation along the EGIG line, 1959-2004, from ASIRAS airborne radar and neutron-probe density measurements

    NASA Astrophysics Data System (ADS)

    Overly, Thomas B.; Hawley, Robert L.; Helm, Veit; Morris, Elizabeth M.; Chaudhary, Rohan N.

    2016-08-01

    We report annual snow accumulation rates from 1959 to 2004 along a 250 km segment of the Expéditions Glaciologiques Internationales au Groenland (EGIG) line across central Greenland using Airborne SAR/Interferometric Radar Altimeter System (ASIRAS) radar layers and high resolution neutron-probe (NP) density profiles. ASIRAS-NP-derived accumulation rates are not statistically different (95% confidence interval) from in situ EGIG accumulation measurements from 1985 to 2004. ASIRAS-NP-derived accumulation increases by 20% below 3000 m elevation and by 13% above 3000 m elevation for the period 1995 to 2004 compared to 1985 to 1994. Three Regional Climate Models (PolarMM5, RACMO2.3, MAR) underestimate snow accumulation below 3000 m by 16-20% compared to ASIRAS-NP from 1985 to 2004. We test the sensitivity of radar-derived accumulation rates to density by using modeled density profiles in place of NP densities. ASIRAS radar layers combined with Herron and Langway (1980) model density profiles (ASIRAS-HL) produce accumulation rates within 3.5% of ASIRAS-NP estimates in the dry snow region. We suggest using Herron and Langway (1980) density profiles to calibrate radar layers detected in dry snow regions of ice sheets lacking detailed in situ density measurements, such as those observed by the Operation IceBridge campaign.

  5. How big should a mammal be? A macroecological look at mammalian body size over space and time

    PubMed Central

    Smith, Felisa A.; Lyons, S. Kathleen

    2011-01-01

    Macroecology was developed as a big picture statistical approach to the study of ecology and evolution. By focusing on broadly occurring patterns and processes operating at large spatial and temporal scales rather than on localized and/or fine-scaled details, macroecology aims to uncover general mechanisms operating at organism, population, and ecosystem levels of organization. Macroecological studies typically involve the statistical analysis of fundamental species-level traits, such as body size, area of geographical range, and average density and/or abundance. Here, we briefly review the history of macroecology and use the body size of mammals as a case study to highlight current developments in the field, including the increasing linkage with biogeography and other disciplines. Characterizing the factors underlying the spatial and temporal patterns of body size variation in mammals is a daunting task and moreover, one not readily amenable to traditional statistical analyses. Our results clearly illustrate remarkable regularities in the distribution and variation of mammalian body size across both geographical space and evolutionary time that are related to ecology and trophic dynamics and that would not be apparent without a broader perspective. PMID:21768152

  6. Damage detection of engine bladed-disks using multivariate statistical analysis

    NASA Astrophysics Data System (ADS)

    Fang, X.; Tang, J.

    2006-03-01

    The timely detection of damage in aero-engine bladed-disks is an extremely important and challenging research topic. Bladed-disks have high modal density and, particularly, their vibration responses are subject to significant uncertainties due to manufacturing tolerance (blade-to-blade difference or mistuning), operating condition change and sensor noise. In this study, we present a new methodology for the on-line damage detection of engine bladed-disks using their vibratory responses during spin-up or spin-down operations, which can be measured by the blade-tip-timing sensing technique. We apply a principal component analysis (PCA)-based approach for data compression, feature extraction, and denoising. The non-model-based damage detection is achieved by analyzing the change between response features of the healthy structure and of the damaged one. We facilitate such comparison by incorporating Hotelling's T² statistic, which yields damage declaration with a given confidence level. The effectiveness of the method is demonstrated by case studies.
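    The PCA-plus-Hotelling's T² pipeline described above can be sketched in a few lines. The data below are synthetic stand-ins for vibratory-response features (all dimensions, scales, and the "damaged" shift are assumptions for illustration, not engine data).

```python
import numpy as np

# Fit PCA on features of the healthy structure, then score new responses
# with Hotelling's T^2 in the retained principal-component subspace; a
# large T^2 relative to the healthy baseline declares damage.

rng = np.random.default_rng(42)

# synthetic healthy responses: 500 samples of 6 features with unequal scales
healthy = rng.normal(size=(500, 6)) * np.array([3.0, 2.0, 1.0, 0.5, 0.3, 0.1])
mean = healthy.mean(axis=0)
cov = np.cov(healthy - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)      # ascending eigenvalues
k = 3                                      # keep k leading components
pcs, variances = eigvec[:, -k:], eigval[-k:]

def t2(x):
    """Hotelling's T^2 of one feature vector in the k-D PC subspace."""
    scores = (x - mean) @ pcs
    return float(np.sum(scores**2 / variances))

baseline = np.median([t2(x) for x in healthy])
# a response shifted far along the dominant directions mimics damage
damaged = mean + 10.0 * np.array([3.0, 2.0, 1.0, 0.0, 0.0, 0.0])
assert t2(damaged) > 10 * baseline    # damage stands out from the baseline
```

In practice the declaration threshold would come from the chi-square distribution of T² at a chosen confidence level rather than from a fixed multiple of the baseline.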

  7. Predicting the response of populations to environmental change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ives, A.R.

    1995-04-01

    When subject to long-term directional environmental perturbations, changes in population densities depend on the positive and negative feedbacks operating through interactions within and among species in a community. This paper develops techniques to predict the long-term responses of population densities to environmental changes using data on short-term population fluctuations driven by short-term environmental variability. In addition to giving quantitative predictions, the techniques also reveal how different qualitative patterns of species interactions either buffer or accentuate population responses to environmental trends. All of the predictions are based on regression coefficients extracted from time series data, and they can therefore be applied with a minimum of mathematical and statistical gymnastics. 48 refs., 10 figs., 4 tabs.
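    One common realization of this program, offered here only as a hedged sketch with simulated data, is to estimate interaction coefficients from short-term fluctuations with a first-order autoregression, x[t+1] = B x[t] + e[t], and then predict the long-term shift of mean densities under a sustained environmental press u as Δx* = (I − B)⁻¹ u. The interaction matrix and press vector below are invented for illustration.

```python
import numpy as np

# Estimate the community matrix B by regressing x[t+1] on x[t], then
# predict the equilibrium response to a sustained press perturbation u.

rng = np.random.default_rng(1)
B_true = np.array([[0.5, -0.2],
                   [0.1,  0.4]])   # assumed within/between-species feedbacks

# short-term fluctuations around equilibrium, driven by environmental noise
T = 4000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = B_true @ x[t] + rng.normal(scale=0.1, size=2)

# least-squares regression of x[t+1] on x[t] recovers B (transposed)
coef, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
B_hat = coef.T

u = np.array([0.05, -0.02])        # sustained directional press (assumed)
shift_pred = np.linalg.solve(np.eye(2) - B_hat, u)
shift_true = np.linalg.solve(np.eye(2) - B_true, u)
assert np.allclose(shift_pred, shift_true, atol=0.05)
```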

  8. Breast density characterization using texton distributions.

    PubMed

    Petroudi, Styliani; Brady, Michael

    2011-01-01

    Breast density has been shown to be one of the most significant risks for developing breast cancer, with women with dense breasts at four to six times higher risk. The Breast Imaging Reporting and Data System (BI-RADS) has a four class classification scheme that describes the different breast densities. However, there is great inter- and intra-observer variability among clinicians in reporting a mammogram's density class. This work presents a novel texture classification method and its application for the development of a completely automated breast density classification system. The new method represents the mammogram using textons, which can be thought of as the building blocks of texture under the operational definition of Leung and Malik as clustered filter responses. The new proposed method characterizes the mammographic appearance of the different density patterns by evaluating the texton spatial dependence matrix (TSDM) in the breast region's corresponding texton map. The TSDM is a texture model that captures both statistical and structural texture characteristics. The normalized TSDM matrices are evaluated for mammograms from the different density classes and corresponding texture models are established. Classification is achieved using a chi-square distance measure. The fully automated TSDM breast density classification method is quantitatively evaluated on mammograms from all density classes from the Oxford Mammogram Database. The incorporation of texton spatial dependencies allows for classification accuracy reaching over 82%. The breast density classification accuracy is better using texton TSDM compared to simple texton histograms.
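    The chi-square classification step can be sketched simply. For brevity the per-class models below are plain normalized histograms (the paper uses full texton spatial dependence matrices), and all bin values are invented for illustration.

```python
# Nearest-model classification under the chi-square distance:
#   chi2(h, g) = sum_i (h_i - g_i)^2 / (h_i + g_i)   over nonzero bins.

def chi2(h, g):
    """Chi-square distance between two normalized histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h, g) if a + b > 0)

def normalize(h):
    s = float(sum(h))
    return [v / s for v in h]

# Illustrative per-class texton statistics (one model per BI-RADS class).
models = {
    "BI-RADS I":   normalize([40, 30, 20, 10]),
    "BI-RADS II":  normalize([25, 35, 25, 15]),
    "BI-RADS III": normalize([15, 25, 35, 25]),
    "BI-RADS IV":  normalize([10, 20, 30, 40]),
}

def classify(hist):
    """Assign the class whose model is at minimum chi-square distance."""
    hist = normalize(hist)
    return min(models, key=lambda c: chi2(hist, models[c]))

assert classify([38, 31, 21, 10]) == "BI-RADS I"
assert classify([11, 19, 31, 39]) == "BI-RADS IV"
```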

  9. Deep convolutional neural network for mammographic density segmentation

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.

    2018-02-01

    Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to dense region or fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation r=0.96 between the DCNN estimation and interactive segmentation by radiologists while that of the feature-based statistical learning approach vs radiologists' segmentation had a correlation r=0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists (p < 0.0001) by two-tailed paired t-test. This study demonstrated that the DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.

  10. Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model

    NASA Astrophysics Data System (ADS)

    Kanazawa, Takuya; Kieburg, Mario

    2018-06-01

    We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.

  11. Computation of the asymptotic states of modulated open quantum systems with a numerically exact realization of the quantum trajectory method

    NASA Astrophysics Data System (ADS)

    Volokitin, V.; Liniov, A.; Meyerov, I.; Hartmann, M.; Ivanchenko, M.; Hänggi, P.; Denisov, S.

    2017-11-01

    Quantum systems out of equilibrium are presently a subject of active research, both in theoretical and experimental domains. In this work, we consider time-periodically modulated quantum systems that are in contact with a stationary environment. Within the framework of a quantum master equation, the asymptotic states of such systems are described by time-periodic density operators. Resolution of these operators constitutes a nontrivial computational task. Approaches based on spectral and iterative methods are restricted to systems with the dimension of the hosting Hilbert space dim H = N ≲ 300, while the direct long-time numerical integration of the master equation becomes increasingly problematic for N ≳ 400, especially when the coupling to the environment is weak. To go beyond this limit, we use the quantum trajectory method, which unravels the master equation for the density operator into a set of stochastic processes for wave functions. The asymptotic density matrix is calculated by performing a statistical sampling over the ensemble of quantum trajectories, preceded by a long transient propagation. We follow the ideology of event-driven programming and construct a new algorithmic realization of the method. The algorithm is computationally efficient, allowing for long "leaps" forward in time. It is also numerically exact, in the sense that, being given the list of uniformly distributed (on the unit interval) random numbers, {η1, η2, ..., ηn}, one could propagate a quantum trajectory (with ηi's as norm thresholds) in a numerically exact way. By using a scalable N-particle quantum model, we demonstrate that the algorithm allows us to resolve the asymptotic density operator of the model system with N = 2000 states on a regular-size computer cluster, thus reaching the scale on which numerical studies of modulated Hamiltonian systems are currently performed.
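The norm-threshold unraveling described in this record can be illustrated on a toy two-level system: drift under an effective non-Hermitian Hamiltonian, apply a jump when the squared norm falls below a freshly drawn η, and average projectors over trajectories. A minimal sketch; the model, rates, and first-order integrator below are illustrative choices, not the paper's event-driven, numerically exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level system: decay |1> -> |0> at rate gamma (illustrative only).
gamma, dt, t_end = 1.0, 0.001, 2.0
H = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)            # bare Hamiltonian
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)   # jump operator
H_eff = H - 0.5j * (L.conj().T @ L)                              # non-Hermitian drift

def trajectory(psi0):
    """One quantum trajectory: drift until the squared norm crosses a random
    threshold eta, then apply the jump operator and renormalize."""
    psi = psi0.copy()
    eta = rng.random()                        # norm threshold for the next jump
    for _ in range(int(t_end / dt)):
        psi = psi - 1j * dt * (H_eff @ psi)   # first-order Euler drift step
        if np.vdot(psi, psi).real < eta:      # threshold crossed: jump
            psi = L @ psi
            psi /= np.linalg.norm(psi)
            eta = rng.random()
    return psi / np.linalg.norm(psi)

# Statistical sampling over trajectories approximates the density matrix.
n_traj = 200
psi0 = np.array([0.0, 1.0], dtype=complex)    # start in the excited state
rho = sum(np.outer(p, p.conj()) for p in (trajectory(psi0) for _ in range(n_traj))) / n_traj
```

Averaging the normalized projectors yields a unit-trace, Hermitian density matrix; with more trajectories the sampling error of its entries shrinks as 1/sqrt(n_traj).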

  12. Computation of the asymptotic states of modulated open quantum systems with a numerically exact realization of the quantum trajectory method.

    PubMed

    Volokitin, V; Liniov, A; Meyerov, I; Hartmann, M; Ivanchenko, M; Hänggi, P; Denisov, S

    2017-11-01

    Quantum systems out of equilibrium are presently a subject of active research, both in theoretical and experimental domains. In this work, we consider time-periodically modulated quantum systems that are in contact with a stationary environment. Within the framework of a quantum master equation, the asymptotic states of such systems are described by time-periodic density operators. Resolution of these operators constitutes a nontrivial computational task. Approaches based on spectral and iterative methods are restricted to systems with the dimension of the hosting Hilbert space dim H = N ≲ 300, while the direct long-time numerical integration of the master equation becomes increasingly problematic for N ≳ 400, especially when the coupling to the environment is weak. To go beyond this limit, we use the quantum trajectory method, which unravels the master equation for the density operator into a set of stochastic processes for wave functions. The asymptotic density matrix is calculated by performing a statistical sampling over the ensemble of quantum trajectories, preceded by a long transient propagation. We follow the ideology of event-driven programming and construct a new algorithmic realization of the method. The algorithm is computationally efficient, allowing for long "leaps" forward in time. It is also numerically exact, in the sense that, being given the list of uniformly distributed (on the unit interval) random numbers, {η1, η2, ..., ηn}, one could propagate a quantum trajectory (with ηi's as norm thresholds) in a numerically exact way. By using a scalable N-particle quantum model, we demonstrate that the algorithm allows us to resolve the asymptotic density operator of the model system with N = 2000 states on a regular-size computer cluster, thus reaching the scale on which numerical studies of modulated Hamiltonian systems are currently performed.

  13. Learning Multisensory Integration and Coordinate Transformation via Density Estimation

    PubMed Central

    Sabes, Philip N.

    2013-01-01

    Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
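The contrastive-divergence training named in this record can be sketched for a plain binary restricted Boltzmann machine. Dimensions, seed, and learning rate below are illustrative; the paper's unisensory/multisensory population architecture is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: batch of visible vectors, shape (n_batch, n_vis);
    W: weights (n_vis, n_hid); b, c: visible and hidden biases."""
    ph0 = sigmoid(v0 @ W + c)                        # up pass: P(h=1 | data)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                      # down pass: reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)                        # up pass on reconstruction
    n = v0.shape[0]
    W = W + lr * (v0.T @ ph0 - v1.T @ ph1) / n       # data minus model statistics
    b = b + lr * (v0 - v1).mean(axis=0)
    c = c + lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

The update pushes the weights toward the data statistics and away from one-step Gibbs "reconstruction" statistics, which is what makes the rule biologically plausible in the sense cited above: it needs only locally available pre/post activity products.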

  14. A Geospatial Statistical Analysis of the Density of Lottery Outlets within Ethnically Concentrated Neighborhoods

    ERIC Educational Resources Information Center

    Wiggins, Lyna; Nower, Lia; Mayers, Raymond Sanchez; Peterson, N. Andrew

    2010-01-01

    This study examines the density of lottery outlets within ethnically concentrated neighborhoods in Middlesex County, New Jersey, using geospatial statistical analyses. No prior studies have empirically examined the relationship between lottery outlet density and population demographics. Results indicate that lottery outlets were not randomly…

  15. Favre-Averaged Turbulence Statistics in Variable Density Mixing of Buoyant Jets

    NASA Astrophysics Data System (ADS)

    Charonko, John; Prestridge, Kathy

    2014-11-01

    Variable density mixing of a heavy fluid jet with lower density ambient fluid in a subsonic wind tunnel was experimentally studied using Particle Image Velocimetry and Planar Laser Induced Fluorescence to simultaneously measure velocity and density. Flows involving the mixing of fluids with large density ratios are important in a range of physical problems including atmospheric and oceanic flows, industrial processes, and inertial confinement fusion. Here we focus on buoyant jets with coflow. Results from two different Atwood numbers, 0.1 (Boussinesq limit) and 0.6 (non-Boussinesq case), reveal that buoyancy is important for most of the turbulent quantities measured. Statistical characteristics of the mixing that are important for modeling these flows, such as the PDFs of density and density gradients, turbulent kinetic energy, Favre-averaged Reynolds stress, turbulent mass flux velocity, density-specific volume correlation, and density power spectra, were also examined and compared with previous direct numerical simulations. Additionally, a method for directly estimating Reynolds-averaged velocity statistics on a per-pixel basis is extended to Favre averages, yielding improved accuracy and spatial resolution as compared to traditional post-processing of velocity and density fields.
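Favre (density-weighted) averaging, central to the statistics in this record, is a one-line operation on simultaneous density and velocity samples. A sketch under the assumption of 1-D sample arrays (function names are illustrative):

```python
import numpy as np

def favre_average(rho, u):
    """Density-weighted (Favre) average: u_tilde = <rho u> / <rho>."""
    return (rho * u).mean() / rho.mean()

def favre_reynolds_stress(rho, u):
    """Favre-averaged Reynolds stress <rho u'' u''> / <rho>, with u'' = u - u_tilde."""
    fluct = u - favre_average(rho, u)
    return (rho * fluct ** 2).mean() / rho.mean()
```

For constant density the Favre average reduces to the plain Reynolds average; with variable density it weights the heavy fluid more strongly, which is why the two averages differ in the non-Boussinesq case.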

  16. Statistics of Smoothed Cosmic Fields in Perturbation Theory. I. Formulation and Useful Formulae in Second-Order Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Matsubara, Takahiko

    2003-02-01

    We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of rescaled threshold by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than that plotted against the direct threshold. There is still a slight meatball shift against rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.

  17. Self-consistent mean-field approach to the statistical level density in spherical nuclei

    NASA Astrophysics Data System (ADS)

    Kolomietz, V. M.; Sanzhur, A. I.; Shlomo, S.

    2018-06-01

    A self-consistent mean-field approach within the extended Thomas-Fermi approximation with Skyrme forces is applied to the calculations of the statistical level density in spherical nuclei. Landau's concept of quasiparticles with the nucleon effective mass and the correct description of the continuum states for the finite-depth potentials are taken into consideration. The A dependence and the temperature dependence of the statistical inverse level-density parameter K are obtained in good agreement with experimental data.

  18. The statistics of primordial density fluctuations

    NASA Astrophysics Data System (ADS)

    Barrow, John D.; Coles, Peter

    1990-05-01

    The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.

  19. Methods for Combination of GRACE Gravimetry and ICESat Altimetry over Antarctica on Monthly Timescales

    NASA Astrophysics Data System (ADS)

    Hardy, R. A.; Nerem, R. S.; Wiese, D. N.

    2017-12-01

    Gravimetry and altimetry-derived surface elevation changes provide different perspectives on mass variability in Antarctica. In anticipation of the concurrent operation of the successors of GRACE and ICESat, GRACE Follow-On and ICESat-2, we approach the problem of combining these data for enhanced spatial resolution and disaggregation of Antarctica's major mass transport processes. Using elevation changes gathered from over 500 million overlapping ICESat laser shot pairs between 2003 and 2009, we construct gridded models of Antarctic elevation change for each ICESat operational period. Comparing these elevation grids with temporally registered JPL RL05M mascon solutions, we exploit the relationship between surface mass flux and elevation change to inform estimates of effective surface density. These density estimates enable solutions for glacial isostatic adjustment and monthly estimates of surface mass change. These are used alongside spatial statistics from both the data and models of surface mass balance to produce enhanced estimates of Antarctic mass balance. We validate our solutions by modeling the effects of elastic loading and GIA from these solutions on the vertical motion of Antarctica's GNSS sites.

  20. Automated structure solution, density modification and model building.

    PubMed

    Terwilliger, Thomas C

    2002-11-01

    The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. Automated model building beginning with an FFT-based search for helices and sheets has been successful for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.

  1. Three methods for estimating a range of vehicular interactions

    NASA Astrophysics Data System (ADS)

    Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana

    2018-02-01

    We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All of the methods introduced reveal that the minimum number of actively-followed vehicles is two. This supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimations used is surprisingly credible. In all cases we have found that the interaction range (the number of actively-followed vehicles) drops with traffic density. Whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to two predecessors only.

  2. Statistical Study on Variations of the Ionospheric Ion Density Observed by DEMETER and Related to Seismic Activities

    NASA Astrophysics Data System (ADS)

    Yan, Rui; Parrot, Michel; Pinçon, Jean-Louis

    2017-12-01

    In this paper, we present the result of a statistical study performed on the ionospheric ion density variations above areas of seismic activity. The ion density was observed by the low altitude satellite DEMETER between 2004 and 2010. In the statistical analysis a superposed epoch method is used where the observed ionospheric ion density close to the epicenters both in space and in time is compared to background values recorded at the same location and in the same conditions. Data associated with aftershocks have been carefully removed from the database to prevent spurious effects on the statistics. It is shown that, during nighttime, anomalous ionospheric perturbations related to earthquakes with magnitudes larger than 5 are evidenced. At the time of these perturbations the background ion fluctuation departs from a normal distribution. They occur up to 200 km from the epicenters and mainly 5 days before the earthquakes. As expected, an ion density perturbation occurring just after the earthquakes and close to the epicenters is also evidenced.
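The superposed-epoch comparison described in this record amounts to stacking the series around event times and expressing the average in units of the background spread. A minimal sketch (window length and the whole-series normalization are illustrative choices, not DEMETER processing details):

```python
import numpy as np

def superposed_epoch(series, event_idx, half_window=10):
    """Stack a time series around event epochs, average, and express the
    result in units of the background standard deviation."""
    segs = [series[i - half_window:i + half_window + 1]
            for i in event_idx
            if i - half_window >= 0 and i + half_window < len(series)]
    stack = np.mean(segs, axis=0)
    return (stack - series.mean()) / series.std(ddof=1)
```

Averaging over many epochs suppresses uncorrelated background fluctuations while any perturbation locked to the events survives, which is the rationale for detecting weak pre-earthquake anomalies this way.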

  3. Investigation of the performance characteristics of Doppler radar technique for aircraft collision hazard warning, phase 3

    NASA Technical Reports Server (NTRS)

    1972-01-01

    System studies, equipment simulation, hardware development and flight tests which were conducted during the development of aircraft collision hazard warning system are discussed. The system uses a cooperative, continuous wave Doppler radar principle with pseudo-random frequency modulation. The report presents a description of the system operation and deals at length with the use of pseudo-random coding techniques. In addition, the use of mathematical modeling and computer simulation to determine the alarm statistics and system saturation characteristics in terminal area traffic of variable density is discussed.

  4. A question of separation: disentangling tracer bias and gravitational non-linearity with counts-in-cells statistics

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Feix, M.; Codis, S.; Pichon, C.; Bernardeau, F.; L'Huillier, B.; Kim, J.; Hong, S. E.; Laigle, C.; Park, C.; Shin, J.; Pogosyan, D.

    2018-02-01

    Starting from a very accurate model for density-in-cells statistics of dark matter based on large deviation theory, a bias model for the tracer density in spheres is formulated. It adopts a mean bias relation based on a quadratic bias model to relate the log-densities of dark matter to those of mass-weighted dark haloes in real and redshift space. The validity of the parametrized bias model is established using a parametrization-independent extraction of the bias function. This average bias model is then combined with the dark matter PDF, neglecting any scatter around it: it nevertheless yields an excellent model for densities-in-cells statistics of mass tracers that is parametrized in terms of the underlying dark matter variance and three bias parameters. The procedure is validated on measurements of both the one- and two-point statistics of subhalo densities in the state-of-the-art Horizon Run 4 simulation showing excellent agreement for measured dark matter variance and bias parameters. Finally, it is demonstrated that this formalism allows for a joint estimation of the non-linear dark matter variance and the bias parameters using solely the statistics of subhaloes. Having verified that galaxy counts in hydrodynamical simulations sampled on a scale of 10 Mpc h-1 closely resemble those of subhaloes, this work provides important steps towards making theoretical predictions for density-in-cells statistics applicable to upcoming galaxy surveys like Euclid or WFIRST.

  5. Laser amplification of incoherent radiation

    NASA Technical Reports Server (NTRS)

    Menegozzi, L. N.; Lamb, W. E., Jr.

    1978-01-01

    The amplification of noise in a laser amplifier is treated theoretically. The model for the active medium and its description using density-matrix techniques, are taken from the theory of laser operation. The spectral behavior of the radiation in the nonlinear regime is studied and the formalism is written from the onset in the frequency domain. The statistics of the light are gradually modified by the nonlinear amplification process, and expressions are derived for the rate of change of fluctuations in intensity as a measure of statistical changes. In addition, the range of validity of Litvak's Gaussian-statistics approximation is discussed. In the homogeneous-broadening case, the evolution of initially broadband Gaussian radiation toward quasimonochromatic oscillations with laserlike statistics is explored in several numerical examples. The connections of this study with the time-domain work on self-pulsing in a ring-laser configuration, are established. Finally, spectral-narrowing and -rebroadening effects in Doppler-broadened media are discussed both analytically and with numerical examples. These examples show the distinct contribution of pulsations in the population ('Raman-type terms'), and saturation phenomena.

  6. Technician Consistency in Specular Microscopy Measurements: A "Real-World" Retrospective Analysis of a United States Eye Bank.

    PubMed

    Rand, Gabriel M; Kwon, Ji Won; Gore, Patrick K; McCartney, Mitchell D; Chuck, Roy S

    2017-10-01

    To quantify the consistency of endothelial cell density (ECD) measurements among technicians in a single US eye bank under typical operating conditions. In this retrospective analysis of 51 microscopy technicians using a semiautomated counting method on 35,067 eyes from July 2007 to May 2015, technician- and date-related marginal ECD effects were calculated using linear regression models. ECD variance was correlated with the number of specular microscopy technicians. Technician mean ECDs ranged from 2386 ± 431 to 3005 ± 560 cells/mm². Nine technicians had statistically and clinically significant marginal effects. Annual mean ECDs adjusted for changes in technicians ranged from 2422 ± 433 to 2644 ± 430 cells/mm². The period of 2007 to 2009 had statistically and clinically significant marginal effects. The association between the number of technicians and ECD standard deviation was not statistically significant. There was significant ECD variability associated with specular microscopy technicians and with the date of measurement. We recommend that eye banks collect data related to laboratory factors that have been shown to influence ECD variability.

  7. Relating off-premises alcohol outlet density to intentional and unintentional injuries.

    PubMed

    Morrison, Christopher; Smith, Karen; Gruenewald, Paul J; Ponicki, William R; Lee, Juliet P; Cameron, Peter

    2016-01-01

    This study investigated the hypotheses that (i) intentional and unintentional injuries occur more frequently in areas with greater density of off-premises alcohol outlets; and (ii) larger and chain outlets selling cheaper alcohol contribute more substantially to injury risk than smaller and independent outlets. Ecological cross-sectional. From the 256 Statistical Area level 2 (SA2) census units in Melbourne, Australia, we selected a random sample of 62 units. There were 2119 Statistical Area level 1 (SA1) units nested within the selected SA2 units. The selected units contained 295 off-premises outlets. Two independent observers conducted premises assessments in all off-premises outlets, assessing the volume of alcohol available for sale (paces of shelf space), price (least wine price) and other operating characteristics (chain versus independent, drive-through). Outlet counts, assessed outlet characteristics and other area characteristics (population density, median age, median income, retail zoning) were aggregated within SA1 units. Dependent variables were counts of ambulance attended intentional injuries (assaults, stabbings, shootings) and unintentional injuries (falls, crush injuries and object strikes). In univariable analyses, chain outlets were larger (r = 0.383; P < 0.001) and sold cheaper alcohol (r = -0.484; P < 0.001) compared with independent outlets. In Bayesian spatial Poisson models, off-premises outlet density was positively related to both intentional [incidence rate ratio (IRR) = 1.38; 95% credible interval (CI) = 1.19, 1.60] and unintentional injuries (IRR = 1.18; 95% CI = 1.06, 1.30). After disaggregation by outlet characteristics, chain outlet density was also related to both intentional (IRR = 1.35; 95% CI = 1.11, 1.64) and unintentional injuries (IRR = 1.20; 95% CI = 1.08, 1.38). 
Greater off-premises outlet density is related to greater incidence of traumatic injury, and chain outlets appear to contribute most substantially to traumatic injury risk. © 2015 Society for the Study of Addiction.
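An incidence rate ratio from a log-link Poisson model, as reported in this record, is exp(β) for the fitted coefficient. A minimal non-spatial, non-Bayesian sketch via Newton-Raphson (the study used Bayesian spatial Poisson models; this only illustrates the IRR mechanics, and the function name is illustrative):

```python
import numpy as np

def poisson_irr(x, y, n_iter=50):
    """Maximum-likelihood Poisson regression (log link) via Newton-Raphson;
    returns incidence rate ratios exp(beta) for [intercept, covariate]."""
    X = np.column_stack([np.ones(len(y)), x])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * mu[:, None])        # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return np.exp(beta)
```

An IRR of 1.38 per unit of outlet density, as above, means each additional unit multiplies the expected injury count by 1.38, holding other covariates fixed.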

  8. Relating Off-Premises Alcohol Outlet Density to Intentional and Unintentional Injuries

    PubMed Central

    Morrison, Christopher; Smith, Karen; Gruenewald, Paul J.; Ponicki, William R.; Lee, Juliet P.; Cameron, Peter

    2015-01-01

    Aims: This study investigated the hypotheses that (i) intentional and unintentional injuries occur more frequently in areas with greater density of off-premises alcohol outlets; and (ii) larger and chain outlets selling cheaper alcohol contribute more substantially to injury risk than smaller and independent outlets. Design: Ecological cross-sectional. Setting: From the 256 Statistical Area level 2 (SA2) Census units in Melbourne, Australia, we selected a random sample of 62 units. There were 2,119 Statistical Area level 1 (SA1) units nested within the selected SA2 units. Participants: The selected units contained 295 off-premises outlets. Measurements: Two independent observers conducted premises assessments in all off-premises outlets, assessing the volume of alcohol available for sale (paces of shelf space), price (least wine price), and other operating characteristics (chain vs. independent, drive-through). Outlet counts, assessed outlet characteristics, and other area characteristics (population density, median age, median income, retail zoning) were aggregated within SA1 units. Dependent variables were counts of ambulance attended intentional injuries (assaults, stabbings, shootings) and unintentional injuries (falls, crush injuries, and object strikes). Findings: In univariable analyses, chain outlets were larger (r = 0.383; p < 0.001) and sold cheaper alcohol (r = −0.484; p < 0.001) compared with independent outlets. In Bayesian spatial Poisson models, off-premises outlet density was positively related to both intentional (Incidence Rate Ratio = 1.38; 95% Credible Interval: 1.19, 1.60) and unintentional injuries (IRR = 1.18; 95% CI: 1.06, 1.30). After disaggregation by outlet characteristics, chain outlet density was also related to both intentional (IRR = 1.35; 95% CI: 1.11, 1.64) and unintentional injuries (IRR = 1.20; 95% CI: 1.08, 1.38). Conclusions: Greater off-premises outlet density is related to greater incidence of traumatic injury, and chain outlets appear to contribute most substantially to traumatic injury risk. PMID:26283189

  9. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.
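The closing remark of this record, that the formalism characterizes any field built from a discrete linear superposition of kernels, can be illustrated by seeding a 1-D periodic field with Gaussian kernels at random positions and taking its power spectrum. All parameters below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def seeded_field(n=256, n_seeds=40, width=5.0):
    """1-D periodic density field built from a discrete linear superposition
    of Gaussian kernels at random seed positions, with the mean removed."""
    x = np.arange(n)
    field = np.zeros(n)
    for x0 in rng.uniform(0, n, n_seeds):
        d = np.minimum(np.abs(x - x0), n - np.abs(x - x0))  # periodic distance
        field += np.exp(-0.5 * (d / width) ** 2)
    return field - field.mean()

power = np.abs(np.fft.rfft(seeded_field())) ** 2  # power spectrum of the field
```

For uncorrelated seed positions the expected power spectrum factorizes into the kernel's squared transform times the seed shot noise, which is the structure the formal expressions in the paper generalize.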

  10. Racial Differences in Quantitative Measures of Area and Volumetric Breast Density

    PubMed Central

    McCarthy, Anne Marie; Keller, Brad M.; Pantalone, Lauren M.; Hsieh, Meng-Kang; Synnestvedt, Marie; Conant, Emily F.; Armstrong, Katrina; Kontos, Despina

    2016-01-01

    Background: Increased breast density is a strong risk factor for breast cancer and also decreases the sensitivity of mammographic screening. The purpose of our study was to compare breast density for black and white women using quantitative measures. Methods: Breast density was assessed among 5282 black and 4216 white women screened using digital mammography. Breast Imaging-Reporting and Data System (BI-RADS) density was obtained from radiologists’ reports. Quantitative measures for dense area, area percent density (PD), dense volume, and volume percent density were estimated using validated, automated software. Breast density was categorized as dense or nondense based on BI-RADS categories or based on values above and below the median for quantitative measures. Logistic regression was used to estimate the odds of having dense breasts by race, adjusted for age, body mass index (BMI), age at menarche, menopause status, family history of breast or ovarian cancer, parity and age at first birth, and current hormone replacement therapy (HRT) use. All statistical tests were two-sided. Results: There was a statistically significant interaction of race and BMI on breast density. After accounting for age, BMI, and breast cancer risk factors, black women had statistically significantly greater odds of high breast density across all quantitative measures (eg, PD nonobese odds ratio [OR] = 1.18, 95% confidence interval [CI] = 1.02 to 1.37, P = .03, PD obese OR = 1.26, 95% CI = 1.04 to 1.53, P = .02). There was no statistically significant difference in BI-RADS density by race. Conclusions: After accounting for age, BMI, and other risk factors, black women had higher breast density than white women across all quantitative measures previously associated with breast cancer risk. These results may have implications for risk assessment and screening. PMID:27130893

  11. A Statistical Representation of Pyrotechnic Igniter Output

    NASA Astrophysics Data System (ADS)

    Guo, Shuyue; Cooper, Marcia

    2017-06-01

    The output of simplified pyrotechnic igniters for research investigations is statistically characterized by monitoring the post-ignition external flow field with Schlieren imaging. Unique to this work is a detailed quantification of all measurable manufacturing parameters (e.g., bridgewire length, charge cavity dimensions, powder bed density) and the associated shock-motion variability in the tested igniters. To demonstrate the experimental precision of the recorded Schlieren images and the developed image processing methodologies, commercial exploding bridgewires using wires of different parameters were tested. Finally, a statistically significant population of manufactured igniters was tested within the Schlieren arrangement, resulting in a characterization of the nominal output. Comparisons between the variances measured throughout the manufacturing processes and the calculated output variance provide insight into the critical device phenomena that dominate performance. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under contract DE-AC04-94AL85000.

  12. A PDF-based classification of gait cadence patterns in patients with amyotrophic lateral sclerosis.

    PubMed

    Wu, Yunfeng; Ng, Sin Chun

    2010-01-01

    Amyotrophic lateral sclerosis (ALS) is a neurological disease caused by the degeneration of motor neurons. Over the course of this progressive disease, ALS patients find it increasingly difficult to regulate normal locomotion, and gait stability becomes perturbed. This paper presents a pilot statistical study of gait cadence (stride interval) in ALS. The probability density functions (PDFs) of stride interval were first estimated with the nonparametric Parzen-window method. From the estimated PDFs we computed the mean of the left-foot stride interval and the modified Kullback-Leibler divergence (MKLD). The results suggested that both statistical parameters were significantly altered in ALS, and that a least-squares support vector machine (LS-SVM) can effectively distinguish the stride patterns of ALS patients from those of healthy controls, with an accuracy of 82.8% and an area of 0.87 under the receiver operating characteristic curve.
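The Parzen-window estimate and divergence measure described above can be sketched as follows. This is a minimal illustration on synthetic stride-interval data, with a plain symmetrized Kullback-Leibler divergence standing in for the paper's modified KLD; the bandwidth h, grid, and sample parameters are illustrative assumptions, not values from the study:

```python
import numpy as np

def parzen_pdf(samples, grid, h):
    """Parzen-window (Gaussian kernel) density estimate evaluated on a grid."""
    samples = np.asarray(samples)[:, None]              # shape (n, 1)
    k = np.exp(-0.5 * ((grid - samples) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return k.mean(axis=0)                               # average of n kernels

def symmetrized_kl(p, q, dx, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between two discretized PDFs."""
    p, q = p + eps, q + eps
    return float(np.sum((p - q) * np.log(p / q)) * dx)  # KL(p||q) + KL(q||p)

rng = np.random.default_rng(0)
control = rng.normal(1.10, 0.03, 300)   # stride intervals (s): steady gait
patient = rng.normal(1.25, 0.09, 300)   # longer, more variable strides

grid = np.linspace(0.8, 1.7, 500)
dx = grid[1] - grid[0]
p = parzen_pdf(control, grid, h=0.02)
q = parzen_pdf(patient, grid, h=0.02)
print(symmetrized_kl(p, q, dx))         # larger value = more altered gait
```

A larger divergence between a patient's stride-interval PDF and a reference PDF is then one feature the classifier can use.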

  13. Solar F10.7 radiation - A short term model for Space Station applications

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    A new method is described for statistically modeling the F10.7 component of solar radiation for 91-day intervals. The resulting model represents this component of the solar flux as a quasi-exponentially correlated, Weibull distributed random variable, and thereby demonstrates excellent agreement with observed F10.7 data. Values of the F10.7 flux are widely used in models of the earth's upper atmosphere because of its high correlation with density fluctuations due to solar heating effects. Because of the direct relation between atmospheric density and drag, a realistic model of the short term fluctuation of the F10.7 flux is important for the design and operation of Space Station Freedom. The method of modeling this flux described in this report should therefore be useful for a variety of Space Station applications.
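A quasi-exponentially correlated series with Weibull marginals, of the kind the model describes, can be sketched with a Gaussian-copula construction; the AR(1) coefficient and the Weibull shape/scale values below are illustrative assumptions, not the paper's fitted F10.7 parameters:

```python
import math
import numpy as np

def correlated_weibull(n, rho, k, lam, seed=0):
    """Weibull-marginal series with quasi-exponential autocorrelation via a
    Gaussian copula: AR(1) Gaussian -> uniform (normal CDF) -> inverse Weibull
    CDF. The lag-m correlation decays roughly as rho**m."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + math.sqrt(1.0 - rho**2) * rng.standard_normal()
    u = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    return lam * (-np.log1p(-u)) ** (1.0 / k)   # Weibull quantile function

# One 91-day interval; k and lam are made-up, not fitted to F10.7 data.
flux = correlated_weibull(n=91, rho=0.9, k=2.0, lam=150.0)
print(flux.mean())
```

The marginal distribution stays exactly Weibull while successive days remain strongly correlated, which is the combination the abstract attributes to observed F10.7 behavior.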

  14. CLUMPY: A code for γ-ray signals from dark matter structures

    NASA Astrophysics Data System (ADS)

    Charbonnier, Aldée; Combet, Céline; Maurin, David

    2012-03-01

    We present the first public code for semi-analytical calculation of the γ-ray flux astrophysical J-factor from dark matter annihilation/decay in the Galaxy, including dark matter substructures. The core of the code is the calculation of the line of sight integral of the dark matter density squared (for annihilations) or density (for decaying dark matter). The code can be used in three modes: i) to draw skymaps from the Galactic smooth component and/or the substructure contributions, ii) to calculate the flux from a specific halo (that is not the Galactic halo, e.g. dwarf spheroidal galaxies) or iii) to perform simple statistical operations from a list of allowed DM profiles for a given object. Extragalactic contributions and other tracers of DM annihilation (e.g. positrons, anti-protons) will be included in a second release.
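The core line-of-sight integral (density squared, for the annihilation case) can be sketched numerically. This is not CLUMPY itself but a minimal illustration assuming an NFW profile with made-up parameters and units:

```python
import numpy as np

def nfw_density(r, rho_s=0.01, r_s=20.0):
    """NFW dark matter profile; rho_s and r_s are illustrative values only."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor_los(theta, d=8.0, l_max=100.0, n=100_000):
    """Line-of-sight integral of density squared (annihilation case) at angle
    theta (rad) from the halo centre, for an observer at distance d (kpc)."""
    l = np.linspace(1e-3, l_max, n)
    r = np.sqrt(l**2 + d**2 - 2.0 * l * d * np.cos(theta))
    rho2 = nfw_density(r) ** 2
    return float(np.sum(0.5 * (rho2[1:] + rho2[:-1]) * np.diff(l)))  # trapezoid

print(j_factor_los(0.1), j_factor_los(0.5))  # signal drops away from the centre
```

For decaying dark matter the integrand would be the density itself rather than its square; substructure boosts, which CLUMPY handles, are omitted here.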

  15. Referenceless perceptual fog density prediction model

    NASA Astrophysics Data System (ADS)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.

    2014-02-01

    We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without side geographical camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the measured visibility in a foggy scene as measured by judgments taken in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.

  16. Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.

    PubMed

    Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R

    2007-12-01

    After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/(k_BT) of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit for sufficiently large density.
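The occupation number interpolating between the boson (g = 0) and fermion (g = 1) limits can be computed from Wu's transcendental equation for fractional exclusion statistics; a minimal numerical sketch (the standard form of the equation is assumed here rather than taken from the paper):

```python
import math

def occupation(x, g, tol=1e-12):
    """Mean occupation number in fractional exclusion statistics: solve Wu's
    equation w**g * (1+w)**(1-g) = exp(x), with x = (eps - mu)/(k_B*T), by
    bisection (the left side increases with w), then n = 1/(w + g).
    Assumes exp(x) > 1 when g = 0 so the Bose branch has a valid root."""
    target = math.exp(x)
    lo, hi = 1e-15, max(10.0, 10.0 * target)
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if mid**g * (1.0 + mid) ** (1.0 - g) < target:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi) + g)

x = 1.0   # one unit of (eps - mu) per k_B*T
print(occupation(x, 0.0), 1 / (math.exp(x) - 1))   # Bose-Einstein limit
print(occupation(x, 1.0), 1 / (math.exp(x) + 1))   # Fermi-Dirac limit
print(occupation(x, 0.5))                          # semion: in between
```

Intermediate g gives occupations strictly between the Bose and Fermi values, which is the interpolation the statistical parameter controls.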

  17. 14 CFR 93.129 - Additional operations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93... under IFR at a designated high density traffic airport without regard to the maximum number of operations allocated for that airport if the operation is not a scheduled operation to or from a high density...

  18. 14 CFR 93.129 - Additional operations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93... under IFR at a designated high density traffic airport without regard to the maximum number of operations allocated for that airport if the operation is not a scheduled operation to or from a high density...

  19. 14 CFR 93.129 - Additional operations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93... under IFR at a designated high density traffic airport without regard to the maximum number of operations allocated for that airport if the operation is not a scheduled operation to or from a high density...

  20. 14 CFR 93.129 - Additional operations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93... under IFR at a designated high density traffic airport without regard to the maximum number of operations allocated for that airport if the operation is not a scheduled operation to or from a high density...

  1. 14 CFR 93.129 - Additional operations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93... under IFR at a designated high density traffic airport without regard to the maximum number of operations allocated for that airport if the operation is not a scheduled operation to or from a high density...

  2. The reliability of cone-beam computed tomography to assess bone density at dental implant recipient sites: a histomorphometric analysis by micro-CT.

    PubMed

    González-García, Raúl; Monje, Florencio

    2013-08-01

    The aim of this study was to objectively assess the reliability of cone-beam computed tomography (CBCT) as a tool for pre-operative determination of radiographic bone density (RBD) from the density values provided by the system, by analyzing its relationship with histomorphometric bone density, expressed as bone volume fraction (BV/TV) and assessed by micro-CT of bone biopsies taken at the sites of dental implant insertion in the maxillary bones. Thirty-nine bone biopsies of the maxillary bones at the sites of 39 dental implants from 31 edentulous healthy patients were analyzed. The NobelGuide™ software was used for implant planning, which also allowed fabrication of individual stereolithographic surgical guides. The analysis of CBCT images allowed pre-operative determination of mean density values of implant recipient sites along the major axis of the planned implants (axial RBD). Stereolithographic surgical guides were used to guide implant insertion and also to extract cylindrical bone biopsies from the core of the exact implant site. Further analysis of several osseous micro-structural variables, including BV/TV, was performed by micro-CT of the extracted bone biopsies. Mean axial RBD was 478 ± 212 (range: 144-953). A statistically significant difference (P = 0.02) was observed between the density values of the cortical bone of the upper maxilla and the mandible. A high positive Pearson's correlation coefficient (r = 0.858, P < 0.001) was observed between RBD and BV/TV, with the regression equations: (1) Axial RBD = -19.974 + 10.238·BV/TV; (2) BV/TV = 14.258 + 0.072·Axial RBD. RBD was also positively correlated with trabecular thickness (Tb.Th) and trabecular number (Tb.N), but negatively correlated with trabecular separation (Tb.Sp), the structural model index, and inverse connectivity (Tb.Pf). Density values above 450 were associated with BV/TV above 50%, mean Tb.Th above 0.2 mm, mean Tb.Sp below 0.3 mm, and mean Tb.N above 2.
RBD assessed by CBCT has a strong positive correlation with BV/TV assessed by micro-CT at the site of dental implants in the maxillary bones. Pre-operative estimation of density values by CBCT is a reliable tool to objectively determine bone density. © 2012 John Wiley & Sons A/S.

  3. Statistical parametric mapping of LORETA using high density EEG and individual MRI: application to mismatch negativities in schizophrenia.

    PubMed

    Park, Hae-Jeong; Kwon, Jun Soo; Youn, Tak; Pae, Ji Soo; Kim, Jae-Jin; Kim, Myung-Sun; Ha, Kyoo-Seob

    2002-11-01

    We describe a method for the statistical parametric mapping of low resolution electromagnetic tomography (LORETA) using high-density electroencephalography (EEG) and individual magnetic resonance images (MRI) to investigate the characteristics of the mismatch negativity (MMN) generators in schizophrenia. LORETA, using a realistic head model of the boundary element method derived from the individual anatomy, estimated the current density maps from the scalp topography of the 128-channel EEG. From the current density maps that covered the whole cortical gray matter (up to 20,000 points), volumetric current density images were reconstructed. Intensity normalization of the smoothed current density images was used to reduce the confounding effect of subject-specific global activity. After transforming each image into a standard stereotaxic space, we carried out statistical parametric mapping of the normalized current density images. We applied this method to the source localization of MMN in schizophrenia. The MMN generators, produced by a deviant tone of 1,200 Hz (5% of 1,600 trials) under the standard tone of 1,000 Hz, 80 dB binaural stimuli with 300 msec of inter-stimulus interval, were measured in 14 right-handed schizophrenic subjects and 14 age-, gender-, and handedness-matched controls. We found that the schizophrenic group exhibited significant current density reductions of MMN in the left superior temporal gyrus and the left inferior parietal gyrus (P < 0.0005). This study is the first voxel-by-voxel statistical mapping of current density using individual MRI and high-density EEG. Copyright 2002 Wiley-Liss, Inc.

  4. Statistical Entropy of Dirac Field Outside RN Black Hole and Modified Density Equation

    NASA Astrophysics Data System (ADS)

    Cao, Fei; He, Feng

    2012-02-01

    Statistical entropy of Dirac field in Reissner-Nordstrom black hole space-time is computed by state density equation corrected by the generalized uncertainty principle to all orders in Planck length and WKB approximation. The result shows that the statistical entropy is proportional to the horizon area but the present result is convergent without any artificial cutoff.

  5. Association between power law coefficients of the anatomical noise power spectrum and lesion detectability in breast imaging modalities

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Abbey, Craig K.; Boone, John M.

    2013-03-01

    Previous research has demonstrated that a parameter extracted from a power function fit to the anatomical noise power spectrum, β, may be predictive of breast mass lesion detectability in x-ray based medical images of the breast. In this investigation, the value of β was compared with a number of other more widely used parameters, in order to determine the relationship between β and these other parameters. This study made use of breast CT data sets acquired on two breast CT systems developed in our laboratory. A total of 185 breast data sets in 183 women were used, and only the unaffected breast was used (where no lesion was suspected). The anatomical noise power spectrum, computed from two-dimensional regions of interest (ROIs), was fit to a power function (NPS(f) = αf^(−β)), and the exponent parameter (β) was determined using log/log linear regression. Breast density for each of the volume data sets was characterized in previous work. The breast CT data sets analyzed in this study were part of a previous study which evaluated the receiver operating characteristic (ROC) curve performance using simulated spherical lesions and a pre-whitened matched filter computer observer. This ROC information was used to compute the detectability index as well as the sensitivity at 95% specificity. The fractal dimension was computed from the same ROIs used for the assessment of β. The value of β was compared to breast density, detectability index, sensitivity, and fractal dimension, and the slope of each of these relationships was tested for statistically significant deviation from zero. A statistically significant non-zero slope was considered a positive association in this investigation. All comparisons between β and breast density, detectability index, sensitivity at 95% specificity, and fractal dimension demonstrated statistically significant associations, with p < 0.001 in all cases. 
The value of β was also found to be associated with patient age and breast diameter, parameters both related to breast density. Across all of these associations, lower values of β corresponded to better breast cancer detection performance: lower β was associated with lower breast density, higher detectability index, higher sensitivity, and lower fractal dimension values. While causality was not, and probably cannot be, demonstrated, the strong, statistically significant association between the β metric and the other more widely used parameters suggests that β may be considered a surrogate measure for breast cancer detection performance. These findings are specific to breast parenchymal patterns and mass lesions only.
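The log/log linear regression used to extract β can be sketched as follows; the frequency range and the α, β values are illustrative assumptions:

```python
import numpy as np

def fit_power_law_nps(f, nps):
    """Fit NPS(f) = alpha * f**(-beta) by log/log linear regression."""
    slope, intercept = np.polyfit(np.log(f), np.log(nps), 1)
    return float(np.exp(intercept)), float(-slope)    # alpha, beta

# Synthetic check: recover beta exactly from noiseless power-law data.
f = np.linspace(0.05, 1.0, 100)       # spatial frequencies; range is illustrative
alpha_true, beta_true = 2.0e-3, 3.0   # beta near 3 is typical of breast anatomy
alpha, beta = fit_power_law_nps(f, alpha_true * f ** (-beta_true))
print(alpha, beta)
```

On measured spectra the fit is usually restricted to the frequency band where anatomical (rather than quantum or electronic) noise dominates.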

  6. CRAX. Cassandra Exoskeleton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.G.; Eubanks, L.

    1998-03-01

    This software assists the engineering designer in characterizing the statistical uncertainty in the performance of complex systems resulting from variations in manufacturing processes, material properties, system geometry, or operating environment. The software comprises a graphical user interface that gives the user easy access to Cassandra uncertainty analysis routines. Together, this interface and the Cassandra routines are referred to as CRAX (CassandRA eXoskeleton). The software is flexible enough that, with minor modification, it can interface with large modeling and analysis codes such as heat transfer or finite element analysis software. The current version permits the user to manually input a performance function, the number of random variables, and their associated statistical characteristics: density function, mean, and coefficients of variation. Additional uncertainty analysis modules are continuously being added to the Cassandra core.

  7. Cassandra Exoskeleton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, David G.

    1999-02-20

    This software assists the engineering designer in characterizing the statistical uncertainty in the performance of complex systems resulting from variations in manufacturing processes, material properties, system geometry, or operating environment. The software comprises a graphical user interface that gives the user easy access to Cassandra uncertainty analysis routines. Together, this interface and the Cassandra routines are referred to as CRAX (CassandRA eXoskeleton). The software is flexible enough that, with minor modification, it can interface with large modeling and analysis codes such as heat transfer or finite element analysis software. The current version permits the user to manually input a performance function, the number of random variables, and their associated statistical characteristics: density function, mean, and coefficients of variation. Additional uncertainty analysis modules are continuously being added to the Cassandra core.

  8. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data. The probability distribution of charged particle scattering is determined using a statistical multiple scattering model, and a substantially maximum likelihood estimate of the object volume scattering density is computed with an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles, or cargo. The method can be implemented as a computer program executable on a computer.
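The ML/EM update at the heart of such a reconstruction can be sketched on a toy linear Poisson model. This is a generic MLEM iteration, not the patented muon-specific algorithm; the system matrix and densities are made up:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Generic ML/EM (MLEM) reconstruction for y ~ Poisson(A @ x).
    The multiplicative update preserves non-negativity of the density x."""
    m, n = A.shape
    x = np.ones(n)
    sensitivity = A.T @ np.ones(m)               # column sums of A
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        proj = np.where(proj > 0, proj, 1e-12)   # guard against divide-by-zero
        x = x * (A.T @ (y / proj)) / sensitivity
    return x

# Toy example: 3 voxels of scattering density seen by 4 overlapping "rays".
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 0.5, 3.0])
x_hat = mlem(A, A @ x_true)   # noise-free data for illustration
print(x_hat)                  # approaches x_true as iterations increase
```

With consistent noise-free data the iteration converges to the exact densities; with noisy counts it converges to the maximum likelihood estimate instead.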

  9. Spatial Differentiation of Landscape Values in the Murray River Region of Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Zhu, Xuan; Pfueller, Sharron; Whitelaw, Paul; Winter, Caroline

    2010-05-01

    This research advances the understanding of the location of perceived landscape values through a statistically based approach to spatial analysis of value densities. Survey data were obtained from a sample of people living in and using the Murray River region, Australia, where declining environmental quality prompted a reevaluation of its conservation status. When densities of 12 perceived landscape values were mapped using geographic information systems (GIS), valued places clustered along the entire river bank and in associated National/State Parks and reserves. While simple density mapping revealed high value densities in various locations, it did not indicate what density of a landscape value could be regarded as a statistically significant hotspot or distinguish whether overlapping areas of high density for different values indicate identical or adjacent locations. A spatial statistic Getis-Ord Gi* was used to indicate statistically significant spatial clusters of high value densities or “hotspots”. Of 251 hotspots, 40% were for single non-use values, primarily spiritual, therapeutic or intrinsic. Four hotspots had 11 landscape values. Two, lacking economic value, were located in ecologically important river red gum forests and two, lacking wilderness value, were near the major towns of Echuca-Moama and Albury-Wodonga. Hotspots for eight values showed statistically significant associations with another value. There were high associations between learning and heritage values while economic and biological diversity values showed moderate associations with several other direct and indirect use values. This approach may improve confidence in the interpretation of spatial analysis of landscape values by enhancing understanding of value relationships.
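The Getis-Ord Gi* statistic used to declare hotspots can be sketched for a single focal unit as follows. The values and binary neighbour weights are illustrative; a real analysis computes Gi* for every unit from a GIS-derived spatial weights matrix:

```python
import numpy as np

def getis_ord_gi_star(values, weights):
    """Getis-Ord Gi* hotspot statistic for one focal unit.
    values: all n observations (including the focal one);
    weights: spatial weights w_ij from the focal unit to every unit
    (self-weight included, as Gi* requires)."""
    x = np.asarray(values, float)
    w = np.asarray(weights, float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x**2).mean() - xbar**2)            # population std dev
    w_sum = w.sum()
    num = w @ x - xbar * w_sum
    den = s * np.sqrt((n * (w**2).sum() - w_sum**2) / (n - 1))
    return num / den   # approximately a z-score: large positive => hotspot

# Illustrative: a cluster of high landscape-value densities among low ones.
values = np.array([9.0, 8.5, 9.2, 1.0, 1.2, 0.8, 1.1, 0.9, 1.3])
weights = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0], float)  # focal unit + 2 neighbours
print(getis_ord_gi_star(values, weights))               # clearly positive
```

Values near +2 or above flag statistically significant clusters of high densities, which is how the 251 hotspots in the study were identified.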

  10. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
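For contrast with the Bayesian mixed model the abstract advocates, the "traditional" calibration it criticizes can be sketched as a simple standard-curve fit and inversion; all numbers below are hypothetical:

```python
import numpy as np

def fit_standard_curve(log10_density, signal):
    """Traditional calibration: straight-line fit of assay signal (e.g. Ct or
    time-to-positivity) against log10 pathogen density of known standards."""
    slope, intercept = np.polyfit(log10_density, signal, 1)
    return float(slope), float(intercept)

def invert_density(signal, slope, intercept):
    """Point estimate of log10 density for an unknown sample. Note it carries
    no uncertainty, which is exactly the limitation the Bayesian mixed model
    addresses."""
    return (np.asarray(signal, float) - intercept) / slope

# Hypothetical standards: tenfold dilutions; signal falls as density rises.
log10_std = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
signal = np.array([30.1, 26.8, 23.5, 20.2, 16.9])
slope, intercept = fit_standard_curve(log10_std, signal)
print(float(invert_density(25.0, slope, intercept)))  # estimated log10 density
```

The Bayesian mixed model approach replaces this single-assay point inversion with a hierarchical fit across replica assays, so that precision can vary with density and inter-assay variability is propagated into the final estimate.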

  11. Hydrological impact of high-density small dams in a humid catchment, Southeast China

    NASA Astrophysics Data System (ADS)

    Lu, W.; Lei, H.; Yang, D.

    2017-12-01

    The Jiulong River basin is a humid catchment with a drainage area of 14,741 km2; however, it contains over 1000 hydropower stations. Catchments with such high-density small dams are rare in China, yet little is known about the impact of high-density small dams on streamflow changes. To what extent the large number of dams alters hydrologic patterns is a fundamental scientific issue for water resources management, flood control, and aquatic ecological environment protection. Firstly, trend and change point analyses are applied to determine the characteristics of inter-annual streamflow. Based on the detected change point, the study period is divided into a "natural" and a "disturbed" period. Then, a geomorphology-based hydrological model (GBHM) and the fixing-changing method are adopted to evaluate the relative contributions of climate variations and damming to the changes in streamflow at each temporal scale (i.e., from daily and monthly to annual). Based on the simulated natural streamflow, the impact of dam construction on hydrologic alteration and the aquatic ecological environment will be evaluated. The hydrologic signatures to be investigated include flood peak, seasonality of streamflow, and the inter-annual variability of streamflow. In particular, the impacts of damming on the aquatic ecological environment will be investigated using eco-flow metrics and the indicators of hydrologic alteration (IHA), which comprise 33 individual streamflow statistics closely related to the aquatic ecosystem. The results of this study are expected to provide a reference for reservoir operation that considers both the ecological and economic benefits of such operations in catchments with high-density dams.

  12. Monitoring diesel particulate matter and calculating diesel particulate densities using Grimm model 1.109 real-time aerosol monitors in underground mines.

    PubMed

    Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim

    2012-01-01

    Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary aim was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates from the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R(2) = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variation of the calculated diesel particulate densities with particle diameter supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.

  13. AquaLase versus NeoSoniX--a comparison study.

    PubMed

    Jiraskova, Nada; Rozsival, Pavel; Kadlecova, Jana; Nekolova, Jana; Pozlerova, Jana; Dubravska, Zlatica

    2007-12-01

    To compare the metrics and surgical outcomes when using the Infiniti AquaLase and NeoSoniX cataract removal modalities. This prospective clinical study involved 50 patients with bilateral cataracts, with lens removal using AquaLase in the right eye and NeoSoniX in the left eye. Best corrected visual acuity (BCVA), endothelial cell density, and pachymetry were evaluated pre- and postoperatively. Statistical analysis was performed using the Wilcoxon signed-rank test. Preoperative mean pachymetry was 569 +/- 31 μm in the right eye (RE) and 560 +/- 37 μm in the left eye (LE); mean endothelial cell density was 2744 +/- 418 cells/mm(2) (RE) and 2730 +/- 472 cells/mm(2) (LE). One week after the operation, pachymetry was 576 +/- 52 μm (RE) and 583 +/- 72 μm (LE), and endothelial cell density was 2388 +/- 586 cells/mm(2) (RE) and 2463 +/- 615 cells/mm(2) (LE). One month after surgery, pachymetry was 556 +/- 43 μm (RE) and 559 +/- 44 μm (LE), and endothelial cell density was 2368 +/- 52 cells/mm(2) (RE) and 2495 +/- 548 cells/mm(2) (LE). BCVA improved in all eyes and was 0.8 or better on the first postoperative day. Both NeoSoniX and AquaLase minimize intraoperative damage to ocular structures.

  14. Thinning regimes and initial spacing for Eucalyptus plantations in Brazil.

    PubMed

    Ferraz Filho, Antonio C; Mola-Yudego, Blas; González-Olabarria, José R; Scolforo, José Roberto S

    2018-01-01

    This study focuses on the effects of different thinning regimes on the growth of clonal Eucalyptus plantations. Four different trials, planted in 1999 and located in Bahia and Espírito Santo States, were used. Aside from thinning, initial planting density and post-thinning fertilization were also evaluated. Before canopy closure, and therefore before excessive competition between trees took place, stands planted at low density (667 trees per hectare) presented a lower mortality proportion than stands planted at higher density (1111 trees per hectare). However, diameter growth prior to thinning operations was not statistically different between these two densities, with an overall mean of 4.9 cm/year. After canopy closure and the application of the thinning treatments, thinning regimes that began early in the life of the stand and left a low number of residual trees presented the highest diameter and height growth. Unthinned treatments and thinning regimes applied late in the life of the stand (after 5.5 years), leaving a large number of residual trees, presented the highest values of basal area production. The choice of the best thinning regime for Eucalyptus clonal material will vary according to the plantation objective.

  15. Reader performance in visual assessment of breast density using visual analogue scales: Are some readers more predictive of breast cancer?

    NASA Astrophysics Data System (ADS)

    Rayner, Millicent; Harkness, Elaine F.; Foden, Philip; Wilson, Mary; Gadde, Soujanya; Beetles, Ursula; Lim, Yit Y.; Jain, Anil; Bundred, Sally; Barr, Nicky; Evans, D. Gareth; Howell, Anthony; Maxwell, Anthony; Astley, Susan M.

    2018-03-01

    Mammographic breast density is one of the strongest risk factors for breast cancer, and is used in risk prediction and for deciding appropriate imaging strategies. In the Predicting Risk Of Cancer At Screening (PROCAS) study, percent density estimated by two readers on Visual Analogue Scales (VAS) has shown a strong relationship with breast cancer risk when assessed against automated methods. However, this method suffers from reader variability. This study aimed to assess the performance of PROCAS readers using VAS, and to identify those most predictive of breast cancer. We selected the seven readers who had estimated density on over 6,500 women, including at least 100 cancer cases, analysing their performance using multivariable logistic regression and Receiver Operating Characteristic (ROC) analysis. All seven readers showed statistically significant odds ratios (OR) for cancer risk according to VAS score after adjusting for classical risk factors. The OR was greatest for reader 18 at 1.026 (95% CI 1.018-1.034). Adjusted areas under the ROC curve (AUCs) were statistically significant for all readers, but greatest for reader 14 at 0.639. Further analysis of the VAS scores for these two readers showed that reader 14 had higher sensitivity (78.0% versus 42.2%), whereas reader 18 had higher specificity (78.0% versus 46.0%). Our results demonstrate individual differences when assigning VAS scores; one reader better identified those at increased risk, whereas another better identified low-risk individuals. However, despite their different strengths, both readers showed similar predictive abilities overall. Standardised training for VAS may reduce reader variability and improve the consistency of VAS scoring.

  16. 14 CFR Section 19 - Uniform Classification of Operating Statistics

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Statistics Section 19 Section 19 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... AIR CARRIERS Operating Statistics Classifications Section 19 Uniform Classification of Operating Statistics ...

  17. 14 CFR Section 19 - Uniform Classification of Operating Statistics

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Statistics Section 19 Section 19 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... AIR CARRIERS Operating Statistics Classifications Section 19 Uniform Classification of Operating Statistics ...

  18. Quantitative digital subtraction radiography in the assessment of external apical root resorption induced by orthodontic therapy: a retrospective study.

    PubMed

    Sunku, Raghavendra; Roopesh, R; Kancherla, Pavan; Perumalla, Kiran Kumar; Yudhistar, Palla Venkata; Reddy, V Sridhar

    2011-11-01

    The objective of this study was to evaluate density changes around the apices of teeth during orthodontic treatment by using digital subtraction radiography to measure the densities around six teeth (maxillary central incisors, lateral incisors, and canines) before and after orthodontic treatment in 36 patients, and also to assess treatment variables and their correlation with root resorption. A total of 36 consecutive patient files were selected initially. The selected patients presented with a class I or II relationship and were treated with or without premolar extractions and fixed appliances. Some class II patients were treated additionally with extraoral forces or functional appliances. External apical root resorption (EARR) per tooth in millimeters was calculated and was also expressed as a percentage of the original root length. Image reconstruction and subtraction were performed using the software Regeemy Image Registration and Mosaicing (version 0.2.43-RCB, DPI-INPE, Sao Jose dos Campos, Sao Paulo, Brazil) by a single operator. A region of interest (ROI) was defined in the apical third of the root and density calibration was made in Image J® using enamel (gray value = 255) as reference in the same image. The mean gray values in the ROIs were reflective of the change in the density values between the two images. The root resorption of the tooth and the factors of malocclusion were analyzed with a one-way ANOVA. An independent t-test was performed to compare the mean amount of resorption between males and females, and between extraction and nonextraction cases. The density changes after orthodontic treatment were analyzed using the Wilcoxon signed-rank test. In addition, the density changes in different teeth were analyzed using the Kruskal-Wallis test. The cut-off for statistical significance was a p-value of 0.05. All the statistical analyses were carried out using SPSS (version 13.0 for Windows, Chicago, IL, USA).
Gender, the age at which treatment was started, and Angle's classification were not statistically related to observed root resorption. The mean percentage density reduction as assessed by DSR was greatest in both central incisors: 27.2 and 25.2% in the upper-right and upper-left central incisors, respectively, followed by the upper-right and upper-left canine teeth (23.5 and 21.0%) and then the upper-right and upper-left lateral incisors (19.1 and 17.4%). Tooth extraction prior to treatment initiation and the duration of orthodontic treatment were positively correlated with the amount of root resorption. DSR is useful for evaluating density changes around teeth during orthodontic treatment. The density around the apices of teeth reduced significantly after the application of orthodontic forces during treatment. Assessment of density changes on treatment radiographs of patients undergoing orthodontic therapy may help in the monitoring of external apical root resorption during the course of treatment.
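The paired and multi-group nonparametric tests named above have direct SciPy counterparts; a small sketch with simulated mean gray values (all numbers illustrative, not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical apical gray-value densities before/after treatment for
# 36 patients; a systematic post-treatment decrease is simulated.
pre = rng.normal(180, 15, 36)
post = pre - rng.normal(35, 10, 36)   # simulated density reduction

# Paired pre/post comparison, as with the study's Wilcoxon signed-rank test
w_stat, w_p = stats.wilcoxon(pre, post)

# Density change across tooth types, as with the Kruskal-Wallis test
central = pre - post
lateral = rng.normal(28, 10, 36)
canine = rng.normal(32, 10, 36)
h_stat, h_p = stats.kruskal(central, lateral, canine)
```

A p-value below the study's 0.05 cut-off in the paired test reflects the simulated systematic decrease.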

  19. Process model comparison and transferability across bioreactor scales and modes of operation for a mammalian cell bioprocess.

    PubMed

    Craven, Stephen; Shirsat, Nishikant; Whelan, Jessica; Glennon, Brian

    2013-01-01

    A Monod kinetic model, logistic equation model, and statistical regression model were developed for a Chinese hamster ovary cell bioprocess operated under three different modes of operation (batch, bolus fed-batch, and continuous fed-batch) and grown on two different bioreactor scales (3 L bench-top and 15 L pilot-scale). The Monod kinetic model was developed for all modes of operation under study and predicted cell density and glucose, glutamine, lactate, and ammonia concentrations well for the bioprocess. However, it was computationally demanding due to the large number of parameters necessary to produce a good model fit. The transferability of the Monod kinetic model structure and parameter set across bioreactor scales and modes of operation was investigated and a parameter sensitivity analysis performed. The experimentally determined parameters had the greatest influence on model performance. They changed with scale and mode of operation, but were easily calculated. The remaining parameters, which were fitted using a differential evolutionary algorithm, were not as crucial. Logistic equation and statistical regression models were investigated as alternatives to the Monod kinetic model. They were less computationally intensive to develop due to the absence of a large parameter set. However, modeling of the nutrient and metabolite concentrations proved to be troublesome due to the logistic equation model structure and the inability of both models to incorporate a feed. The complexity, computational load, and effort required for model development have to be balanced against the necessary level of model sophistication when choosing which model type to develop for a particular application. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
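A Monod kinetic model of the kind compared in this record couples cell growth to substrate consumption through a saturating rate law. A minimal single-substrate batch sketch (parameter values are illustrative, not the paper's fitted set):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: max specific growth rate (1/h), half-saturation
# constant (g/L), and yield (cells per g glucose). Hypothetical values.
mu_max, K_s, Y_xs = 0.04, 0.5, 2.0e6

def monod_batch(t, y):
    X, S = y                      # viable cell density, glucose
    mu = mu_max * S / (K_s + S)   # Monod specific growth rate
    dX = mu * X                   # growth
    dS = -mu * X / Y_xs           # substrate consumed to make biomass
    return [dX, dS]

# Batch culture: 2e5 cells/mL, 6 g/L glucose, 240 h horizon
sol = solve_ivp(monod_batch, (0, 240), [2.0e5, 6.0], max_step=1.0)
X_final, S_final = sol.y[0, -1], sol.y[1, -1]
```

Growth is near-exponential while glucose is plentiful and stops as the substrate is exhausted, which is the qualitative behavior a batch Monod model captures.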

  20. [Experimental study on vascular bundle implantation combined with cellular transplantation in treating rabbit femoral head necrosis].

    PubMed

    Chen, Shuang-Tao; Zhang, Wei-Ping; Liu, Chang-An; Wang, Jun-Jiang; Song, Heng-Yi; Chai, Zhi-wen

    2013-03-01

    To discuss the feasibility of vascular bundle implantation combined with allogeneic bone marrow stromal cell (BMSC) transplantation in treating rabbit femoral head osteonecrosis and bone defect, in order to explore a new method for the treatment of femoral head necrosis. Thirty-six New Zealand rabbits were randomly divided into three groups, with 12 rabbits in each group. Bilateral femoral heads of the rabbits were studied in the experiment. The models were made by liquid nitrogen freezing, and the femoral heads were drilled to cause a bone defect. Group A was the control group; group B received allogeneic bone marrow stromal cell transplantation; and group C received allogeneic bone marrow stromal cell transplantation combined with vascular bundle implantation. Three rabbits from each group were sacrificed at 2, 4, 8 and 12 weeks after operation. All femoral head specimens were sliced for HE staining. Furthermore, the vascular density and the percentage of new bone trabecula in the defect area of the femoral head coronary section were measured and analyzed statistically. In group C, new bone trabecula and original micrangium formed by the 2nd week after operation; new bone trabecula was lamellar and interlaced with abundant micrangium at the 8th week; at the 12th week, the broadened, coarsened bone trabecula lined up regularly, and mature bone trabecula and new marrow were visible. At the 2nd week after operation, there was no statistically significant difference in the percentage of new bone trabecula in the defect area between groups B and C, while at 4, 8 and 12 weeks after operation, the vascular density and the percentage of new bone trabecula in the defect area of group C were higher than those of group B. Allogeneic bone marrow stromal cells cultured in vivo can form new bone trabecula, and can be applied to allotransplantation.
A vascular bundle implanted into the bone defect area of femoral head necrosis can improve the blood supply and promote the formation of bone trabecula.

  1. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  2. Change in working characteristics of the steam turbine metal with operating time of more than 330000 hours

    NASA Astrophysics Data System (ADS)

    Gladshteyn, V. I.; Troitskiy, A. I.

    2017-01-01

    Research of the metal of the stop valve case (SVC) of a K-300-23.5 LMZ turbine (steel grade 15Kh1M1FL), destroyed after operation for 331000 hours, is performed. Its chemical composition and properties are determined as follows: short-term mechanical tensile properties at 20°C and at elevated temperature, the critical brittleness temperature, the critical crack opening at elevated temperature, and the long-term strength. Furthermore, the nature of the microstructure, the packing density and size of carbide particles, and the chemical composition of the carbide sediment are estimated. The properties of the main case components are tested against a forecast of the respective characteristics made for an operating time of 331000 hours. Property-time relationships are built for the forecast using statistical treatment of test results for samples cut out from more than 300 parts. The representativeness of the research results is proved: statistical treatment shows that their differences are within ±5%. It has been found that, after 150000 hours of operation, only the tensile strength at 20°C depends insignificantly on the operating time, whereas indicators of strength at elevated temperature reduce significantly with operating time. The brittle-to-ductile transition temperature (BDTT) rises, the critical notch opening changes in a complicated way, and the long-term strength reduces. It has been found empirically that the long-term strength limit of the SVC metal at 540°C and an operating time of 10⁵ hours is almost 1.6 times less than the required value in the as-delivered state. It is possible to evaluate the service life of operating valves with operating times of more than 330000 hours with respect to the long-term strength of the metal, taking into account the actual temperature and stress. Guidelines for the control of similar parts are provided.

  3. Flow Regime Study in a Circulating Fluidized Bed Riser with an Abrupt Exit: [1] High Density Suspension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, J.S.; Lee, G.T.; Seachman, S.M.

    2008-05-13

    Flow regime study was conducted in a 0.3 m diameter, 15.5 m tall circulating fluidized bed (CFB) riser with an abrupt exit at the National Energy Technology Laboratory of the U. S. Department of Energy. A statistically designed test series was conducted, including four (4) operating set points and a duplicated center point (therefore a total of 6 operating set points). Glass beads of mean diameter 200 μm and particle density of 2,430 kg/m³ were used as bed material. The CFB riser was operated at various superficial gas velocities ranging from 5.6 to 7.6 m/s and solid mass fluxes from a low of 86 to a high of 303 kg/m²·s. Results of the apparent solids fraction profile as well as the radial particle velocity profile were analyzed in order to identify the presence of Dense Suspension Upflow (DSU) conditions. The DSU regime was found to exist at the bottom of the riser, while the middle section of the riser still exhibited a core-annular flow structure. Due to the abrupt geometry of the exit, the DSU regime was also found at the top of the riser. In addition, the effects of the azimuthal angle, riser gas velocity, and mass solids flux on the particle velocity were investigated and are discussed in this paper.

  4. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize such time-series statistically by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for the implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time, under different assumptions about knowledge of the parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
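A generic bootstrap particle filter illustrates the SMC machinery the abstract refers to, here on the standard stochastic volatility model (a textbook sketch, not the paper's ARMA-specific samplers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stochastic volatility model: latent log-volatility x_t = a*x_{t-1} + s*v_t,
# observation y_t = exp(x_t/2)*e_t with v_t, e_t standard normal.
a, s, T, N = 0.95, 0.3, 200, 1000

# Simulate a synthetic data set
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + s * rng.standard_normal()
y = np.exp(x / 2) * rng.standard_normal(T)

# Bootstrap particle filter: propagate from the transition density,
# weight by the observation likelihood, then resample.
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + s * rng.standard_normal(N)    # propagate
    var = np.exp(particles)                                   # obs. variance
    logw = -0.5 * (np.log(2 * np.pi * var) + y[t] ** 2 / var) # log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                                    # filtered mean
    particles = particles[rng.choice(N, N, p=w)]              # resample

rmse = np.sqrt(np.mean((est - x) ** 2))
```

The filtered mean tracks the latent log-volatility with an error well below the prior spread of the state, which is the point of conditioning on the observations.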

  5. When can the cause of a population decline be determined?

    PubMed

    Hefley, Trevor J; Hooten, Mevin B; Drake, John M; Russell, Robin E; Walsh, Daniel P

    2016-11-01

    Inferring the factors responsible for declines in abundance is a prerequisite to preventing the extinction of wild populations. Many of the policies and programmes intended to prevent extinctions operate on the assumption that the factors driving the decline of a population can be determined. Exogenous factors that cause declines in abundance can be statistically confounded with endogenous factors such as density dependence. To demonstrate the potential for confounding, we used an experiment where replicated populations were driven to extinction by gradually manipulating habitat quality. In many of the replicated populations, habitat quality and density dependence were confounded, which obscured causal inference. Our results show that confounding is likely to occur when the exogenous factors that are driving the decline change gradually over time. Our study has direct implications for wild populations, because many factors that could drive a population to extinction change gradually through time. © 2016 John Wiley & Sons Ltd/CNRS.
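The confounding described above can be reproduced in a few lines: when habitat quality declines gradually, last year's density and current habitat quality move together and become nearly collinear as regressors (all dynamics and coefficients below are illustrative, not from the experiment):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ricker-type population whose growth depends on BOTH density (endogenous)
# and habitat quality (exogenous) that declines gradually over time.
T = 60
habitat = np.linspace(1.0, 0.2, T)           # gradual exogenous decline
N = np.zeros(T)
N[0] = 100.0
for t in range(1, T):
    r = 0.5 * habitat[t] - 0.004 * N[t - 1]  # exogenous + density dependence
    N[t] = N[t - 1] * np.exp(r + 0.05 * rng.standard_normal())

# The two putative drivers of the decline are strongly correlated,
# which is what obscures causal inference in a regression.
corr = np.corrcoef(habitat[1:], N[:-1])[0, 1]
```

Because density tracks the slowly eroding habitat, a model cannot cleanly separate the two effects from observational data alone.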

  6. When can the cause of a population decline be determined?

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Drake, John M.; Russell, Robin E.; Walsh, Daniel P.

    2016-01-01

    Inferring the factors responsible for declines in abundance is a prerequisite to preventing the extinction of wild populations. Many of the policies and programmes intended to prevent extinctions operate on the assumption that the factors driving the decline of a population can be determined. Exogenous factors that cause declines in abundance can be statistically confounded with endogenous factors such as density dependence. To demonstrate the potential for confounding, we used an experiment where replicated populations were driven to extinction by gradually manipulating habitat quality. In many of the replicated populations, habitat quality and density dependence were confounded, which obscured causal inference. Our results show that confounding is likely to occur when the exogenous factors that are driving the decline change gradually over time. Our study has direct implications for wild populations, because many factors that could drive a population to extinction change gradually through time.

  7. A practical approach for the scale-up of roller compaction process.

    PubMed

    Shi, Weixian; Sprockel, Omar L

    2016-09-01

    An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method, which was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with a DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run, which served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with one experiment point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.
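The one-batch transfer idea can be sketched as fitting a pilot-scale response model and then correcting only its intercept with a single commercial-scale run (factors, coefficients, and the measured value below are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)

# Pilot-scale DoE: ribbon density vs. roll pressure (bar) and roll gap (mm).
# The "true" coefficients here are invented for illustration.
P = rng.uniform(40, 120, 16)
G = rng.uniform(1.5, 4.0, 16)
density_pilot = 0.9 + 0.004 * P - 0.05 * G + 0.01 * rng.standard_normal(16)

X = np.column_stack([np.ones(16), P, G])
beta, *_ = np.linalg.lstsq(X, density_pilot, rcond=None)

# One commercial-scale calibration run at known settings: the intercept
# is shifted so the model reproduces that single observation.
P0, G0 = 80.0, 2.5
observed_commercial = 1.30                  # hypothetical measured density
offset = observed_commercial - beta @ [1.0, P0, G0]

def predict_commercial(p, g):
    return beta @ [1.0, p, g] + offset

resid = predict_commercial(P0, G0) - observed_commercial
```

The slope structure learned at pilot scale is retained; only the level is recalibrated, which is why a single commercial batch suffices in this scheme.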

  8. Identifying aspects of neighbourhood deprivation associated with increased incidence of schizophrenia.

    PubMed

    Bhavsar, Vishal; Boydell, Jane; Murray, Robin; Power, Paddy

    2014-06-01

    Several studies have found an association between area deprivation and incidence of schizophrenia. However, not all studies have concurred, and definitions of deprivation have varied between studies. Relative deprivation and inequality seem to be particularly important, but which aspects of deprivation matter, or how this effect might operate, is not known. The Lambeth Early Onset case register is a database of all cases of first episode psychosis aged 16 to 35 years from the London Borough of Lambeth, a highly urban area. We identified 405 people with first onset schizophrenia who presented between 2000 and 2007. We calculated the overall incidence of first onset schizophrenia and tested for an association with area-level deprivation, using a multi-domain index of deprivation (IMD 2004). Specific analyses of associations with individual sub-domains of deprivation were then undertaken. Incidence rates, directly standardized for age and gender, were calculated for Lambeth at two geographical levels (small and large neighbourhood level). The Poisson regression model predicting incidence rate ratios for schizophrenia using the overall deprivation score was statistically significant at both levels after adjusting for ethnicity, ethnic density, population density and population turnover. The incidence rate ratio for electoral ward deprivation was 1.03 (95% CI=1.004-1.04) and for super output area deprivation was 1.04 (95% CI=1.02-1.06). The individual domains of crime, employment deprivation and educational deprivation were statistically significant predictors of incidence but, after adjusting for the other domains as well as age, gender, ethnicity and population density, only crime and educational deprivation remained statistically significant. Low income, poor housing and deprived living environment did not predict incidence.
In a highly urban area, an association was found between area-level deprivation and incidence of schizophrenia, after controlling for age, gender, ethnicity and population density; high crime and low levels of education accounted for this. As both of these are potentially modifiable, this suggests a possible means to reduce the incidence of schizophrenia. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  9. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
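The concept the graphic teaches can be verified numerically: the area under a probability density function over an interval is a probability, and the total area is 1. A sketch for the standard normal density:

```python
import numpy as np

# Dense grid over effectively the whole support of the standard normal
x = np.linspace(-8, 8, 200001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density

# Riemann-sum approximations of areas under the curve
total_area = pdf.sum() * dx                    # total probability, ~1
mask = (x >= -1) & (x <= 1)
p_within_1sd = pdf[mask].sum() * dx            # P(-1 < X < 1), ~0.683
```

This is the same interactive idea as the Excel graphic: shading a region under the curve and reading its area as a probability.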

  10. Snowmobile impacts on snowpack physical and mechanical properties

    NASA Astrophysics Data System (ADS)

    Fassnacht, Steven R.; Heath, Jared T.; Venable, Niah B. H.; Elder, Kelly J.

    2018-03-01

    Snowmobile use is a popular form of winter recreation in Colorado, particularly on public lands. To examine the effects of differing levels of use on snowpack properties, experiments were performed at two different areas, Rabbit Ears Pass near Steamboat Springs and at Fraser Experimental Forest near Fraser, Colorado USA. Differences between no use and varying degrees of snowmobile use (low, medium and high) on shallow (the operational standard of 30 cm) and deeper snowpacks (120 cm) were quantified and statistically assessed using measurements of snow density, temperature, stratigraphy, hardness, and ram resistance from snow pit profiles. A simple model was explored that estimated snow density changes from snowmobile use based on experimental results. Snowpack property changes were more pronounced for thinner snow accumulations. When snowmobile use started in deeper snow conditions, there was less difference in density, hardness, and ram resistance compared to the control case of no snowmobile use. These results have implications for the management of snowmobile use in times and places of shallower snow conditions where underlying natural resources could be affected by denser and harder snowpacks.

  11. Correlation between Na/K ratio and electron densities in blood samples of breast cancer patients.

    PubMed

    Topdağı, Ömer; Toker, Ozan; Bakırdere, Sezgin; Bursalıoğlu, Ertuğrul Osman; Öz, Ersoy; Eyecioğlu, Önder; Demir, Mustafa; İçelli, Orhan

    2018-05-31

    The main purpose of this study was to investigate the relationship between electron densities and the Na/K ratio, which has an important role in breast cancer. Determinations of sodium and potassium concentrations in blood samples were performed with inductively coupled plasma atomic emission spectrometry. Electron density values of the blood samples were determined via ZXCOM. Statistical analyses were performed for electron densities and the Na/K ratio, including the Kolmogorov-Smirnov normality test, Spearman's rank correlation test and the Mann-Whitney U test. It was found that the electron densities differ significantly between the control and breast cancer groups. In addition, a statistically significant positive correlation was found between the electron density and the Na/K ratio in the breast cancer group.
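The two tests reported here have direct SciPy counterparts; a sketch with simulated values (magnitudes loosely inspired by blood electron densities, but entirely hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical Na/K ratios and electron densities for one group,
# with a monotone association built in to mirror the reported finding.
na_k = rng.uniform(25, 45, 40)
electron_density = 3.3e23 + 2.0e20 * na_k + 1.0e21 * rng.standard_normal(40)

# Spearman's rank correlation between Na/K ratio and electron density
rho, p_corr = stats.spearmanr(na_k, electron_density)

# Control vs. cancer group comparison, as with the Mann-Whitney U test
control = rng.normal(3.30e23, 5e20, 30)
cancer = rng.normal(3.36e23, 5e20, 30)
u_stat, p_group = stats.mannwhitneyu(control, cancer)
```

Spearman's rho is used rather than Pearson's r because only a monotone (not necessarily linear) association is assumed, consistent with the non-normality the study tested for.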

  12. Atomistic mechanisms of ReRAM cell operation and reliability

    NASA Astrophysics Data System (ADS)

    Pandey, Sumeet C.

    2018-01-01

    We present results from first-principles-based modeling that captures functionally important physical phenomena critical to cell materials selection, operation, and reliability for resistance-switching memory technologies. An atomic-scale description of retention, the low- and high-resistance states (RS), and the sources of intrinsic cell-level variability in ReRAM is discussed. Through results obtained from density functional theory, non-equilibrium Green's function, molecular dynamics, and kinetic Monte Carlo simulations, the role of variable-charge vacancy defects and metal impurities in determining the RS, the LRS stability, and electron conduction in such RS is reported. Although the statistical electrical characteristics of oxygen-vacancy ReRAM (Ox-ReRAM) and conductive-bridging RAM (M-ReRAM) are notably different, similar underlying electrochemical phenomena describe retention and the formation/dissolution of the RS.

  13. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm.

    PubMed

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim

    2017-06-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
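Independent of the paper's STEM/AFM specifics, the core computation (an ACF obtained via the power spectrum and then radially averaged under the isotropy assumption) can be sketched in 2D on a synthetic density map:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic correlated density map: white noise smoothed in Fourier space.
# This stands in for a measured mass-density image; all values synthetic.
n = 128
field = rng.standard_normal((n, n))
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
smooth = np.exp(-(kx**2 + ky**2) * 800)        # low-pass kernel
density = np.fft.ifft2(np.fft.fft2(field) * smooth).real
density -= density.mean()

# Wiener-Khinchin: the (circular) ACF is the inverse FFT of the power
# spectrum; dividing by pixel count gives the mean product per lag.
power = np.abs(np.fft.fft2(density)) ** 2
acf2d = np.fft.ifft2(power).real / density.size
acf2d /= acf2d[0, 0]                           # normalize so ACF(0) = 1

# Radial average over lag magnitude, valid under the isotropy assumption
ix = np.minimum(np.arange(n), n - np.arange(n))
r = np.hypot(ix[:, None], ix[None, :]).astype(int)
acf_radial = np.bincount(r.ravel(), weights=acf2d.ravel()) / np.bincount(r.ravel())
```

Isotropy is what lets a full 2D (or 3D) ACF collapse to a one-dimensional function of lag distance, which is the same statistical reduction the paper exploits.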

  14. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm

    PubMed Central

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A.; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S.; Subramanian, Hariharan; Dravid, Vinayak P.; Backman, Vadim

    2018-01-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass–density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass–density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass–density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass–density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass–density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes. PMID:28416035

  15. Chemometric study on the electrochemical incineration of diethylenetriaminepentaacetic acid using boron-doped diamond anode.

    PubMed

    Xian, Jiahui; Liu, Min; Chen, Wei; Zhang, Chunyong; Fu, Degang

    2018-05-01

    The electrochemical incineration of diethylenetriaminepentaacetic acid (DTPA) with a boron-doped diamond (BDD) anode was initially performed under galvanostatic conditions. The main and interaction effects of four operating parameters (flow rate, applied current density, sulfate concentration and initial DTPA concentration) on mineralization performance were investigated. Under similar experimental conditions, Doehlert matrix (DM) and central composite rotatable design (CCRD) were used as statistical multivariate methods in the optimization of the anodic oxidation processes. A comparison between the DM model and the CCRD model revealed that the former was more accurate, possibly due to the larger number of operating levels employed (seven levels for two variables). Despite this, the two models yielded quite similar optimum operating conditions. The maximum TOC removal percentages at 180 min were 76.2% and 73.8% for DM and CCRD, respectively. In addition, with the aid of quantum chemistry calculation and LC/MS analysis, a plausible degradation sequence of DTPA on the BDD anode was also proposed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Does fluoride influence oviposition of Anopheles stephensi in stored water habitats in an urban setting?

    PubMed

    Thomas, Shalu; Ravishankaran, Sangamithra; Johnson Amala Justin, N A; Asokan, Aswin; Maria Jusler Kalsingh, T; Mathai, Manu Thomas; Valecha, Neena; Eapen, Alex

    2016-11-09

    The physico-chemical characteristics of lentic aquatic habitats greatly influence mosquito species in selecting suitable oviposition sites; immature development, pupation, and adult emergence are therefore tied to the preferred ecological niche. Correlating water quality parameters with mosquito breeding, as well as with immature vector density, is useful for vector control operations in identifying and targeting potential breeding habitats. A total of 40 known habitats of Anopheles stephensi, randomly selected based on a parallel vector survey, were inspected for the physical and chemical nature of the aquatic environment. Water samples were collected four times during 2013, representing four seasons (i.e., ten habitats per season). The physico-chemical variables and mosquito breeding were statistically analysed to find their correlation with the immature density of An. stephensi and also with co-inhabitation by other mosquito species. Anopheles stephensi prefers water with low nitrite content and high phosphate content. Parameters such as total dissolved solids, electrical conductivity, total hardness, chloride, fluoride and sulfate had a positive correlation in habitats with any mosquito species breeding (p < 0.05) and also in habitats with An. stephensi alone breeding. Fluoride was observed to have a strong positive correlation with immature density of An. stephensi in both overhead tanks and wells. Knowledge of the larval ecology of vector mosquitoes is a key factor in risk assessment and for implementing appropriate and sustainable vector control operations. The presence of fluoride in potential breeding habitats and its strong positive correlation with An. stephensi immature density is useful information, as fluoride can be considered an indicator/predictor of vector breeding. Effective larval source management can be focussed on specified habitats in vulnerable areas to reduce vector abundance and malaria transmission.

  17. Nondestructive Evaluation (NDE) Technology Initiatives (NTIP). Delivery Order 0039: Statistical Comparison of Competing Material Models

    DTIC Science & Technology

    2003-01-01

    adapted from Kass and Raftery (1995) and Congdon (2001). ... density adjusted for resin content, z, since resin contributes to the density... (cf. Congdon, 2001). How to Download the WinBUGS Software Package: BUGS was originally a statistical research project at the Medical Research... Likelihood Estimation," July 2002, working paper to be published. 18) Congdon, Peter, Bayesian Statistical Modeling, Wiley, 2001. 19) Cox, D. R. and

  18. Breast density and parenchymal texture measures as potential risk factors for estrogen-receptor positive breast cancer

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2014-03-01

    Accurate assessment of a woman's risk to develop specific subtypes of breast cancer is critical for appropriate utilization of chemopreventative measures, such as tamoxifen for preventing estrogen-receptor positive breast cancer. In this context, we investigate quantitative measures of breast density and parenchymal texture, measures of glandular tissue content and tissue structure, as risk factors for estrogen-receptor positive (ER+) breast cancer. Mediolateral oblique (MLO) view digital mammograms of the contralateral breast from 106 women with unilateral invasive breast cancer were retrospectively analyzed. Breast density and parenchymal texture were analyzed via fully-automated software. Logistic regression with feature selection was performed to predict ER+ versus ER- cancer status. A combined model considering all extracted imaging measures was compared to baseline models consisting of density-alone and texture-alone features. Area under the curve (AUC) of the receiver operating characteristic (ROC) and DeLong's test were used to compare the models' discriminatory capacity for receptor status. The density-alone model had a discriminatory capacity of 0.62 AUC (p=0.05). The texture-alone model had a higher discriminatory capacity of 0.70 AUC (p=0.001), which was not significantly different from the density-alone model (p=0.37). In contrast, the combined density-texture logistic regression model had a discriminatory capacity of 0.82 AUC (p<0.001), which was statistically significantly higher than both the density-alone (p<0.001) and texture-alone regression models (p=0.04). The combination of breast density and texture measures may have the potential to identify women specifically at risk for estrogen-receptor positive breast cancer and could be useful in triaging women into appropriate risk-reduction strategies.
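The models above are ranked by ROC AUC. As an aside, an AUC can be computed directly through its equivalence with the Mann-Whitney U statistic. The sketch below uses entirely synthetic labels and scores (it is not the study's data, and DeLong's variance calculation is omitted):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200)                    # 1 = ER+, 0 = ER- (synthetic)
score = y * 0.8 + rng.normal(scale=0.7, size=200)   # made-up classifier output

# AUC = U / (n1 * n0), with U taken for the positive-class scores
u = mannwhitneyu(score[y == 1], score[y == 0]).statistic
auc = u / ((y == 1).sum() * (y == 0).sum())
print(auc > 0.5)                                    # informative scores beat chance
```

With the signal-to-noise ratio chosen here the AUC lands well above 0.5, illustrating the kind of discriminatory-capacity number the abstract reports.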

  19. Analysis of operational requirements for medium density air transportation. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The medium density air travel market was studied to determine the aircraft design and operational requirements. The impact of operational characteristics on the air travel system and the economic viability of the study aircraft were also evaluated. Medium density is defined in terms of numbers of people transported (20 to 500 passengers per day on round trip routes) and frequency of service (a minimum of two and a maximum of eight round trips per day) for 10 regional carriers. The operational characteristics of aircraft best suited to serve the medium density air transportation market were determined, and a basepoint aircraft was designed from which tradeoff studies and parametric variations could be conducted. The impact of selected aircraft on the medium density market, economics, and operations was ascertained. Research and technology objectives for future programs in medium density air transportation were identified and ranked.

  20. Theory and analysis of statistical discriminant techniques as applied to remote sensing data

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1973-01-01

    Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.

  1. Statistical distribution of building lot frontage: application for Tokyo downtown districts

    NASA Astrophysics Data System (ADS)

    Usui, Hiroyuki

    2018-03-01

    The frontage of a building lot is the determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable to identify potential districts which comprise a high percentage of building lots with narrow frontage after subdivision and to reconsider the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and the density of buildings and roads has not been fully researched. In this paper, based on the empirical study in the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the statistical distribution of building lot frontages statistically follows a log-normal distribution, whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages, and that of the ratio of the number of building lot frontages to that of buildings are approximately equal to 0.60 and 1.19, respectively.
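Finding (1) above is an ordinary model-comparison exercise: fit both candidate families by maximum likelihood and compare their fit. A minimal SciPy sketch on synthetic frontage data (not the Tokyo survey data; the log-normal parameters are invented, and the location is fixed at zero since frontages are positive lengths):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
frontage = rng.lognormal(mean=1.5, sigma=0.6, size=1000)  # synthetic, metres

# Maximum-likelihood fits for the two candidate models, location fixed at 0
ln_shape, _, ln_scale = stats.lognorm.fit(frontage, floc=0)
g_shape, _, g_scale = stats.gamma.fit(frontage, floc=0)

ll_ln = stats.lognorm.logpdf(frontage, ln_shape, 0, ln_scale).sum()
ll_g = stats.gamma.logpdf(frontage, g_shape, 0, g_scale).sum()

# Both models have two free parameters, so AIC ranks the same way as
# the log-likelihood does here
print(ll_ln > ll_g)
```

On data that really are log-normal the log-normal fit wins, which is the direction of the comparison reported for the Tokyo districts.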

  2. Global statistics of liquid water content and effective number density of water clouds over ocean derived from combined CALIPSO and MODIS measurements

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.

    2007-03-01

    This study presents an empirical relation that links layer integrated depolarization ratios, the extinction coefficients, and effective radii of water clouds, based on Monte Carlo simulations of CALIPSO lidar observations. Combined with cloud effective radius retrieved from MODIS, cloud liquid water content and effective number density of water clouds are estimated from CALIPSO lidar depolarization measurements in this study. Global statistics of the cloud liquid water content and effective number density are presented.

  3. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
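The plug-in route this abstract describes (estimate the density with a kernel, then take the expectation of its logarithm) is easy to sketch. The fragment below is an illustration using SciPy's `gaussian_kde` on synthetic data, not the authors' MATLAB implementation; the bandwidth rule and sample size are arbitrary choices:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=2000)             # synthetic sample; N(0, 1) so we can check

kde = gaussian_kde(x)                 # Gaussian kernel, Scott's-rule bandwidth
f = kde(x)                            # estimated density at the sample points

shannon = -np.mean(np.log(f))         # plug-in Shannon entropy (nats)
quadratic = -np.log(np.mean(f))       # Renyi order-2 ("quadratic") entropy

# For N(0,1): Shannon = 0.5*ln(2*pi*e) ~ 1.419, quadratic = ln(2*sqrt(pi)) ~ 1.266
print(round(shannon, 2), round(quadratic, 2))
```

The quadratic entropy computed on the last line is the quantity tied to the Friedman-Tukey index mentioned in the abstract.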

  4. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km² area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km² (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  5. Hunting high and low: disentangling primordial and late-time non-Gaussianity with cosmic densities in spheres

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.

    2018-03-01

    Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h⁻¹ down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.

  6. Assessing the effects of lumbar posterior stabilization and fusion to vertebral bone density in stabilized and adjacent segments by using Hounsfield unit

    PubMed Central

    Öksüz, Erol; Deniz, Fatih Ersay; Demir, Osman

    2017-01-01

    Background: Computed tomography (CT) with Hounsfield units (HU) is being used with increasing frequency for determining bone density, and correlations between HU and bone density have been established in the literature. The aim of this retrospective study was to determine the bone density changes of the stabilized and adjacent segment vertebral bodies by comparing HU values before and after lumbar posterior stabilization. Methods: Sixteen patients with similar diagnoses of lumbar spondylosis and stenosis were evaluated in this study. The same surgical procedure was performed on all of the patients: L2-3-4-5 transpedicular screw fixation, fusion and L3-4 total laminectomy. Bone mineral density measurements were obtained with clinical CT from the stabilized and adjacent segment vertebral bodies. Densities of vertebral bodies were evaluated in HU before surgery and approximately one year after surgery, and the preoperative HU value of each vertebra was compared with the postoperative HU value of the same vertebra using statistical analysis. Results: The HU values of vertebrae in the stabilized and adjacent segments consistently decreased after the operations. There were significant differences between the preoperative and postoperative HU values of all the evaluated vertebral bodies in the stabilized and adjacent segments. Additionally, first sacral vertebra HU values were found to be significantly higher than lumbar vertebra HU values both preoperatively and postoperatively. Conclusions: Decrease in the bone density of the adjacent segment vertebral bodies may be one of the major predisposing factors for adjacent segment disease (ASD). PMID:29354730

  7. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  8. Real-time movement detection and analysis for video surveillance applications

    NASA Astrophysics Data System (ADS)

    Hueber, Nicolas; Hennequin, Christophe; Raymond, Pierre; Moeglin, Jean-Pierre

    2014-06-01

    Pedestrian movement along critical infrastructures such as pipes, railways or highways is of major interest in surveillance applications, as is pedestrian behavior in urban environments. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system which delivers high-level statistics and reports alerts in specific cases. This situational awareness project leads us to manage the scene efficiently by performing movement analysis. A dynamic background extraction algorithm is developed to reach the required degree of robustness against natural and urban environment perturbations and also to meet the embedded implementation constraints. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights pedestrian preferential paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and support our approach of a large sensor network for delivering a precise operational picture without overwhelming a supervisor.

  9. Royal jelly and bee pollen decrease bone loss due to osteoporosis in an oophorectomized rat model.

    PubMed

    Kafadar, Ibrahim Halil; Güney, Ahmet; Türk, Cemil Yildirim; Oner, Mithat; Silici, Sibel

    2012-01-01

    In this study, we aimed to investigate whether royal jelly and bee pollen reduce the bone loss due to osteoporosis in an oophorectomized rat model. Thirty-two mature six-month-old female Sprague-Dawley rats, weighing 180-260 g, were used in the study. The rats were divided into four groups: a sham-operation group, an oophorectomy-only group, an oophorectomy plus royal jelly group, and an oophorectomy plus bee pollen group. The rats were sacrificed within 12 weeks following surgery. Bone mineral density (BMD) was measured and blood samples were collected for biochemical analysis before sacrifice. Following sacrifice, uterine weights were measured and tissue samples were taken to determine bone calcium and phosphate levels, with imaging through scanning electron microscopy. The uterine weights of the rats were higher in the sham-operation group than in the other groups, and the difference among the groups was statistically significant (p=0.001). Total body BMD results were similar in all groups, with no statistically significant difference (p=0.19). The lumbar spine and proximal femur BMD results were statistically significantly higher in the royal jelly and bee pollen groups compared to the oophorectomy-only group (p=0.001). Bone tissue calcium and phosphate levels were higher in the royal jelly and bee pollen groups. Royal jelly and bee pollen decrease the bone loss due to osteoporosis in the oophorectomized rat model. These results may contribute to clinical practice.

  10. Matrix product state representation of quasielectron wave functions

    NASA Astrophysics Data System (ADS)

    Kjäll, J.; Ardonne, E.; Dwivedi, V.; Hermanns, M.; Hansson, T. H.

    2018-05-01

    Matrix product state techniques provide a very efficient way to numerically evaluate certain classes of quantum Hall wave functions that can be written as correlators in two-dimensional conformal field theories. Important examples are the Laughlin and Moore-Read ground states and their quasihole excitations. In this paper, we extend the matrix product state techniques to evaluate quasielectron wave functions, a more complex task because the corresponding conformal field theory operator is not local. We use our method to obtain density profiles for states with multiple quasielectrons and quasiholes, and to calculate the (mutual) statistical phases of the excitations with high precision. The wave functions we study are subject to a known difficulty: the position of a quasielectron depends on the presence of other quasiparticles, even when their separation is large compared to the magnetic length. Quasielectron wave functions constructed using the composite fermion picture, which are topologically equivalent to the quasielectrons we study, have the same problem. This flaw is serious in that it gives wrong results for the statistical phases obtained by braiding distant quasiparticles. We analyze this problem in detail and show that it originates from an incomplete screening of the topological charges, which invalidates the plasma analogy. We demonstrate that this can be remedied in the case when the separation between the quasiparticles is large, which allows us to obtain the correct statistical phases. Finally, we propose that a modification of the Laughlin state, that allows for local quasielectron operators, should have good topological properties for arbitrary configurations of excitations.

  11. An Attached Payload Operations Center (APOC) at the Goddard Space Flight Center (GSFC), volume 2

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An overview of the APOC is given. For Spacelab payloads channel 2 and 3 data are input via a Statistical Multiplexer (SM) to the various SIPS functions. These include recording of the data on High Density Recorders (HDR), DQM and demultiplexing of the composite data stream by the High Rate Demultiplexer (HRDM). This system performs the inverse functions of the onboard Spacelab High Rate Multiplexer (HRM) enabling access to the data streams as multiplexed onboard the Spacelab. The contents and characteristics of channels one, two and three data as downlinked by the Tracking and Data Relay Satellite System (TDRSS) ku-band are given.

  12. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low- and high-density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; these also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  13. Variability of footprint ridge density and its use in estimation of sex in forensic examinations.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Pathania, Annu; Sharma, Ruchika; DiMaggio, John A

    2015-10-01

    The present study deals with a comparatively new biometric parameter of footprints called footprint ridge density. The study attempts to evaluate sex-dependent variations in ridge density in different areas of the footprint and its usefulness in discriminating sex in the young adult population of north India. The sample for the study consisted of 160 young adults (121 females) from north India. The left and right footprints were taken from each subject according to the standard procedures. The footprints were analysed using a 5 mm × 5 mm square and the ridge density was calculated in four different well-defined areas of the footprints. These were: F1 - the great toe on its proximal and medial side; F2 - the medial ball of the footprint, below the triradius (the triradius is a Y-shaped group of ridges on finger balls, palms and soles which forms the basis of ridge counting in identification); F3 - the lateral ball of the footprint, towards the most lateral part; and F4 - the heel in its central part, where the maximum breadth at the heel is cut by a perpendicular line drawn from the most posterior point on the heel. This value represents the number of ridges in a 25 mm² area and reflects the ridge density value. Ridge densities analysed on different areas of footprints were compared with each other using the Friedman test for related samples. The total footprint ridge density was calculated as the sum of the ridge density in the four areas of footprints included in the study (F1 + F2 + F3 + F4). The results show that the mean footprint ridge density was higher in females than males in all the designated areas of the footprints. The sex differences in footprint ridge density were observed to be statistically significant in the analysed areas of the footprint, except for the heel region of the left footprint. The total footprint ridge density was also observed to be significantly higher among females than males. A statistically significant correlation is shown in the ridge densities among most areas of both left and right sides. Based on receiver operating characteristic (ROC) curve analysis, the sexing potential of footprint ridge density was observed to be considerably higher on the right side. The sexing potential for the four areas ranged between 69.2% and 85.3% on the right side, and between 59.2% and 69.6% on the left side. ROC analysis of the total footprint ridge density shows that the sexing potential of the right and left footprint was 91.5% and 77.7% respectively. The study concludes that footprint ridge density can be utilised in the determination of sex as a supportive parameter. The findings of the study should be utilised only on the north Indian population and may not be internationally generalisable. © The Author(s) 2014.

  14. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  16. How Much Water is in That Snowpack? Improving Basin-wide Snow Water Equivalent Estimates from the Airborne Snow Observatory

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Painter, T. H.; Marks, D. G.; Kirchner, P. B.; Winstral, A. H.; Ramirez, P.; Goodale, C. E.; Richardson, M.; Berisford, D. F.

    2014-12-01

    In the western US, snowmelt from the mountains contributes the vast majority of the fresh water supply in an otherwise dry region. With much of California currently experiencing extreme drought, it is critical for water managers to have accurate basin-wide estimates of snow water content during the spring melt season. At the forefront of basin-scale snow monitoring is the Jet Propulsion Laboratory's Airborne Snow Observatory (ASO). With combined LiDAR/spectrometer instruments and weekly flights over key basins throughout California, the ASO suite is capable of retrieving high-resolution basin-wide snow depth and albedo observations. To make best use of these high-resolution snow depths, spatially distributed snow density data are required to derive snow water equivalent (SWE) from the measured depths. Snow density is a spatially and temporally variable property and is difficult to estimate at basin scales. Currently, ASO uses a physically based snow model (iSnobal) to resolve distributed snow density dynamics across the basin. However, there are issues with the density algorithms in iSnobal, particularly with snow depths below 0.50 m. This shortcoming limited the use of snow density fields from iSnobal during the poor snowfall year of 2014 in the Sierra Nevada, where snow depths were generally low. A deeper understanding of iSnobal model performance and uncertainty for snow density estimation is required. In this study, the model is compared to an existing climate-based statistical method for basin-wide snow density estimation in the Tuolumne basin in the Sierra Nevada, as well as to sparse field density measurements. The objective of this study is to improve the water resource information provided to water managers during future ASO operations by reducing the uncertainty introduced during the snow depth to SWE conversion.
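The depth-to-SWE conversion that makes the density estimate matter is a simple product of depth and relative density. A toy example with illustrative numbers (not ASO or Tuolumne values):

```python
# SWE = snow depth x (bulk snow density / water density); values invented
depth_m = 1.2                 # LiDAR-derived snow depth (m)
rho_snow = 350.0              # modelled bulk snow density (kg/m^3)
rho_water = 1000.0            # density of water (kg/m^3)

swe_mm = depth_m * (rho_snow / rho_water) * 1000.0  # SWE in mm water equivalent
print(swe_mm)  # 420.0
```

Because SWE scales linearly with density, any bias in the modelled density field (e.g. for shallow snowpacks) propagates directly into the basin-wide water estimate.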

  17. The 2005 Project Progress Report for 1987-099-00 Dworshak Kokanee Population and Entrainment Assessment (contract # 16791) is attached to project 1987-099-00, contract # 26850. [POINTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2008-12-18

    During this contract, we continued testing underwater strobe lights to determine their effectiveness at repelling kokanee Oncorhynchus nerka away from Dworshak Dam. We tested one set of nine strobe lights flashing at a rate of 360 flashes/min in front of turbine 3 while operating at higher discharges than previously tested. The density and distribution of fish (thought to be mostly kokanee) were monitored with a split-beam echo sounder. We then compared fish counts and densities during nights when the lights were flashing to counts and densities during adjacent nights without the lights on. On five nights between January 31 and February 28, 2006, when no lights were present, fish counts near turbine 3 averaged eight fish and densities averaged 91 fish/ha. When strobe lights were turned on during five adjacent nights during the same period, mean counts dropped to four fish and densities dropped to 35 fish/ha. The decline in counts (49%) was not statistically significant (p = 0.182), but the decline in densities (62%) was significant (p = 0.049). There appeared to be no tendency for fish to habituate to the lights during the night. Test results indicated that strobe lights were able to reduce fish densities by at least 50% in front of turbines operating at higher discharges, which would be sufficient to improve sportfish harvest. We also used split-beam hydroacoustics to monitor the kokanee population in Dworshak Reservoir during 2005. Estimated abundance of kokanee decreased from the 2004 population estimate. Based on hydroacoustic surveys, we estimated 3,011,626 kokanee (90% CI ± 15.2%) in Dworshak Reservoir in July 2005. This included 2,135,986 age-0 (90% CI ± 15.9%), 769,175 age-1 (90% CI ± 16.0%), and 107,465 age-2 (90% CI ± 15.2%) fish. Poor survival of kokanee from age-1 to age-2 continued to keep age-2 densities below the management goal of 30-50 adults/ha.
    Entrainment sampling was conducted with fixed-site split-beam hydroacoustics a minimum of two days per month for a continuous 24 h period when dam operations permitted. The highest fish detection rates from entrainment assessments were again found during nighttime periods and the lowest during the day. Fish detection rates were low during high discharges throughout the spring and summer and highest during low discharges in September and November. High discharge during drawdowns for anadromous fish flows in July and August again resulted in low detection rates and susceptibility to entrainment. Index counts of spawning kokanee in four tributary streams totaled 12,742 fish. These data fit the previously developed relationship between spawner counts and adult kokanee abundance in the reservoir.
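    The night-by-night comparison described above can be sketched as an exact permutation test on the difference in mean density. The nightly values below are hypothetical, chosen only to mirror the reported group means (about 91 vs. 35 fish/ha), so the resulting p-value does not reproduce the study's.

```python
from itertools import combinations

# Hypothetical nightly fish densities (fish/ha); not the study's raw data.
lights_off = [105.0, 88.0, 97.0, 80.0, 85.0]   # mean = 91
lights_on  = [30.0, 42.0, 28.0, 38.0, 37.0]    # mean = 35

def exact_permutation_p(a, b):
    """Two-sided exact permutation test on the difference in group means."""
    pooled = a + b
    n, k = len(pooled), len(a)
    observed = abs(sum(a) / k - sum(b) / (n - k))
    count = total = 0
    for idx in combinations(range(n), k):
        grp_a = [pooled[i] for i in idx]
        grp_b = [pooled[i] for i in range(n) if i not in idx]
        diff = abs(sum(grp_a) / k - sum(grp_b) / (n - k))
        total += 1
        if diff >= observed - 1e-12:
            count += 1
    return count / total

p = exact_permutation_p(lights_off, lights_on)
print(f"decline = {1 - sum(lights_on)/sum(lights_off):.0%}, p = {p:.3f}")  # decline = 62%, p = 0.008
```

    With only five nights per condition there are C(10,5) = 252 relabelings, so enumerating them exactly is cheap and avoids any distributional assumption.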

  18. Uncertainty Management for Diagnostics and Prognostics of Batteries using Bayesian Techniques

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai

    2007-01-01

    Uncertainty management has always been the key hurdle faced by diagnostics and prognostics algorithms. A Bayesian treatment of this problem provides an elegant and theoretically sound approach to the modern Condition-Based Maintenance (CBM)/Prognostic Health Management (PHM) paradigm. The application of Bayesian techniques to regression and classification in the form of the Relevance Vector Machine (RVM), and to state estimation as in Particle Filters (PF), provides a powerful tool to integrate the diagnosis and prognosis of battery health. The RVM, which is a Bayesian treatment of the Support Vector Machine (SVM), is used for model identification, while the PF framework uses the learnt model, statistical estimates of noise and anticipated operational conditions to provide estimates of remaining useful life (RUL) in the form of a probability density function (PDF). This type of prognostics adds significant value to the management of any operation involving electrical systems.
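    The PF stage can be sketched with a minimal particle filter. Everything below is an assumption for illustration, not the paper's model: an exponential capacity-fade law C_k = exp(-r·k), Gaussian measurement noise, and a fixed end-of-life threshold. The hidden fade rate r is tracked by the filter, and the RUL distribution is read off the particle cloud.

```python
import math
import random

random.seed(0)

EOL = 0.7    # assumed end-of-life threshold (fraction of initial capacity)
N = 2000     # number of particles
true_r = 0.005

# Synthetic capacity observations from the assumed fade model plus noise
obs = [math.exp(-true_r * k) + random.gauss(0.0, 0.01) for k in range(30)]

particles = [random.uniform(0.002, 0.01) for _ in range(N)]  # prior over r

for k, z in enumerate(obs):
    # Importance weights from the Gaussian measurement model
    w = [math.exp(-0.5 * ((z - math.exp(-r * k)) / 0.01) ** 2) for r in particles]
    s = sum(w)
    w = [x / s for x in w]
    # Systematic resampling with small jitter (regularization)
    new, u = [], random.random() / N
    c, i = w[0], 0
    for j in range(N):
        target = u + j / N
        while c < target and i < N - 1:
            i += 1
            c += w[i]
        new.append(particles[i] + random.gauss(0.0, 1e-4))
    particles = new

# RUL per particle: cycles until the fade model crosses the EOL threshold
k_now = len(obs) - 1
rul = [math.log(1.0 / EOL) / r - k_now for r in particles if r > 0]
rul = [x for x in rul if x > 0]
mean_rul = sum(rul) / len(rul)
print(f"mean RUL = {mean_rul:.1f} cycles")
```

    The list `rul` is a sample from the RUL PDF that the abstract refers to; in practice one would report percentiles of it rather than just the mean.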

  19. Noise behavior of microwave amplifiers operating under nonlinear conditions

    NASA Astrophysics Data System (ADS)

    Escotte, L.; Gonneau, E.; Chambon, C.; Graffeuil, J.

    2005-12-01

    The noise behavior of microwave amplifiers operating under large-signal conditions has been studied in this paper. Gaussian noise is added to a microwave signal and both are applied at the input of several amplifying devices. Experimental data show a decrease in the output noise spectral density, due to compression of the amplifiers, as the power of the microwave signal at the input of the devices increases. A distortion component due to the interaction of the signal and its harmonics with the noise is also demonstrated from a simplified theoretical model. The statistical properties of the signal and the noise have also been investigated in order to verify the Gaussianity of the noise at the output of the nonlinear circuits. We have also observed that the majority of the measured devices show some variation of their additive noise versus the input power level.

  20. Natural environment application for NASP-X-30 design and mission planning

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.; Hill, C. K.; Brown, S. C.; Batts, G. W.

    1993-01-01

    The NASA/MSFC Mission Analysis Program has recently been utilized in various National Aero-Space Plane (NASP) mission and operational planning scenarios. This paper focuses on presenting various atmospheric constraint statistics based on assumed NASP mission phases, using established natural environment design, parametric, threshold values. Probabilities of no-go are calculated using atmospheric parameters such as temperature, humidity, density altitude, peak/steady-state winds, cloud cover/ceiling, thunderstorms, and precipitation. The program, although developed to evaluate test or operational missions after flight constraints have been established, can provide valuable information in the design phase of the NASP X-30 program. Inputting the design values as flight constraints, the Mission Analysis Program returns the probability of no-go, or launch delay, by hour and by month. This output tells the X-30 program manager whether the design values are stringent enough to meet the required test flight schedules.

  1. ELUCID - Exploring the Local Universe with ReConstructed Initial Density Field III: Constrained Simulation in the SDSS Volume

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; Zhang, Youcai; Shi, JingJing; Jing, Y. P.; Liu, Chengze; Li, Shijie; Kang, Xi; Gao, Yang

    2016-11-01

    A method we developed recently for the reconstruction of the initial density field in the nearby universe is applied to the Sloan Digital Sky Survey Data Release 7. A high-resolution N-body constrained simulation (CS) of the reconstructed initial conditions, with 3072³ particles evolved in a 500 h⁻¹ Mpc box, is carried out and analyzed in terms of the statistical properties of the final density field and its relation with the distribution of Sloan Digital Sky Survey galaxies. We find that the statistical properties of the cosmic web and the halo populations are accurately reproduced in the CS. The galaxy density field is strongly correlated with the CS density field, with a bias that depends on both galaxy luminosity and color. Our further investigations show that the CS provides robust quantities describing the environments within which the observed galaxies and galaxy systems reside. Cosmic variance is greatly reduced in the CS so that the statistical uncertainties can be controlled effectively, even for samples of small volumes.

  2. Optimization of electrocoagulation process to treat grey wastewater in batch mode using response surface methodology.

    PubMed

    Karichappan, Thirugnanasambandham; Venkatachalam, Sivakumar; Jeganathan, Prakash Maran

    2014-01-10

    Discharge of grey wastewater into the ecological system has a negative impact on receiving water bodies. In the present study, the electrocoagulation (EC) process was investigated to treat grey wastewater under different operating conditions such as initial pH (4-8), current density (10-30 mA/cm2), electrode distance (4-6 cm) and electrolysis time (5-25 min), using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses: total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. The process variables showed a significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed in order to study the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, current density of 20 mA/cm2, electrode distance of 5 cm and electrolysis time of 20 min. These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiencies of TS, COD and FC.
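    The second-order polynomial fitting behind the optimization can be sketched in a simplified one-factor form (electrolysis time only, instead of the full four-factor BBD model). The response values below are illustrative, not the study's data.

```python
# Hypothetical COD removal (%) vs electrolysis time (min)
times   = [5.0, 10.0, 15.0, 20.0, 25.0]
removal = [55.0, 72.0, 83.0, 88.0, 86.0]

n = len(times)
# Normal equations for y = a + b*t + c*t^2 (3x3 system)
S = [[float(n), sum(times), sum(t * t for t in times)],
     [sum(times), sum(t * t for t in times), sum(t ** 3 for t in times)],
     [sum(t * t for t in times), sum(t ** 3 for t in times), sum(t ** 4 for t in times)]]
r = [sum(removal),
     sum(t * y for t, y in zip(times, removal)),
     sum(t * t * y for t, y in zip(times, removal))]

# Gauss-Jordan elimination
for i in range(3):
    piv = S[i][i]
    S[i] = [v / piv for v in S[i]]
    r[i] /= piv
    for j in range(3):
        if j != i:
            f = S[j][i]
            S[j] = [vj - f * vi for vj, vi in zip(S[j], S[i])]
            r[j] -= f * r[i]

a, b, c = r
t_opt = -b / (2 * c)   # stationary point of the fitted quadratic
print(f"fit: y = {a:.2f} + {b:.2f} t + {c:.4f} t^2, optimum near t = {t_opt:.1f} min")
```

    Setting the derivative of the fitted quadratic to zero gives the candidate optimum; in the full RSM analysis the same step is done on the four-variable second-order model.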

  3. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment

    PubMed Central

    Shahabi, Himan; Hashim, Mazlan

    2015-01-01

    This research presents GIS-based statistical models for landslide susceptibility mapping using geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which has a total of 92 landslide locations, was created from numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy 96%) is better in prediction than the AHP (accuracy 91%) and WLC (accuracy 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purposes and regional planning. PMID:25898919

  4. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment.

    PubMed

    Shahabi, Himan; Hashim, Mazlan

    2015-04-22

    This research presents GIS-based statistical models for landslide susceptibility mapping using geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which has a total of 92 landslide locations, was created from numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy 96%) is better in prediction than the AHP (accuracy 91%) and WLC (accuracy 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purposes and regional planning.
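    The ROC validation step can be illustrated with the rank-based identity AUC = P(score at a landslide cell > score at a non-landslide cell), here on hypothetical susceptibility scores.

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen landslide cell scores higher than a non-landslide cell."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical susceptibility scores at validation cells
landslide    = [0.91, 0.84, 0.77, 0.66, 0.58]
no_landslide = [0.62, 0.45, 0.38, 0.29, 0.15]
print(roc_auc(landslide, no_landslide))  # 0.96
```

    For large grids one would sort once and use ranks instead of the O(n·m) double loop, but the value is the same.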

  5. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
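    In the classic equal-variance-normal case the abstract starts from, the optimal linear function is the Fisher direction w = S⁻¹(x̄_cases − x̄_controls); a minimal 2-D sketch with made-up samples (the generalized filtered version replaces the case means by U-filtered averages):

```python
# Two hypothetical 2-D samples
controls = [(1.0, 2.0), (1.2, 1.8), (0.8, 2.2), (1.1, 2.1), (0.9, 1.9)]
cases    = [(2.0, 3.1), (2.2, 2.9), (1.8, 3.3), (2.1, 3.0), (1.9, 3.2)]

def mean(xs):
    n = len(xs)
    return [sum(v[i] for v in xs) / n for i in range(2)]

def pooled_cov(a, b):
    """Pooled within-group covariance matrix (equal-variance assumption)."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for xs in (a, b):
        m = mean(xs)
        for v in xs:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    dof = len(a) + len(b) - 2
    return [[s[i][j] / dof for j in range(2)] for i in range(2)]

ma, mb = mean(cases), mean(controls)
d = [ma[0] - mb[0], ma[1] - mb[1]]
S = pooled_cov(cases, controls)
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
# Fisher direction w = S^{-1} (mean_cases - mean_controls), 2x2 inverse in closed form
w = [(S[1][1] * d[0] - S[0][1] * d[1]) / det,
     (S[0][0] * d[1] - S[1][0] * d[0]) / det]
print(w)
```

    Projecting both samples onto w then yields the largest two-sample t-statistic achievable by any linear function, which is the quantity the paper generalizes.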

  6. A three-dimensional refractive index model for simulation of optical wave propagation in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Paramonov, P. V.; Vorontsov, A. M.; Kunitsyn, V. E.

    2015-10-01

    Numerical modeling of optical wave propagation in atmospheric turbulence is traditionally performed using the so-called split-operator method, in which the influence of the propagation medium's refractive index inhomogeneities is accounted for only within a system of infinitely narrow layers (phase screens) where the phase is distorted. Commonly, under certain assumptions, such phase screens are treated as mutually statistically uncorrelated. However, in several important applications including laser target tracking, remote sensing, and atmospheric imaging, accurate optical field propagation modeling imposes upper limits on the interscreen spacing. The latter situation can be observed, for instance, in the presence of large-scale turbulent inhomogeneities or in deep turbulence conditions, where interscreen distances become comparable with the turbulence outer scale and, hence, the corresponding phase screens cannot be considered statistically uncorrelated. In this paper, we discuss correlated phase screens. The statistical characteristics of the screens are calculated based on a representation of turbulent fluctuations of the three-dimensional (3D) refractive index random field as a set of sequentially correlated 3D layers displaced in the wave propagation direction. The statistical characteristics of refractive index fluctuations are described in terms of the von Kármán power spectral density. In the representation of these 3D layers by corresponding phase screens, the geometrical optics approximation is used.

  7. Minkowski Tensors in Two Dimensions: Probing the Morphology and Isotropy of the Matter and Galaxy Density Fields

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Chingangbam, Pravabati; Park, Changbom; Hong, Sungwook E.; Kim, Juhan; Ganesan, Vidhya

    2018-05-01

    We apply the Minkowski tensor statistics to two-dimensional slices of the three-dimensional matter density field. The Minkowski tensors are a set of functions that are sensitive to directionally dependent signals in the data and, furthermore, can be used to quantify the mean shape of density fields. We begin by reviewing the definition of Minkowski tensors and introducing a method of calculating them from a discretely sampled field. Focusing on the statistic W_2^{1,1}, a 2 × 2 matrix, we calculate its value for both the entire excursion set and individual connected regions and holes within the set. To study the morphology of structures within the excursion set, we calculate the eigenvalues λ_1, λ_2 of the matrix W_2^{1,1} for each distinct connected region and hole and measure their mean shape using the ratio β ≡ ⟨λ_2/λ_1⟩. We compare both W_2^{1,1} and β for a Gaussian field and a smoothed density field generated from the latest Horizon Run 4 cosmological simulation to study the effect of gravitational collapse on these functions. The global statistic W_2^{1,1} is essentially independent of gravitational collapse, as the process maintains statistical isotropy. However, β is modified significantly, with overdensities becoming relatively more circular compared to underdensities at low redshifts. When applying the statistics to a redshift-space distorted density field, the matrix W_2^{1,1} is no longer proportional to the identity matrix, and measurements of its diagonal elements can be used to probe the large-scale velocity field.
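    The per-region shape ratio β can be computed in closed form for any symmetric 2 × 2 matrix such as W_2^{1,1}; the matrices below are illustrative stand-ins, not values from the paper.

```python
import math

def shape_ratio(W):
    """beta = lambda_2/lambda_1 (smaller over larger eigenvalue) for a
    symmetric 2x2 matrix. beta -> 1 for a circular region, -> 0 when elongated."""
    a, b, c = W[0][0], W[0][1], W[1][1]
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam1 = (a + c) / 2.0 + disc   # larger eigenvalue
    lam2 = (a + c) / 2.0 - disc   # smaller eigenvalue
    return lam2 / lam1

print(shape_ratio([[1.0, 0.0], [0.0, 1.0]]))   # isotropic region: 1.0
print(shape_ratio([[2.0, 0.0], [0.0, 0.5]]))   # elongated region: 0.25
```

    Averaging this ratio over all connected regions and holes in the excursion set gives the β statistic discussed above.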

  8. Reciprocity in directed networks

    NASA Astrophysics Data System (ADS)

    Yin, Mei; Zhu, Lingjiong

    2016-04-01

    Reciprocity is an important characteristic of directed networks and has been widely used in the modeling of the World Wide Web, email, social, and other complex networks. In this paper, we take a statistical physics point of view and study the limiting entropy and free energy densities from the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble whose sufficient statistics are given by edge and reciprocal densities. The sparse case is also studied for the grand canonical ensemble. Extensions to more general reciprocal models, including reciprocal triangle and star densities, will likewise be discussed.

  9. Cylinders out of a top hat: counts-in-cells for projected densities

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders, applicable to photometric surveys. It yields analytical predictions accurate to a few per cent for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders, and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime, where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  10. Evidence of codon usage in the nearest neighbor spacing distribution of bases in bacterial genomes

    NASA Astrophysics Data System (ADS)

    Higareda, M. F.; Geiger, O.; Mendoza, L.; Méndez-Sánchez, R. A.

    2012-02-01

    Statistical analysis of whole genomic sequences usually assumes a homogeneous nucleotide density throughout the genome, an assumption that has been proved incorrect for several organisms, since the nucleotide density is only locally homogeneous. To avoid assigning a single numerical value to this variable property, we propose the use of spectral statistics, which characterizes the density of nucleotides as a function of position in the genome. We show that the cumulative density of bases in bacterial genomes can be separated into an average (or secular) part plus a fluctuating part. Bacterial genomes can be divided into two groups according to the qualitative description of their secular part: linear and piecewise linear. These two groups of genomes show different properties when their nucleotide spacing distribution is studied. In order to analyze genomes having a variable nucleotide density statistically, unfolding is necessary, i.e., separating the secular part from the fluctuations. The unfolding allows an adequate comparison with the statistical properties of other genomes. With this methodology, genomes from four genera were analyzed: Burkholderia, Bacillus, Clostridium and Corynebacterium. Interestingly, the nearest neighbor spacing distributions or detrended distance distributions are very similar for species within the same genus but very different for species from different genera. This difference can be attributed to differences in codon usage.
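    The unfolding step can be sketched for the linear-secular-part case: fit the cumulative base count with least squares, map each position through the fit, and read off spacings with unit mean. The sequence below is a toy string, not a real genome, and a piecewise-linear genome would need a segment-wise fit instead of the single line used here.

```python
# Toy sequence; in practice this would be a whole bacterial genome.
seq = "ATGGCGTAGGCTTAAGGGCATGCGGATCCGGGTTAACGGG" * 5

positions = [i for i, b in enumerate(seq) if b == "G"]
n = len(positions)

# Secular (average) part: linear least-squares fit of the cumulative count
# N(x) = number of G's at or before position x, evaluated at each G.
xs = [float(p) for p in positions]
ys = [float(k + 1) for k in range(n)]
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Unfolding: map positions through the secular part so the mean spacing
# becomes 1; what remains are the fluctuations to be compared across genomes.
unfolded = [slope * x + intercept for x in xs]
spacings = [unfolded[i + 1] - unfolded[i] for i in range(n - 1)]
print(sum(spacings) / len(spacings))  # mean spacing close to 1 after unfolding
```

    The histogram of `spacings` is the nearest neighbor spacing distribution the abstract compares between genera.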

  11. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the corresponding cumulative distribution function (cdf).

  12. System theoretic models for high density VLSI structures

    NASA Astrophysics Data System (ADS)

    Dickinson, Bradley W.; Hopkins, William E., Jr.

    This research project involved the development of mathematical models for the analysis, synthesis, and simulation of large systems of interacting devices. The work was motivated by problems that may become important in high density VLSI chips with characteristic feature sizes less than 1 micron: it is anticipated that interactions of neighboring devices will play an important role in the determination of circuit properties. It is hoped that the combination of high device densities and such local interactions can somehow be exploited to increase circuit speed and to reduce power consumption. To address these issues from the point of view of system theory, research was pursued in the areas of nonlinear and stochastic systems and neural network models. Statistical models were developed to characterize various features of the dynamic behavior of interacting systems. Random process models for studying the resulting asynchronous modes of operation were investigated. The local interactions themselves may be modeled as stochastic effects. The resulting behavior was investigated through the use of various scaling limits and by a combination of other analytical and simulation techniques. Techniques arising in a variety of disciplines where models of interaction have been formulated and explored were considered and adapted for use.

  13. Assessing a Novel Method to Reduce Anesthesia Machine Contamination: A Prospective, Observational Trial.

    PubMed

    Biddle, Chuck J; George-Gay, Beverly; Prasanna, Praveen; Hill, Emily M; Davis, Thomas C; Verhulst, Brad

    2018-01-01

    Anesthesia machines are known reservoirs of bacterial species, potentially contributing to healthcare-associated infections (HAIs). An inexpensive, disposable, nonpermeable, transparent anesthesia machine wrap (AMW) may reduce microbial contamination of the anesthesia machine. This study quantified the density and diversity of bacterial species found on anesthesia machines after terminal cleaning and between cases during actual anesthesia care to assess the impact of the AMW. We hypothesized reduced bioburden with the use of the AMW. In a prospective, experimental research design, the AMW was used in 11 surgical cases (intervention group) and not used in 11 control surgical cases. Cases were consecutively assigned to general surgical operating rooms. Seven frequently touched and difficult-to-disinfect "hot spots" were cultured on each machine preceding and following each case. The density and diversity of cultured colony-forming units (CFUs) on the covered and uncovered machines were compared using the Wilcoxon signed-rank test and Student's t-tests. There was a statistically significant reduction in CFU density and diversity when the AMW was employed. The protective effect of the AMW during regular anesthetic care provides a reliable and low-cost method to minimize the transmission of pathogens across patients and potentially reduces HAIs.
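    The Wilcoxon signed-rank statistic used in the comparison can be computed by hand; the paired CFU counts below are hypothetical, not the study's data.

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic W+ (sum of ranks of positive differences).
    Zero differences are dropped; tied absolute differences get average ranks."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0   # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r for r, d in zip(ranks, diffs) if d > 0)

# Hypothetical paired CFU counts (uncovered vs AMW-covered machines)
uncovered = [34, 21, 45, 18, 29, 40, 25]
covered   = [12, 15, 20, 19, 10, 22, 9]
print(wilcoxon_w(uncovered, covered))  # 27.0
```

    The statistic is then compared against the signed-rank null distribution (or a normal approximation for larger samples) to obtain the p-value.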

  14. Statistical Short-Range Guidance for Peak Wind Speed Forecasts on Kennedy Space Center/Cape Canaveral Air Force Station: Phase I Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.; Merceret, Francis J. (Technical Monitor)

    2002-01-01

    This report describes the results of the AMU's (Applied Meteorology Unit) Short-Range Statistical Forecasting task for peak winds. The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A 7-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. In all climatologies, the average and peak wind speeds were highly variable in time. This indicated that the development of a peak wind forecasting tool would be difficult. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. The climatologies and PDFs provide tools with which to make the peak wind forecasts that are critical to safe operations.
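    The PDF-based guidance can be sketched as an empirical conditional exceedance probability: bin the archive by average speed and count how often the peak meets a threshold within the bin. The observations and bin width below are made up for illustration.

```python
# Hypothetical (average, peak) wind observations in knots; the real tool
# was built from a 7-year wind-tower archive.
obs = [(8, 14), (10, 18), (10, 16), (12, 22), (12, 19), (12, 25),
       (15, 24), (15, 28), (15, 31), (18, 30), (18, 35), (20, 38)]

def p_peak_exceeds(obs, avg_speed, threshold, bin_width=5):
    """Empirical P(peak >= threshold | average falls in the same bin)."""
    bucket = avg_speed // bin_width
    hits = total = 0
    for avg, peak in obs:
        if avg // bin_width == bucket:
            total += 1
            hits += peak >= threshold
    return hits / total if total else None

print(p_peak_exceeds(obs, 12, 25))  # 0.2
```

    Given a forecast average of 12 kt, the sketch answers "what fraction of archived hours with a similar average produced a peak of 25 kt or more", which is the kind of lookup the climatological PDFs support.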

  15. Simulation and evaluation of phase noise for optical amplification using semiconductor optical amplifiers in DPSK applications

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Huang, Dexiu; Zhang, Xinliang; Zhu, Guangxi

    2008-01-01

    A thorough simulation and evaluation of phase noise for optical amplification using a semiconductor optical amplifier (SOA) is very important for predicting its performance in differential phase-shift keyed (DPSK) applications. In this paper, the standard deviation and probability distribution of the differential phase noise at the SOA output are obtained from the statistics of the simulated differential phase noise. By using a full-wave model of the SOA, the noise performance over the entire operating range can be investigated. It is shown that nonlinear phase noise contributes substantially to the total phase noise in the case of a noisy signal amplified by a saturated SOA, and that the nonlinear contribution is larger for shorter SOA carrier lifetimes. It is also shown that a Gaussian distribution is a good approximation of the total differential phase noise statistics over the whole operating range. The power penalty due to differential phase noise is evaluated using a semi-analytical probability density function (PDF) of the receiver noise. An obvious increase of the power penalty at high signal input powers is found for low input OSNR, owing both to the large nonlinear differential phase noise and to the dependence of the curvature of the BER versus received-power curve on the differential phase noise standard deviation.

  16. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are the statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)

  17. Active contours on statistical manifolds and texture segmentation

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2005-01-01

    A new approach to active contours on statistical manifolds is presented. The statistical manifolds are two-dimensional Riemannian manifolds that are statistically defined by maps that transform a parameter domain onto a set of probability density functions. In this novel framework, color or texture features are measured at each image point and their statistical...

  18. Active contours on statistical manifolds and texture segmentation

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2005-01-01

    A new approach to active contours on statistical manifolds is presented. The statistical manifolds are two-dimensional Riemannian manifolds that are statistically defined by maps that transform a parameter domain onto a set of probability density functions. In this novel framework, color or texture features are measured at each image point and their statistical...

  19. Density (DE)

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON Density (DE) method is used to assess changes in plant species density and height for a macroplot. This method uses multiple quadrats and belt transects (transects having a width) to sample within plot variation and quantify statistically valid changes in plant species density and height over time. Herbaceous plant species are sampled with quadrats while...

  20. [Distribution of corneal densitometry and its correlation with ocular stray light in healthy eyes].

    PubMed

    Wu, Zhiqing; Wang, Yan; Zhang, Lin; Wu, Di; Wei, Shengsheng; Su, Xiaolian

    2014-01-01

    To evaluate the distribution of corneal density and its correlation with the ocular stray-light value in healthy adult eyes. A prospective study. A total of 116 healthy subjects (232 eyes) aged 20 to 49 years were divided into three age groups: 20-29, 30-39 and 40-49 years. Pentacam was used to measure average corneal density and corneal thickness at different diameters around the corneal apex (≤ 2 mm, >2 mm and ≤ 6 mm, and >6 mm and ≤ 10 mm for density; 2 mm, 6 mm and 10 mm for thickness), and C-Quant was used to measure the stray-light value. SPSS 17.0 was used for statistical analysis: independent-samples t tests to compare corneal densitometry between genders and between left and right eyes, one-way ANOVA to analyze differences in corneal density across age groups and diameters, and Pearson correlation analysis to assess the correlations between densitometry values at different diameters, between corneal density and age, between corneal density and corneal thickness, and between corneal density and stray-light values. Corneal density for the ≤ 2 mm, >2 mm and ≤ 6 mm, and >6 mm and ≤ 10 mm diameters was 10.1 ± 1.5 (8.2-16.7), 9.3 ± 1.3 (7.9-14.2) and 9.6 ± 1.7 (7.3-16.2), respectively. Corneal density in the >6 mm and ≤ 10 mm zone differed among the age groups (8.9 ± 1.1, 9.3 ± 1.2 and 10.7 ± 2.1; F = 28.939, P = 0.000) and correlated positively with age (r = 0.417, P = 0.000); density in the ≤ 2 mm and >2 mm and ≤ 6 mm zones did not differ among the age groups (F = 1.575, 1.436; P > 0.05) and showed no correlation with age (r = 0.002, 0.048; P > 0.05).
    There was no statistically significant difference in corneal density between genders (t = 1.744, 1.647, -1.181; P > 0.05). Corneal density values of left and right eyes were positively correlated at the same diameter (r = 0.977, 0.992, 0.933; all P = 0.000) and did not differ significantly (t = 0.124, 0.199, -0.020; P > 0.05). Corneal density values at different diameters were also positively correlated: >6 mm and ≤ 10 mm with ≤ 2 mm (r = 0.710, P = 0.000), >6 mm and ≤ 10 mm with >2 mm and ≤ 6 mm (r = 0.748, P = 0.000), and ≤ 2 mm with >2 mm and ≤ 6 mm (r = 0.973, P = 0.000); the densities at the three diameters differed significantly from one another (F = 17.057, P = 0.000). The ocular stray-light value was 0.95 ± 0.19 (0.48-1.38), and corneal density at the ≤ 2 mm, >2 mm and ≤ 6 mm, and >6 mm and ≤ 10 mm diameters correlated positively with it (r = 0.134, 0.146, 0.159; P = 0.042, 0.026, 0.016). Corneal density is influenced by age, with a larger effect on the peripheral cornea; it showed no correlation with corneal thickness but a weak positive correlation with ocular stray light in healthy eyes.
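    The Pearson analysis reported above can be reproduced mechanically; the age/density pairs below are hypothetical and merely mimic the positive peripheral-density trend.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical peripheral (>6 mm and <= 10 mm) corneal density vs age
age     = [22, 25, 28, 33, 37, 41, 45, 48]
density = [8.8, 9.0, 8.9, 9.2, 9.4, 10.1, 10.6, 11.0]
print(round(pearson_r(age, density), 3))
```

    A significance test on r (against the t distribution with n-2 degrees of freedom) would then give the P-values quoted in the abstract.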

  1. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    DOE PAGES

    Mohamed, Mamdouh S.; Larson, Bennett C.; Tischler, Jonathan Z.; ...

    2015-05-18

    The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of the dislocation density tensors is demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  2. Reduction in postoperative high-density lipoprotein cholesterol levels in children undergoing the Fontan operation.

    PubMed

    Zyblewski, Sinai C; Argraves, W Scott; Graham, Eric M; Slate, Elizabeth H; Atz, Andrew M; Bradley, Scott M; McQuinn, Tim C; Wilkerson, Brent A; Wing, Shane B; Argraves, Kelley M

    2012-10-01

    Despite the emerging relevance of high-density lipoprotein (HDL) in the inflammatory cascade and vascular barrier integrity, HDL levels in children undergoing cardiac surgery are unexplored. As a measure of HDL levels, the HDL-cholesterol (HDL-C) in single-ventricle patients was quantified before and after the Fontan operation, and it was determined whether relationships existed between the duration and the type of postoperative pleural effusions. The study prospectively enrolled 12 children undergoing the Fontan operation. Plasma HDL-C levels were measured before and after cardiopulmonary bypass. The outcome variables of interest were the duration and type of chest tube drainage (chylous vs. nonchylous). The Kendall rank correlation coefficient and the Wilcoxon rank sum test were used. There were 11 complete observations. The median preoperative HDL-C level for all the subjects was 30 mg/dl (range, 24-53 mg/dl), and the median postcardiopulmonary bypass level was 21 mg/dl (range, 14-46 mg/dl) (p = 0.004). There was a tendency toward a moderate inverse correlation (-0.42) between the postcardiopulmonary bypass HDL-C level and the duration of chest tube drainage, but the result was not statistically significant (p = 0.07). In the chylous effusion group, the median postcardiopulmonary bypass HDL-C tended to be lower (16 vs. 23 mg/dl; p = 0.09). After the Fontan operation, the plasma HDL-C levels in children are significantly reduced. It is reasonable to conclude that the reduction in HDL-C reflects reduced plasma levels of HDL particles, which may have pertinent implications in postoperative pleural effusions given the antiinflammatory and endothelial barrier functions of HDL.
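    A minimal sketch of the two nonparametric tests named above (Kendall rank correlation and Wilcoxon rank-sum), using invented HDL-C and drainage numbers chosen only to exhibit an inverse trend; they are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical post-bypass HDL-C (mg/dl) and chest tube drainage (days)
# for 11 patients, constructed to show a moderate inverse association.
hdl_post = np.array([14, 16, 18, 20, 21, 22, 23, 28, 30, 35, 46], float)
drainage_days = np.array([14, 12, 13, 9, 10, 8, 9, 7, 6, 5, 4], float)

# Kendall rank correlation between HDL-C and drainage duration.
tau, p_tau = stats.kendalltau(hdl_post, drainage_days)

# Wilcoxon rank-sum test comparing HDL-C in chylous vs. non-chylous groups.
chylous = np.array([14, 16, 18, 20], float)
nonchylous = np.array([21, 22, 23, 28, 30, 35, 46], float)
z, p_rs = stats.ranksums(chylous, nonchylous)
print(tau, p_tau, z, p_rs)
```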

  3. Development and validation of a turbulent-mix model for variable-density and compressible flows.

    PubMed

    Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J

    2010-10-01

    The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.

  4. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.

    PubMed

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-02-14

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. To eliminate the adverse effects of geomagnetic activity and the powerline, the measured field data were first preprocessed to suppress baseline wandering and harmonics using a symmetric wavelet transform and least-squares methods. Statistical analysis was then performed on the atmospheric noise at different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate for depicting the ELF atmospheric noise after preprocessing with a hole-puncher operator, whereas for VLF atmospheric noise a symmetric α-stable (SαS) distribution fits the heavy tail of the envelope probability density function (pdf) more accurately.
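    The modeling contrast drawn above (Gaussian for ELF, heavy-tailed SαS for VLF) can be illustrated with SciPy's `levy_stable` distribution; the value of α and the evaluation points below are arbitrary choices for illustration, not fitted values.

```python
import numpy as np
from scipy.stats import norm, levy_stable

# A symmetric alpha-stable (SaS) law has beta = 0; alpha < 2 produces the
# heavy ~x^-(alpha+1) tail that a Gaussian model cannot reproduce.
alpha = 1.5
x = np.array([0.0, 2.0, 5.0, 8.0])

p_gauss = norm.pdf(x)                   # Gaussian tail decays like exp(-x^2/2)
p_sas = levy_stable.pdf(x, alpha, 0.0)  # SaS tail decays like a power law

# Far from the origin the SaS density exceeds the Gaussian by many orders
# of magnitude, which is what "heavy tail" means operationally here.
ratio = p_sas / p_gauss
print(ratio)
```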

  5. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise

    PubMed Central

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-01-01

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. To eliminate the adverse effects of geomagnetic activity and the powerline, the measured field data were first preprocessed to suppress baseline wandering and harmonics using a symmetric wavelet transform and least-squares methods. Statistical analysis was then performed on the atmospheric noise at different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate for depicting the ELF atmospheric noise after preprocessing with a hole-puncher operator, whereas for VLF atmospheric noise a symmetric α-stable (SαS) distribution fits the heavy tail of the envelope probability density function (pdf) more accurately. PMID:28216590

  6. Experimental studies and statistical analysis of membrane fouling behavior and performance in microfiltration of microalgae by a gas sparging assisted process.

    PubMed

    Javadi, Najvan; Ashtiani, Farzin Zokaee; Fouladitajar, Amir; Zenooz, Alireza Moosavi

    2014-06-01

    Response surface methodology (RSM) and central composite design (CCD) were applied for modeling and optimization of cross-flow microfiltration of Chlorella sp. suspension. The effects of operating conditions, namely transmembrane pressure (TMP), feed flow rate (Qf) and optical density of the feed suspension (ODf), on the permeate flux and their interactions were determined. Analysis of variance (ANOVA) was performed to test the significance of the response surface model. The effect of the gas sparging technique and different gas-liquid two-phase flow regimes on the permeate flux was also investigated. Maximum flux enhancement was 61% and 15% for Chlorella sp. with optical densities of 1.0 and 3.0, respectively. These results indicated that the gas sparging technique was more efficient for low-concentration microalgae microfiltration, in which up to 60% enhancement was achieved in the slug flow pattern. Additionally, variations in the transmission of exopolysaccharides (EPS) and their effects on the fouling phenomenon were evaluated. Copyright © 2014 Elsevier Ltd. All rights reserved.
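    The RSM/CCD analysis described above amounts to fitting a quadratic response surface to the measured flux and judging term significance; a least-squares sketch with synthetic flux data (coefficients and units invented for illustration) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic operating conditions: transmembrane pressure (bar) and feed
# optical density; the "true" quadratic surface below is purely illustrative.
tmp = rng.uniform(0.5, 2.0, 30)
od = rng.uniform(1.0, 3.0, 30)
flux = (40 + 15 * tmp - 8 * od - 3 * tmp**2 + 0.5 * tmp * od
        + rng.normal(0, 0.5, 30))

# Full quadratic response-surface design matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
X = np.column_stack([np.ones_like(tmp), tmp, od, tmp**2, od**2, tmp * od])
coef, *_ = np.linalg.lstsq(X, flux, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((flux - pred) ** 2) / np.sum((flux - flux.mean()) ** 2)
print(coef.round(2), r2)
```

    In a real CCD study, the factor levels would follow the central-composite star pattern rather than being sampled at random, and each coefficient would get an ANOVA F-test.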

  7. LORETA imaging of P300 in schizophrenia with individual MRI and 128-channel EEG.

    PubMed

    Pae, Ji Soo; Kwon, Jun Soo; Youn, Tak; Park, Hae-Jeong; Kim, Myung Sun; Lee, Boreom; Park, Kwang Suk

    2003-11-01

    We investigated the characteristics of P300 generators in schizophrenics by using voxel-based statistical parametric mapping of current density images. P300 generators, produced by a rare target tone of 1500 Hz (15%) under a frequent nontarget tone of 1000 Hz (85%), were measured in 20 right-handed schizophrenics and 21 controls. Low-resolution electromagnetic tomography (LORETA), using a realistic head model of the boundary element method based on individual MRI, was applied to the 128-channel EEG. Three-dimensional current density images were reconstructed from the LORETA intensity maps that covered the whole cortical gray matter. Spatial normalization and intensity normalization of the smoothed current density images were used to reduce anatomical variance and subject-specific global activity, and statistical parametric mapping (SPM) was applied for the statistical analysis. We found that the sources of P300 were consistently localized at the left superior parietal area in normal subjects, while those of schizophrenics were diversely distributed. Upon statistical comparison, schizophrenics, with globally reduced current densities, showed a significant P300 current density reduction in the left medial temporal area and in the left inferior parietal area, while both left prefrontal and right orbitofrontal areas were relatively activated. The left parietotemporal area was found to correlate negatively with Positive and Negative Syndrome Scale total scores of schizophrenic patients. In conclusion, the areas of reduced and increased current density in schizophrenic patients suggest that the medial temporal and frontal areas contribute to the pathophysiology of schizophrenia through a frontotemporal circuitry abnormality.

  8. Pathological upgrading in prostate cancer patients eligible for active surveillance: Does prostate-specific antigen density matter?

    PubMed

    Jin, Byung-Soo; Kang, Seok-Hyun; Kim, Duk-Yoon; Oh, Hoon-Gyu; Kim, Chun-Il; Moon, Gi-Hak; Kwon, Tae-Gyun; Park, Jae-Shin

    2015-09-01

    To evaluate prospectively the role of prostate-specific antigen (PSA) density in predicting Gleason score upgrading in prostate cancer patients eligible for active surveillance (T1/T2, biopsy Gleason score ≤6, PSA ≤10 ng/mL, and ≤2 positive biopsy cores). Between January 2010 and November 2013, among patients who underwent greater than 10-core transrectal ultrasound-guided biopsy, 60 patients eligible for active surveillance underwent radical prostatectomy. By use of the modified Gleason criteria, the tumor grade of the surgical specimens was examined and compared with the biopsy results. Tumor upgrading occurred in 24 patients (40.0%). Extracapsular disease and positive surgical margins were found in 6 patients (10.0%) and 8 patients (13.3%), respectively. A statistically significant correlation between PSA density and postoperative upgrading was found (p=0.030); this was in contrast with the other studied parameters, which failed to reach significance, including PSA, prostate volume, number of biopsy cores, and number of positive cores. Tumor upgrading was also highly associated with extracapsular cancer extension (p=0.000). The estimated optimal cutoff value of PSA density was 0.13 ng/mL², obtained by receiver operating characteristic analysis (area under the curve=0.66; p=0.020; 95% confidence interval, 0.53-0.78). PSA density is a strong predictor of Gleason score upgrading after radical prostatectomy in patients eligible for active surveillance. Because tumor upgrading increases the potential for postoperative pathological adverse findings and worsens prognosis, PSA density should be considered when treating and counseling patients eligible for active surveillance.
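    The receiver operating characteristic analysis used above to pick a PSA density cutoff can be sketched directly: the AUC follows from the Mann-Whitney identity, and a common choice of optimal threshold maximizes the Youden index (sensitivity + specificity - 1). The PSA density values below are synthetic, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic PSA density (ng/mL^2) for upgraded (n=24) and non-upgraded (n=36)
# patients; the overlap is chosen to give moderate, not perfect, separation.
psad_upgraded = rng.normal(0.16, 0.05, 24)
psad_stable = rng.normal(0.11, 0.04, 36)

# AUC via the Mann-Whitney identity: probability that a random upgraded
# patient has a higher PSA density than a random non-upgraded patient.
auc = (psad_upgraded[:, None] > psad_stable[None, :]).mean()

# Youden-optimal cutoff: threshold maximizing sensitivity + specificity - 1.
scores = np.concatenate([psad_upgraded, psad_stable])
labels = np.concatenate([np.ones(24), np.zeros(36)])
thresholds = np.sort(np.unique(scores))
sens = np.array([(scores[labels == 1] >= t).mean() for t in thresholds])
spec = np.array([(scores[labels == 0] < t).mean() for t in thresholds])
cutoff = thresholds[np.argmax(sens + spec - 1)]
print(auc, cutoff)
```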

  9. Contingency and statistical laws in replicate microbial closed ecosystems.

    PubMed

    Hekstra, Doeke R; Leibler, Stanislas

    2012-05-25

    Contingency, the persistent influence of past random events, pervades biology. To what extent, then, is each course of ecological or evolutionary dynamics unique, and to what extent are these dynamics subject to a common statistical structure? Addressing this question requires replicate measurements to search for emergent statistical laws. We establish a readily replicated microbial closed ecosystem (CES), sustaining its three species for years. We precisely measure the local population density of each species in many CES replicates, started from the same initial conditions and kept under constant light and temperature. The covariation among replicates of the three species densities acquires a stable structure, which could be decomposed into discrete eigenvectors, or "ecomodes." The largest ecomode dominates population density fluctuations around the replicate-average dynamics. These fluctuations follow simple power laws consistent with a geometric random walk. Thus, variability in ecological dynamics can be studied with CES replicates and described by simple statistical laws. Copyright © 2012 Elsevier Inc. All rights reserved.
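    The "ecomode" decomposition described above is, in essence, an eigendecomposition of the across-replicate covariance of the three species densities; here is a sketch with synthetic replicates sharing one dominant fluctuation mode (loadings invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)

# 50 synthetic CES replicates, 3 species; one shared mode drives most of the
# covariation, with independent measurement noise per species.
mode = rng.normal(0, 1, 50)
densities = np.column_stack([
    1.0 * mode + rng.normal(0, 0.2, 50),
    0.8 * mode + rng.normal(0, 0.2, 50),
    0.5 * mode + rng.normal(0, 0.2, 50),
])

# Covariance across replicates and its eigendecomposition ("ecomodes").
cov = np.cov(densities, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
explained = eigvals[::-1] / eigvals.sum()
print(explained)  # the leading ecomode dominates the fluctuations
```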

  10. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    NASA Astrophysics Data System (ADS)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

    Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. in 2013 for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
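    KDDM builds a quantile-to-quantile transfer function from kernel density estimates of the model and observed distributions; the sketch below substitutes plain empirical quantiles for the kernel estimates, which keeps the structure of the method while staying simple. Both distributions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" and biased "model" daily values (gamma-distributed,
# loosely precipitation-like); the bias is an inflated scale plus an offset.
obs = rng.gamma(2.0, 3.0, 5000)
model = rng.gamma(2.0, 4.0, 5000) + 1.0

# Transfer function: model quantiles -> observed quantiles (1st-99th pct).
q = np.linspace(0.01, 0.99, 99)
model_q = np.quantile(model, q)
obs_q = np.quantile(obs, q)

def bias_correct(x):
    # Piecewise-linear map through the paired quantiles; values beyond the
    # outermost quantiles are clamped, which is exactly where the tail-fitting
    # refinements discussed above become relevant.
    return np.interp(x, model_q, obs_q)

corrected = bias_correct(model)
print(model.mean(), obs.mean(), corrected.mean())
```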

  11. Experimental study of high density foods for the Space Operations Center

    NASA Technical Reports Server (NTRS)

    Ahmed, S. M.

    1981-01-01

    The experimental study of high density foods for the Space Operations Center is described. A sensory evaluation of the high density foods was conducted first to test the acceptability of the products. A shelf-life study of the high density foods was also conducted for three different time lengths at three different temperatures. The nutritional analysis of the high density foods is at present incomplete.

  12. Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2016-01-01

    Flexibility where possible, and structure where necessary. The approach considers the needs of national security, safe airspace operations, economic opportunities, and emerging technologies. It is risk-based, taking into account population density, assets on the ground, density of operations, etc., with digital, virtual, dynamic, and as-needed UTM services used to manage operations.

  13. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are contained in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
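    A minimal simulation in the spirit of the multiplicative interevent-time model described above: the interval τ performs a multiplicative random walk, and the resulting rate signal shows the excess low-frequency power characteristic of 1/f^β spectra. The parameter values and reflecting bounds below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

# Multiplicative interevent-time walk:
#   tau_{k+1} = tau_k + gamma * tau_k^(2*mu - 1) + sigma * tau_k^mu * noise
mu, gamma, sigma = 0.5, 0.01, 0.05
tau = np.empty(20000)
tau[0] = 1.0
for k in range(1, tau.size):
    t = tau[k - 1]
    step = gamma * t ** (2 * mu - 1) + sigma * t ** mu * rng.normal()
    tau[k] = np.clip(t + step, 1e-3, 1e3)  # reflecting bounds keep tau > 0

# Periodogram of the instantaneous rate 1/tau: the slow multiplicative
# wandering concentrates power at low frequencies (1/f^beta-like behaviour).
rate = 1.0 / tau
spec = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
low, high = spec[1:101].mean(), spec[-100:].mean()
print(low > high)
```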

  14. Shot-to-shot reproducibility of a self-magnetically insulated ion diode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pushkarev, A. I.; Isakova, Yu. I.; Khailov, I. P.

    In this paper we present the analysis of shot to shot reproducibility of the ion beam which is formed by a self-magnetically insulated ion diode with an explosive emission graphite cathode. The experiments were carried out with the TEMP-4M accelerator operating in double-pulse mode: the first pulse is of negative polarity (300-500 ns, 100-150 kV), and this is followed by a second pulse of positive polarity (150 ns, 250-300 kV). The ion current density was 10-70 A/cm² depending on the diode geometry. The beam was composed of carbon ions (80%-85%) and protons. It was found that shot to shot variation in the ion current density was about 35%-40%, whilst the diode voltage and current were comparatively stable with the variation limited to no more than 10%. It was shown that focusing of the ion beam can improve the stability of the ion current generation and reduces the variation to 18%-20%. In order to find out the reason for the shot-to-shot variation in ion current density we examined the statistical correlation between the current density of the accelerated beam and other measured characteristics of the diode, such as the accelerating voltage, total current, and first pulse duration. The correlation between the ion current density measured simultaneously at different positions within the cross-section of the beam was also investigated. It was shown that the shot-to-shot variation in ion current density is mainly attributed to the variation in the density of electrons diffusing from the drift region into the A-K gap.

  15. Shot-to-shot reproducibility of a self-magnetically insulated ion diode.

    PubMed

    Pushkarev, A I; Isakova, Yu I; Khailov, I P

    2012-07-01

    In this paper we present the analysis of shot to shot reproducibility of the ion beam which is formed by a self-magnetically insulated ion diode with an explosive emission graphite cathode. The experiments were carried out with the TEMP-4M accelerator operating in double-pulse mode: the first pulse is of negative polarity (300-500 ns, 100-150 kV), and this is followed by a second pulse of positive polarity (150 ns, 250-300 kV). The ion current density was 10-70 A/cm² depending on the diode geometry. The beam was composed of carbon ions (80%-85%) and protons. It was found that shot to shot variation in the ion current density was about 35%-40%, whilst the diode voltage and current were comparatively stable with the variation limited to no more than 10%. It was shown that focusing of the ion beam can improve the stability of the ion current generation and reduces the variation to 18%-20%. In order to find out the reason for the shot-to-shot variation in ion current density we examined the statistical correlation between the current density of the accelerated beam and other measured characteristics of the diode, such as the accelerating voltage, total current, and first pulse duration. The correlation between the ion current density measured simultaneously at different positions within the cross-section of the beam was also investigated. It was shown that the shot-to-shot variation in ion current density is mainly attributed to the variation in the density of electrons diffusing from the drift region into the A-K gap.
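    The shot-to-shot statistics described above reduce to a coefficient of variation (std/mean) per diagnostic plus cross-correlations between diagnostics; a sketch with invented numbers (a stable voltage, a noisy current density) follows.

```python
import numpy as np

rng = np.random.default_rng(7)
n_shots = 200

# Synthetic diode voltage (kV): comparatively stable, a few percent variation.
voltage = 275 + 10 * rng.normal(size=n_shots)

# Synthetic ion current density (A/cm^2): weak voltage dependence plus a
# large independent fluctuation, mimicking ~35%-40% shot-to-shot variation.
current_density = 40 * (voltage / 275) ** 1.5 * rng.lognormal(0.0, 0.35, n_shots)

cv_voltage = voltage.std() / voltage.mean()
cv_current = current_density.std() / current_density.mean()
corr = np.corrcoef(voltage, current_density)[0, 1]
print(cv_voltage, cv_current, corr)
```

    A weak voltage correlation alongside a much larger current-density CV is the signature pointing to another driver of the variation, as the record concludes for the electron density in the A-K gap.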

  16. Predation and fragmentation portrayed in the statistical structure of prey time series

    PubMed Central

    Hendrichsen, Ditte K; Topping, Chris J; Forchhammer, Mads C

    2009-01-01

    Background: Statistical autoregressive analyses of direct and delayed density dependence are widespread in ecological research. The models suggest that changes in ecological factors affecting density dependence, like predation and landscape heterogeneity, are directly portrayed in the first- and second-order autoregressive parameters, and the models are therefore used to decipher complex biological patterns. However, independent tests of model predictions are complicated by the inherent variability of natural populations, where differences in landscape structure, climate or species composition prevent controlled repeated analyses. To circumvent this problem, we applied second-order autoregressive time series analyses to data generated by a realistic agent-based computer model. The model simulated life history decisions of individual field voles under controlled variations in predator pressure and landscape fragmentation. Analyses were made on three levels: comparisons between predated and non-predated populations, between populations exposed to different types of predators, and between populations experiencing different degrees of habitat fragmentation. Results: The results are unambiguous: changes in landscape fragmentation and the numerical response of predators are clearly portrayed in the statistical time series structure as predicted by the autoregressive model. Populations without predators displayed significantly stronger negative direct density dependence than did those exposed to predators, where direct density dependence was only moderately negative. The effects of predation versus no predation had an even stronger effect on the delayed density dependence of the simulated prey populations. In non-predated prey populations, the coefficients of delayed density dependence were distinctly positive, whereas they were negative in predated populations.
Similarly, increasing the degree of fragmentation of optimal habitat available to the prey was accompanied by a shift in the delayed density dependence, from strongly negative to gradually less negative. Conclusion: We conclude that statistical second-order autoregressive time series analyses are capable of deciphering interactions within and across trophic levels and their effect on direct and delayed density dependence. PMID:19419539
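    The second-order autoregressive model referred to above is commonly written x_t = a_0 + (1 + a_1)x_{t-1} + a_2 x_{t-2} + e_t, with a_1 measuring direct and a_2 delayed density dependence (x being log population density); a least-squares fit on a synthetic series (true coefficients invented) recovers both:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate x_t = (1 + a1) x_{t-1} + a2 x_{t-2} + noise with known coefficients.
a1_true, a2_true = -0.4, -0.3   # direct and delayed density dependence
x = np.zeros(500)
for t in range(2, x.size):
    x[t] = (1 + a1_true) * x[t - 1] + a2_true * x[t - 2] + rng.normal(0, 0.1)

# Ordinary least squares on lagged copies of the series.
X = np.column_stack([np.ones(x.size - 2), x[1:-1], x[:-2]])
b0, b1, b2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]
a1_hat, a2_hat = b1 - 1, b2
print(a1_hat, a2_hat)
```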

  17. Petrophysical Properties of Twenty Drill Cores from the Los Azufres, Mexico, Geothermal Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iglesias, E.R.; Contreras L., E.; Garcia G., A.

    1987-01-20

    For this study we selected 20 drill cores covering a wide range of depths (400-3000 m), from 15 wells, that provide a reasonable coverage of the field. Only andesite, the largely predominant rock type in the field, was included in this sample. We measured bulk density, grain (solids) density, effective porosity and (matrix) permeability on a considerable number of specimens taken from the cores, and inferred the corresponding total porosity and fraction of interconnected total porosity. We characterized the statistical distributions of the measured and inferred variables. The distributions of bulk density and grain density were approximately normal; the distributions of effective porosity, total porosity and fraction of total porosity turned out to be bimodal; the permeability distribution was highly skewed towards very small (1 mdarcy) values, though values as high as 400 mdarcies were measured. We also characterized the internal inhomogeneity of the cores by means of the ratio (standard deviation/mean) of the bulk density in each core (on average there are 9 specimens per core). The cores were found to present clearly discernible inhomogeneity; this quantitative characterization will help design new experimental work and interpret currently available and forthcoming results. We also found statistically significant linear correlations between total density and density of solids, effective porosity and total density, total porosity and total density, fraction of interconnected total porosity and the inverse of the effective porosity, and total porosity and effective porosity; bulk density and total porosity also correlate with elevation. These results provide the first sizable and statistically detailed database available on petrophysical properties of the Los Azufres andesites. 1 tab., 16 figs., 4 refs.

  18. Radiographic comparison of different concentrations of recombinant human bone morphogenetic protein with allogenic bone compared with the use of 100% mineralized cancellous bone allograft in maxillary sinus grafting.

    PubMed

    Froum, Stuart J; Wallace, Stephen; Cho, Sang-Choon; Khouly, Ismael; Rosenberg, Edwin; Corby, Patricia; Froum, Scott; Mascarenhas, Patrick; Tarnow, Dennis P

    2014-01-01

    The purpose of this study was to radiographically evaluate, then analyze, bone height, volume, and density with reference to percentage of vital bone after maxillary sinuses were grafted using two different doses of recombinant human bone morphogenetic protein 2/acellular collagen sponge (rhBMP-2/ACS) combined with mineralized cancellous bone allograft (MCBA) and a control sinus grafted with MCBA only. A total of 18 patients (36 sinuses) were used for analysis of height and volume measurements, having two of three graft combinations (one in each sinus): (1) control, MCBA only; (2) test 1, MCBA + 5.6 mL of rhBMP-2/ACS (containing 8.4 mg of rhBMP-2); and (3) test 2, MCBA + 2.8 mL of rhBMP-2/ACS (containing 4.2 mg of rhBMP-2). The study was completed with 16 patients who also had bilateral cores removed 6 to 9 months following sinus augmentation. A computer software system was used to evaluate 36 computed tomography scans. Two time points were selected for measurements of height. The results indicated that the height of the grafted sinus was significantly greater in the treatment groups compared with the control. However, by the second time point, there were no statistically significant differences. Three weeks post-surgery, bone volume measurements showed similar statistically significant differences between test groups and controls. However, prior to core removal, test group 1, with the greater dose of rhBMP-2, showed a statistically significant greater increase compared with test group 2 and the control. There was no statistically significant difference between the latter two groups. All three groups had similar volume and shrinkage. Density measurements varied from the above results, with the control showing statistically significant greater density at both time points. By contrast, the density increase over time in both rhBMP groups was similar and statistically higher than in the control group.
There were strong associations between height and volume in all groups and between volume and new vital bone only in the control group. There were no statistically significant relationships observed between height and bone density or between volume and bone density for any parameter measured. More cases and monitoring of the future survival of implants placed in these augmented sinuses are needed to verify these results.

  19. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  20. Effects of ICRF power on SOL density profiles and LH coupling during simultaneous LH and ICRF operation on Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Lau, C.; Lin, Y.; Wallace, G.; Wukitch, S. J.; Hanson, G. R.; Labombard, B.; Ochoukov, R.; Shiraiwa, S.; Terry, J.

    2013-09-01

    A dedicated experiment during simultaneous lower hybrid (LH) and ion cyclotron range-of-frequencies (ICRF) operations is carried out to evaluate and understand the effects of ICRF power on the scrape-off-layer (SOL) density profiles and on the resultant LH coupling for a wide range of plasma parameters on Alcator C-Mod. Operation of the LH launcher with the adjacent ICRF antenna significantly degrades LH coupling while operation with the ICRF antenna that is not magnetically connected to the LH launcher minimally affects LH coupling. An X-mode reflectometer system at three poloidal locations adjacent to the LH launcher and a visible video camera imaging the LH launcher are used to measure local SOL density profile and emissivity modifications with the application of LH and LH + ICRF power. These measurements confirm that the density in front of the LH launcher depends strongly on the magnetic field line mapping of the active ICRF antenna. Reflectometer measurements also observe both ICRF-driven and LH-driven poloidal density profile asymmetries, especially a strong density depletion at certain poloidal locations in front of the LH launcher during operation with a magnetically connected ICRF antenna. The results indicate that understanding both LH-driven flows and ICRF sheath driven flows may be necessary to understand the observed density profile modifications and LH coupling results during simultaneous LH + ICRF operation.

  1. A comparison of different densities of levobupivacaine solutions for unilateral spinal anaesthesia.

    PubMed

    Yağan, Özgür; Taş, Nilay; Küçük, Ahmet; Hancı, Volkan

    2016-01-01

    The aim of the study was to compare the block characteristics and clinical effects of dextrose added to levobupivacaine solutions at different concentrations to provide unilateral spinal anaesthesia in lower extremity surgery. This prospective, randomised, double-blind study comprised 75 ASA I-II risk patients scheduled for unilateral total knee arthroscopy. The patients were assigned to three groups: 60 mg of dextrose was added to 7.5 mg of 0.5% levobupivacaine in Group I, 80 mg in Group II, and 100 mg in Group III. Spinal anaesthesia was applied with the patient in the lateral decubitus position, operated side down, and the patient was kept in this position for 10 min. The sensorial block reached the T12 level more slowly in Group I than in Groups II and III (p<0.05, p<0.00). The time to full recovery of the sensorial block was 136 min in Group I, 154 min in Group II and 170 min in Group III; the differences were statistically significant (p<0.05). The mean duration of the motor block was 88 min in Group I, 105 min in Group II, and 139 min in Group III; these differences were also statistically significant (p<0.05). The time to urination was statistically significantly shorter in Group I than in the other groups (p<0.00). The results of the study showed that sensory and motor block duration lengthened as density increased. It can be concluded that a 30 mg mL(-1) concentration of dextrose added to 7.5 mg levobupivacaine is sufficient to provide unilateral spinal anaesthesia in day-case arthroscopic knee surgery. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  3. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographic (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes, which induces errors in the estimated HOS parameters and restrains real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate the behaviors of both skewness and kurtosis. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics appear more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.
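
    The small-sample fragility of HOS estimators that motivates this work can be reproduced with a short Monte Carlo sketch (stdlib-only Python; the sample sizes and trial count are illustrative, not the paper's settings):

    ```python
    import random

    def skewness(x):
        """Biased sample skewness: third standardized central moment."""
        n = len(x)
        m = sum(x) / n
        s2 = sum((v - m) ** 2 for v in x) / n
        s3 = sum((v - m) ** 3 for v in x) / n
        return s3 / s2 ** 1.5

    def kurtosis(x):
        """Biased sample excess kurtosis: fourth standardized moment minus 3."""
        n = len(x)
        m = sum(x) / n
        s2 = sum((v - m) ** 2 for v in x) / n
        s4 = sum((v - m) ** 4 for v in x) / n
        return s4 / s2 ** 2 - 3.0

    # Monte Carlo: spread of the skewness estimate for a normal sample
    # (true skewness = 0) shrinks markedly as the sample size grows.
    rng = random.Random(0)
    spread = {}
    for n in (50, 5000):
        trials = [skewness([rng.gauss(0, 1) for _ in range(n)]) for _ in range(200)]
        mu = sum(trials) / len(trials)
        spread[n] = (sum((t - mu) ** 2 for t in trials) / len(trials)) ** 0.5
        print(f"n={n:5d}  std of skewness estimate over 200 trials = {spread[n]:.3f}")
    ```

    The large estimator spread at small n is exactly the error that the proposed functional statistics are designed to suppress.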

  4. The MSFC Solar Activity Future Estimation (MSAFE) Model

    NASA Technical Reports Server (NTRS)

    Suggs, Ron

    2017-01-01

    The Natural Environments Branch of the Engineering Directorate at Marshall Space Flight Center (MSFC) provides solar cycle forecasts for NASA space flight programs and the aerospace community. These forecasts provide future statistical estimates of sunspot number, solar radio 10.7 cm flux (F10.7), and the geomagnetic planetary index, Ap, for input to various space environment models. For example, many thermosphere density computer models used in spacecraft operations, orbital lifetime analysis, and the planning of future spacecraft missions require as inputs the F10.7 and Ap. The solar forecast is updated each month by executing MSAFE using historical and the latest month's observed solar indices to provide estimates for the balance of the current solar cycle. The forecasted solar indices represent the 13-month smoothed values consisting of a best estimate value stated as a 50 percentile value along with approximate +/- 2 sigma values stated as 95 and 5 percentile statistical values. This presentation will give an overview of the MSAFE model and the forecast for the current solar cycle.
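
    The 13-month smoothed values mentioned above are conventionally computed with a centered 13-month window that gives half weight to the first and last months (the classical smoothed-sunspot-number convention); a minimal sketch, with an illustrative function name:

    ```python
    def smooth13(monthly):
        """Centered 13-month smoothing with half-weight endpoints,
        as used for the classical smoothed sunspot number. Returns
        the smoothed series (6 months are lost at each end)."""
        out = []
        for i in range(6, len(monthly) - 6):
            window = monthly[i - 6:i + 7]
            s = 0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]
            out.append(s / 12.0)
        return out

    # A constant series passes through the smoother unchanged.
    print(smooth13([100.0] * 15))  # -> [100.0, 100.0, 100.0]
    ```

    A forecast in the style described here would then attach 5th, 50th, and 95th percentile estimates to each smoothed future value.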

  5. Liquid phase fluid dynamic (methanol) run in the LaPorte alternative fuels development unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bharat L. Bhatt

    1997-05-01

    A fluid dynamic study was successfully completed in a bubble column at DOE's Alternative Fuels Development Unit (AFDU) in LaPorte, Texas. Significant fluid dynamic information was gathered at pilot scale during three weeks of Liquid Phase Methanol (LPMEOH{trademark}) operations in June 1995. In addition to the usual nuclear density and temperature measurements, unique differential pressure data were collected using Sandia's high-speed data acquisition system to gain insight into flow regime characteristics and bubble size distribution. Statistical analysis of the fluctuations in the pressure data suggests that the column was operated in the churn-turbulent regime at most of the velocities considered. Dynamic gas disengagement experiments showed different behavior than seen in low-pressure, cold-flow work. Operation with a superficial gas velocity of 1.2 ft/sec was achieved during this run, with stable fluid dynamics and catalyst performance. Improvements included for catalyst activation in the design of the Clean Coal III LPMEOH{trademark} plant at Kingsport, Tennessee, were also confirmed. In addition, an alternate catalyst was demonstrated for LPMEOH{trademark}.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Bai, W

    Purpose: Because of statistical noise in Monte Carlo dose calculations, doses evaluated at a single effective point may be inaccurate. Instead of a point, we use a user-defined sphere volume around the effective point and average the dose over it to reduce the stochastic error. Methods: Direct dose measurements were made with a 0.125 cc Semiflex ionization chamber (IC) 31010 (PTW, Germany) placed isocentrically at the center of a homogeneous cylindrical sliced RW3 phantom. In the scanned CT series, the sensitive volume of the IC (6.5 mm long) was delineated and the isocenter was defined as the effective point for simulation. All beams were simulated in Monaco in accordance with the measured model, using a 2 mm calculation grid spacing, dose-to-medium reporting, and a requested relative standard deviation of ≤0.5%. Three assigned IC override densities (air electron density (ED) of 0.01 g/cm3, the default CT-scanned ED, and an esophageal-lumen ED of 0.21 g/cm3) were tested at four sampling sphere radii (2.5, 2, 1.5 and 1 mm), and the resulting statistical doses were compared with the measured doses. Results: For the IC assigned the esophageal-lumen ED of 0.21 g/cm3 with a 1.5 mm sampling sphere radius, the calculated value agreed best with the measured value, with an absolute average percentage deviation of 0.49%. For the air ED of 0.01 g/cm3 and the default CT-scanned ED, the recommended sampling sphere radius is 2.5 mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the ionization chamber 31010 we recommend overriding the air cavity with an ED of 0.21 g/cm3 and averaging the dose over a 1.5 mm sampling sphere instead of using a point dose, to reduce the stochastic error. Funding Support No. C201505006.
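
    The sphere-sampling idea, replacing a single noisy voxel read with the mean dose over a small sphere of voxels around the effective point, can be sketched as follows (hypothetical grid layout and function names, not the Monaco API):

    ```python
    def sphere_mean_dose(dose, center, radius_mm, spacing_mm=2.0):
        """Mean dose over all voxels whose centers lie within `radius_mm`
        of `center` (a (z, y, x) voxel index) in a nested [z][y][x] grid.
        Averaging over many voxels reduces the Monte Carlo noise of a
        single-voxel (point) read."""
        r_vox = radius_mm / spacing_mm
        cz, cy, cx = center
        total, count = 0.0, 0
        reach = int(r_vox) + 1
        for dz in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dx in range(-reach, reach + 1):
                    if dz * dz + dy * dy + dx * dx <= r_vox * r_vox:
                        total += dose[cz + dz][cy + dy][cx + dx]
                        count += 1
        return total / count

    # A uniform 5x5x5 grid: the sphere mean equals the point value.
    grid = [[[1.0] * 5 for _ in range(5)] for _ in range(5)]
    print(sphere_mean_dose(grid, (2, 2, 2), radius_mm=2.5))  # -> 1.0
    ```

    With the 2 mm grid used in the paper, a 2.5 mm radius covers the center voxel plus its six face neighbours, so the noise of the averaged read drops roughly by a factor of sqrt(7) relative to a single voxel.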

  7. Spatial separation and entanglement of identical particles

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; di Martino, Sara; Facchi, Paolo; Florio, Giuseppe

    2014-04-01

    We reconsider the effect of indistinguishability on the reduced density operator of the internal degrees of freedom (tracing out the spatial degrees of freedom) for a quantum system composed of identical particles located in different spatial regions. We explicitly show that if the spin measurements are performed in disjoint spatial regions then there are no constraints on the structure of the reduced state of the system. This implies that the statistics of identical particles has no role from the point of view of separability and entanglement when the measurements are spatially separated. We extend the treatment to the case of n particles and show the connection with some recent criteria for separability based on subalgebras of observables.

  8. Testing the system detection unit for measuring solid minerals bulk density

    NASA Astrophysics Data System (ADS)

    Voytyuk, I. N.; Kopteva, A. V.

    2017-10-01

    The paper provides a brief description of a system for measuring flux per volume of solid minerals, using mineral coal as an example, and discloses the operational principle of its detection unit. It gives a full description of the testing methodology and of the practical implementation of the detection unit testing, describing the acquisition of two data arrays, via the channels of scattered and direct radiation, for detection units of two generations. Matlab software was used to determine the statistical characteristics of the studied objects. To assess the counting stability of the detection units, the mean number of pulses per cycle and the pulse-counting inaccuracy relative to the mean value were determined.

  9. Extraction of lead and ridge characteristics from SAR images of sea ice

    NASA Technical Reports Server (NTRS)

    Vesecky, John F.; Smith, Martha P.; Samadani, Ramin

    1990-01-01

    Image-processing techniques for extracting the characteristics of lead and pressure ridge features in SAR images of sea ice are reported. The methods are applied to a SAR image of the Beaufort Sea collected from the Seasat satellite on October 3, 1978. Estimates of lead and ridge statistics are made, e.g., lead and ridge density (number of lead or ridge pixels per unit area of image) and the distribution of lead area and orientation as well as ridge length and orientation. The information derived is useful in both ice science and polar operations for such applications as albedo and heat and momentum transfer estimates, as well as ship routing and offshore engineering.

  10. 78 FR 63569 - Proposed Collection; Comment Request for Cognitive and Psychological Research Coordinated by...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-24

    ... Cognitive and Psychological Research Coordinated by Statistics of Income on Behalf of All IRS Operations... Cognitive and Psychological Research Coordinated by Statistics of Income on Behalf of All IRS Operations...: Cognitive and Psychological Research Coordinated by Statistics of Income on Behalf of All IRS Operations...

  11. 75 FR 56657 - Proposed Collection; Comment Request for Cognitive and Psychological Research Coordinated by...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-16

    ... Cognitive and Psychological Research Coordinated by Statistics of Income on Behalf of All IRS Operations... Cognitive and Psychological Research Coordinated by Statistics of Income on Behalf of All IRS Operations...: Cognitive and Psychological Research Coordinated by Statistics of Income on Behalf of All IRS Operations...

  12. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
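
    The off-axis modified Rician model has a direct physical simulation: intensity is the squared modulus of a constant phasor (the deterministic part of the point spread function) plus circular complex Gaussian speckle. A stdlib-only sketch with illustrative parameter values:

    ```python
    import random

    def mr_intensity(i_c, i_s, rng):
        """One modified-Rician intensity draw: |constant phasor + circular
        complex Gaussian speckle|^2. The mean intensity is i_c + i_s."""
        sigma = (i_s / 2.0) ** 0.5        # per-quadrature speckle std
        re = i_c ** 0.5 + rng.gauss(0.0, sigma)
        im = rng.gauss(0.0, sigma)
        return re * re + im * im

    rng = random.Random(1)
    n = 200_000
    samples = [mr_intensity(4.0, 1.0, rng) for _ in range(n)]
    print(round(sum(samples) / n, 1))  # close to i_c + i_s = 5.0
    ```

    On axis the constant phasor itself fluctuates, which is why the paper argues this model fails there and proposes a gamma-based distribution for the core of the point spread function instead.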

  13. Modeling shock-driven reaction in low density PMDI foam

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Alexander, C. Scott; Reinhart, William; Peterson, David

    Shock experiments on low density polyurethane foams reveal evidence of reaction at low impact pressures. However, these reaction thresholds are not evident over the low pressures reported for historical Hugoniot data of highly distended polyurethane at densities below 0.1 g/cc. To fill this gap, impact data given in a companion paper for polymethylene diisocyanate (PMDI) foam with a density of 0.087 g/cc were acquired for model validation. An equation of state (EOS) was developed to predict the shock response of these highly distended materials over the full range of impact conditions representing compaction of the inert material, low-pressure decomposition, and compression of the reaction products. A tabular SESAME EOS of the reaction products was generated using the JCZS database in the TIGER equilibrium code. In particular, the Arrhenius Burn EOS, a two-state model which transitions from an unreacted to a reacted state using single step Arrhenius kinetics, as implemented in the shock physics code CTH, was modified to include a statistical distribution of states. Hence, a single EOS is presented that predicts the onset to reaction due to shock loading in PMDI-based polyurethane foams. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.

  14. Obtaining sub-daily new snow density from automated measurements in high mountain regions

    NASA Astrophysics Data System (ADS)

    Helfricht, Kay; Hartl, Lea; Koch, Roland; Marty, Christoph; Olefs, Marc

    2018-05-01

    The density of new snow is operationally monitored by meteorological or hydrological services at daily time intervals, or occasionally measured in local field studies. However, meteorological conditions, and thus settling of the freshly deposited snow, rapidly alter the new snow density until measurement. Physically based snow models and nowcasting applications make use of hourly weather data to determine the water equivalent of snowfall and snow depth. In previous studies, a number of empirical parameterizations were developed to approximate the new snow density from meteorological parameters; these parameterizations are largely based on local in situ new snow measurements. In this study, a data set of automated snow measurements at four stations located in the European Alps is analysed for several winter seasons. Hourly new snow densities are calculated from the height of new snow and the water equivalent of snowfall. Considering the settling of the new snow and the old snowpack, the average hourly new snow density is 68 kg m-3, with a standard deviation of 9 kg m-3. Seven existing parameterizations for estimating new snow density were tested against these data, and most of them overestimate the hourly automated measurements. Two of the tested parameterizations were capable of simulating the low new snow densities observed at sheltered inner-alpine stations. The observed variability in new snow density from the automated measurements could not be described with satisfactory statistical significance by any of the investigated parameterizations. Applying simple linear regressions between new snow density and wet-bulb temperature based on the measurement data resulted in significant relationships (r2 > 0.5 and p ≤ 0.05) only for single periods at individual stations. Higher new snow densities were calculated for the highest and most wind-exposed station location. Whereas snow measurements using ultrasonic devices and snow pillows are appropriate for calculating station-mean new snow densities, we recommend instruments with higher accuracy, e.g. optical devices, for more reliable investigations of the variability of new snow densities at sub-daily intervals.
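
    The simple linear regressions of new snow density against wet-bulb temperature reported above (significance threshold r2 > 0.5) can be reproduced with ordinary least squares; the data values below are illustrative, not the station measurements:

    ```python
    def linear_fit(x, y):
        """Ordinary least squares y = a + b*x, returning (a, b, r_squared)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        b = sxy / sxx
        a = my - b * mx
        r2 = sxy * sxy / (sxx * syy)
        return a, b, r2

    # Illustrative (not measured) values: new snow density [kg m^-3]
    # against wet-bulb temperature [deg C].
    temps = [-8.0, -6.0, -4.0, -2.0, 0.0]
    rho = [55.0, 60.0, 66.0, 74.0, 85.0]
    a, b, r2 = linear_fit(temps, rho)
    print(f"slope={b:.2f} kg m^-3 per K, r^2={r2:.3f}")
    ```

    A fit would count as "significant" in the paper's sense only when r2 exceeds 0.5 and the regression p-value is at most 0.05.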

  15. Operator pencil passing through a given operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biggs, A., E-mail: adam.biggs@student.manchester.ac.uk; Khudaverdian, H. M., E-mail: khudian@manchester.ac.uk

    Let Δ be a linear differential operator acting on the space of densities of a given weight λ_0 on a manifold M. One can consider a pencil of operators Π̂(Δ) = (Δ_λ) passing through the operator Δ such that any Δ_λ is a linear differential operator acting on densities of weight λ. This pencil can be identified with a linear differential operator Δ̂ acting on the algebra of densities of all weights. The existence of an invariant scalar product in the algebra of densities implies a natural decomposition of operators, i.e., pencils of self-adjoint and anti-self-adjoint operators. We study lifting maps that are, on one hand, equivariant with respect to divergenceless vector fields and, on the other hand, take values in self-adjoint or anti-self-adjoint operators. In particular, we analyze the relation between these two concepts and apply it to the study of diff(M)-equivariant liftings. Finally, we briefly consider the case of liftings equivariant with respect to the algebra of projective transformations and describe all regular self-adjoint and anti-self-adjoint liftings. Our constructions can be considered as a generalisation of equivariant quantisation.

  16. Unmanned Aerial Systems Traffic Management (UTM): Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Jung, Jaewoo; Kopardekar, Parimal H.

    2016-01-01

    Flexibility where possible, and structure where necessary. Consider the needs of national security, safe airspace operations, economic opportunities, and emerging technologies. Risk-based approach based on population density, assets on the ground, density of operations, etc. Digital, virtual, dynamic, and as needed UTM services to manage operations.

  17. Unmanned Aerial Systems Traffic Management (UTM): Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.; Cavolowsky, John

    2015-01-01

    Flexibility where possible, and structure where necessary. Consider the needs of national security, safe airspace operations, economic opportunities, and emerging technologies. Risk-based approach based on population density, assets on the ground, density of operations, etc. Digital, virtual, dynamic, and as needed UTM services to manage operations.

  18. Corneal endothelial cell density after femtosecond thin-flap LASIK and PRK for myopia: a contralateral eye study.

    PubMed

    Smith, Ryan T; Waring, George O; Durrie, Daniel S; Stahl, Jason E; Thomas, Priscilla

    2009-12-01

    To compare the effect of femtosecond thin-flap LASIK and photorefractive keratectomy (PRK) on postoperative endothelial cell density. In a prospective, randomized, contralateral, single-center clinical trial, 25 patients (mean age: 30+/-5 years [range: 21 to 38 years]) underwent PRK in one eye and thin-flap LASIK in the fellow eye for the correction of myopia using a wavefront-guided platform. The central corneal endothelial cell density was measured using the NIDEK Confoscan 4 preoperatively, and at 1 and 3 months postoperatively. Changes in endothelial cell density were analyzed over time between the two refractive techniques. In PRK, the average preoperative endothelial cell density was 3011+/-329 cells/mm(2), which decreased to 2951+/-327 cells/mm(2) at 1 month (P=.5736) and 2982+/-365 cells/mm(2) at 3 months (P=.6513). In thin-flap LASIK, the average preoperative endothelial cell density was 2995+/-325 cells/mm(2), which decreased to 2977+/-358 cells/mm(2) at 1 month (P=.5756) and 2931+/-369 cells/mm(2) at 3 months (P=.4106). No statistically significant difference was found between the two groups at 1 (P=.7404) or 3 (P=.3208) months postoperatively. No statistically significant change was noted in endothelial cell density following either PRK or thin-flap LASIK for the treatment of myopia. Furthermore, no statistically significant difference was found between the two groups out to 3 months postoperatively, indicating that thin-flap LASIK is as safe as PRK with regard to endothelial health.

  19. Advanced intermediate temperature sodium-nickel chloride batteries with ultra-high energy density

    DOE PAGES

    Li, Guosheng; Lu, Xiaochuan; Kim, Jin Yong; ...

    2016-02-11

    Here we demonstrate for the first time that planar Na-NiCl2 batteries can be operated at an intermediate temperature of 190°C with ultra-high energy density. A specific energy density of 350 Wh/kg, which is 3 times higher than that of conventional tubular Na-NiCl2 batteries operated at 280°C, was obtained for planar Na-NiCl2 batteries operated at 190°C over a long-term cell test (1000 cycles). The high energy density and superior cycle stability are attributed to the slower particle growth of the cathode materials (NaCl and Ni) at 190°C. The results reported in this work demonstrate that operating planar Na-NiCl2 batteries at an intermediate temperature could greatly benefit this traditional energy storage technology by improving energy density and cycle life and reducing material costs.

  20. 47 CFR 54.807 - Interstate access universal service support.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Common Carrier Bureau Report, Statistics of Communications Common Carriers, Table 6.10—Selected Operating Statistics. Interested parties may obtain this report from the U.S. Government Printing Office or by... Bureau Report, Statistics of Communications Common Carriers, Table 6.10—Selected Operating Statistics...

  1. Voice Response System Statistics Program : Operational Handbook.

    DOT National Transportation Integrated Search

    1980-06-01

    This report documents the Voice Response System (VRS) Statistics Program developed for the preflight weather briefing VRS. It describes the VRS statistical report format and contents, the software program structure, and the program operation.

  2. Fermi liquid, clustering, and structure factor in dilute warm nuclear matter

    NASA Astrophysics Data System (ADS)

    Röpke, G.; Voskresensky, D. N.; Kryukov, I. A.; Blaschke, D.

    2018-02-01

    Properties of nuclear systems at subsaturation densities can be obtained from different approaches. We demonstrate the use of the density autocorrelation function, which is related to the isothermal compressibility and, after integration, to the equation of state. In this way we connect the Landau Fermi liquid theory, well elaborated in nuclear physics, with approaches to dilute nuclear matter that describe cluster formation. A quantum statistical approach is presented, based on the cluster decomposition of the polarization function; the fundamental quantity to be calculated is the dynamic structure factor. The Landau Fermi liquid theory is reproduced in the lowest approximation, while accounting for bound-state formation and continuum correlations gives the correct low-density result as described by the second virial coefficient and by the mass action law (nuclear statistical equilibrium). Going to higher densities, the inclusion of medium effects is more involved than in other quantum statistical approaches, but the relation to the Landau Fermi liquid theory offers a promising way to describe not only thermodynamic but also collective excitations and non-equilibrium properties of nuclear systems in a wide region of the phase diagram.

  3. An analysis of a large dataset on immigrant integration in Spain. The Statistical Mechanics perspective on Social Action

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia

    2014-02-01

    How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and shown to explain both growth behaviors observed. A linear theory that ignores the possibility of interaction effects would instead underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.

  4. Comparison of the detectability of high- and low-contrast details on a TFT screen and a CRT screen designed for radiologic diagnosis.

    PubMed

    Kotter, Elmar; Bley, Thorsten A; Saueressig, Ulrich; Fisch, Dagmar; Springer, Oliver; Winterer, Jan Torsten; Schaefer, Oliver; Langer, Mathias

    2003-11-01

    To evaluate the detection rate of fine details on a new thin-film transistor (TFT) grayscale monitor designed for radiologic diagnosis, compared with a type of cathode ray tube (CRT) screen used routinely for diagnostic radiology. Fifteen radiographs of a statistical phantom presenting low- and high-contrast details were obtained and read out with an Agfa ADC compact storage phosphor system. Each radiograph presented 60 high-density (high-contrast) and 60 low-density (low-contrast) test bodies. Approximately half the test bodies contained holes with different diameters. Observers were asked to rate the presence or absence of a hole in each test body on a 5-point confidence scale. The total of 1800 test bodies was reviewed by 5 radiologists on the TFT monitor (20.8 inches; 1536 x 2048 pixels; maximum luminance, 650 cd/m2; contrast, 600:1) and the CRT monitor (21 inches; P45 phosphor; 2048 x 2560 pixels operated at 1728 x 2304 pixels; maximum luminance, 600 cd/m2; contrast, 300:1). The data were analyzed by receiver operating characteristic (ROC) analysis. For high-contrast details, the mean area under the curve was 0.9336 for the TFT monitor and 0.9312 for the CRT monitor. For low-contrast details, the mean area under the curve was 0.9189 for the TFT monitor and 0.9224 for the CRT monitor. At P

  5. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halligan, Matthew

    Radiated power calculation approaches are presented for practical scenarios with incomplete high-density interface characterization information and incomplete incident power information. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst-case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled by a two-state Markov chain, from which bit-state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as the basis for the radiated power calculation, owing to the statistical complexity of finding a radiated power probability density function.
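
    The two-state Markov chain used for non-periodic data signals yields bit-state probabilities in closed form; a minimal sketch (the transition probabilities are illustrative, not values from the report):

    ```python
    import random

    def stationary_bit_probs(p01, p10):
        """Stationary probabilities of a two-state (bit 0 / bit 1) Markov
        chain, where p01 = P(next=1 | current=0) and p10 = P(next=0 |
        current=1). Balance condition: p0*p01 = p1*p10."""
        p1 = p01 / (p01 + p10)
        return 1.0 - p1, p1

    p0, p1 = stationary_bit_probs(0.25, 0.75)
    print(p0, p1)  # -> 0.75 0.25

    # Cross-check by simulating a long bit sequence.
    rng = random.Random(0)
    state, ones, n = 0, 0, 100_000
    for _ in range(n):
        if state == 0:
            state = 1 if rng.random() < 0.25 else 0
        else:
            state = 0 if rng.random() < 0.75 else 1
        ones += state
    print(round(ones / n, 2))  # close to 0.25
    ```

    The stationary probabilities weight the per-pulse spectra when superposing them into the total windowed-signal spectrum.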

  6. Modeling of the reactant conversion rate in a turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Frankel, S. H.; Madnia, C. K.; Givi, P.

    1992-01-01

    Results are presented of direct numerical simulations (DNS) of spatially developing shear flows under the influence of infinitely fast chemical reactions of the type A + B yields Products. The simulation results are used to construct the compositional structure of the scalar field in a statistical manner. The results of this statistical analysis indicate that the use of a Beta density for the probability density function (PDF) of an appropriate Shvab-Zeldovich mixture fraction provides a very good estimate of the limiting bounds of the reactant conversion rate within the shear layer. This provides a strong justification for the implementation of this density in practical modeling of non-homogeneous turbulent reacting flows. However, the validity of the model cannot be generalized for predictions of higher order statistical quantities. A closed form analytical expression is presented for predicting the maximum rate of reactant conversion in non-homogeneous reacting turbulence.
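
    The presumed Beta density endorsed here is fully determined by the mixture-fraction mean and variance through moment matching; a minimal sketch (the symmetric test values are illustrative):

    ```python
    import math

    def beta_params(mean, var):
        """Moment-matched Beta parameters (alpha, beta) for a presumed
        mixture-fraction PDF with the given mean and variance."""
        g = mean * (1.0 - mean) / var - 1.0   # requires var < mean*(1-mean)
        return mean * g, (1.0 - mean) * g

    def beta_pdf(z, a, b):
        """Beta density on (0, 1), computed via log-gamma for stability."""
        ln_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
        return math.exp((a - 1.0) * math.log(z)
                        + (b - 1.0) * math.log(1.0 - z) - ln_norm)

    alpha, beta = beta_params(0.5, 0.0625)
    print(alpha, beta)  # -> 1.5 1.5
    ```

    In a presumed-PDF closure, mean reaction rates are then obtained by integrating the instantaneous rate against this density over the mixture fraction.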

  7. Non-Abelian statistics of vortices with non-Abelian Dirac fermions.

    PubMed

    Yasui, Shigehiro; Hirono, Yuji; Itakura, Kazunori; Nitta, Muneto

    2013-05-01

    We extend our previous analysis on the exchange statistics of vortices having a single Dirac fermion trapped in each core to the case where vortices trap two Dirac fermions with U(2) symmetry. Such a system of vortices with non-Abelian Dirac fermions appears in color superconductors at extremely high densities and in supersymmetric QCD. We show that the exchange of two vortices having doublet Dirac fermions in each core is expressed by non-Abelian representations of a braid group, which is explicitly verified in the matrix representation of the exchange operators when the number of vortices is up to four. We find that the result contains the matrices previously obtained for the vortices with a single Dirac fermion in each core as a special case. The whole braid group does not immediately imply non-Abelian statistics of identical particles because it also contains exchanges between vortices with different numbers of Dirac fermions. However, we find that it does contain, as its subgroup, genuine non-Abelian statistics for the exchange of the identical particles, that is, vortices with the same number of Dirac fermions. This result is surprising compared with conventional understanding because all Dirac fermions are defined locally at each vortex, unlike the case of Majorana fermions for which Dirac fermions are defined nonlocally by Majorana fermions located at two spatially separated vortices.

  8. Statistical properties of kinetic and total energy densities in reverberant spaces.

    PubMed

    Jacobsen, Finn; Molares, Alfonso Rodríguez

    2010-04-01

    Many acoustical measurements, e.g., measurement of sound power and transmission loss, rely on determining the total sound energy in a reverberation room. The total energy is usually approximated by measuring the mean-square pressure (i.e., the potential energy density) at a number of discrete positions. The idea of measuring the total energy density instead of the potential energy density on the assumption that the former quantity varies less with position than the latter goes back to the 1930s. However, the phenomenon was not analyzed until the late 1970s and then only for the region of high modal overlap, and this analysis has never been published. Moreover, until fairly recently, measurement of the total sound energy density required an elaborate experimental arrangement based on finite-difference approximations using at least four amplitude and phase matched pressure microphones. With the advent of a three-dimensional particle velocity transducer, it has become somewhat easier to measure total rather than only potential energy density in a sound field. This paper examines the ensemble statistics of kinetic and total sound energy densities in reverberant enclosures theoretically, experimentally, and numerically.
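
    A minimal Monte Carlo illustration of why the total energy density varies less with position than the potential energy density, under the idealized diffuse-field assumption that pressure and the three particle-velocity components are independent complex Gaussians (a textbook simplification, not the paper's full ensemble analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def cgauss(n):
    # Unit-variance circular complex Gaussian samples.
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

p = cgauss(N)                      # sound pressure at random positions
u = [cgauss(N) for _ in range(3)]  # three particle-velocity components

w_pot = np.abs(p) ** 2                    # potential energy density (arb. units)
w_kin = sum(np.abs(ui) ** 2 for ui in u)  # kinetic energy density
w_tot = w_pot + w_kin                     # total energy density

rel_std = lambda w: w.std() / w.mean()    # relative spatial variation
print(rel_std(w_pot), rel_std(w_kin), rel_std(w_tot))
```

The total energy density pools more degrees of freedom, so its relative standard deviation is the smallest of the three.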

  9. Heartwood and sapwood in eucalyptus trees: non-conventional approach to wood quality.

    PubMed

    Cherelli, Sabrina G; Sartori, Maria Márcia P; Próspero, André G; Ballarin, Adriano W

    2018-01-01

    This study evaluated the quality of heartwood and sapwood from mature trees of three species of Eucalyptus by quantifying their proportions, determining basic and apparent density with the non-destructive gamma-radiation attenuation technique, and calculating a density uniformity index. Six trees of each species (Eucalyptus grandis - 18 years old, Eucalyptus tereticornis - 35 years old and Corymbia citriodora - 28 years old) were used in the experimental program. The heartwood and sapwood were delimited by macroscopic analysis, and the areas and percentages of heartwood and sapwood were calculated from digital images. The uniformity index was calculated following a methodology that numerically quantifies the dispersion of punctual density values of the wood around the mean density along the radius. The percentage of heartwood was higher than that of sapwood in all species studied. The density results showed no statistical difference between heartwood and sapwood. In contrast, in all species studied there were statistical differences between the uniformity indexes for the heartwood and sapwood regions, justifying the inclusion of the density uniformity index as a quality parameter for Eucalyptus wood.
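
    The abstract does not give the exact formula of the uniformity index, so the sketch below uses an assumed stand-in (mean absolute deviation of punctual densities around the mean, normalized by the mean) with made-up radial readings:

```python
import numpy as np

# Punctual (pointwise) density readings along a radius, e.g. from gamma-ray
# attenuation (illustrative numbers, not the study's data; g/cm^3).
density = np.array([0.62, 0.58, 0.55, 0.60, 0.66, 0.70, 0.64])

# Assumed uniformity-style index: normalized mean absolute deviation of the
# punctual densities around their mean along the radius. Smaller = more uniform.
index = np.mean(np.abs(density - density.mean())) / density.mean()
print(index)
```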

  10. The role of drop velocity in statistical spray description

    NASA Technical Reports Server (NTRS)

    Groeneweg, J. F.; El-Wakil, M. M.; Myers, P. S.; Uyehara, O. A.

    1978-01-01

    The justification for describing a spray by treating drop velocity as a random variable on an equal statistical basis with drop size was studied experimentally. A double exposure technique using fluorescent drop photography was used to make size and velocity measurements at selected locations in a steady ethanol spray formed by a swirl atomizer. The size velocity data were categorized to construct bivariate spray density functions to describe the spray immediately after formation and during downstream propagation. Bimodal density functions were formed by environmental interaction during downstream propagation. Large differences were also found between spatial mass density and mass flux size distribution at the same location.
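
    Constructing a bivariate (size-velocity) spray density function from categorized samples can be sketched with a joint histogram; the size and velocity distributions below are synthetic stand-ins for the photographic measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic drop size (um) and velocity (m/s) samples (illustrative only).
size = rng.gamma(shape=4.0, scale=20.0, size=5000)
velocity = rng.normal(loc=10.0, scale=2.0, size=5000)

# Categorize the size-velocity pairs into a bivariate (joint) density estimate.
hist, size_edges, vel_edges = np.histogram2d(size, velocity,
                                             bins=(20, 15), density=True)

# With density=True the histogram integrates to 1 over the sampled region.
cell_area = np.outer(np.diff(size_edges), np.diff(vel_edges))
print((hist * cell_area).sum())  # ≈ 1.0
```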

  11. Carrier statistics and quantum capacitance effects on mobility extraction in two-dimensional crystal semiconductor field-effect transistors

    NASA Astrophysics Data System (ADS)

    Ma, Nan; Jena, Debdeep

    2015-03-01

    In this work, the consequence of the high band-edge density of states on the carrier statistics and quantum capacitance in transition metal dichalcogenide two-dimensional semiconductor devices is explored. The study questions the validity of commonly used expressions for extracting carrier densities and field-effect mobilities from the transfer characteristics of transistors with such channel materials. By comparison to experimental data, a new method for the accurate extraction of carrier densities and mobilities is outlined. The work thus highlights a fundamental difference between these materials and traditional semiconductors that must be considered in future experimental measurements.
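
    The carrier-statistics point above can be illustrated with the closed-form sheet density for a step two-dimensional density of states under Fermi-Dirac statistics (the effective mass and Fermi levels below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Sheet carrier density for a 2-D semiconductor with a step band-edge density
# of states g2D = m*/(pi*hbar^2):  n = g2D * kT * ln(1 + exp(EF / kT)).
kT = 0.0259                  # thermal energy at 300 K, eV
m_eff = 0.5 * 9.109e-31      # assumed effective mass, kg
hbar = 1.0546e-34            # J*s
q = 1.602e-19                # J per eV

g2d = m_eff / (np.pi * hbar**2) * q        # states per m^2 per eV

ef_values = np.array([-0.1, 0.0, 0.1])     # Fermi level vs. band edge, eV
n_values = g2d * kT * np.log1p(np.exp(ef_values / kT))
print(n_values)  # density rises steeply once EF crosses the band edge
```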

  12. Optimization of electrocoagulation process to treat grey wastewater in batch mode using response surface methodology

    PubMed Central

    2014-01-01

    Background Discharge of grey wastewater into the ecological system has a negative impact on receiving water bodies. Methods In the present study, the electrocoagulation (EC) process was investigated to treat grey wastewater under different operating conditions such as initial pH (4–8), current density (10–30 mA/cm2), electrode distance (4–6 cm) and electrolysis time (5–25 min) using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses: total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. Results The process variables showed a significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to study the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, a current density of 20 mA/cm2, an electrode distance of 5 cm and an electrolysis time of 20 min. Conclusion These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiency of TS, COD and FC. PMID:24410752
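
    Fitting a second-order polynomial model and locating its optimum, as done in response-surface methodology, can be sketched for a single factor with synthetic data (the coefficients and noise level are assumptions, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic response: e.g. COD removal (%) vs. current density (mA/cm^2),
# with a known optimum at 20 mA/cm^2 plus measurement noise.
x = np.linspace(10, 30, 9)
y = 40 + 4.0 * x - 0.1 * x**2 + rng.normal(0, 0.5, x.size)

# Design matrix for the quadratic model y = b0 + b1*x + b2*x^2, least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary (optimum) point of the fitted parabola: x* = -b1 / (2*b2).
x_opt = -beta[1] / (2.0 * beta[2])
print(beta, x_opt)  # x_opt should land near 20 mA/cm^2
```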

  13. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
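
    A Specht-style (Parzen/Gaussian-kernel) probability density estimator can be sketched as follows; the standard-normal test data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(0.0, 1.0, 2000)

def parzen_pdf(x, data, h):
    """Gaussian-kernel (Parzen/Specht-style) density estimate at points x."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

grid = np.linspace(-4, 4, 81)
est = parzen_pdf(grid, samples, h=0.3)

# Compare against the true standard-normal density.
true = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(est - true)))  # small for this sample size
```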

  14. Predicting Space Weather Effects on Close Approach Events

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Newman, Lauri K.; Besser, Rebecca L.; Pachura, Daniel A.

    2015-01-01

    The NASA Robotic Conjunction Assessment Risk Analysis (CARA) team sends ephemeris data to the Joint Space Operations Center (JSpOC) for conjunction assessment screening against the JSpOC high accuracy catalog and then assesses the risk posed to protected assets by predicted close approaches. Since most spacecraft supported by the CARA team are in LEO orbits, atmospheric drag is the primary source of state estimate uncertainty. Drag magnitude and uncertainty are directly governed by atmospheric density and thus space weather. At present the actual effect of space weather on atmospheric density cannot be accurately predicted because most atmospheric density models are empirical in nature and do not perform well in prediction. The Jacchia-Bowman-HASDM 2009 (JBH09) atmospheric density model used at the JSpOC employs a solar storm active compensation feature that predicts storm sizes and arrival times and thus the resulting neutral density alterations. With this feature, estimation errors can occur in either direction (i.e., over- or under-estimation of density and thus drag). Although the exact effect of a solar storm on atmospheric drag cannot be determined, one can explore the effects of JBH09 model error on conjuncting objects' trajectories to determine whether a conjunction is likely to become riskier, less risky, or pass unaffected. The CARA team has constructed a Space Weather Trade-Space tool that systematically alters the drag situation for the conjuncting objects and recalculates the probability of collision for each case to determine the range of possible effects on the collision risk. In addition to a review of the theory and the particulars of the tool, the different types of observed output are explained, along with statistics of their frequency.
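
    The trade-space idea of systematically altering the drag situation and recomputing the probability of collision can be sketched with a toy two-dimensional encounter-plane Monte Carlo; all geometry and uncertainty values below are illustrative assumptions, not CARA's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def collision_probability(miss_mean, sigma, hbr, n=200_000):
    """Monte Carlo Pc in a 2-D encounter plane: Gaussian relative-position
    uncertainty (std sigma, km) around a nominal miss vector, with a
    combined hard-body radius hbr (km)."""
    pts = rng.normal(0.0, sigma, size=(n, 2)) + np.array([miss_mean, 0.0])
    return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr)

# Trade-space sketch: scale the nominal miss distance to mimic density
# (drag) model error and recompute Pc for each case.
nominal_miss, sigma, hbr = 1.0, 0.4, 0.1
for scale in (0.5, 1.0, 1.5):
    print(scale, collision_probability(scale * nominal_miss, sigma, hbr))
```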

  15. The effect of undersizing and tapping on bone to implant contact and implant primary stability: A histomorphometric study on bovine ribs.

    PubMed

    Di Stefano, Danilo Alessio; Perrotti, Vittoria; Greco, Gian Battista; Cappucci, Claudia; Arosio, Paolo; Piattelli, Adriano; Iezzi, Giovanna

    2018-06-01

    Implant site preparation may be adjusted to achieve the maximum possible primary stability. The aim of this investigation was to study the relation among bone-to-implant contact at insertion, bone density, and implant primary stability measured intra-operatively by a torque-measuring implant motor when implant sites were undersized or tapped. Undersized (n=14), standard (n=13), and tapped (n=13) implant sites were prepared on 9 segments of bovine ribs. After measuring bone density with the implant motor, 40 implants were placed and their primary stability assessed by measuring the integral of the torque-depth insertion curve. The bovine ribs were then processed histologically, and the bone-to-implant contact was measured and statistically correlated with bone density and the integral. Bone-to-implant contact and the integral of the torque-depth curve were significantly greater for undersized sites than for tapped sites. Moreover, a correlation among bone-to-implant contact, the integral, and bone density was found under all preparation conditions. The slope of the bone-to-implant/density and integral/density lines was significantly greater for undersized sites, while those corresponding to standard and tapped sites did not differ significantly. The integral of the torque-depth curve provided reliable information about bone-to-implant contact and primary implant stability even in tapped or undersized sites. The linear relations found among the parameters suggest a connection between the extent and modality of undersizing and the corresponding increase of the integral and, consequently, of primary stability. These results might help the physician determine the extent of undersizing needed to achieve proper implant primary stability, according to the planned loading protocol.
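
    The stability metric above, the integral of the torque-depth insertion curve, reduces numerically to a quadrature; a minimal sketch with an assumed linear torque profile:

```python
import numpy as np

# Illustrative torque-depth insertion curve readings (assumed linear profile).
depth = np.linspace(0.0, 10.0, 21)     # insertion depth, mm
torque = 5.0 + 2.5 * depth             # insertion torque, N*cm

# Primary-stability metric: integral of the torque-depth curve, approximated
# with the trapezoidal rule (exact for this linear profile).
integral = float(np.sum((torque[1:] + torque[:-1]) * np.diff(depth)) / 2.0)
print(integral)  # 175.0 (N*cm*mm)
```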

  16. Correlation techniques and measurements of wave-height statistics

    NASA Technical Reports Server (NTRS)

    Guthart, H.; Taylor, W. C.; Graf, K. A.; Douglas, D. G.

    1972-01-01

    Statistical measurements of wave-height fluctuations have been made in a wind wave tank. The power spectral density function of temporal wave-height fluctuations showed second-harmonic components and an f^-5 power-law decay beyond the second harmonic. The observations of second-harmonic effects agreed very well with a theoretical prediction. From the wave statistics, surface drift currents were inferred and compared with experimental measurements with satisfactory agreement. Measurements were made of the two-dimensional correlation coefficient at 15 deg increments in angle with respect to the wind vector. An estimate of the two-dimensional spatial power spectral density function was also made.
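
    An f^-5 spectral decay can be checked numerically by fitting a log-log slope to a periodogram; the sketch below synthesizes a record with a known f^-5 band rather than using tank data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, fs = 2**16, 100.0
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Synthesize a wave-height-like record whose PSD follows an f^-5 power law
# above 1 Hz: amplitude spectrum ~ sqrt(PSD) = f^(-5/2), random phases.
amp = np.zeros_like(freqs)
band = freqs >= 1.0
amp[band] = freqs[band] ** (-5.0 / 2.0)
phase = rng.uniform(0, 2 * np.pi, freqs.size)
x = np.fft.irfft(amp * np.exp(1j * phase), n)

# Periodogram estimate of the PSD and a log-log slope fit over the band.
psd = np.abs(np.fft.rfft(x)) ** 2
fit_band = (freqs >= 2.0) & (freqs <= 20.0)
slope = np.polyfit(np.log(freqs[fit_band]), np.log(psd[fit_band]), 1)[0]
print(slope)  # close to -5
```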

  17. Characterisation of an urban bus network for environmental purposes.

    PubMed

    André, Michel; Villanova, André

    2004-12-01

    Since pollutant emissions are closely related to the operating conditions of vehicles, their evaluation usually involves studying these operating conditions (through bus instrumentation and monitoring under actual operation), the design of representative driving or engine test cycles and the measurement of pollutant emissions. A preliminary characterisation of the routes on a bus network should make it possible to identify typical routes, the driving conditions and pollutant emissions of which are then studied. Two approaches are envisaged and applied to the Paris area, for which a wealth of information is available, which should be transferable to other bus networks. Both approaches are based on factorial analysis and automatic clustering, to allow optimum description and the identification of a pertinent typology of the bus routes in several classes. The first attempt at characterisation is based on statistics relating to bus operations: route characteristics (length, dedicated bus lanes, number of stops, location of stops: schools, tourist sites, hospitals, railways or underground stations), travel time, commercial speed, annual statistics (number of passengers, number of vehicles per hour, total kilometres), the irregularity of travel (variation of travel times, injuries, congestion, etc.), as well as information on the problems encountered (congestion, distribution of the passenger load, junctions, bends). A second approach is based on the analysis of the "urban context" in which buses are driven. Population, employment, housing, road network, traffic and places that generate or disturb traffic (schools, railway stations, shopping areas, etc.) are calculated for the Ile de France region, by cells of 100 x 100 m, and collected in a geographical information system (GIS). Statistical analyses enable a typology of these urban cells to be established, the main parameters being density, type of housing, road types and traffic levels. The bus routes are then analysed according to their itineraries across these typical areas (distances travelled in each type of area) using a similar approach. A comparison of the typologies obtained from operational data and from urban data highlights the advantages and disadvantages of the two approaches. The first result from these typologies is the selection of routes which are representative of the different classes, in order to instrument buses and record driving patterns. This method should also make it possible to link driving conditions and urban characteristics, and then to allocate pollutant emission factors to given geographical situations, in particular, in the context of emission inventories or impact studies.

  18. Social Capital and the Trade Unions: Reciprocity, or Understanding the Ties that Bind Us?

    ERIC Educational Resources Information Center

    Whaites, Michael

    2005-01-01

    Australia once enjoyed the highest union density in the world but is now facing a "crisis of membership". Trade union membership and density have been in decline in Australia since the late 1970s with a 50 percent decrease in density since 1990. The Australian Bureau of Statistics currently puts trade union density at around 25 percent…

  19. Quality Assurance for Rapid Airfield Construction

    DTIC Science & Technology

    2008-05-01

    necessary to conduct a volume-replacement density test for in-place soil. This density test, which was developed during this investigation, involves...the test both simpler and quicker. The Clegg hammer results are the primary means of judging compaction; thus, the requirements for density tests are...minimized through a stepwise acceptance procedure. Statistical criteria for evaluating Clegg hammer and density measurements are also included

  20. 14 CFR 93.211 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Operations at High Density Traffic Airports § 93.211 Applicability. (a) This subpart prescribes rules... individual air carriers and commuter operators at the High Density Traffic Airports identified in subpart K...

  1. 14 CFR 93.211 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Operations at High Density Traffic Airports § 93.211 Applicability. (a) This subpart prescribes rules... individual air carriers and commuter operators at the High Density Traffic Airports identified in subpart K...

  2. 14 CFR 93.211 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Operations at High Density Traffic Airports § 93.211 Applicability. (a) This subpart prescribes rules... individual air carriers and commuter operators at the High Density Traffic Airports identified in subpart K...

  3. 14 CFR 93.211 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Operations at High Density Traffic Airports § 93.211 Applicability. (a) This subpart prescribes rules... individual air carriers and commuter operators at the High Density Traffic Airports identified in subpart K...

  4. 14 CFR 93.211 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Operations at High Density Traffic Airports § 93.211 Applicability. (a) This subpart prescribes rules... individual air carriers and commuter operators at the High Density Traffic Airports identified in subpart K...

  5. CUGatesDensity—Quantum circuit analyser extended to density matrices

    NASA Astrophysics Data System (ADS)

    Loke, T.; Wang, J. B.

    2013-12-01

    CUGatesDensity is an extension of the original quantum circuit analyser CUGates (Loke and Wang, 2011) [7] to provide explicit support for the use of density matrices. The new package enables simulation of quantum circuits involving statistical ensemble of mixed quantum states. Such analysis is of vital importance in dealing with quantum decoherence, measurements, noise and error correction, and fault tolerant computation. Several examples involving mixed state quantum computation are presented to illustrate the use of this package. Catalogue identifier: AEPY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5368 No. of bytes in distributed program, including test data, etc.: 143994 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer installed with a copy of Mathematica 6.0 or higher. Operating system: Any system with a copy of Mathematica 6.0 or higher installed. Classification: 4.15. Nature of problem: To simulate arbitrarily complex quantum circuits comprised of single/multiple qubit and qudit quantum gates with mixed state registers. Solution method: A density matrix representation for mixed states and a state vector representation for pure states are used. The construct is based on an irreducible form of matrix decomposition, which allows a highly efficient implementation of general controlled gates with multiple conditionals. Running time: The examples provided in the notebook CUGatesDensity.nb take approximately 30 s to run on a laptop PC.
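
    The core density-matrix operation such a package performs is conjugation of the state by a gate unitary, ρ → U ρ U†; a minimal NumPy sketch of this (not the CUGatesDensity Mathematica API) for a maximally mixed qubit:

```python
import numpy as np

# Hadamard gate as the unitary U.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Statistical ensemble: 50/50 mixture of |0> and |1> (maximally mixed qubit).
rho = 0.5 * np.array([[1.0, 0.0], [0.0, 0.0]]) \
    + 0.5 * np.array([[0.0, 0.0], [0.0, 1.0]])

# Mixed-state evolution: rho -> U rho U^dagger.
rho_out = H @ rho @ H.conj().T
print(rho_out)            # the maximally mixed state is invariant
print(np.trace(rho_out))  # trace is preserved: 1.0
```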

  6. 14 CFR 93.227 - Slot use and loss.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Operations at High Density Traffic Airports § 93.227 Slot use and loss. (a) Except as provided in paragraphs... commuter operator or other person holding a slot at a high density airport shall, within 14 days after the... High Density Traffic Airport on Thanksgiving Day, the Friday following Thanksgiving Day, and the period...

  7. 14 CFR 93.227 - Slot use and loss.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Operations at High Density Traffic Airports § 93.227 Slot use and loss. (a) Except as provided in paragraphs... commuter operator or other person holding a slot at a high density airport shall, within 14 days after the... High Density Traffic Airport on Thanksgiving Day, the Friday following Thanksgiving Day, and the period...

  8. 14 CFR 93.227 - Slot use and loss.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Operations at High Density Traffic Airports § 93.227 Slot use and loss. (a) Except as provided in paragraphs... commuter operator or other person holding a slot at a high density airport shall, within 14 days after the... High Density Traffic Airport on Thanksgiving Day, the Friday following Thanksgiving Day, and the period...

  9. 14 CFR 93.227 - Slot use and loss.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Operations at High Density Traffic Airports § 93.227 Slot use and loss. (a) Except as provided in paragraphs... commuter operator or other person holding a slot at a high density airport shall, within 14 days after the... High Density Traffic Airport on Thanksgiving Day, the Friday following Thanksgiving Day, and the period...

  10. 14 CFR 93.227 - Slot use and loss.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Operations at High Density Traffic Airports § 93.227 Slot use and loss. (a) Except as provided in paragraphs... commuter operator or other person holding a slot at a high density airport shall, within 14 days after the... High Density Traffic Airport on Thanksgiving Day, the Friday following Thanksgiving Day, and the period...

  11. Extension and Density of Root Fillings and Post-operative Apical Radiolucencies in the Veterans Affairs Dental Longitudinal Study

    PubMed Central

    Zhong, Yan; Chasen, Joel; Yamanaka, Ryan; Garcia, Raul; Kaye, Elizabeth Krall; Kaufman, Jay S; Cai, Jianwen; Wilcosky, Tim; Trope, Martin; Caplan, Daniel J

    2008-01-01

    We evaluated the association between radiographically-assessed extension and density of root canal fillings and post-operative apical radiolucencies (AR) using data from 288 participants in the Veterans Affairs Dental Longitudinal Study. Study subjects were not VA patients; all received their medical and dental care in the private sector. Generalized Estimating Equations were used to account for multiple teeth within subjects and to control for covariates of interest. Defective root filling density was associated with increased odds of post-operative AR among teeth with no pre-operative AR (Odds Ratio=3.0, 95%CI=1.3–7.1), though pre-operative AR was the strongest risk factor for post-operative AR (Odds Ratio=29.2, 95%CI=13.6–63.0 among teeth with ideal density). Compared to well-extended root fillings, neither over- nor under-extended root fillings separately were related to post-operative AR, but when those two categories were collapsed into one “poorly-extended” category, poor extension was related to post-operative AR (Odds Ratio=1.8, 95%CI=1.1–3.2). PMID:18570982

  12. Impact of operating conditions on the acetylene contamination in the cathode of proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Zhai, Yunfeng; St-Pierre, Jean

    2017-12-01

    Realistically, proton exchange membrane fuel cells (PEMFCs) are operated under varying operating conditions that potentially impact the acetylene contamination reactions. In this paper, the effects of the cell operating conditions on acetylene contamination in PEMFCs are investigated under different current densities and temperatures with different acetylene concentrations in the cathode. Electrochemical impedance spectroscopy is applied during constant-current operation to analyze the impacts of the operating conditions on the acetylene electrochemical reactions. The experimental results indicate that higher acetylene concentrations, higher current densities and lower cell temperatures degrade the cell performance more severely. In particular, cathode poisoning becomes more severe at medium cell current densities: the cathode potentials at such current densities are neither high enough to completely oxidize the intermediate nor low enough to completely reduce the adsorbed acetylene. Based on these investigations, possible condition-dependent limits on the acetylene concentration and cell operating voltage are proposed as insight into an acetylene contamination mitigation strategy. Near these threshold conditions the acetylene reactions change abruptly, and adjusting the cell operating parameters to change the acetylene adsorbate and intermediate accumulation conditions, so as to induce complete oxidation or reduction, may mitigate the severe acetylene contamination effects on PEMFCs.

  13. Long-Term PMD Characterization of a Metropolitan G.652 Fiber Plant

    NASA Astrophysics Data System (ADS)

    Poggiolini, Pierluigi; Nespola, Antonino; Abrate, Silvio; Ferrero, Valter; Lezzi, C.

    2006-11-01

    Using the Jones matrix eigenanalysis method, the differential group delay (DGD) behavior of a metropolitan G.652 buried operational cable plant of a major Italian telecom operator in the city of Torino was characterized. The measurement campaign lasted 73 consecutive days. An extremely stable DGD behavior was found, despite the fact that the cables run across a very densely built city environment. On the other hand, it was also found that routine maintenance on the infrastructure, although indirect and nondisruptive, repeatedly altered this picture. Based on these results, it is argued that the recently introduced “hinge” model describes the plant DGD statistical behavior better than the traditional description. This argument is also supported by the close fit that was obtained with a theoretical hinge-model DGD probability density function of the measured DGD data histogram. It is also argued that in this kind of scenario fast adaptive compensation is most likely the only realistic solution for avoiding sudden performance degradation or out-of-service.

  14. Linear Classifier with Reject Option for the Detection of Vocal Fold Paralysis and Vocal Fold Edema

    NASA Astrophysics Data System (ADS)

    Kotropoulos, Constantine; Arce, Gonzalo R.

    2009-12-01

    Two distinct two-class pattern recognition problems are studied, namely, the detection of male subjects who are diagnosed with vocal fold paralysis against male subjects who are diagnosed as normal, and the detection of female subjects who are suffering from vocal fold edema against female subjects who do not suffer from any voice pathology. To do so, utterances of the sustained vowel "ah" are employed from the Massachusetts Eye and Ear Infirmary database of disordered speech. Linear prediction coefficients extracted from the aforementioned utterances are used as features. The receiver operating characteristic curve of the linear classifier, which stems from the Bayes classifier when Gaussian class-conditional probability density functions with equal covariance matrices are assumed, is derived. The optimal operating point of the linear classifier is specified with and without the reject option. First results using utterances of the "rainbow passage" are also reported for completeness. The reject option is shown to yield statistically significant improvements in the accuracy of detecting the voice pathologies under study.
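
    A minimal sketch of the reject option for the equal-covariance Gaussian case, where the Bayes classifier is linear and decisions are withheld when the posterior is too uncertain (one-dimensional synthetic features, not the linear-prediction coefficients used in the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two Gaussian classes with equal (unit) variance -> the Bayes rule is linear.
n = 2000
x0 = rng.normal(-1.0, 1.0, n)   # class 0 (e.g., "normal")
x1 = rng.normal(+1.0, 1.0, n)   # class 1 (e.g., "pathological")
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Posterior for class 1 under equal priors: logistic in x with slope 2.
post1 = 1.0 / (1.0 + np.exp(-2.0 * x))

# Reject option: decide only when the max posterior exceeds a threshold.
threshold = 0.8
decided = np.maximum(post1, 1.0 - post1) >= threshold
pred = (post1 >= 0.5).astype(float)

acc_all = np.mean(pred == y)
acc_decided = np.mean(pred[decided] == y[decided])
print(acc_all, acc_decided, decided.mean())  # rejecting raises accuracy
```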

  15. Design strategy for terahertz quantum dot cascade lasers.

    PubMed

    Burnett, Benjamin A; Williams, Benjamin S

    2016-10-31

    The development of quantum dot cascade lasers has been proposed as a path to obtain terahertz semiconductor lasers that operate at room temperature. The expected benefit is due to the suppression of nonradiative electron-phonon scattering and reduced dephasing that accompanies discretization of the electronic energy spectrum. We present numerical modeling which predicts that simple scaling of conventional quantum well based designs to the quantum dot regime will likely fail due to electrical instability associated with high-field domain formation. A design strategy adapted for terahertz quantum dot cascade lasers is presented which avoids these problems. Counterintuitively, this involves the resonant depopulation of the laser's upper state with the LO-phonon energy. The strategy is tested theoretically using a density matrix model of transport and gain, which predicts sufficient gain for lasing at stable operating points. Finally, the effect of quantum dot size inhomogeneity on the optical lineshape is explored, suggesting that the design concept is robust to a moderate amount of statistical variation.

  16. Instantaneous polarization statistic property of EM waves incident on time-varying reentry plasma

    NASA Astrophysics Data System (ADS)

    Bai, Bowen; Liu, Yanming; Li, Xiaoping; Yao, Bo; Shi, Lei

    2018-06-01

    An analytical method is proposed in this paper to study the effect of a time-varying reentry plasma sheath on the instantaneous polarization statistics of electromagnetic (EM) waves. Based on the disturbance properties of the hypersonic fluid, a spatial-temporal model of the time-varying reentry plasma sheath is established. An analytical technique referred to as transmission line analogy is developed to calculate the instantaneous transmission coefficient of EM wave propagation in time-varying plasma. The instantaneous polarization statistics of EM wave propagation in the time-varying plasma sheath are then developed. Taking the S-band telemetry right-hand circularly polarized wave as an example, the effects of incident angle and plasma parameters, including the electron density and the collision frequency, on the wave's polarization statistics are studied systematically. The statistical results indicate that the lower the collision frequency and the larger the electron density and incident angle, the worse the deterioration of the polarization property. Meanwhile, at critical combinations of electron density, collision frequency, and incident angle, the transmitted waves have both right- and left-hand polarization modes, and the polarization mode will reverse. The calculation results could provide useful information for adaptive polarization receiving during a spacecraft's reentry communication.

  17. Nonstationary envelope process and first excursion probability

    NASA Technical Reports Server (NTRS)

    Yang, J.

    1972-01-01

    A definition of the envelope of nonstationary random processes is proposed. The establishment of the envelope definition makes it possible to simulate the nonstationary random envelope directly. Envelope statistics, such as the density function, joint density function, moment function, and level crossing rate, which are relevant to analyses of catastrophic failure, fatigue, and crack propagation in structures, are derived. Applications of the envelope statistics to the prediction of structural reliability under random loadings are discussed in detail.
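
    For the stationary baseline case, the envelope of a Gaussian process is Rayleigh distributed; a quick Monte Carlo check of the Rayleigh moment identities (the nonstationary definition proposed in the paper generalizes this):

```python
import numpy as np

rng = np.random.default_rng(7)

# Envelope of a stationary Gaussian process: R = sqrt(X^2 + Y^2) with X, Y
# independent N(0, sigma^2), so R is Rayleigh with
# p(r) = (r / sigma^2) * exp(-r^2 / (2 sigma^2)).
sigma = 1.0
N = 500_000
r = np.hypot(rng.normal(0, sigma, N), rng.normal(0, sigma, N))

# Rayleigh moment identities: E[R] = sigma*sqrt(pi/2), E[R^2] = 2*sigma^2.
print(r.mean(), np.sqrt(np.pi / 2) * sigma)
print((r**2).mean(), 2 * sigma**2)
```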

  18. Activity of left inferior frontal gyrus related to word repetition effects: LORETA imaging with 128-channel EEG and individual MRI.

    PubMed

    Kim, Young Youn; Lee, Boreom; Shin, Yong Wook; Kwon, Jun Soo; Kim, Myung-Sun

    2006-02-01

    We investigated the brain substrate of word repetition effects on the implicit memory task using low-resolution electromagnetic tomography (LORETA) with high-density 128-channel EEG and individual MRI as a realistic head model. Thirteen right-handed, healthy subjects performed a word/non-word discrimination task, in which the words and non-words were presented visually, and some of the words appeared twice with a lag of one or five items. All of the subjects exhibited word repetition effects with respect to the behavioral data, in which a faster reaction time was observed to the repeated word (old word) than to the first presentation of the word (new word). The old words elicited more positive-going potentials than the new words, beginning at 200 ms and lasting until 500 ms post-stimulus. We conducted source reconstruction using LORETA at a latency of 400 ms with the peak mean global field potentials and used statistical parametric mapping for the statistical analysis. We found that the source elicited by the old words exhibited a statistically significant current density reduction in the left inferior frontal gyrus. This is the first study to investigate the generators of word repetition effects using voxel-by-voxel statistical mapping of the current density with individual MRI and high-density EEG.

  19. Spectra of random operators with absolutely continuous integrated density of states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rio, Rafael del, E-mail: delrio@iimas.unam.mx, E-mail: delriomagia@gmail.com

    2014-04-15

    The structure of the spectrum of random operators is studied. It is shown that if the density of states measure of some subsets of the spectrum is zero, then these subsets are empty. In particular, it follows that absolute continuity of the integrated density of states implies that the singular spectrum of ergodic operators is either empty or of positive measure. Our results apply to Anderson and alloy type models, perturbed Landau Hamiltonians, almost periodic potentials, and models which are not ergodic.

  20. Relationships between brightness of nighttime lights and population density

    NASA Astrophysics Data System (ADS)

    Naizhuo, Z.

    2012-12-01

    Brightness of nighttime lights has been proven to be a good proxy for socioeconomic and demographic statistics. Moreover, satellite nighttime lights data have been used to spatially disaggregate amounts of gross domestic product (GDP), fossil fuel carbon dioxide emissions, and electric power consumption (Ghosh et al., 2010; Oda and Maksyutov, 2011; Zhao et al., 2012). Spatial disaggregation in these previous studies was based on assumed linear relationships between the digital number (DN) value of pixels in the nighttime light images and socioeconomic data. However, the reliability of these linear relationships was never tested, owing to a lack of relatively high-spatial-resolution (equal to or finer than 1 km × 1 km) statistical data. Under the similar assumption that brightness correlates linearly with population, Bharti et al. (2011) used nighttime light data as a proxy for population density and then developed a model of seasonal fluctuations of measles in West Africa. The Oak Ridge National Laboratory used sub-national census population data and high-spatial-resolution remotely sensed images to produce the LandScan population raster datasets. The LandScan population datasets have a 1 km × 1 km spatial resolution, which is consistent with the spatial resolution of the nighttime light images. Therefore, in this study I selected the 2008 LandScan population data as baseline reference data and the contiguous United States as the study area. Relationships between the DN value of pixels in the 2008 Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) stable light image and population density were established. Results showed that an exponential function reflects the relationship between luminosity and population density more accurately than a linear function. Additionally, a certain number of saturated pixels with a DN value of 63 exist in urban core areas.
If the exponential function is used directly to estimate population density for the whole brightly lit area, relatively large underestimations emerge in urban core regions. Previous studies have shown that GDP, carbon dioxide emissions, and electric power consumption correlate strongly with urban population (Ghosh et al., 2010; Sutton et al., 2007; Zhao et al., 2012). Thus, although this study only examined the relationships between brightness of nighttime lights and population density, the results can inform the spatial disaggregation of socioeconomic data (e.g. GDP, carbon dioxide emissions, and electric power consumption) using satellite nighttime light images. Simply distributing socioeconomic data to each pixel in proportion to the DN value of the nighttime light images may generate relatively large errors. References: Bharti N, Tatem AJ, Ferrari MJ, Grais RF, Djibo A, Grenfell BT, 2011. Science, 334:1424-1427. Ghosh T, Elvidge CD, Sutton PC, Baugh KE, Ziskin D, Tuttle BT, 2010. Energies, 3:1895-1913. Oda T, Maksyutov S, 2011. Atmospheric Chemistry and Physics, 11:543-556. Sutton PC, Elvidge CD, Ghosh T, 2007. International Journal of Ecological Economics and Statistics, 8:5-21. Zhao N, Ghosh T, Samson EL, 2012. International Journal of Remote Sensing, 33:6304-6320.
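
The exponential-versus-linear comparison described above can be illustrated on synthetic data. The coefficients and noise level below are hypothetical, not values from the study; the exponential is fitted by least squares on the log-transformed population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: population density rising exponentially with DN (1-62),
# with multiplicative noise. Coefficients are hypothetical.
dn = rng.integers(1, 63, size=500).astype(float)
true_a, true_b = 2.0, 0.08
pop = true_a * np.exp(true_b * dn) * rng.lognormal(0.0, 0.1, size=500)

# Exponential fit via least squares on log(pop) = log(a) + b*dn
b_exp, loga = np.polyfit(dn, np.log(pop), 1)
a_exp = np.exp(loga)

# Linear fit for comparison
slope, intercept = np.polyfit(dn, pop, 1)

def r2(y, yhat):
    """Coefficient of determination in the original (linear) scale."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

r2_exp = r2(pop, a_exp * np.exp(b_exp * dn))
r2_lin = r2(pop, slope * dn + intercept)
```

On data generated this way the exponential model recovers the growth coefficient and fits markedly better than the linear model, mirroring the study's qualitative finding.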

  1. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.
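
The question of pdf-shape sensitivity can be probed with a toy random-walk dispersion model in which uniform and isosceles-triangle velocity pdfs are matched in mean and variance. In the many-interaction limit the central limit theorem suppresses shape differences, so sensitivity of the kind studied in the paper must enter at short times or through the transport model itself. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma = 0.5                                   # fluctuation rms (hypothetical)

# Uniform pdf, zero mean, std sigma: half-width = sigma*sqrt(3)
u_uni = rng.uniform(-sigma * np.sqrt(3), sigma * np.sqrt(3), n)

# Symmetric (isosceles) triangular pdf, same std: half-width = sigma*sqrt(6)
u_tri = rng.triangular(-sigma * np.sqrt(6), 0.0, sigma * np.sqrt(6), n)

# Particle displacement after m independent "eddy interactions" of duration dt
m, dt = 50, 0.01
x_uni = dt * np.cumsum(rng.choice(u_uni, (m, n // 100)), axis=0)[-1]
x_tri = dt * np.cumsum(rng.choice(u_tri, (m, n // 100)), axis=0)[-1]
```

Both ensembles produce essentially the same dispersion (std ≈ sigma·dt·√m), so any pdf-shape sensitivity reported for a full transport model reflects physics beyond this independent-increment limit.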

  2. Thermal properties of soils: effect of biochar application

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Lipiec, Jerzy

    2014-05-01

    Thermal properties (thermal conductivity, heat capacity and thermal diffusivity) have a significant effect on soil surface energy partitioning and the resulting temperature distribution. Thermal properties of soil depend on water content, bulk density and organic matter content. An important source of organic matter is biochar. Biochar is defined as "charcoal for application as a soil conditioner" and is generally associated with co-produced end products of pyrolysis. Many different materials are used as biomass feedstock for biochar, including wood, crop residues and manures. Additional predictions were made for terra preta soil (also known as "Amazonian dark earth"), high in charcoal content owing to the addition of a mixture of charcoal, bone, and manure over thousands of years, i.e. approximately 10-1,000 times longer than the residence times of most soil organic matter. The effect of biochar obtained from wood biomass and other organic amendments (peat, compost) on soil thermal properties is presented in this paper. The results were compared with wetland soils of different organic matter content. The measurements of the thermal properties at various water contents were performed after incubation, under laboratory conditions, using a KD2 Pro (Decagon Devices). The measured data were compared with predictions made using the Usowicz statistical-physical model (Usowicz et al., 2006) for biochar, mineral soil and soil with addition of biochar at various water contents and bulk densities. The model operates statistically on the probability of occurrence of contacts between particular fractional compounds, combining the physical properties specific to particular compounds into one apparent conductance specific to the mixture. The results revealed that addition of the biochar and other organic amendments to the soil caused a considerable reduction of the thermal conductivity and diffusivity.
The mineral soil showed the highest thermal conductivity and diffusivity, which decreased in soil with addition of biochar and in pure biochar. The reduction of both properties was mostly due to the decrease in both particle density and bulk density. Addition of both biochar and the organic amendments resulted in a decrease of the heat capacity of the mixtures in the dry state and a considerable increase in the wet state. The lowest and highest reduction in thermal conductivity with decreasing water content was obtained for pure biochar and mineral soil, respectively. The thermal diffusivity had a characteristic maximum at higher bulk densities and lower water contents. The wetland soil higher in organic matter content exhibited smaller temporal variation of the thermal properties, in response to changes of water content, than soils lower in organic matter content. The statistical-physical model was found to be useful for satisfactorily predicting the thermal properties of soil amended with biochar and organic amendments. Usowicz B. et al., 2006. Thermal conductivity modelling of terrestrial soil media - A comparative study. Planetary and Space Science 54, 1086-1095.

  3. The quiescent H-mode regime for high performance edge localized mode (ELM)-stable operation in future burning plasmas

    DOE PAGES

    Garofalo, Andrea M.; Burrell, Keith H.; Eldon, David; ...

    2015-05-26

    For the first time, DIII-D experiments have achieved stationary quiescent H-mode (QH-mode) operation for many energy confinement times at simultaneous ITER-relevant values of beta, confinement, and safety factor, in an ITER similar shape. QH-mode provides excellent energy confinement, even at very low plasma rotation, while operating without edge localized modes (ELMs) and with strong impurity transport via the benign edge harmonic oscillation (EHO). By tailoring the plasma shape to improve the edge stability, the QH-mode operating space has also been extended to densities exceeding 80% of the Greenwald limit, overcoming the long-standing low-density limit of QH-mode operation. In the theory, the density range over which the plasma encounters the kink-peeling boundary widens as the plasma cross-section shaping is increased, thus increasing the QH-mode density threshold. Here, the DIII-D results are in excellent agreement with these predictions, and nonlinear MHD analysis of reconstructed QH-mode equilibria shows unstable low n kink-peeling modes growing to a saturated level, consistent with the theoretical picture of the EHO. Furthermore, high density operation in the QH-mode regime has opened a path to a new, previously predicted region of parameter space, named “Super H-mode” because it is characterized by very high pedestals that can be more than a factor of two above the peeling-ballooning stability limit for similar ELMing H-mode discharges at the same density.

  4. Thermodynamic analysis of energy density in pressure retarded osmosis: The impact of solution volumes and costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimund, Kevin K.; McCutcheon, Jeffrey R.; Wilson, Aaron D.

    A general method was developed for estimating the volumetric energy efficiency of pressure retarded osmosis via pressure-volume analysis of a membrane process. The resulting model requires only the osmotic pressure, π, and mass fraction, w, of water in the concentrated and dilute feed solutions to estimate the maximum achievable specific energy density, u, as a function of operating pressure. The model is independent of any membrane or module properties. The method uses equilibrium analysis to specify the volumetric mixing fraction of concentrated and dilute solution as a function of operating pressure, and gives results for the total volumetric energy density of similar order to more complex models for the mixing of seawater and river water. Within the framework of this analysis, the total volumetric energy density is maximized, for an idealized case, when the operating pressure is π/(1+√w⁻¹), which is lower than the maximum power density operating pressure, Δπ/2, derived elsewhere, and is a function of the solute osmotic pressure at a given mass fraction. It was also found that a minimum of 1.45 kmol of ideal solute is required to produce 1 kWh of energy, while a system operating at the “maximum power density operating pressure” requires at least 2.9 kmol. Using this methodology, it is possible to examine the effects of volumetric solution cost, operation of a module at various pressures, and operation of a constant-pressure module with various feeds.
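
The two operating pressures quoted in the abstract can be compared numerically. The reading of √w⁻¹ as sqrt(1/w) and the feed values below are assumptions for illustration only, not the paper's worked example.

```python
import math

# Illustrative feed values (roughly seawater-like), not from the paper:
pi_c = 30.0e5      # osmotic pressure of the concentrated feed [Pa]
w = 0.965          # mass fraction of water in the concentrated feed

# Operating pressure maximizing total volumetric energy density
# (idealized case quoted in the abstract): pi / (1 + sqrt(1/w))
p_energy = pi_c / (1.0 + math.sqrt(1.0 / w))

# Operating pressure maximizing power density (classical result): dpi/2,
# taking dpi ~ pi_c for a nearly pure dilute feed
p_power = pi_c / 2.0
```

Since w < 1 makes sqrt(1/w) exceed 1, the energy-optimal pressure always falls below Δπ/2, consistent with the abstract's statement.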

  5. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models that attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR) must satisfy the stringent experimental constraints on GR in the solar system. Viable candidates invoke a “screening” mechanism that dynamically suppresses deviations in high density environments, making their detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the signal-to-noise ratio for differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
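
A minimal sketch of a density-reweighting ("marked") transformation, up-weighting low-density regions where screening is weak, of the kind discussed in the marked-statistics literature. The functional form and parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def mark(delta, delta_s=4.0, p=1.0):
    """Density-dependent mark m(delta) = [(1+delta_s)/(1+delta_s+delta)]^p.
    Large for underdense regions (delta < 0), small for overdense ones,
    so the marked field accentuates unscreened, low-density environments.
    delta_s and p are illustrative tuning parameters."""
    return ((1.0 + delta_s) / (1.0 + delta_s + delta)) ** p

rng = np.random.default_rng(2)
delta = rng.lognormal(0.0, 0.7, size=10_000) - 1.0   # toy overdensity field
marked = mark(delta) * (1.0 + delta)                 # marked density field
```

Power spectra of the marked field (computed exactly as for the density field) then carry the environment-weighted signal that helps separate screened modified-gravity models from ΛCDM.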

  6. Effect of the collective motions of molecules inside a condensed phase on fluctuations in the density of small bodies

    NASA Astrophysics Data System (ADS)

    Tovbin, Yu. K.

    2017-11-01

    An approach to calculating the effects of fluctuations in density that considers the collective motions of molecules in small condensed phases (e.g., droplets, microcrystals, adsorption at microcrystal faces) is proposed. Statistical sums of the vibrational, rotational, and translational motions of molecules are of a collective character expressed in the dependences of these statistical sums on the local configurations of neighboring molecules. This changes their individual contributions to the free energy and modifies fluctuations in density in the inner homogeneous regions of small bodies. Interactions between nearest neighbors are considered in a quasi-chemical approximation that reflects the effects of short-range direct correlations. Expressions for isotherms relating the densities of mixture components to the chemical potentials in a thermostat are obtained, along with equations for pair distribution functions.

  7. Visualizing statistical significance of disease clusters using cartograms.

    PubMed

    Kronenfeld, Barry J; Wong, David W S

    2017-05-15

    Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, there are no existing guidelines for visual assessment of the associated statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of the statistical significance of clusters spanning one or more districts on a cartogram. The techniques were developed within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference on aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. 
Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
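
As a simplified stand-in for the significance formulae described above, a Poisson test for an a priori designated district illustrates how rate and at-risk population (which maps to area on a population cartogram) jointly determine significance. The population and case counts are hypothetical.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    if k <= 0:
        return 1.0
    term, cdf = math.exp(-lam), math.exp(-lam)
    for i in range(1, k):
        term *= lam / i
        cdf += term
    return max(0.0, 1.0 - cdf)

# Hypothetical district: 50,000 at risk, background rate 4 per 10,000
population = 50_000
background_rate = 4.0 / 10_000
expected = background_rate * population      # 20 expected cases

p_30 = poisson_sf(30, expected)              # 30 observed cases
p_22 = poisson_sf(22, expected)              # 22 observed cases
```

The same observed rate becomes significant only once the cluster's population (cartogram area) is large enough, which is the intuition the paper's visual formulae encode.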

  8. 14 CFR 298.63 - Reporting of aircraft operating expenses and related statistics by small certificated air carriers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and related statistics by small certificated air carriers. 298.63 Section 298.63 Aeronautics and Space... aircraft operating expenses and related statistics by small certificated air carriers. (a) Each small... Related Statistics.” This schedule shall be filed quarterly as prescribed in § 298.60. Data reported on...

  9. 14 CFR 298.63 - Reporting of aircraft operating expenses and related statistics by small certificated air carriers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and related statistics by small certificated air carriers. 298.63 Section 298.63 Aeronautics and Space... aircraft operating expenses and related statistics by small certificated air carriers. (a) Each small... Related Statistics.” This schedule shall be filed quarterly as prescribed in § 298.60. Data reported on...

  10. Exact statistical results for binary mixing and reaction in variable density turbulence

    NASA Astrophysics Data System (ADS)

    Ristorcelli, J. R.

    2017-02-01

    We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities, whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we point out potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first order Favre mean variables and the Reynolds averaged density variance ⟨ρ²⟩. We show that the normalized density variance ⟨ρ²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, c̃″², as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρv⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c²⟩, its Favre analog c̃″², and various second moments including ⟨ρv⟩. For moment closure models that evolve ⟨ρv⟩ and not ⟨ρ²⟩, we provide a novel expression for ⟨ρ²⟩ in terms of a rational function of ⟨ρv⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). 
We have derived analytic results relating several other second and third order moments, and observe coupling between odd and even order moments, demonstrating a natural and inherent skewness of mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reactions (in a progress variable formulation).

  11. 14 CFR 93.125 - Arrival or departure reservation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports... may operate an aircraft to or from an airport designated as a high density traffic airport unless he...

  12. 14 CFR 93.125 - Arrival or departure reservation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports... may operate an aircraft to or from an airport designated as a high density traffic airport unless he...

  13. 14 CFR 93.125 - Arrival or departure reservation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports... may operate an aircraft to or from an airport designated as a high density traffic airport unless he...

  14. 14 CFR 93.125 - Arrival or departure reservation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports... may operate an aircraft to or from an airport designated as a high density traffic airport unless he...

  15. 14 CFR 93.125 - Arrival or departure reservation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports... may operate an aircraft to or from an airport designated as a high density traffic airport unless he...

  16. Post-operative pain control after tonsillectomy: dexamethasone vs tramadol.

    PubMed

    Topal, Kubra; Aktan, Bulent; Sakat, Muhammed Sedat; Kilic, Korhan; Gozeler, Mustafa Sitki

    2017-06-01

    Tramadol was found to be more effective than dexamethasone in post-operative pain control, with long-lasting relief of pain. This study aimed to compare the effects of pre-operative local injections of tramadol and dexamethasone on post-operative pain, nausea and vomiting in patients who underwent tonsillectomy. Sixty patients between 3-13 years of age who were planned for tonsillectomy were included in the study. Patients were divided into three groups. Group 1 was the control group. Patients in Group 2 received 0.3 mg/kg Dexamethasone and Group 3 received 0.1 mg/kg Tramadol injection to the peritonsillary space just before the operation. Patients were evaluated for nausea, vomiting, and pain. When the control and the dexamethasone groups were compared; there were statistically significant differences in pain scores at post-operative 15 and 30 min, whereas there was no statistically significant difference in pain scores at other hours. When the control and tramadol groups were compared, there was a statistically significant difference in pain scores at all intervals. When tramadol and dexamethasone groups were compared, there was no statistically significant difference in pain scores at post-operative 15 and 30 min, 1 and 2 h, whereas there was a statistically significant difference in pain scores at post-operative 6 and 24 h.

  17. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
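
A sketch of the HMF idea: overall clean-filter efficiency as a pore-size-PDF-weighted combination of per-collector efficiencies. The lognormal pore pdf, the Brownian-diffusion collector law, and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def collector_efficiency(d_pore, d_particle, u0=0.1, temp=300.0):
    """Per-collector capture efficiency from Brownian diffusion alone,
    eta ~ 4 Pe^(-2/3) (an illustrative classic-filtration scaling)."""
    kb, mu = 1.381e-23, 1.8e-5                           # J/K, Pa.s (air)
    diff = kb * temp / (3.0 * np.pi * mu * d_particle)   # Stokes-Einstein
    pe = u0 * d_pore / diff                              # Peclet number
    return min(1.0, 4.0 * pe ** (-2.0 / 3.0))

def hmf_efficiency(d_particle, mean_pore=15e-6, sigma=0.4, n=20_000, seed=0):
    """Average collector efficiency over a lognormal pore-size pdf,
    representing the heterogeneous, multiscale porous wall."""
    rng = np.random.default_rng(seed)
    pores = rng.lognormal(np.log(mean_pore), sigma, n)
    etas = np.array([collector_efficiency(d, d_particle) for d in pores])
    return float(etas.mean())

e_small = hmf_efficiency(50e-9)    # 50 nm particle: strong diffusion
e_large = hmf_efficiency(500e-9)   # 500 nm particle: weaker diffusion
```

Averaging over the pore-size pdf, rather than using a single mean collector size, is the heterogeneity idea; here the smaller particle is captured more efficiently because its Brownian diffusivity is higher.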

  18. Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Nnolim, Uche A.

    2016-07-01

    An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering of salt-and-pepper impulse noise. The implemented filter is robust to impulse noise ranging from low to high density levels. The algorithm involves a switching scheme in addition to utilizing the unsymmetric trimmed mean/median deviation to filter image noise while greatly preserving image edges, regardless of impulse noise density (ND). It operates with threshold parameters selected manually or adaptively estimated from the image statistics. It is further combined with the partial differential equations (PDE)-based AD for edge preservation at high NDs to enhance the properties of the trimmed mean filter. Based on experimental results, the proposed filter easily and consistently outperforms the median filter and its other variants ranging from simple to complex filter structures, especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enables the filter to avoid smoothing an uncorrupted image, and filtering is activated only when impulse noise is present. Ultimately, the particular properties of the filter make its combination with the AD algorithm a unique and powerful edge-preservation smoothing filter at high-impulse NDs.
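
The switching-plus-trimming idea can be sketched in a few lines: impulses are detected as extreme-valued pixels, and only those are replaced, by the mean of the non-extreme values in their neighborhood. This omits the adaptive thresholds and the PDE-based anisotropic diffusion stage of the actual filter.

```python
import numpy as np

def switching_trimmed_mean(img, lo=0, hi=255):
    """Simplified switching filter for salt-and-pepper noise: only pixels at
    the extreme values lo/hi are replaced, by the mean of the non-extreme
    (trimmed) values in their 3x3 neighborhood. Uncorrupted pixels pass
    through untouched, which is the 'switching' property."""
    out = img.astype(float).copy()
    padded = np.pad(img.astype(float), 1, mode="edge")
    rows, cols = np.nonzero((img == lo) | (img == hi))
    for r, c in zip(rows, cols):
        win = padded[r:r + 3, c:c + 3].ravel()
        good = win[(win != lo) & (win != hi)]   # trim impulse values
        if good.size:
            out[r, c] = good.mean()
    return out

rng = np.random.default_rng(4)
clean = np.full((64, 64), 120, dtype=np.uint8)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.2            # 20% impulse noise density
noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
restored = switching_trimmed_mean(noisy)
```

On this flat test image the filter removes essentially all impulses while leaving uncorrupted pixels untouched; at very high noise densities the trimmed window can empty out, which is where the diffusion stage of the full algorithm takes over.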

  19. Alignment-Independent Comparisons of Human Gastrointestinal Tract Microbial Communities in a Multidimensional 16S rRNA Gene Evolutionary Space

    PubMed Central

    Rudi, Knut; Zimonja, Monika; Kvenshagen, Bente; Rugtveit, Jarle; Midtvedt, Tore; Eggesbø, Merete

    2007-01-01

    We present a novel approach for comparing 16S rRNA gene clone libraries that is independent of both DNA sequence alignment and definition of bacterial phylogroups. These steps are the major bottlenecks in current microbial comparative analyses. We used direct comparisons of taxon density distributions in an absolute evolutionary coordinate space. The coordinate space was generated by using alignment-independent bilinear multivariate modeling. Statistical analyses for clone library comparisons were based on multivariate analysis of variance, partial least-squares regression, and permutations. Clone libraries from both adult and infant gastrointestinal tract microbial communities were used as biological models. We reanalyzed a library consisting of 11,831 clones covering complete colons from three healthy adults in addition to a smaller 390-clone library from infant feces. We show that it is possible to extract detailed information about microbial community structures using our alignment-independent method. Our density distribution analysis is also very efficient with respect to computer operation time, meeting the future requirements of large-scale screenings to understand the diversity and dynamics of microbial communities. PMID:17337554

  20. The distribution of density in supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Hopkins, Philip F.

    2017-11-01

    We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ∼M⁻², where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M > 1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
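
The lognormal baseline against which such intermittency is measured can be written down directly, using the standard density variance-Mach number relation σ_s² = ln(1 + b²M²) for s = ln(ρ/ρ₀), with the mean fixed by mass conservation; b is the usual forcing parameter and the values here are illustrative.

```python
import numpy as np

def lognormal_density_pdf(s, mach, b=0.4):
    """Standard lognormal model for s = ln(rho/rho0) in isothermal supersonic
    turbulence: variance sigma_s^2 = ln(1 + b^2 M^2), mean -sigma_s^2/2 so
    that <rho/rho0> = 1. Deviations from this pdf are the 'intermittency'
    that shock-based models aim to capture."""
    var = np.log(1.0 + b * b * mach * mach)
    mean = -0.5 * var
    return np.exp(-(s - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

s = np.linspace(-15.0, 15.0, 20001)
pdf = lognormal_density_pdf(s, mach=5.0)

ds = s[1] - s[0]
norm = float(np.sum(pdf) * ds)                     # should integrate to 1
mean_rho = float(np.sum(np.exp(s) * pdf) * ds)     # mass conservation: = 1
```

Both checks (unit normalization and unit mean density) follow from the chosen mean-variance relation, which is why the -σ²/2 shift appears in every lognormal density model.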

  1. Statistics of excitations in the electron glass model

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo

    2011-03-01

    We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level by accurately measuring the density of states of charged and electron-hole pair excitations via finite-temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.

  2. Random dopant fluctuations and statistical variability in n-channel junctionless FETs

    NASA Astrophysics Data System (ADS)

    Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.

    2018-01-01

    The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of n-channel silicon junctionless nanowire transistors (JNTs) has been studied using three-dimensional quantum simulations based on the non-equilibrium Green’s function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 10¹⁹, 6 × 10¹⁹ and 1 × 10²⁰ cm⁻³ have been considered, employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, near-ideal statistical variability and electrical performance can be achieved, which can pave the way for continued scaling of silicon CMOS technology.
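
Why random dopant fluctuations bite at these dimensions can be seen from Poisson counting statistics alone: the dopant count in a nanoscale channel fluctuates as √N, so the relative spread grows as the device shrinks. The nanowire cross-section below is an illustrative assumption; the doping and gate lengths echo the abstract.

```python
import numpy as np

def dopant_statistics(doping_cm3, lg_nm, w_nm=5.0, t_nm=5.0, samples=100_000):
    """Mean dopant number and its relative fluctuation for a channel of
    gate length lg_nm and (assumed) 5 nm x 5 nm cross-section, treating the
    dopant count as Poisson distributed."""
    vol_cm3 = lg_nm * w_nm * t_nm * 1e-21          # nm^3 -> cm^3
    n_avg = doping_cm3 * vol_cm3                   # mean dopant number
    counts = np.random.default_rng(5).poisson(n_avg, samples)
    return n_avg, float(counts.std()) / n_avg      # relative fluctuation

n5, rel5 = dopant_statistics(1e20, lg_nm=5.0)      # ~12.5 dopants on average
n15, rel15 = dopant_statistics(1e20, lg_nm=15.0)   # ~37.5 dopants on average
```

At the shortest gate length only about a dozen dopants sit in the channel, so device-to-device variation of order 1/√N ≈ 30% is unavoidable from counting statistics alone, before any atomistic placement effects.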

  3. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  4. Extending the physics basis of quiescent H-mode toward ITER relevant parameters

    DOE PAGES

    Solomon, W. M.; Burrell, K. H.; Fenstermacher, M. E.; ...

    2015-06-26

    Recent experiments on DIII-D have addressed several long-standing issues needed to establish quiescent H-mode (QH-mode) as a viable operating scenario for ITER. In the past, QH-mode was associated with low density operation, but has now been extended to high normalized densities compatible with operation envisioned for ITER. Through the use of strong shaping, QH-mode plasmas have been maintained at high densities, both absolute (n̄_e ≈ 7 × 10^19 m^-3) and normalized Greenwald fraction (n̄_e/n_G > 0.7). In these plasmas, the pedestal can evolve to very high pressure and edge current as the density is increased. High density QH-mode operation with strong shaping has allowed access to a previously predicted regime of very high pedestal dubbed “Super H-mode”. Calculations of the pedestal height and width from the EPED model are quantitatively consistent with the experimentally observed density evolution. The confirmation of the shape dependence of the maximum density threshold for QH-mode helps validate the underlying theoretical model of peeling-ballooning modes for ELM stability. In general, QH-mode is found to achieve ELM-stable operation while maintaining adequate impurity exhaust, due to the enhanced impurity transport from an edge harmonic oscillation, thought to be a saturated kink-peeling mode driven by rotation shear. In addition, the impurity confinement time is not affected by rotation, even though the energy confinement time and measured E×B shear are observed to increase at low toroidal rotation. Together with demonstrations of high beta, high confinement and low q_95 for many energy confinement times, these results suggest QH-mode as a potentially attractive operating scenario for the ITER Q=10 mission.

  5. Advanced intermediate temperature sodium-nickel chloride batteries with ultra-high energy density.

    PubMed

    Li, Guosheng; Lu, Xiaochuan; Kim, Jin Y; Meinhardt, Kerry D; Chang, Hee Jung; Canfield, Nathan L; Sprenkle, Vincent L

    2016-02-11

    Sodium-metal halide batteries have been considered one of the more attractive technologies for stationary electrical energy storage; however, they are not used for broader applications despite their relatively well-known redox system. One of the roadblocks hindering market penetration is the high operating temperature. Here we demonstrate that planar sodium-nickel chloride batteries can be operated at an intermediate temperature of 190 °C with ultra-high energy density. A specific energy density of 350 Wh kg(-1), higher than that of conventional tubular sodium-nickel chloride batteries (280 °C), is obtained for planar sodium-nickel chloride batteries operated at 190 °C over a long-term cell test (1,000 cycles), and is attributed to the slower particle growth of the cathode materials at the lower operating temperature. Results reported here demonstrate that planar sodium-nickel chloride batteries operated at an intermediate temperature could greatly benefit this traditional energy storage technology by improving battery energy density and cycle life and reducing material costs.

  6. Advanced intermediate temperature sodium-nickel chloride batteries with ultra-high energy density

    NASA Astrophysics Data System (ADS)

    Li, Guosheng; Lu, Xiaochuan; Kim, Jin Y.; Meinhardt, Kerry D.; Chang, Hee Jung; Canfield, Nathan L.; Sprenkle, Vincent L.

    2016-02-01

    Sodium-metal halide batteries have been considered one of the more attractive technologies for stationary electrical energy storage; however, they are not used for broader applications despite their relatively well-known redox system. One of the roadblocks hindering market penetration is the high operating temperature. Here we demonstrate that planar sodium-nickel chloride batteries can be operated at an intermediate temperature of 190 °C with ultra-high energy density. A specific energy density of 350 Wh kg-1, higher than that of conventional tubular sodium-nickel chloride batteries (280 °C), is obtained for planar sodium-nickel chloride batteries operated at 190 °C over a long-term cell test (1,000 cycles), and is attributed to the slower particle growth of the cathode materials at the lower operating temperature. Results reported here demonstrate that planar sodium-nickel chloride batteries operated at an intermediate temperature could greatly benefit this traditional energy storage technology by improving battery energy density and cycle life and reducing material costs.

  7. Comment on ‘Are physicists afraid of mathematics?’

    NASA Astrophysics Data System (ADS)

    Higginson, Andrew D.; Fawcett, Tim W.

    2016-11-01

    In 2012, we showed that the citation count for articles in ecology and evolutionary biology declines with increasing density of equations. Kollmer et al (2015 New J. Phys. 17 013036) claim this effect is an artefact of the manner in which we plotted the data. They also present citation data from Physical Review Letters and argue, based on graphs, that citation counts are unrelated to equation density. Here we show that both claims are misguided. We identified the effects in biology not by visual means, but using the most appropriate statistical analysis. Since Kollmer et al did not carry out any statistical analysis, they cannot draw reliable inferences about the citation patterns in physics. We show that when statistically analysed their data actually do provide evidence that in physics, as in biology, citation counts are lower for articles with a high density of equations. This indicates that a negative relationship between equation density and citations may extend across the breadth of the sciences, even those in which researchers are well accustomed to mathematical descriptions of natural phenomena. We restate our assessment that this is a genuine problem and discuss what we think should be done about it.

  8. Histograms and Frequency Density.

    ERIC Educational Resources Information Center

    Micromath, 2003

    2003-01-01

    Introduces exercises on histograms and frequency density. Guides pupils to Discovering Important Statistical Concepts Using Spreadsheets (DISCUSS), created at the University of Coventry. Includes curriculum points, teaching tips, activities, and internet address (http://www.coventry.ac.uk/discuss/). (KHR)

  9. Automated skin lesion segmentation with kernel density estimation

    NASA Astrophysics Data System (ADS)

    Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.

    2017-07-01

    Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.
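    The idea of density-based segmentation can be sketched in a few lines. The example below is a hypothetical toy (synthetic intensities, invented names, one color channel, a fixed bandwidth), not the authors' method: it builds a Gaussian kernel density estimate for lesion and non-lesion training pixels and labels each new pixel by whichever class density is higher.

```python
import numpy as np

def kde(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimator over `samples`."""
    samples = np.asarray(samples, dtype=float)
    def pdf(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        z = (x[:, None] - samples[None, :]) / bandwidth
        norm = len(samples) * bandwidth * np.sqrt(2.0 * np.pi)
        return np.exp(-0.5 * z**2).sum(axis=1) / norm
    return pdf

rng = np.random.default_rng(0)
p_lesion = kde(rng.normal(60, 15, 500), bandwidth=5.0)  # darker lesion pixels (toy)
p_skin = kde(rng.normal(160, 20, 500), bandwidth=5.0)   # brighter skin pixels (toy)

def segment(pixels):
    """Label pixels as lesion (True) where the lesion density wins."""
    return p_lesion(pixels) > p_skin(pixels)

assert segment([55])[0] and not segment([170])[0]
```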

  10. Special Operations Forces Language and Culture Needs Assessment Project: Training Emphasis: Language and Culture

    DTIC Science & Technology

    2010-02-25

    gave significantly higher emphasis ratings (i.e., a statistically significant difference between SOF operators and SOF leaders). Responses were made on the following scale: 1 = No emphasis, 2 = ... Means with an asterisk (*) indicate that the group gave significantly higher emphasis ratings (i.e., a statistically significant difference between SOF operators and SOF leaders).

  11. 14 CFR 93.213 - Definitions and general provisions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Carrier IFR Operations at High Density Traffic Airports § 93.213 Definitions and general provisions. (a... operation each day during a specific hour or 30 minute period at one of the High Density Traffic Airports...

  12. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning.

    PubMed

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K

    2018-01-09

    Breast density is one of the most significant factors associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs were first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act (MQSA) radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.
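    The final PD step described above (ratio of dense area to breast area, derived from the probability map) reduces to a simple masking computation. The sketch below is hypothetical: the function name, toy arrays, and the 0.5 threshold are illustrative, not the paper's values.

```python
import numpy as np

def percent_density(pmd, breast_mask, thresh=0.5):
    """PD as the percentage of breast pixels whose density probability
    meets an (assumed) threshold."""
    dense = (pmd >= thresh) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

# toy 3x3 probability map of density and an all-breast mask
pmd = np.array([[0.9, 0.8, 0.1],
                [0.2, 0.7, 0.1],
                [0.0, 0.1, 0.3]])
mask = np.ones((3, 3), dtype=bool)

assert percent_density(pmd, mask) == 100.0 * 3 / 9   # 3 dense pixels of 9
```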

  13. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning

    NASA Astrophysics Data System (ADS)

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M.; Samala, Ravi K.

    2018-01-01

    Breast density is one of the most significant factors associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input ‘for processing’ DMs were first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act (MQSA) radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice’s coefficient (DC) of 0.79 ± 0.13 and Pearson’s correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.

  14. Chemometrical assessment of the electrical parameters obtained by long-term operating freshwater sediment microbial fuel cells.

    PubMed

    Mitov, Mario; Bardarov, Ivo; Mandjukov, Petko; Hubenova, Yolina

    2015-12-01

    The electrical parameters of nine freshwater sediment microbial fuel cells (SMFCs) were monitored for a period of over 20 months. The developed SMFCs, divided into three groups, were started up and continuously operated under different constant loads (100, 510 and 1100 Ω) for 2.5 months. At this stage of the experiment, the highest power density values, reaching 1.2 ± 0.2 mW/m(2), were achieved by the SMFCs loaded with 510 Ω. The maximum power obtained at periodical polarization during the rest period, however, ranged between 26.2 ± 2.8 and 35.3 ± 2.8 mW/m(2), strongly depending on the internal cell resistance. The statistical evaluation of data derived from the polarization curves shows that after 300 days of operation all examined SMFCs reached a steady-state and the system might be assumed as homoscedastic. The estimated values of standard and expanded uncertainties of the electric parameters indicate a high repeatability and reproducibility of the SMFCs' performance. Results obtained in subsequent discharge-recovery cycles reveal the opportunity for practical application of studied SMFCs as autonomous power sources.
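    As a rough illustration of how peak power relates to internal resistance in a polarization test: for an idealized linear polarization curve V = V_oc − i·R_int, the power density peaks at i = V_oc/(2 R_int), giving V_oc²/(4 R_int A). All numbers below are assumed toy values, not the study's cells.

```python
import numpy as np

# Toy polarization-curve sketch (illustrative values, not SMFC data)
ocv, r_int, area = 0.6, 500.0, 0.01       # V, ohm, m^2 (assumed)
i = np.linspace(0.0, ocv / r_int, 201)    # swept current, A
p_density = i * (ocv - i * r_int) / area  # power density, W/m^2

# peak power density matches the analytic value ocv^2 / (4 * r_int * area)
assert abs(p_density.max() - ocv**2 / (4 * r_int * area)) < 1e-9
```

This is why the abstract notes that the maximum power obtained at polarization "strongly depend[s] on the internal cell resistance": halving R_int doubles the attainable peak power density.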

  15. [The influence of risk factors on visual performance in phototoxic maculopathy in occupational welders].

    PubMed

    Li, Qingtao; Zhang, Xinfang

    2014-10-01

    To investigate the risk factors for phototoxic maculopathy caused by the welding arc in occupational welders. We randomly examined a group of 86 male occupational welders (172 eyes) from a local metal manufacturing enterprise between August 2010 and December 2013. Participants underwent a thorough ophthalmologic examination including best visual acuity, fundus examination with a supplementary lens, fundus photography, and a high-definition optical coherence tomography (OCT) scan. Prior to the OCT, all participants were examined by an occupational medicine specialist. Subjects were divided into groups according to age, degree of protection, length of service, and operating time, and the incidences of phototoxic maculopathy were compared between groups. Subjects were also randomly divided into a lutein group and a placebo group and examined for best visual acuity, serum lutein concentration, macular pigment optical density (MPOD), and contrast and glare sensitivity. (1) The overall incidence of phototoxic maculopathy was 32.0%. (2) The incidences of phototoxic maculopathy in the strict protection group, the random protection group and the non-professional protection group were 21.4%, 36.7% and 53.6%, respectively. The incidence was lowest in the strict protection group and highest in the non-professional protection group, and the differences were statistically significant. (3) The longer the length of service and operating time, the higher the incidence of phototoxic maculopathy. (4) The lutein group was superior to the placebo group in best visual acuity, serum lutein concentration, macular pigment optical density (MPOD), and contrast and glare sensitivity. The risk factors for phototoxic maculopathy in male occupational welders are length of service, operating time and degree of protection, with lutein playing a supporting protective role. The incidence of phototoxic maculopathy occurs regardless of age.

  16. 47 CFR 25.146 - Licensing and operating rules for the non-geostationary satellite orbit Fixed-Satellite Service...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-density, in the space-to-Earth direction, (EPFD down) limits. (i) Provide a set of power flux-density (PFD) masks, on the surface of the Earth, for each space station in the NGSO FSS system. The PFD masks shall.... (2) Single-entry additional operational equivalent power flux-density, in the space-to-Earth...

  17. Husimi coordinates of multipartite separable states

    NASA Astrophysics Data System (ADS)

    Parfionov, Georges; Zapatrin, Romàn R.

    2010-12-01

    A parametrization of multipartite separable states in a finite-dimensional Hilbert space is suggested. It is proved to be a diffeomorphism between the set of zero-trace operators and the interior of the set of separable density operators. The result is applicable to any tensor product decomposition of the state space. An analytical criterion for separability of density operators is established in terms of the boundedness of a sequence of operators.
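    A concrete instance of a separable density operator (an illustrative sketch, not the paper's parametrization): a convex mixture of product states on two qubits, verified to have unit trace and to satisfy the Peres-Horodecki positive-partial-transpose condition, which every separable state obeys.

```python
import numpy as np

def proj(v):
    """Rank-one projector |v><v| for a (normalized) state vector."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# separable two-qubit state: an equal-weight mixture of product states
rho = (0.5 * np.kron(proj([1, 0]), proj([1, 0]))
       + 0.5 * np.kron(proj([0, 1]), proj([1, 1])))
assert abs(np.trace(rho) - 1) < 1e-12     # a valid density operator

# partial transpose on the second qubit: swap its bra and ket indices
pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
assert np.linalg.eigvalsh(pt).min() > -1e-12   # PPT holds, as it must
```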

  18. Statistics from the Operation of the Low-Level Wind Shear Alert System (LLWAS) during the Joint Airport Weather Studies (JAWS) Project.

    DTIC Science & Technology

    1984-12-01

    AD-A159 367. Statistics from the Operation of the Low-Level Wind Shear Alert System (LLWAS) During the JAWS Project: An Interim Report. National Center for Atmospheric Research, Boulder, CO. Report DOT/FAA/PM-84/32, December 1984.

  19. Quons, an interpolation between Bose and Fermi oscillators

    NASA Technical Reports Server (NTRS)

    Greenberg, O. W.

    1993-01-01

    After a brief mention of Bose and Fermi oscillators and of particles which obey other types of statistics, including intermediate statistics, parastatistics, paronic statistics, anyon statistics, and infinite statistics, I discuss the statistics of 'quons' (pronounced to rhyme with muons), particles whose annihilation and creation operators obey the q-deformed commutation relation (the quon algebra or q-mutator) which interpolates between fermions and bosons. I emphasize that the operator for interaction with an external source must be an effective Bose operator in all cases. To accomplish this for parabose, parafermi and quon operators, I introduce parabose, parafermi, and quon Grassmann numbers, respectively. I also discuss interactions of non-relativistic quons, quantization of quon fields with antiparticles, calculation of vacuum matrix elements of relativistic quon fields, demonstration of the TCP theorem, cluster decomposition, and Wick's theorem for relativistic quon fields, and the failure of local commutativity of observables for relativistic quon fields. I conclude with the bound on the parameter q for electrons due to the Ramberg-Snow experiment.
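    The q-mutator mentioned here, a a† − q a† a = 1, is easy to explore numerically. The sketch below is illustrative (not from the paper): it evaluates vacuum expectation values by using the relation as a rewriting rule, and checks that the squared norm of the n-quon state (a†)ⁿ|0⟩ is the q-factorial [n]_q!, interpolating between the Fermi (q = −1) and Bose (q = +1) endpoints.

```python
from fractions import Fraction

def vev(word, q):
    """Vacuum expectation <0| word |0> for a word in the annihilation
    operator 'a' and creation operator 'ad', evaluated by repeatedly
    applying the quon relation a ad - q ad a = 1."""
    if not word:
        return 1
    for i in range(len(word) - 1):
        if word[i] == 'a' and word[i + 1] == 'ad':
            # rewrite a ad -> 1 + q ad a and recurse on both terms
            return (vev(word[:i] + word[i + 2:], q)
                    + q * vev(word[:i] + ['ad', 'a'] + word[i + 2:], q))
    return 0  # a fully normal-ordered nonempty word annihilates the vacuum

def q_factorial(n, q):
    """[n]_q! with [k]_q = 1 + q + ... + q^(k-1): the squared norm of
    the n-quon state (ad)^n |0>."""
    out = 1
    for k in range(1, n + 1):
        out *= sum(q ** j for j in range(k))
    return out

q = Fraction(1, 2)
for n in range(5):
    assert vev(['a'] * n + ['ad'] * n, q) == q_factorial(n, q)

# interpolation endpoints: bosons (q = 1) give n!, while for fermions
# (q = -1) the doubly occupied state has zero norm
assert vev(['a', 'a', 'ad', 'ad'], 1) == 2
assert vev(['a', 'a', 'ad', 'ad'], -1) == 0
```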

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garofalo, Andrea M.; Burrell, Keith H.; Eldon, David

    For the first time, DIII-D experiments have achieved stationary quiescent H-mode (QH-mode) operation for many energy confinement times at simultaneous ITER-relevant values of beta, confinement, and safety factor, in an ITER similar shape. QH-mode provides excellent energy confinement, even at very low plasma rotation, while operating without edge localized modes (ELMs) and with strong impurity transport via the benign edge harmonic oscillation (EHO). By tailoring the plasma shape to improve the edge stability, the QH-mode operating space has also been extended to densities exceeding 80% of the Greenwald limit, overcoming the long-standing low-density limit of QH-mode operation. In the theory, the density range over which the plasma encounters the kink-peeling boundary widens as the plasma cross-section shaping is increased, thus increasing the QH-mode density threshold. Here, the DIII-D results are in excellent agreement with these predictions, and nonlinear MHD analysis of reconstructed QH-mode equilibria shows unstable low n kink-peeling modes growing to a saturated level, consistent with the theoretical picture of the EHO. Furthermore, high density operation in the QH-mode regime has opened a path to a new, previously predicted region of parameter space, named “Super H-mode” because it is characterized by very high pedestals that can be more than a factor of two above the peeling-ballooning stability limit for similar ELMing H-mode discharges at the same density.

  1. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter width but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
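    The exact quantity being modeled here is simple to state: the subfilter scalar variance is filter(z²) − (filter z)². A minimal a priori sketch (synthetic random field and a periodic top-hat filter, not the paper's DNS data):

```python
import numpy as np

def box_filter(f, w):
    """Top-hat filter of odd width w cells along each axis (periodic)."""
    h = w // 2
    for axis in range(f.ndim):
        f = sum(np.roll(f, s, axis=axis) for s in range(-h, h + 1)) / (2 * h + 1)
    return f

rng = np.random.default_rng(0)
z = rng.standard_normal((64, 64))        # stand-in for a DNS scalar field
zbar = box_filter(z, 5)
subfilter_var = box_filter(z * z, 5) - zbar ** 2

# with positive filter weights the exact subfilter variance is
# pointwise non-negative (up to floating-point roundoff)
assert subfilter_var.min() >= -1e-12
```

Subfilter-variance models (static or dynamic) aim to approximate this field from resolved quantities alone, since in an actual LES the unfiltered z is unavailable.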

  2. Does transport time help explain the high trauma mortality rates in rural areas? New and traditional predictors assessed by new and traditional statistical methods

    PubMed Central

    Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas

    2015-01-01

    Background Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600
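    The model-comparison step described above can be illustrated generically (synthetic data, not the Norwegian registry): fit a straight line and a piecewise-linear model with one break, then compare them with AIC computed from the residual sum of squares under a Gaussian error assumption, AIC = n ln(RSS/n) + 2k.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 200))
# true relationship has a kink at x = 4 (assumed toy structure)
y = np.where(x < 4, 5 - x, 1.0) + rng.normal(0, 0.2, 200)

def aic(rss, n, k):
    """Gaussian-error AIC from the residual sum of squares."""
    return n * np.log(rss / n) + 2 * k

# linear model: 2 parameters
c = np.polyfit(x, y, 1)
rss_lin = np.sum((y - np.polyval(c, x)) ** 2)

# piecewise-linear model with a known break at x = 4: 3 parameters
X = np.column_stack([np.ones_like(x), x, np.maximum(x - 4, 0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss_pw = np.sum((y - X @ beta) ** 2)

n = len(x)
assert aic(rss_pw, n, 3) < aic(rss_lin, n, 2)  # kinked model wins on AIC
```

The same logic underlies the study's finding: whether a predictor "matters" can hinge on whether the fitted functional form can represent a threshold or plateau.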

  3. Extended Statistical Short-Range Guidance for Peak Wind Speed Analyses at the Shuttle Landing Facility: Phase II Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2003-01-01

    This report describes the results from Phase II of the AMU's Short-Range Statistical Forecasting task for peak winds at the Shuttle Landing Facility (SLF). The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. A PC-based Graphical User Interface (GUI) tool was created to display the data quickly.
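    The kind of conditional statistic a forecaster would read off such PDFs can be sketched as an empirical exceedance probability, P(peak ≥ threshold | average speed in a bin). The data below are synthetic with an assumed gust factor, not the SLF tower record.

```python
import numpy as np

rng = np.random.default_rng(3)
avg = rng.uniform(2, 20, 5000)                        # average speed, kt (toy)
peak = avg * (1.3 + 0.2 * rng.standard_normal(5000))  # assumed gust factor ~1.3

def p_exceed(avg, peak, lo, hi, threshold):
    """Empirical P(peak >= threshold) among cases with avg in [lo, hi)."""
    sel = (avg >= lo) & (avg < hi)
    return np.mean(peak[sel] >= threshold)

# exceeding a 25 kt peak is far likelier when averages run 15-20 kt
assert p_exceed(avg, peak, 15, 20, 25) > p_exceed(avg, peak, 5, 10, 25)
```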

  4. Compressor seal rub energetics study

    NASA Technical Reports Server (NTRS)

    Laverty, W. F.

    1978-01-01

    The rub mechanics of compressor abradable blade tip seals at simulated engine conditions were investigated. Twelve statistically planned, instrumented rub tests were conducted with titanium blades and Feltmetal fibermetal rubstrips. The tests were conducted with single stationary blades rubbing against seal material bonded to rotating test disks. The instantaneous rub torque, speed, incursion rate and blade temperatures were continuously measured and recorded. Basic rub parameters (incursion rate, rub depth, abradable density, blade thickness and rub velocity) were varied to determine the effects on rub energy and heat split between the blade, rubstrip surface and rub debris. The test data was reduced, energies were determined and statistical analyses were completed to determine the primary and interactive effects. Wear surface morphology, profile measurements and metallographic analysis were used to determine wear, glazing, melting and material transfer. The rub energies for these tests were most significantly affected by the incursion rate while rub velocity and blade thickness were of secondary importance. The ratios of blade wear to seal wear were representative of those experienced in engine operation of these seal system materials.

  5. Clarifying the link between von Neumann and thermodynamic entropies

    NASA Astrophysics Data System (ADS)

    Deville, Alain; Deville, Yannick

    2013-01-01

    The state of a quantum system being described by a density operator ρ, quantum statistical mechanics calls the quantity -kTr(ρ ln ρ), introduced by von Neumann, its von Neumann or statistical entropy. A 1999 paper by Shenker initiated a debate about its link with the entropy of phenomenological thermodynamics. Referring to Gibbs's and von Neumann's founding texts, we place von Neumann's 1932 contribution in its historical context, after Gibbs's 1902 treatise and before the creation of the information entropy concept, which sets boundaries for the debate. Reexamining von Neumann's reasoning, we stress that the part of his reasoning implicated in the debate mainly uses thermodynamics, not quantum mechanics, and identify two implicit postulates. We thoroughly examine Shenker's and ensuing papers, insisting upon the presence of open thermodynamical subsystems, which requires the use of the chemical potential concept. We briefly mention Landau's approach to the quantum entropy. On the whole, it is shown that von Neumann's viewpoint is right, and why Shenker's claim that the von Neumann entropy "is not the quantum-mechanical correlate of thermodynamic entropy" cannot be retained.
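    The quantity −k Tr(ρ ln ρ) is computed directly from the eigenvalues of the density operator. A short numerical sketch, with k set to 1:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), from the eigenvalues of rho (k = 1)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]     # convention: 0 ln 0 -> 0
    return -np.sum(evals * np.log(evals))

rho_mixed = np.eye(2) / 2                         # maximally mixed qubit
rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])     # pure state

assert abs(von_neumann_entropy(rho_mixed) - np.log(2)) < 1e-12  # S = ln 2
assert von_neumann_entropy(rho_pure) < 1e-12                    # S = 0
```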

  6. Breast Cancer Risk and Mammographic Density Assessed with Semiautomated and Fully Automated Methods and BI-RADS.

    PubMed

    Jeffers, Abra M; Sieh, Weiva; Lipson, Jafi A; Rothstein, Joseph H; McGuire, Valerie; Whittemore, Alice S; Rubin, Daniel L

    2017-02-01

    Purpose To compare three metrics of breast density on full-field digital mammographic (FFDM) images as predictors of future breast cancer risk. Materials and Methods This institutional review board-approved study included 125 women with invasive breast cancer and 274 age- and race-matched control subjects who underwent screening FFDM during 2004-2013 and provided informed consent. The percentage of density and dense area were assessed semiautomatically with software (Cumulus 4.0; University of Toronto, Toronto, Canada), and volumetric percentage of density and dense volume were assessed automatically with software (Volpara; Volpara Solutions, Wellington, New Zealand). Clinical Breast Imaging Reporting and Data System (BI-RADS) classifications of breast density were extracted from mammography reports. Odds ratios and 95% confidence intervals (CIs) were estimated by using conditional logistic regression stratified according to age and race and adjusted for body mass index, parity, and menopausal status, and the area under the receiver operating characteristic curve (AUC) was computed. Results The adjusted odds ratios and 95% CIs for each standard deviation increment of the percentage of density, dense area, volumetric percentage of density, and dense volume were 1.61 (95% CI: 1.19, 2.19), 1.49 (95% CI: 1.15, 1.92), 1.54 (95% CI: 1.12, 2.10), and 1.41 (95% CI: 1.11, 1.80), respectively. Odds ratios for women with extremely dense breasts compared with those with scattered areas of fibroglandular density were 2.06 (95% CI: 0.85, 4.97) and 2.05 (95% CI: 0.90, 4.64) for BI-RADS and Volpara density classifications, respectively. 
Clinical BI-RADS was more accurate (AUC, 0.68; 95% CI: 0.63, 0.74) than Volpara (AUC, 0.64; 95% CI: 0.58, 0.70) and continuous measures of percentage of density (AUC, 0.66; 95% CI: 0.60, 0.72), dense area (AUC, 0.66; 95% CI: 0.60, 0.72), volumetric percentage of density (AUC, 0.64; 95% CI: 0.58, 0.70), and density volume (AUC, 0.65; 95% CI: 0.59, 0.71), although the AUC differences were not statistically significant. Conclusion Mammographic density on FFDM images was positively associated with breast cancer risk by using the computer assisted methods and BI-RADS. BI-RADS classification was as accurate as computer-assisted methods for discrimination of patients from control subjects. © RSNA, 2016.

  7. Becoming angular momentum density flow through nonlinear mass transfer into a gravitating spheroidal body

    NASA Astrophysics Data System (ADS)

    Krot, A. M.

    2009-04-01

A statistical theory of cosmological body formation based on the spheroidal body model has been proposed in the works [1]-[4]. This work studies a slowly evolving process of gravitational condensation of a spheroidal body from an infinitely distributed gas-dust substance in space. The equation for the initial evolution of the mass density function of a gas-dust cloud is considered here. This equation is found to coincide completely with the analogous equation for a slowly gravitationally compressed spheroidal body [5]. Conductive flows in dissipative systems were investigated by I. Prigogine (see, for example, [6], [7]). As found in [2], [5], there exists a conductive antidiffusion flow in a slowly compressible gravitating spheroidal body. Applying the equation of continuity to this conductive flow density, we obtain a linear antidiffusion equation [5]. However, if the intensity of the conductive flow density increases sharply, the linear antidiffusion equation becomes nonlinear. Indeed, as pointed out in [6], the analogous linear equations of diffusion and thermal conduction likewise transform into nonlinear equations. In this case the equation of continuity describes a nonlinear mass flow that becomes a source of instabilities in a gravitating spheroidal body, because the gravitational compression factor G is a function not only of time but also of the mass density. Using an integral substitution, the nonlinear antidiffusion equation can be reduced to a linear antidiffusion equation for a new function. If the factor G can be interpreted as a specific angular momentum, then the new function is an angular momentum density. Thus, a nonlinear mass density flow induces a flow of angular momentum density, because streamlines of the moving continuous substance draw closer together in a gravitating spheroidal body. Indeed, this streamline convergence leads to tighter interactions of "liquid particles", which implies a superposition of their specific angular momenta.
This superposition forms an antidiffusion flow of angular momentum density in a gravitating spheroidal body. References: [1] Krot, A.M. The statistical model of gravitational interaction of particles. Achievements in Modern Radioelectronics (special issue "Cosmic Radiophysics", Moscow), 1996, no. 8, pp. 66-81 (in Russian). [2] Krot, A.M. Statistical description of gravitational field: a new approach. Proc. SPIE's 14th Annual Intern. Symp. "AeroSense", Orlando, Florida, USA, 2000, vol. 4038, pp. 1318-1329. [3] Krot, A.M. The statistical model of rotating and gravitating spheroidal body with the point of view of general relativity. Proc. 35th COSPAR Scientific Assembly, Paris, France, 2004, Abstract A-00162. [4] Krot, A. The statistical approach to exploring formation of Solar system. Proc. EGU General Assembly, Vienna, Austria, 2006, Geophys. Res. Abstracts, vol. 8, A-00216; SRef-ID: 1607-7962/gra/. [5] Krot, A.M. A statistical approach to investigate the formation of the solar system. Chaos, Solitons and Fractals, 2008, doi:10.1016/j.chaos.2008.06.014. [6] Glansdorff, P. and Prigogine, I. Thermodynamic Theory of Structure, Stability and Fluctuations. London, 1971. [7] Nicolis, G. and Prigogine, I. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations. John Wiley and Sons, New York, 1977.
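The linear and nonlinear antidiffusion equations referred to above can be written schematically. The coefficient G/2 and the exact form are illustrative assumptions (the precise equations appear in refs. [2], [5]):

```latex
% Linear antidiffusion (negative-diffusion) equation for the mass
% density \rho, with gravitational compression factor G > 0
% (schematic form; the sign of the diffusion term is reversed
% relative to ordinary diffusion):
\frac{\partial \rho}{\partial t} = -\frac{G}{2}\,\nabla^{2}\rho
% Nonlinear case, G = G(\rho, t), written as a continuity equation:
\frac{\partial \rho}{\partial t}
  = -\nabla \cdot \left( \frac{G(\rho, t)}{2}\,\nabla \rho \right)
```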

  8. Stockholder projector analysis: A Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2012-01-01

    A previously introduced partitioning of the molecular one-electron density matrix over atoms and bonds [D. Vanfleteren et al., J. Chem. Phys. 133, 231103 (2010)] is investigated in detail. Orthogonal projection operators are used to define atomic subspaces, as in Natural Population Analysis. The orthogonal projection operators are constructed with a recursive scheme. These operators are chemically relevant and obey a stockholder principle, familiar from the Hirshfeld-I partitioning of the electron density. The stockholder principle is extended to density matrices, where the orthogonal projectors are considered to be atomic fractions of the summed contributions. All calculations are performed as matrix manipulations in one-electron Hilbert space. Mathematical proofs and numerical evidence concerning this recursive scheme are provided in the present paper. The advantages associated with the use of these stockholder projection operators are examined with respect to covalent bond orders, bond polarization, and transferability.

  9. 76 FR 4992 - Agency Information Collection: Activity Under OMB Review: Report of Financial and Operating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... Operating Statistics for Large Certificated Air Carriers AGENCY: Research & Innovative Technology Administration (RITA), Bureau of Transportation Statistics (BTS), DOT. ACTION: Notice. SUMMARY: In compliance with the Paperwork Reduction Act of 1995, Public Law 104-13, the Bureau of Transportation Statistics...

  10. Analysis of defect structure in silicon. Characterization of SEMIX material. Silicon sheet growth development for the large area silicon sheet task of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Natesh, R.; Stringfellow, G. B.; Virkar, A. V.; Dunn, J.; Guyer, T.

    1983-01-01

Statistically significant quantitative structural imperfection measurements were made on samples from ubiquitous crystalline process (UCP) Ingot 5848-13C. An important correlation was obtained between defect densities, cell efficiency, and diffusion length. Grain boundary substructure displayed a strong influence on the conversion efficiency of solar cells from Semix material. Quantitative microscopy measurements gave statistically significant information compared to other microanalytical techniques. A surface preparation technique to obtain proper contrast of structural defects suitable for Quantimet quantitative image analyzer (QTM) analysis was perfected and is used routinely. The relationship between hole mobility and grain boundary density was determined: mobility was measured using the van der Pauw technique, grain boundary density was measured using the quantitative microscopy technique, and mobility was found to decrease with increasing grain boundary density.

  11. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    DOE PAGES

    Angland, P.; Haberberger, D.; Ivancic, S. T.; ...

    2017-10-30

Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
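The anneal-until-χ²-is-minimized procedure described above can be sketched generically. The eight-parameter AFR density model is not reproduced here; a simple exponential profile, the step size, and the schedule are all illustrative assumptions:

```python
import math
import random

def chi_squared(params, xs, ys, sigma, model):
    """Chi-squared misfit between data (xs, ys) and model(x, params)."""
    return sum(((y - model(x, params)) / sigma) ** 2 for x, y in zip(xs, ys))

def anneal(xs, ys, sigma, model, params, steps=20000, t0=1.0):
    """Simulated annealing: accept downhill moves always, uphill moves
    with probability exp(-delta_chi2 / temperature)."""
    random.seed(0)
    cur = list(params)
    cost = chi_squared(cur, xs, ys, sigma, model)
    best, best_cost = list(cur), cost
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-9  # linear cooling schedule
        cand = [p + random.gauss(0.0, 0.1) for p in cur]
        c = chi_squared(cand, xs, ys, sigma, model)
        if c < cost or random.random() < math.exp((cost - c) / temp):
            cur, cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
    return best, best_cost

# Hypothetical exponential density profile n(x) = a * exp(-x / L):
model = lambda x, p: p[0] * math.exp(-x / p[1])
xs = [0.0, 1.0, 2.0, 3.0]
ys = [model(x, [2.0, 1.5]) for x in xs]  # synthetic "data", true params (2, 1.5)
params, chi2 = anneal(xs, ys, 0.05, model, [1.0, 1.0])
```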

  12. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angland, P.; Haberberger, D.; Ivancic, S. T.

Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.

  13. Nonlinear GARCH model and 1 / f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

Auto-regressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract research interest. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power law statistics, i.e., the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes under consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power law statistics. We show that the linear GARCH(1,1) process has a power law distribution, but its power spectral density is Brownian-noise-like. The nonlinear modifications, however, exhibit both a power law distribution and a power spectral density of the 1/f^β form, including 1/f noise.
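A minimal simulation of the linear GARCH(1,1) process discussed above; the parameter values are illustrative, not taken from the paper:

```python
import random

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.85, seed=1):
    """Simulate a GARCH(1,1) return series:
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
        r_t       = sigma_t * z_t,  z_t ~ N(0, 1)
    Stationarity requires alpha + beta < 1."""
    random.seed(seed)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    r_prev = 0.0
    returns = []
    for _ in range(n):
        var = omega + alpha * r_prev ** 2 + beta * var
        r_prev = var ** 0.5 * random.gauss(0.0, 1.0)
        returns.append(r_prev)
    return returns

rets = simulate_garch11(5000)
sample_var = sum(r * r for r in rets) / len(rets)
# The unconditional variance omega / (1 - alpha - beta) equals 2.0 here;
# the sample variance should be in that vicinity.
```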

  14. Statistical Algorithms Accounting for Background Density in the Detection of UXO Target Areas at DoD Munitions Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Brett D.; Wilson, John E.; Hathaway, J.

    2008-02-12

Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for distinguishing areas of elevated anomaly density from background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. The methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are demonstrated.
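The link between transect spacing and the chance of traversing a target area of a given size can be illustrated with a Monte Carlo sketch. Parallel zero-width transects and a circular target are simplifying assumptions; VSP's actual calculations are more general:

```python
import random

def traversal_probability(diameter, spacing, trials=100000, seed=7):
    """Monte Carlo probability that at least one of a set of parallel
    transects (separation `spacing`, zero width) crosses a circular
    target of the given diameter, with the target centre placed
    uniformly at random.  Analytic answer: min(1, diameter / spacing)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        centre = random.uniform(0.0, spacing)  # offset from transect below
        # Crossed if some transect lies within one radius of the centre:
        if centre <= diameter / 2 or spacing - centre <= diameter / 2:
            hits += 1
    return hits / trials

# A 40 m target with 100 m transect spacing: analytic answer 0.4.
p = traversal_probability(40.0, 100.0)
```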

  15. Traffic on complex networks: Towards understanding global statistical properties from microscopic density fluctuations

    NASA Astrophysics Data System (ADS)

    Tadić, Bosiljka; Thurner, Stefan; Rodgers, G. J.

    2004-03-01

    We study the microscopic time fluctuations of traffic load and the global statistical properties of a dense traffic of particles on scale-free cyclic graphs. For a wide range of driving rates R the traffic is stationary and the load time series exhibits antipersistence due to the regulatory role of the superstructure associated with two hub nodes in the network. We discuss how the superstructure affects the functioning of the network at high traffic density and at the jamming threshold. The degree of correlations systematically decreases with increasing traffic density and eventually disappears when approaching a jamming density Rc. Already before jamming we observe qualitative changes in the global network-load distributions and the particle queuing times. These changes are related to the occurrence of temporary crises in which the network-load increases dramatically, and then slowly falls back to a value characterizing free flow.

  16. Beyond δ : Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

Models that seek to explain cosmic acceleration through modifications to general relativity (GR) evade stringent Solar System constraints through a screening mechanism that restores GR in high-density environments. Down-weighting the high-density, screened regions in favor of the low-density, unscreened ones offers the potential to enhance the amount of information carried in such modified gravity models. In this work, we assess the performance of a new "marked" transformation and perform a systematic comparison with the clipping and logarithmic transformations, in the context of ΛCDM and the symmetron and f(R) modified gravity models. Performance is measured in terms of the fractional boost in the Fisher information and the signal-to-noise ratio (SNR) for these models relative to the statistics derived from the standard density distribution. We find that all three statistics provide improved Fisher boosts over the basic density statistics. The model parameters for the marked and clipped transformations that best enhance signals and the Fisher boosts are determined. We also show that the mark is useful both as a Fourier- and real-space transformation; a marked correlation function also enhances the SNR relative to the standard correlation function, and can on mildly nonlinear scales show a significant difference between the ΛCDM and the modified gravity models. Our results demonstrate how a series of simple analytical transformations could dramatically increase the information extracted on deviations from GR from large-scale surveys, and give the prospect for a much more feasible potential detection.
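The density down-weighting idea can be illustrated with a mark function of the form used in the marked-statistics literature; whether this matches the paper's specific transformation is an assumption, and the parameter values are illustrative:

```python
import numpy as np

def mark(delta, rho_star=0.25, p=1.0):
    """Mark function that down-weights high-density (screened) regions:
    equal to 1 at delta = 0, small for large overdensities, and
    enhanced in underdense regions (delta -> -1)."""
    return ((1.0 + rho_star) / (1.0 + rho_star + delta)) ** p

# Weight a toy overdensity field before forming "marked" statistics:
delta_field = np.array([-0.9, -0.5, 0.0, 1.0, 10.0])
weights = mark(delta_field)
marked_field = weights * delta_field
```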

  17. 77 FR 18304 - Agency Information Collection; Activity Under OMB Review; Report of Financial and Operating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... Operating Statistics for Large Certificated Air Carriers AGENCY: Research & Innovative Technology... DEPARTMENT OF TRANSPORTATION Research & Innovative Technology Administration [Docket ID Number... Operating Statistics for Large Certificated Air Carriers. Form No.: BTS Form 41. Type Of Review...

  18. Is the nonlipomatous component of dedifferentiated liposarcoma always soft tissue on CT? Analysis of CT densities and correlation with rate of growth in 60 patients.

    PubMed

    Tirumani, Sree Harsha; Wagner, Andrew J; Tirumani, Harika; Shinagare, Atul B; Jagannathan, Jyothi P; Hornick, Jason L; George, Suzanne; Ramaiya, Nikhil H

    2015-06-01

To define the various CT densities of the nonlipomatous component of dedifferentiated liposarcoma (DDLPS) and to determine if the rate of growth varies with density. This study identified 60 patients with DDLPS (38 men, 22 women; mean age at diagnosis 59 years; range, 35-82 years) who had one or more resections. CT scans immediately before the surgical resection (presurgery) and up to a maximum of one year before the surgery (baseline) were reviewed by two radiologists to note the density of the nonlipomatous elements and the rate of growth during that period. Clinical and histopathological data were extracted from electronic medical records. Rates of growth of the various densities were compared using the Kruskal-Wallis test. Three distinct densities of the nonlipomatous component were noted: soft tissue density (SD), fluid density (FD), and mixed density (MD). Of 109 lesions on the presurgery scan (SD = 78; MD = 22; FD = 9), scans at baseline were available for 72/109 lesions (SD = 49; MD = 14; FD = 9). Median growth rates/month without treatment, with chemotherapy, and with radiotherapy were 40%, 24%, and 62%, respectively, for SD lesions and 28%, 61%, and 52% for MD lesions. For FD lesions, they were 72% and 35%, respectively, without treatment and with chemotherapy. There was no statistical difference in the rate of growth of the various densities. Density changed over time in 8/72 (11%) lesions, including 2/49 SD lesions (to MD), 1/14 MD lesions (to SD), and 5/9 FD lesions (to SD). DDLPS has three distinct CT densities, of which soft tissue density is the most common. Although not statistically significant, fluid density lesions had a rapid growth rate and often converted to soft tissue density in our study.
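The Kruskal-Wallis comparison of growth rates across density groups works on joint ranks. A self-contained sketch (no tie correction) with hypothetical growth-rate samples, not the study's data:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): rank all
    observations jointly, then compare mean ranks across groups."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3.0 * (n + 1)

# Hypothetical monthly growth rates (%) for three density groups:
sd = [40, 35, 45]
md = [28, 30, 26]
fd = [72, 70, 68]
h = kruskal_wallis_h(sd, md, fd)  # 7.2 for these samples
```

The H statistic is then compared with a χ² distribution with (number of groups − 1) degrees of freedom to obtain the p value.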

  19. Text Density and Learner-Control as Design Variables with CBI and Print Media.

    ERIC Educational Resources Information Center

    Ross, Steven M.; And Others

    This study investigated the effects of computer and print text density on learning, and the nature and effects of learner preference for different density levels in both print and computer presentation modes. Subjects were 48 undergraduate teacher education majors, who were assigned at random to six treatment groups in which a statistics lesson…

  20. The Impact of Phonological Neighborhood Density on Typical and Atypical Emerging Lexicons

    ERIC Educational Resources Information Center

    Stokes, Stephanie F.

    2014-01-01

According to the Extended Statistical Learning account (ExSL; Stokes, Kern & dos Santos, 2012), late talkers (LTs) continue to use neighborhood density (ND) as a cue for word learning when their peers no longer use a density learning mechanism. In the current article, LTs' expressive ("active") lexicon ND values differed from those of…

  1. Angular filter refractometry analysis using simulated annealing.

    PubMed

    Angland, P; Haberberger, D; Ivancic, S T; Froula, D H

    2017-10-01

Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed by a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation is based on the minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.

  2. Raising the Bar: Increased Hydraulic Pressure Allows Unprecedented High Power Densities in Pressure-Retarded Osmosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straub, AP; Yip, NY; Elimelech, M

    2014-01-01

Pressure-retarded osmosis (PRO) has the potential to generate sustainable energy from salinity gradients. PRO is typically considered for operation with river water and seawater, but a far greater energy of mixing can be harnessed from hypersaline solutions. This study investigates the power density that can be obtained in PRO from such concentrated solutions. Thin-film composite membranes with an embedded woven mesh were supported by tricot fabric feed spacers in a specially designed crossflow cell to maximize the operating pressure of the system, reaching a stable applied hydraulic pressure of 48 bar (700 psi) for more than 10 h. Operation at this increased hydraulic pressure allowed unprecedented power densities, up to 60 W/m² with a 3 M (180 g/L) NaCl draw solution. Experimental power densities demonstrate reasonable agreement with power densities modeled using measured membrane properties, indicating that high-pressure operation does not drastically alter membrane performance. Our findings exhibit the promise of power generation from high-pressure PRO with concentrated solutions.
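The reported power densities follow from the basic PRO relation W = J_w ΔP. A sketch under idealized assumptions (van't Hoff osmotic pressure, no concentration polarization, and an illustrative water permeability A), which therefore overestimates achievable power relative to the measured 60 W/m²:

```python
def vant_hoff_osmotic_pressure(molarity, temp_k=298.0):
    """Ideal van't Hoff estimate for NaCl (i = 2): pi = i*M*R*T, in bar."""
    R = 0.083145  # L bar / (mol K)
    return 2.0 * molarity * R * temp_k

def pro_power_density(A, delta_pi, delta_p):
    """Ideal PRO power density W = J_w * dP with flux J_w = A*(d_pi - dP).
    A in L m^-2 h^-1 bar^-1, pressures in bar.  Since 1 L*bar = 100 J,
    dividing by 36 converts L m^-2 h^-1 * bar to W m^-2."""
    jw = A * (delta_pi - delta_p)
    return jw * delta_p / 36.0

# 3 M NaCl draw solution; A = 2 L/m^2/h/bar is an assumed permeability.
pi_draw = vant_hoff_osmotic_pressure(3.0)                 # ~149 bar (ideal)
w_peak = pro_power_density(2.0, pi_draw, pi_draw / 2.0)   # maximum at dP = pi/2
```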

  3. 14 CFR 93.219 - Allocation of slots for essential air service operations and applicable limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SPECIAL AIR TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic... or from a High Density Traffic Airport under the Department of Transportation's Essential Air Service...

  4. 14 CFR 93.217 - Allocation of slots for international operations and applicable limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic Airports § 93... available, additional slots during the high density hours shall be allocated at Kennedy Airport for new...

  5. 75 FR 9017 - Orders Limiting Scheduled Operations at John F. Kennedy International Airport, LaGuardia Airport...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ...; High Density Rule at Reagan National Airport AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... by February 16. Under the FAA's High Density Rule and orders limiting scheduled operations at the...

  6. 14 CFR 93.217 - Allocation of slots for international operations and applicable limitations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic Airports § 93... available, additional slots during the high density hours shall be allocated at Kennedy Airport for new...

  7. 14 CFR 93.217 - Allocation of slots for international operations and applicable limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic Airports § 93... available, additional slots during the high density hours shall be allocated at Kennedy Airport for new...

  8. 14 CFR 93.217 - Allocation of slots for international operations and applicable limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic Airports § 93... available, additional slots during the high density hours shall be allocated at Kennedy Airport for new...

  9. 14 CFR 93.219 - Allocation of slots for essential air service operations and applicable limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SPECIAL AIR TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic... or from a High Density Traffic Airport under the Department of Transportation's Essential Air Service...

  10. 14 CFR 93.217 - Allocation of slots for international operations and applicable limitations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic Airports § 93... available, additional slots during the high density hours shall be allocated at Kennedy Airport for new...

  11. 14 CFR 93.219 - Allocation of slots for essential air service operations and applicable limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SPECIAL AIR TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic... or from a High Density Traffic Airport under the Department of Transportation's Essential Air Service...

  12. 14 CFR 93.219 - Allocation of slots for essential air service operations and applicable limitations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SPECIAL AIR TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic... or from a High Density Traffic Airport under the Department of Transportation's Essential Air Service...

  13. 14 CFR 93.219 - Allocation of slots for essential air service operations and applicable limitations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SPECIAL AIR TRAFFIC RULES Allocation of Commuter and Air Carrier IFR Operations at High Density Traffic... or from a High Density Traffic Airport under the Department of Transportation's Essential Air Service...

  14. Automated breast tissue density assessment using high order regional texture descriptors in mammography

    NASA Astrophysics Data System (ADS)

    Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun

    2014-03-01

    Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.

  15. Abiotic controls of emergent macrophyte density in a bedrock channel - The Cahaba River, AL (USA)

    NASA Astrophysics Data System (ADS)

    Vaughn, Ryan S.; Davis, Lisa

    2015-10-01

Research examining bedrock channels is growing. Despite this, biotic-abiotic interactions remain a topic mostly addressed in alluvial systems. This research identified hydrogeomorphic factors operating at the patch-scale (10^0-10^2 m) in bedrock shoals of the Cahaba River (AL) that help determine the distribution of the emergent aquatic macrophyte, Justicia americana. Macrophyte patch density (number of stems/m²) and percent bedrock void surface area (rock surface area/m² occupied by joints, fractures, and potholes) were measured (n = 24 within two bedrock shoals) using stem counts and underwater photography, respectively. One-dimensional hydrologic modeling (HEC-RAS 4.1.0) was completed for a section within a shoal to examine velocity and channel depth as controlling variables for macrophyte patch density. Results from binary logistic regression analysis identified depth and velocity as good predictors of the presence or absence of Justicia americana within shoal structures (depth p = 0.001, velocity p = 0.007), which is a similar finding to previous research conducted in alluvial systems. Correlation analysis between bedrock surface void area and stem density demonstrated a statistically significant positive correlation (r = 0.665, p = 0.01), elucidating a link between abiotic-biotic processes that may well be unique to bedrock channels. These results suggest that the amount of void space present in bedrock surfaces, in addition to localized depth and velocity, helps control macrophyte patch density in bedrock shoal complexes. The utility of geomorphology in explaining patch-scale habitat heterogeneity in this study highlights geomorphology's potential to help understand macrophyte habitat heterogeneity at the reach scale, while also demonstrating its promise for mapping and understanding habitat heterogeneity at the system scale.
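The binary logistic regression used for presence/absence can be sketched with a single predictor fitted by gradient descent; the depths and outcomes below are hypothetical, not the Cahaba River data:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Minimal single-predictor logistic regression:
    P(presence) = sigmoid(b0 + b1 * x), fitted by gradient descent
    on the log-loss."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical depths (m) and presence/absence of Justicia americana:
depth = [0.2, 0.3, 0.4, 0.5, 0.8, 1.0, 1.2, 1.5]
present = [1, 1, 1, 1, 0, 0, 0, 0]
b0, b1 = fit_logistic(depth, present)  # b1 < 0: presence declines with depth
```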

  16. Bluetongue Disease Risk Assessment Based on Observed and Projected Culicoides obsoletus spp. Vector Densities

    PubMed Central

    Brugger, Katharina; Rubel, Franz

    2013-01-01

    Bluetongue is an arboviral disease of ruminants causing significant economic losses. Our risk assessment is based on the epidemiological key parameter, the basic reproduction number. It is defined as the number of secondary cases caused by one primary case in a fully susceptible host population, in which values greater than one indicate the possibility, i.e., the risk, for a major disease outbreak. In the course of the Bluetongue virus serotype 8 (BTV-8) outbreak in Europe in 2006 we developed such a risk assessment for the University of Veterinary Medicine Vienna, Austria. Basic reproduction numbers were calculated using a well-known formula for vector-borne diseases considering the population densities of hosts (cattle and small ruminants) and vectors (biting midges of the Culicoides obsoletus spp.) as well as temperature dependent rates. The latter comprise the biting and mortality rate of midges as well as the reciprocal of the extrinsic incubation period. Most important, but generally unknown, is the spatio-temporal distribution of the vector density. Therefore, we established a continuously operating daily monitoring to quantify the seasonal cycle of the vector population by a statistical model. We used cross-correlation maps and Poisson regression to describe vector densities by environmental temperature and precipitation. Our results comprise time series of observed and simulated Culicoides obsoletus spp. counts as well as basic reproduction numbers for the period 2009–2011. For a spatio-temporal risk assessment we projected our results from the location of Vienna to the entire region of Austria. We compiled both daily maps of vector densities and the basic reproduction numbers, respectively. Basic reproduction numbers above one were generally found between June and August except in the mountainous regions of the Alps. The highest values coincide with the locations of confirmed BTV cases. PMID:23560090
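The abstract's "well-known formula for vector-borne diseases" can be sketched with a Ross-Macdonald-type basic reproduction number; whether this matches the authors' exact form is an assumption, and the parameter values are illustrative, not fitted to the study:

```python
import math

def basic_reproduction_number(m, a, b, c, g, r, eip):
    """Ross-Macdonald-type R0 for a vector-borne disease:
      m    vector-to-host density ratio
      a    vector biting rate (per day)
      b, c transmission probabilities (vector->host, host->vector)
      g    vector mortality rate (per day)
      r    host recovery rate (per day)
      eip  extrinsic incubation period (days)
    exp(-g * eip) is the chance a vector survives the incubation period."""
    return math.sqrt(m * a * a * b * c * math.exp(-g * eip) / (g * r))

# Illustrative values only; R0 > 1 indicates the risk of a major outbreak.
r0 = basic_reproduction_number(m=20.0, a=0.2, b=0.3, c=0.3,
                               g=0.12, r=0.1, eip=10.0)
```

Note that R0 scales as the square root of the vector density m, which is why the daily monitoring of Culicoides counts is central to the risk maps.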

  17. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    NASA Astrophysics Data System (ADS)

    Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna

    2012-12-01

Validation results on the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  18. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing error shows that interpolation using the fourth-order polynomial provides the best fit to option prices, with the lowest error.
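The LOOCV pricing-error criterion can be sketched as follows. This is a simplified illustration that fits option prices directly with a least-squares polynomial (the study interpolates in the implied-volatility domain); all function names are hypothetical:

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations (Gaussian elimination).
    Fine for the tiny systems used here; returns coef[i] multiplying x**i."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

def peval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

def loocv_error(strikes, prices, deg):
    """Leave-one-out cross-validation RMS pricing error for a polynomial fit:
    refit without each strike in turn and price the held-out option."""
    err = 0.0
    for k in range(len(strikes)):
        xs = strikes[:k] + strikes[k + 1:]
        ys = prices[:k] + prices[k + 1:]
        coef = polyfit(xs, ys, deg)
        err += (peval(coef, strikes[k]) - prices[k]) ** 2
    return (err / len(strikes)) ** 0.5
```

The interpolation method with the lowest LOOCV error is then selected, which is the criterion the study proposes.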

  19. A statistical model of operational impacts on the framework of the bridge crane

    NASA Astrophysics Data System (ADS)

    Antsev, V. Yu; Tolokonnikov, A. S.; Gorynin, A. D.; Reutov, A. A.

    2017-02-01

    The technical regulations of the Customs Union demand a risk analysis of bridge crane operation at the design stage. A statistical model has been developed for randomized risk calculations, allowing us to model possible operational influences on the bridge crane metal structure in their various combinations. The statistical model has been implemented in a software product for automated calculation of the risk of bridge crane failure.

  20. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density.

    PubMed

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My; Vejborg, Ilse; Andersen, Zorana Jovanovic

    2014-06-01

    In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified as either fatty or mixed/dense. This dichotomous mammographic density classification system is internationally unique and has not previously been validated. Our aim was to compare the Danish dichotomous mammographic density classification system used from 1991 to 2001 with the BI-RADS density classification, in an attempt to validate the Danish classification system. The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001 that tested false positive and were re-assessed in 2012 and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous mammographic classification (fatty or mixed/dense) and the four-level BI-RADS classification using the linearly weighted kappa statistic. Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as having mixed/dense mammographic density according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS code 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The inter-rater agreement assessed by the weighted kappa statistic was substantial (0.75). The dichotomous mammographic density classification system used in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system.
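The linearly weighted kappa used in this comparison can be computed as below. This is a minimal sketch assuming both ratings have been mapped onto a common k-level ordinal scale; how the two-level Danish scale is aligned with the four-level BI-RADS grid is a modeling choice not specified here:

```python
def weighted_kappa(pairs, k):
    """Linearly weighted Cohen's kappa for two raters; `pairs` holds
    (rater_a, rater_b) category indices on a common k-level ordinal scale."""
    n = len(pairs)
    obs = [[0.0] * k for _ in range(k)]           # observed joint proportions
    for a, b in pairs:
        obs[a][b] += 1.0 / n
    pa = [sum(row) for row in obs]                               # rater A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]    # rater B marginals
    w = lambda i, j: 1.0 - abs(i - j) / (k - 1)   # linear agreement weights
    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))  # observed
    pe = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))  # chance
    return (po - pe) / (1.0 - pe)
```

Perfect agreement gives kappa = 1; values around 0.6-0.8, like the 0.75 reported here, are conventionally read as substantial agreement.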

  1. Analysis of thrips distribution: application of spatial statistics and Kriging

    Treesearch

    John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard

    1991-01-01

    Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...

  2. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  3. Statistical variability and confidence intervals for planar dose QA pass rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were 2%-5% higher on average than the respective %/DTA composite analysis (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower on average than with global maximum normalization (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors as well. Conclusions: Dose plane QA analysis can be greatly affected by the choice of calculation metric and user-defined parameters, so all pass rates should be reported with a complete description of the calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
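The binomial confidence-interval idea can be sketched as follows; this uses the simple normal-approximation (Wald) interval as an assumption, since the abstract does not specify the exact construction:

```python
import math

def pass_rate_ci(passed, total, z=1.96):
    """Approximate 95% confidence interval for a gamma/%-DTA pass rate measured
    with a low-density array of `total` detectors, treating each detector as a
    Bernoulli trial (normal approximation to the binomial)."""
    p = passed / total
    half = z * math.sqrt(p * (1.0 - p) / total)   # half-width shrinks as 1/sqrt(n)
    return max(0.0, p - half), min(1.0, p + half)

lo, hi = pass_rate_ci(passed=182, total=200)
print(f"pass rate {182/200:.1%}, 95% CI: {lo:.1%} - {hi:.1%}")
```

The interval widens as the detector count drops, which is exactly why the authors argue that low-density-array pass rates should be reported with their uncertainty.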

  4. Effects of mental workload on physiological and subjective responses during traffic density monitoring: A field study.

    PubMed

    Fallahi, Majid; Motamedzade, Majid; Heidarimoghadam, Rashid; Soltanian, Ali Reza; Miyake, Shinji

    2016-01-01

    This study evaluated operators' mental workload while monitoring traffic density in a city traffic control center. To determine the mental workload, physiological signals (ECG, EMG) were recorded and the NASA Task Load Index (TLX) was administered for 16 operators. The results showed that the operators experienced a larger mental workload during high traffic density (HTD) than during low traffic density (LTD). The traffic control center stressors caused changes in heart rate variability features and EMG amplitude, and the average workload score was significantly higher in HTD conditions than in LTD conditions. The findings indicated that increasing traffic congestion had a significant effect on HR, RMSSD, SDNN, the LF/HF ratio, and EMG amplitude. The results suggested that when operators' workload increases, their mental fatigue and stress level increase and their mental health deteriorates. Therefore, it may be necessary to implement an ergonomic program to manage mental health. Furthermore, by evaluating mental workload, the traffic control center director can organize the center's traffic congestion operators to sustain an appropriate mental workload and improve traffic control management. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Advanced intermediate temperature sodium–nickel chloride batteries with ultra-high energy density

    PubMed Central

    Li, Guosheng; Lu, Xiaochuan; Kim, Jin Y.; Meinhardt, Kerry D.; Chang, Hee Jung; Canfield, Nathan L.; Sprenkle, Vincent L.

    2016-01-01

    Sodium-metal halide batteries have been considered one of the more attractive technologies for stationary electrical energy storage; however, they are not used in broader applications despite their relatively well-known redox system. One of the roadblocks hindering market penetration is the high operating temperature. Here we demonstrate that planar sodium–nickel chloride batteries can be operated at an intermediate temperature of 190 °C with ultra-high energy density. A specific energy density of 350 Wh kg−1, higher than that of conventional tubular sodium–nickel chloride batteries (280 °C), is obtained for planar sodium–nickel chloride batteries operated at 190 °C over a long-term cell test (1,000 cycles), and is attributed to the slower particle growth of the cathode materials at the lower operating temperature. The results reported here demonstrate that planar sodium–nickel chloride batteries operated at an intermediate temperature could greatly benefit this traditional energy storage technology by improving battery energy density and cycle life while reducing material costs. PMID:26864635

  6. Evolution of statistical properties for a nonlinearly propagating sinusoid.

    PubMed

    Shepherd, Micah R; Gee, Kent L; Hanford, Amanda D

    2011-07-01

    The nonlinear propagation of a pure sinusoid is considered using time domain statistics. The probability density function, standard deviation, skewness, kurtosis, and crest factor are computed for both the amplitude and amplitude time derivatives as a function of distance. The amplitude statistics vary only in the postshock realm, while the amplitude derivative statistics vary rapidly in the preshock realm. The statistical analysis also suggests that the sawtooth onset distance can be considered to be earlier than previously realized. © 2011 Acoustical Society of America
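The amplitude statistics named in this abstract are straightforward to compute from a sampled waveform; a minimal sketch (for a pure sinusoid, skewness is 0, kurtosis is 1.5, and the crest factor is √2):

```python
import math

def moments(x):
    """Mean, standard deviation, skewness, kurtosis, and crest factor
    of a sampled waveform (population moments, peak/RMS crest factor)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * std ** 4)
    rms = math.sqrt(sum(v * v for v in x) / n)
    crest = max(abs(v) for v in x) / rms
    return mean, std, skew, kurt, crest

# one full period of a pure sinusoid, sampled uniformly
t = [2.0 * math.pi * i / 1000 for i in range(1000)]
stats = moments([math.sin(v) for v in t])
```

Tracking how these quantities drift away from the sinusoidal values with propagation distance is the kind of analysis the abstract describes, applied to both the amplitude and its time derivative.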

  7. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation. An effective density limit disruption prediction and avoidance system is the key to avoiding density limit disruptions in long-pulse steady-state operation. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times better than in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014), improving from 5 ms to 40 ms. The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.

  8. Bone density and functional results after femoral revision with a cementless press-fit stem.

    PubMed

    Canovas, F; Roche, O; Girard, J; Bonnomet, F; Goldschild, M; Le Béguec, P

    2015-05-01

    The influence of radiographic bone density changes in the area surrounding a total hip arthroplasty (THA) revision with a cementless press-fit stem is unknown, notably in terms of functional results. We have therefore conducted a study aiming to (1) propose a radiographic method to assess bone density, (2) measure the functional effects of reduced bone density, and (3) determine the factors contributing to these modifications. Our hypothesis was that a reduction in radiographic bone density has a negative influence on the functional result after revision using a cementless press-fit stem. We retrospectively assessed 150 THA revisions at a mean follow-up of 6.3 ± 3.2 years (range, 2-15 years). The clinical assessment was based on the Harris Hip Score. Bone density modifications were measured radiographically and the method was evaluated. The change in bone density was classified into two groups: (1) bone density not reduced, or reduced in < 2 Gruen zones (118 cases [79%]); (2) bone density reduced in ≥ 2 zones (32 cases [21%]). The variables showing a potential influence were the Cortical Index (CI), the type of primary stability with the press-fit system, and the femoral implant length. Inter- and intraobserver reliability of the radiographic bone density measurement was evaluated as moderate or good (kappa = 0.58, 0.60, and 0.67, respectively). For the Harris Hip Score at follow-up, there was a borderline statistical relation between stages 1 and 2: for the 118 stage 1 patients, this score was 83.62 ± 11.54 (range, 27-99) versus 78.34 ± 15.98 (range, 62-91) for stage 2 patients (P = 0.09). A CI ≤ 0.44 indicated mediocre bone quality contributing to decreased bone density (P < 0.02). On the other hand, there was no statistically significant relation with the type of primary fixation (P = 0.34) or the length of the implant (P = 0.23). A cementless revision femoral stem can induce a reduction in bone density with possible functional effects. The negative role played by reduced bone density in the functional score is confirmed, and even though the difference is not statistically significant, we suggest using a short stem when this is possible. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  9. Radiographic technical quality of root canal treatment performed ex vivo by dental students at Valencia University Medical and Dental School, Spain

    PubMed Central

    Faus-Matoses, Vicente; Alegre-Domingo, Teresa; Faus-Llácer, Vicente J.

    2014-01-01

    Objectives: To evaluate radiographically the quality of root canal fillings and compare manual and rotary preparation performed on extracted teeth by undergraduate dental students. Study Design: A total of 561 extracted premolars and molars were prepared using nickel-titanium rotary files or manual instrumentation and filled with gutta-percha using a cold lateral condensation technique by 4th-year undergraduate students. Periapical radiographs were used to assess the technical quality of the root canal filling, evaluating three variables: length, density, and taper. These data were recorded, scored, and used to compute the “technical success rate” and the “overall score”. The length of each root canal filling was classified as acceptable, short, or overfilled, based on its relationship with the radiographic apex. Density and taper of the filling were evaluated based on the presence of voids and the uniform tapering of the filling, respectively. Statistical analysis was used to evaluate the quality of root canal treatment, considering p < 0.05 as the level of statistical significance. Results: The percentage of technical success was 44% and the overall score was 7.8 out of 10. Technical success and overall score were greater with rotary instruments (52% versus 28% with manual instrumentation, p < 0.001; 8.3 versus 6.7, respectively, p < 0.001). Conclusions: It appears that inexperienced operators perform better root canal treatment (RCT) with the use of rotary instrumentation. Key words: Dental education, endodontics, rotary instrumentation, radiographs, root canal treatment, undergraduate students. PMID:24121911

  10. The comparative safety of genipin versus UVA-riboflavin crosslinking of rabbit corneas

    PubMed Central

    Song, Wenjing; Tang, Yun; Qiao, Jing; Li, Haili; Rong, Bei; Yang, Songlin; Wu, Yuan

    2017-01-01

    Purpose: To investigate, after 24 h, the safety of genipin or ultraviolet A (UVA)-riboflavin crosslinking with respect to keratocytes and endothelial cells. Methods: Fifteen New Zealand white rabbits were selected and divided into a PBS group (five rabbits), a 0.2% genipin crosslinking (GP-CXL) group (five rabbits), and a UVA-riboflavin crosslinking (UVA-CXL) group (five rabbits). In the GP-CXL and PBS groups, 0.2% genipin or PBS was applied to the corneal surface of the right eyes. In the UVA-CXL group, a clinical crosslinking procedure was used. Before and after surgery, the operated eyes of each group were characterized with confocal microscopy, and the corneal buttons were excised for endothelium staining and electron microscopy. Results: The corneal endothelial cell density of the GP-CXL, UVA-CXL, and PBS groups changed. There was a statistically significant difference in thickness and in changes in corneal endothelial cell density between the UVA-CXL group and the PBS group (p<0.05), and between the UVA-CXL group and the GP-CXL group (p<0.05), but no statistically significant difference between the GP-CXL group and the PBS group. Confocal microscopy, transmission electron microscopy, and hematoxylin and eosin staining showed keratocyte apoptosis in the anterior and middle stroma and endothelial cell damage in the UVA-CXL group. In the GP-CXL group, only active keratocytes and minimal endothelial cell damage were found. Conclusions: Treatment of rabbit corneas with 0.2% genipin showed minimal toxicity toward keratocytes and endothelial cells. Genipin is safer than UVA-riboflavin crosslinking for thin corneas. PMID:28761323

  11. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual energy X-ray luggage inspection systems at airport security checkpoints and other strategic places. In this paper, a novel material detection algorithm based on statistically trainable models using two-dimensional power density functions (PDFs) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from the transmission measurement values of low- and high-energy X-ray images by Gaussian mixture models (GMMs). The material label of each object pixel is determined from the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic, and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme over an image neighborhood as a post-processing stage. Using background removal and denoising stages, the high- and low-energy X-ray images are enhanced in a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to the constructed color image, which uses three colors (orange, blue, and green) to represent the organic, metallic, and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic, and mixed materials with acceptable accuracy.
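The per-pixel labeling step can be sketched as below, assuming the per-category mixture parameters have already been trained; the hand-set means, widths, and weights here are purely illustrative, not values from the paper:

```python
import math

def gauss2d(x, y, mx, my, sx, sy, rho=0.0):
    """Bivariate normal density at (x, y)."""
    zx, zy = (x - mx) / sx, (y - my) / sy
    q = (zx * zx - 2.0 * rho * zx * zy + zy * zy) / (1.0 - rho * rho)
    return math.exp(-q / 2.0) / (2.0 * math.pi * sx * sy * math.sqrt(1.0 - rho * rho))

def mixture_pdf(x, y, components):
    """Weighted sum of Gaussian components: [(weight, (mx, my, sx, sy)), ...]."""
    return sum(w * gauss2d(x, y, *p) for w, p in components)

# illustrative (hand-set, not trained) mixtures over (low, high) transmissions
MODELS = {
    "organic":  [(1.0, (0.70, 0.80, 0.08, 0.08))],
    "metallic": [(1.0, (0.20, 0.45, 0.10, 0.10))],
    "mixed":    [(0.5, (0.45, 0.60, 0.10, 0.10)),
                 (0.5, (0.55, 0.70, 0.10, 0.10))],
}

def classify_pixel(low, high):
    """Assign the material category whose mixture gives the highest density."""
    return max(MODELS, key=lambda m: mixture_pdf(low, high, MODELS[m]))
```

In the full algorithm these per-pixel labels would then be smoothed by the maximum-voting scheme over a neighborhood before coloring the output image.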

  12. A projection operator method for the analysis of magnetic neutron form factors

    NASA Astrophysics Data System (ADS)

    Kaprzyk, S.; Van Laar, B.; Maniawski, F.

    1981-03-01

    A set of projection operators in matrix form has been derived on the basis of decomposition of the spin density into a series of fully symmetrized cubic harmonics. This set of projection operators allows a formulation of the Fourier analysis of magnetic form factors in a convenient way. The presented method is capable of checking the validity of various theoretical models used for spin density analysis up to now. The general formalism is worked out in explicit form for the fcc and bcc structures and deals with that part of spin density which is contained within the sphere inscribed in the Wigner-Seitz cell. This projection operator method has been tested on the magnetic form factors of nickel and iron.

  13. The structure and statistics of interstellar turbulence

    NASA Astrophysics Data System (ADS)

    Kritsuk, A. G.; Ustyugov, S. D.; Norman, M. L.

    2017-06-01

    We explore the structure and statistics of multiphase, magnetized ISM turbulence in the local Milky Way by means of driven periodic box numerical MHD simulations. Using the higher-order-accurate piecewise-parabolic method on a local stencil (PPML), we carry out a small parameter survey varying the mean magnetic field strength and density while fixing the rms velocity to observed values. We quantify numerous characteristics of the transient and steady-state turbulence, including its thermodynamics and phase structure, kinetic and magnetic energy power spectra, structure functions, and distribution functions of density, column density, pressure, and magnetic field strength. The simulations reproduce many observables of the local ISM, including molecular clouds, such as the ratio of turbulent to mean magnetic field at 100 pc scale, the mass and volume fractions of thermally stable H I, the lognormal distribution of column densities, the mass-weighted distribution of thermal pressure, and the linewidth-size relationship for molecular clouds. Our models predict the shape of magnetic field probability density functions (PDFs), which are strongly non-Gaussian, and the relative alignment of magnetic field and density structures. Finally, our models show how the observed low rates of star formation per free-fall time are controlled by the multiphase thermodynamics and large-scale turbulence.

  14. Thin layer asphaltic concrete density measuring using nuclear gages.

    DOT National Transportation Integrated Search

    1989-03-01

    A Troxler 4640 thin layer nuclear gage was evaluated under field conditions to determine if it would provide improved accuracy of density measurements on asphalt overlays of 1-3/4 and 2 inches in thickness. Statistical analysis shows slightly improve...

  15. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging. In practice, assumptions of underlying linearity and/or Gaussianity are often used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are at times detrimental to tracking and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions about the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss the computational limitations that hinder proper analysis of large breakup events.
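The SMC filtering idea can be illustrated with a bootstrap particle filter on a toy one-dimensional problem; this is not the paper's tracker, just the basic propagate-weight-resample loop that underlies it, with all parameter values illustrative:

```python
import math
import random

def particle_filter(obs, n=2000, q=0.5, r=1.0, seed=0):
    """Bootstrap sequential Monte Carlo filter for a 1D random-walk state
    observed with Gaussian noise (a toy stand-in for nonlinear orbital
    dynamics).  q: process noise SD, r: measurement noise SD."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 5.0) for _ in range(n)]        # samples from the prior
    estimates = []
    for z in obs:
        parts = [p + rng.gauss(0.0, q) for p in parts]     # propagate through dynamics
        ws = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in parts]  # likelihood weights
        estimates.append(sum(w * p for w, p in zip(ws, parts)) / sum(ws))
        parts = rng.choices(parts, weights=ws, k=n)        # multinomial resampling
    return estimates
```

Because the posterior is represented by weighted samples rather than a mean and covariance, no Gaussianity assumption is needed; the same sampling idea extends, at much greater cost, to weighting competing data-association hypotheses.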

  16. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  17. A method for age-matched OCT angiography deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries.

    PubMed

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5 × 4.5 mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects; mean age 41.5 years, range 11-82 years) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life, and deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; i.e., noncapillary vessel) perfusion density was also studied for the impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60-year group was significantly lower than RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age; however, ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider the changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method for detecting changes to peripapillary perfusion in disease.
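The deviation-mapping step reduces to a per-pixel z-score against the age-matched normative mean and SD maps; a minimal sketch (the 2 SD cutoff below is an illustrative choice, not one stated in the abstract):

```python
def deviation_map(density, mean_map, sd_map):
    """Per-pixel z-score of a patient's RPC perfusion density map relative to
    the age-matched normative mean and standard deviation maps (2D lists)."""
    return [[(d - m) / s for d, m, s in zip(dr, mr, sr)]
            for dr, mr, sr in zip(density, mean_map, sd_map)]

def flag_abnormal(zmap, cutoff=-2.0):
    """Flag pixels falling more than |cutoff| SDs below the age-matched mean."""
    return [[z < cutoff for z in row] for row in zmap]
```

Usage: with a normative mean of 42.5% and SD of 1.5%, a pixel at 36% density has a z-score of about -4.3 and would be flagged, while a pixel at 40% would not.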

  18. Mass density slope of elliptical galaxies from strong lensing and resolved stellar kinematics

    NASA Astrophysics Data System (ADS)

    Lyskova, N.; Churazov, E.; Naab, T.

    2018-04-01

    We discuss constraints on the mass density distribution (parametrized as ρ ∝ r^-γ) in early-type galaxies provided by strong lensing and stellar kinematics data. The constraints come from mass measurements at two `pinch' radii. One `pinch' radius, r1 = 2.2 R_Einst, is defined such that the Einstein (i.e. aperture) mass can be converted into the spherical mass almost independently of the mass model. The other `pinch' radius, r2 = R_opt, is chosen so that the dynamical mass, derived from the line-of-sight velocity dispersion, is least sensitive to the anisotropy of stellar orbits. We verified the performance of this approach on a sample of simulated elliptical galaxies and on a sample of 15 SLACS lens galaxies at 0.01 ≤ z ≤ 0.35, which have already been analysed in Barnabè et al. with a self-consistent joint lensing and kinematic code. For massive simulated galaxies, the density slope γ is recovered with an accuracy of ~13 per cent, unless r1 and r2 happen to be close to each other. For SLACS galaxies, we found good overall agreement with the results of Barnabè et al., with a sample-averaged slope γ = 2.1 ± 0.05. Although the two-pinch-radii approach has larger statistical uncertainties, it is much simpler and uses only a few arithmetic operations on directly observable quantities.
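The arithmetic behind the two-pinch-radii approach is short: for ρ ∝ r^-γ the enclosed mass scales as M(r) ∝ r^(3-γ) (valid for γ < 3), so two mass measurements fix the slope. A minimal sketch:

```python
import math

def density_slope(m1, r1, m2, r2):
    """Power-law density slope gamma (rho ~ r**-gamma) from the enclosed
    masses m1, m2 at two pinch radii r1, r2, using M(r) ~ r**(3 - gamma)."""
    return 3.0 - math.log(m2 / m1) / math.log(r2 / r1)

# sanity check: an isothermal sphere (gamma = 2) has M growing linearly with r
print(density_slope(m1=1.0, r1=5.0, m2=2.0, r2=10.0))
```

The estimate degrades when r1 and r2 nearly coincide, since the denominator log(r2/r1) then amplifies the mass-measurement errors, which matches the accuracy caveat in the abstract.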

  19. A GIS-based automated procedure for landslide susceptibility mapping by the Conditional Analysis method: the Baganza valley case study (Italian Northern Apennines)

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2006-08-01

    Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity: landslide susceptibility is simply expressed as the landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected with the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure makes it possible to assess the reliability of the resulting model, while the simple mean deviation of the density values across the factor-class combinations helps to evaluate how well landslide density is distributed among them. The procedure was applied to a relatively small basin (167 km2) in the Italian Northern Apennines, considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
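    In the Conditional Analysis method, the susceptibility of a factor-class combination is just the landslide density observed within it. A toy sketch with hypothetical 2×3 rasters (GRASS itself would operate on full maps; class values and the integer encoding are illustrative):

```python
import numpy as np

# hypothetical rasters: each cell holds a factor class
lithology = np.array([[0, 0, 1], [1, 1, 0]])
slope_cls = np.array([[0, 1, 1], [0, 1, 0]])
landslide = np.array([[1, 0, 1], [0, 1, 0]], dtype=bool)  # inventory map

# encode each unique class combination as a single integer key
combo = lithology * 10 + slope_cls
density = {}
for c in np.unique(combo):
    cells = combo == c
    # landslide density = landslide cells / total cells in this combination
    density[int(c)] = landslide[cells].sum() / cells.sum()
```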

  20. Impact of neutral density fluctuations on gas puff imaging diagnostics

    NASA Astrophysics Data System (ADS)

    Wersal, C.; Ricci, P.

    2017-11-01

    A three-dimensional turbulence simulation of the SOL and edge regions of a toroidally limited tokamak is carried out. The simulation couples self-consistently the drift-reduced two-fluid Braginskii equations to a kinetic equation for neutral atoms. A diagnostic neutral gas puff on the low-field side midplane is included, and the impact of neutral density fluctuations on D_α light emission is investigated. We find that neutral density fluctuations affect the D_α emission. In particular, at a radial distance from the gas puff smaller than the neutral mean free path, neutral density fluctuations are anti-correlated with plasma density, electron temperature, and D_α fluctuations. It follows that the neutral fluctuations reduce the D_α emission in most of the observed region and, therefore, have to be taken into account when interpreting the amplitude of the D_α emission. On the other hand, higher-order statistical moments (skewness, kurtosis) and turbulence characteristics (such as the correlation length or the autocorrelation time) are not significantly affected by the neutral fluctuations. At distances from the gas puff larger than the neutral mean free path, a non-local shadowing effect influences the neutral density fluctuations. There, the D_α fluctuations are correlated with the neutral density fluctuations, and the high-order statistical moments and measurements of other turbulence properties are strongly affected by the neutral density fluctuations.

  1. A Balanced Approach to Adaptive Probability Density Estimation.

    PubMed

    Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy

    2017-01-01

    Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
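    BADE itself optimizes the smoothing at each point via efficient nearest-neighbor searches; a much-simplified balloon-type variant, which takes the distance to the k-th nearest sample as the local Gaussian bandwidth, conveys the idea. This is a generic sketch of adaptive density estimation, not the published algorithm:

```python
import numpy as np

def knn_adaptive_kde(x_eval, data, k=10):
    """Balloon-type adaptive density estimate: at each evaluation point
    the Gaussian bandwidth is the distance to the k-th nearest sample,
    so sparse regions automatically get more smoothing."""
    data = np.asarray(data)
    out = np.empty(len(x_eval))
    for i, x in enumerate(x_eval):
        d = np.sort(np.abs(data - x))
        h = max(d[k - 1], 1e-12)            # k-NN bandwidth at this point
        out[i] = np.mean(np.exp(-0.5 * ((x - data) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    return out

rng = np.random.default_rng(0)
data = rng.normal(size=2000)                # very uneven tails vs. center
est = knn_adaptive_kde(np.array([0.0, 5.0]), data, k=50)
```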

  2. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
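    Method 1, the classical Monte Carlo p-value, can be sketched generically. The add-one estimator below is the standard way to obtain a valid exact test; the statistic being simulated is a placeholder, not the paper's empirical-likelihood ratio:

```python
import numpy as np

def mc_pvalue(t_obs, simulate_stat, n_mc=999, rng=None):
    """Classical Monte Carlo p-value: simulate the null distribution of
    the test statistic and use the add-one estimator
    p = (1 + #{T* >= T_obs}) / (n_mc + 1), which is a valid exact test."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sims = np.array([simulate_stat(rng) for _ in range(n_mc)])
    return (1 + np.sum(sims >= t_obs)) / (n_mc + 1)

# placeholder null: a standard-normal test statistic
p_mid = mc_pvalue(0.0, lambda r: float(r.normal()))    # unremarkable statistic
p_far = mc_pvalue(10.0, lambda r: float(r.normal()))   # extreme statistic
```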

  3. Long-ranged Fermi-Pasta-Ulam systems in thermal contact: Crossover from q-statistics to Boltzmann-Gibbs statistics

    NASA Astrophysics Data System (ADS)

    Bagchi, Debarshee; Tsallis, Constantino

    2017-04-01

    The relaxation to equilibrium of two long-range-interacting Fermi-Pasta-Ulam-like models (β type) in thermal contact is numerically studied. These systems, with different sizes and energy densities, are coupled to each other by a few thermal contacts, which are short-range harmonic springs. By using the kinetic definition of temperature, we compute the time evolution of temperature and energy density of the two systems. Eventually, for times t > t_eq, the temperature and energy density of the coupled system equilibrate to values consistent with standard Boltzmann-Gibbs thermostatistics. The equilibration time t_eq depends on the system size N as t_eq ∼ N^γ, where γ ≃ 1.8. We compute the velocity distribution P(v) of the oscillators of the two systems during the relaxation process. We find that P(v) is non-Gaussian and is remarkably close to a q-Gaussian distribution for all times before thermal equilibrium is reached. During the relaxation process we observe q > 1, while close to t = t_eq the value of q converges to unity and P(v) approaches a Gaussian. Thus the relaxation phenomenon in long-ranged systems connected by a thermal contact can be generically described as a crossover from q-statistics to Boltzmann-Gibbs statistics.
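    The q-Gaussian referred to here is built from the q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)), which recovers the ordinary exponential, and hence the Gaussian, in the limit q → 1, and has heavier tails for q > 1. A small numerical check (β = 1 and the unnormalized form are illustrative choices):

```python
import numpy as np

def q_exponential(x, q):
    """e_q(x) = [1 + (1-q) x]^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def q_gaussian_unnorm(v, q, beta=1.0):
    """Unnormalized q-Gaussian P(v) ∝ e_q(-beta v^2)."""
    return q_exponential(-beta * v * v, q)
```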

  4. Growth characteristics of aquatic macrophytes cultured in nutrient-enriched water. I. Water hyacinth, water lettuce, and pennywort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, K.R.; DeBusk, W.F.

    Seasonal growth characteristics and biomass yield potential of 3 floating aquatic macrophytes cultured under nutrient-nonlimiting conditions were evaluated in central Florida's climatic conditions. The growth cycle (growth curve) of the plants was considered complete when maximum plant density was reached and no additional increase in growth was recorded. Biomass yield per unit area and time was found to be maximum in the linear phase of the growth curve; plant density in this phase was defined as "operational plant density," a density range in which a biomass production system is operated to obtain the highest possible yields. Biomass yields were found to be 106, 72, and 41 t (dry wt) ha^-1 yr^-1, respectively, for water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes), and pennywort (Hydrocotyle umbellata). Operational plant density was found to be in the range of 500-2000 g dry wt m^-2 for water hyacinth, 200-700 g dry wt m^-2 for water lettuce, and 250-650 g dry wt m^-2 for pennywort. Seasonality was observed in growth rates but not in operational plant density. Specific growth rate (% increase per day) was found to be maximum at low plant densities and decreased as the plant density increased. Results show that water hyacinth and water lettuce can be successfully grown for a period of about 10 mo, while pennywort, a cool-season plant, can be integrated into a water hyacinth/water lettuce biomass production system to obtain high yields in the winter.
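    The growth curve described, with areal yield maximal in the linear phase and specific growth rate declining with density, matches the standard logistic picture in which the absolute growth rate peaks at half the maximum (ceiling) density. The sketch below uses hypothetical parameter values, not the paper's:

```python
import numpy as np

def logistic_rate(W, r, K):
    """Areal growth rate dW/dt = r*W*(1 - W/K) of a stand at density W
    (logistic growth assumption); it is maximal at W = K/2, i.e. in the
    'linear phase' where a production system would be operated."""
    return r * W * (1.0 - W / K)

K, r = 2000.0, 0.1                      # hypothetical ceiling density and rate
W = np.linspace(1, K, 1999)
W_opt = W[np.argmax(logistic_rate(W, r, K))]   # density of peak yield
```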

  6. Density regulation in Northeast Atlantic fish populations: Density dependence is stronger in recruitment than in somatic growth.

    PubMed

    Zimmermann, Fabian; Ricard, Daniel; Heino, Mikko

    2018-05-01

    Population regulation is a central concept in ecology, yet in many cases its presence and the underlying mechanisms are difficult to demonstrate. The current paradigm maintains that marine fish populations are predominantly regulated by density-dependent recruitment. While it is known that density-dependent somatic growth can be present too, its general importance remains unknown and most practical applications neglect it. This study aimed to close this gap by quantifying and comparing, for the first time, density dependence in growth and recruitment over a large set of fish populations. We fitted density-dependent models to time-series data on population size, recruitment and age-specific weight from commercially exploited fish populations in the Northeast Atlantic Ocean and the Baltic Sea. Data were standardized to enable a direct comparison within and among populations, and estimated parameters were used to quantify the impact of density regulation on population biomass. Statistically significant density dependence in recruitment was detected in a large proportion of populations (70%), whereas the prevalence of density dependence in somatic growth depended heavily on the method (26% or 69%). Despite age-dependent variability, density dependence in recruitment was consistently stronger among age groups and between the alternative approaches that use weight-at-age or weight increments to assess growth. Estimates of density-dependent reduction in biomass underlined these results: 97% of populations with statistically significant parameters for growth and recruitment showed a larger impact of density-dependent recruitment on population biomass. The results reaffirm the importance of density-dependent recruitment in marine fishes, yet they also show that density dependence in somatic growth is not uncommon. Furthermore, the results are important from an applied perspective because density dependence in somatic growth affects productivity and catch composition, and therefore the benefits of maintaining fish populations at specific densities. © 2018 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
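    For concreteness, density-dependent recruitment is often modelled with a Beverton-Holt form, in which per-capita recruitment declines as spawner density grows. This is a generic illustration, not necessarily the model fitted in this study:

```python
def beverton_holt(S, a, b):
    """Beverton-Holt stock-recruitment curve R = a*S / (1 + b*S):
    recruitment saturates at a/b, and per-capita recruitment
    a/(1 + b*S) falls monotonically with spawner density S."""
    return a * S / (1.0 + b * S)

# per-capita recruitment at low vs. high spawner density (toy parameters)
low = beverton_holt(10.0, 2.0, 0.01) / 10.0
high = beverton_holt(100.0, 2.0, 0.01) / 100.0
```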

  7. Zubarev's Nonequilibrium Statistical Operator Method in the Generalized Statistics of Multiparticle Systems

    NASA Astrophysics Data System (ADS)

    Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.

    2018-01-01

    We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics in Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.

  8. Charge Density Engineering: A Feasibility Study

    DTIC Science & Technology

    2013-11-22

    15. Statistical Learning Guided Design of Materials, Fritz Haber Institute – Theory Group, Berlin, Germany, June 17th 2013; 16. ...Technology, San Diego, CA, June 6th 2013

  9. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  10. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
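    The K distribution mentioned here has a standard compound ("doubly stochastic") representation: exponentially distributed speckle intensity whose local mean is itself gamma-distributed, reflecting fluctuations in the effective scatterer number. The simulation below illustrates how the intensity contrast then exceeds the Gaussian-speckle value of 1 at low effective scatterer number (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, mu, n = 1.0, 1.0, 200_000   # shape parameter, mean intensity, samples

# Compound representation of K-distributed intensity: exponential speckle
# with a gamma-distributed local mean.  Pure Gaussian speckle would give
# an exponential intensity with contrast var/mean^2 = 1.
local_mean = rng.gamma(shape=alpha, scale=mu / alpha, size=n)
I_k = rng.exponential(local_mean)            # K-distributed intensity
I_exp = rng.exponential(mu, size=n)          # Gaussian-speckle reference

contrast_k = I_k.var() / I_k.mean() ** 2     # theory: 1 + 2/alpha = 3 here
contrast_e = I_exp.var() / I_exp.mean() ** 2 # theory: 1
```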

  11. Diamagnetic currents

    NASA Astrophysics Data System (ADS)

    Macris, N.; Martin, Ph. A.; Pulé, J. V.

    1988-06-01

    We study the diamagnetic surface currents of particles in thermal equilibrium subjected to a constant magnetic field. The current density of independent electrons with Boltzmann (respectively, Fermi) statistics has a Gaussian (respectively, exponential) bound on its falloff into the bulk. For a system of interacting particles at low activity with Boltzmann statistics, the current density is localized near the boundary and integrable when the two-body potential decays as |x|^-α with α > 4 in three dimensions. In all cases, the integral of the current density is independent of the nature of the confining wall and correctly related to the bulk magnetisation. The results hold for hard and soft walls and all field strengths. The analysis relies on the Feynman-Kac-Ito representation of the Gibbs state and on specific properties of the Brownian bridge process.

  12. Nuclear level densities of 64,66Zn from neutron evaporation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramirez, A. P. D.; Voinov, A. V.; Grimes, S. M.

    Double differential cross sections of neutrons from d + 63,65Cu reactions have been measured at deuteron energies of 6 and 7.5 MeV. The cross sections measured at backward angles have been compared to theoretical calculations in the framework of the statistical Hauser-Feshbach model. Three different level density models were tested: the Fermi-gas model, the Gilbert-Cameron model, and the microscopic approach through the Hartree-Fock-Bogoliubov method (HFBM). The calculations using the Gilbert-Cameron model are in best agreement with our experimental data. Level densities of the residual nuclei 64Zn and 66Zn have been obtained from statistical neutron evaporation spectra. In conclusion, the angle-integrated cross sections have been analyzed with the exciton model of nuclear reactions.
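    Of the three level-density models compared, the Fermi-gas form is the simplest; Bethe's formula for the total level density can be sketched directly. The level-density parameter value used below is purely illustrative:

```python
import math

def fermi_gas_level_density(E, a):
    """Bethe's Fermi-gas formula for the total nuclear level density
    (MeV^-1): rho(E) = sqrt(pi)/(12 a^(1/4) E^(5/4)) * exp(2 sqrt(a E)),
    with excitation energy E (MeV) and level-density parameter a (MeV^-1)."""
    return (math.sqrt(math.pi) / (12.0 * a**0.25 * E**1.25)
            * math.exp(2.0 * math.sqrt(a * E)))
```

The exponential factor dominates, so the level density grows extremely rapidly with excitation energy, which is what makes statistical (Hauser-Feshbach) treatments of the evaporation spectra appropriate.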

  13. Nuclear level densities of 64,66Zn from neutron evaporation

    DOE PAGES

    Ramirez, A. P. D.; Voinov, A. V.; Grimes, S. M.; ...

    2013-12-26

    Double differential cross sections of neutrons from d + 63,65Cu reactions have been measured at deuteron energies of 6 and 7.5 MeV. The cross sections measured at backward angles have been compared to theoretical calculations in the framework of the statistical Hauser-Feshbach model. Three different level density models were tested: the Fermi-gas model, the Gilbert-Cameron model, and the microscopic approach through the Hartree-Fock-Bogoliubov method (HFBM). The calculations using the Gilbert-Cameron model are in best agreement with our experimental data. Level densities of the residual nuclei 64Zn and 66Zn have been obtained from statistical neutron evaporation spectra. In conclusion, the angle-integrated cross sections have been analyzed with the exciton model of nuclear reactions.

  14. Assessing Urban Streets Network Vulnerability against Earthquake Using GIS - Case Study: 6TH Zone of Tehran

    NASA Astrophysics Data System (ADS)

    Rastegar, A.

    2017-09-01

    Great earthquakes cause huge damage to human life. Vulnerability of street networks causes rescue operations to encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of great cities, due to narrow access roads, large distances from medical care centers and location in areas with high seismic risk, will lead to a perilous and unpredictable situation in case of an earthquake. Zone #6 of Tehran, with a population of 229,980 (3.6% of the city population) and an area of 20 km2 (3.2% of the city area), is one of the main municipal zones of Tehran (Iran center of statistics, 2006). Major land-uses, like ministries, embassies, universities, general hospitals and medical centers, big financial firms and so on, manifest the high importance of this region on the local and national scale. In this paper, by employing indices such as access to medical centers, street inclusion, building and population density, land-use, PGA and building quality, the vulnerability degree of street networks in zone #6 against earthquake is calculated through overlaying maps and data with the IHWP method in GIS. This article concludes that buildings alongside streets with high population and building density, low building quality, long distance from rescue centers and a high level of inclusion show a high degree of vulnerability, compared with other buildings. Also, moving from the north to the south of the zone, vulnerability increases. Likewise, highways and streets with substantial width and low building and population density show low vulnerability values.
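    The overlay step amounts to a weighted sum of normalized factor rasters. The layers and weights below are hypothetical stand-ins for the study's IHWP-derived factors and weights:

```python
import numpy as np

# Hypothetical normalized factor layers (0 = least, 1 = most vulnerable)
layers = {
    "building_density": np.array([[0.9, 0.2], [0.6, 0.1]]),
    "building_quality": np.array([[0.8, 0.3], [0.5, 0.2]]),
    "distance_to_care": np.array([[0.7, 0.4], [0.9, 0.3]]),
}
# Illustrative weights standing in for the IHWP-derived ones (sum to 1)
weights = {"building_density": 0.5, "building_quality": 0.3, "distance_to_care": 0.2}

# Weighted overlay: per-cell vulnerability score
vulnerability = sum(w * layers[k] for k, w in weights.items())
```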

  15. Path Integrals for Electronic Densities, Reactivity Indices, and Localization Functions in Quantum Systems

    PubMed Central

    Putz, Mihai V.

    2009-01-01

    The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. Yet, the use of the path integral formalism for electronic density prescription presents several advantages: it assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles the Schrödinger equation; and allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism are presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix and/or the canonical density are rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for Bohr’s quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along with the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as the Markovian generalizations of Becke-Edgecombe electronic localization functions – all advocate for the reliability of assuming the PI formalism of quantum mechanics as a versatile one, suited for analytical and/or computational modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driven) many-electronic systems. PMID:20087467

  16. Path integrals for electronic densities, reactivity indices, and localization functions in quantum systems.

    PubMed

    Putz, Mihai V

    2009-11-10

    The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. Yet, the use of the path integral formalism for electronic density prescription presents several advantages: it assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles the Schrödinger equation; and allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism are presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix and/or the canonical density are rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for Bohr's quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along with the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as the Markovian generalizations of Becke-Edgecombe electronic localization functions - all advocate for the reliability of assuming the PI formalism of quantum mechanics as a versatile one, suited for analytical and/or computational modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driven) many-electronic systems.
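    As a concrete instance of "partition function computing": for the harmonic oscillator the Feynman path integral yields the closed form Z = 1/(2 sinh(βħω/2)), which can be checked against the direct sum over the levels E_n = ħω(n + 1/2). The units below (ħ = ω = 1) are a convenience choice:

```python
import math

def Z_harmonic(beta, omega=1.0, hbar=1.0):
    """Closed-form canonical partition function of the quantum harmonic
    oscillator, as obtained from the path integral:
    Z = 1 / (2 sinh(beta*hbar*omega/2))."""
    return 1.0 / (2.0 * math.sinh(0.5 * beta * hbar * omega))

def Z_sum(beta, omega=1.0, hbar=1.0, nmax=200):
    """Direct sum over levels E_n = hbar*omega*(n + 1/2) for comparison."""
    return sum(math.exp(-beta * hbar * omega * (n + 0.5)) for n in range(nmax))
```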

  17. From creation and annihilation operators to statistics

    NASA Astrophysics Data System (ADS)

    Hoyuelos, M.

    2018-01-01

    A procedure to derive the partition function of non-interacting particles with exotic or intermediate statistics is presented. The partition function is directly related to the associated creation and annihilation operators that obey some specific commutation or anti-commutation relations. The cases of Gentile statistics, quons, Polychronakos statistics, and ewkons are considered. Ewkons statistics was recently derived from the assumption of free diffusion in energy space (Hoyuelos and Sisterna, 2016); an ideal gas of ewkons has negative pressure, a feature that makes them suitable for the description of dark energy.
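    For the Gentile case (at most p particles per state), the single-mode grand partition function is a finite geometric sum that interpolates between Fermi-Dirac (p = 1) and Bose-Einstein (p → ∞). A sketch with a numerical occupation check; this illustrates the partition-function side of the construction, not the paper's operator derivation:

```python
import math

def gentile_mode_Z(eps, mu, beta, p):
    """Grand-canonical partition function of one mode under Gentile
    statistics (occupancy 0..p):
    Z = sum_{k=0}^{p} x^k = (1 - x^(p+1)) / (1 - x),  x = exp(-beta*(eps-mu)).
    p = 1 recovers Fermi-Dirac; p -> infinity recovers Bose-Einstein."""
    x = math.exp(-beta * (eps - mu))
    return (1.0 - x ** (p + 1)) / (1.0 - x)

def gentile_occupation(eps, mu, beta, p, h=1e-6):
    """Mean occupancy n = (1/beta) d(ln Z)/d(mu), by central difference."""
    return (math.log(gentile_mode_Z(eps, mu + h, beta, p))
            - math.log(gentile_mode_Z(eps, mu - h, beta, p))) / (2.0 * h * beta)
```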

  18. Cosmological Constraints from Fourier Phase Statistics

    NASA Astrophysics Data System (ADS)

    Ali, Kamran; Obreschkow, Danail; Howlett, Cullan; Bonvin, Camille; Llinares, Claudio; Oliveira Franco, Felipe; Power, Chris

    2018-06-01

    Most statistical inference from cosmic large-scale structure relies on two-point statistics, i.e. on the galaxy-galaxy correlation function (2PCF) or the power spectrum. These statistics capture the full information encoded in the Fourier amplitudes of the galaxy density field but do not describe the Fourier phases of the field. Here, we quantify the information contained in the line correlation function (LCF), a three-point Fourier phase correlation function. Using cosmological simulations, we estimate the Fisher information (at redshift z = 0) of the 2PCF, LCF and their combination, regarding the cosmological parameters of the standard ΛCDM model, as well as a Warm Dark Matter (WDM) model and the f(R) and Symmetron modified gravity models. The galaxy bias is accounted for at the level of a linear bias. The relative information of the 2PCF and the LCF depends on the survey volume, sampling density (shot noise) and the bias uncertainty. For a volume of 1 h^-3 Gpc^3, sampled with points of mean density n̄ = 2 × 10^-3 h^3 Mpc^-3 and a bias uncertainty of 13%, the LCF improves the parameter constraints by about 20% in the ΛCDM cosmology and potentially even more in alternative models. Finally, since a linear bias only affects the Fourier amplitudes (2PCF), but not the phases (LCF), the combination of the 2PCF and the LCF can be used to break the degeneracy between the linear bias and σ8, present in two-point statistics.
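    The closing point, that a linear bias rescales Fourier amplitudes but leaves phases untouched, is easy to verify numerically on a toy 1-D field (the bias value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.normal(size=64)          # toy 1-D overdensity field
b = 1.8                              # illustrative linear galaxy bias

# Fourier amplitudes and phases of matter vs. biased (galaxy) field
amp_m, phase_m = np.abs(np.fft.rfft(delta)), np.angle(np.fft.rfft(delta))
amp_g, phase_g = np.abs(np.fft.rfft(b * delta)), np.angle(np.fft.rfft(b * delta))
# amplitudes scale by b while phases are untouched, so phase statistics
# such as the LCF are insensitive to a linear bias
```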

  19. Acoustic emission analysis for the detection of appropriate cutting operations in honing processes

    NASA Astrophysics Data System (ADS)

    Buj-Corral, Irene; Álvarez-Flórez, Jesús; Domínguez-Fernández, Alejandro

    2018-01-01

    In the present paper, acoustic emission was studied in honing experiments with abrasive densities of 15, 30, 45 and 60. In addition, 2D and 3D roughness, material removal rate and tool wear were determined. In order to treat the sound signal emitted during the machining process, two methods of analysis were compared: the Fast Fourier Transform (FFT) and the Hilbert-Huang Transform (HHT). When density 15 is used, the number of cutting grains is insufficient to provide correct cutting, while clogging appears with densities 45 and 60. These results were confirmed by means of treatment of the sound signal. In addition, a new parameter S was defined as the ratio between the energy at low and at high frequencies contained within the emitted sound. The selected density of 30 corresponds to S values between 0.1 and 1. Correct cutting operations in honing processes thus depend on the density of the abrasive employed, and the density value to be used can be selected by means of measurement and analysis of acoustic emissions during the honing operation. In this way, honing processes can be monitored without needing to stop the process.
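    A parameter like S can be computed as a band-energy ratio of the FFT spectrum. The split frequency and the synthetic test signal below are illustrative assumptions, since the paper's exact frequency bands are not given here:

```python
import numpy as np

def band_energy_ratio(signal, fs, f_split=1000.0):
    """Hypothetical reconstruction of an S-like parameter: ratio of
    spectral energy below f_split (Hz) to the energy above it."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    low = spec[freqs < f_split].sum()
    high = spec[freqs >= f_split].sum()
    return low / high

# synthetic test: 300 Hz tone (amplitude 1) + 5 kHz tone (amplitude 0.5)
fs = 20000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
S = band_energy_ratio(sig, fs)      # power ratio (1/0.5)^2 = 4
```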

  20. TVT-Exact and midurethral sling (SLING-IUFT) operative procedures: a randomized study

    PubMed Central

    Aniulis, Povilas; Skaudickas, Darijus

    2015-01-01

    Objectives The aim of the study is to compare the results, effectiveness and complications of TVT-Exact and midurethral sling (SLING-IUFT) operations in the treatment of female stress urinary incontinence (SUI). Methods A single-center nonblind, randomized study of women with SUI who were randomized to TVT-Exact and SLING-IUFT was performed by one surgeon from April 2009 to April 2011. SUI was diagnosed on coughing and Valsalva test, and urodynamics (cystometry and uroflowmetry) were assessed before operation and 1 year after surgery. This was a prospective randomized study. The follow-up period was 12 months. 76 patients were operated on using the TVT-Exact operation and 78 patients using the SLING-IUFT operation. There were no statistically significant differences between groups for BMI, parity, menopausal status and prolapse stage (no patients had cystocele greater than stage II). Results Mean operative time was significantly shorter in the SLING-IUFT group (19 ± 5.6 min) compared with the TVT-Exact group (27 ± 7.1 min). There were statistically significant differences in the effectiveness of both procedures: TVT-Exact at 94.5% and SLING-IUFT at 61.2% after one year. Hospital stay was statistically significantly shorter in the SLING-IUFT group (1.2 ± 0.5 days) compared with the TVT-Exact group (3.5 ± 1.5 days). Statistically significantly fewer complications occurred in the SLING-IUFT group. Conclusion The TVT-Exact and SLING-IUFT operations are both effective for surgical treatment of female stress urinary incontinence. The SLING-IUFT involved a shorter operation time and a lower complication rate; the TVT-Exact procedure had statistically significantly more complications than the SLING-IUFT operation, but a higher effectiveness. PMID:28352711

  2. Quantification and Correlation of Angiogenesis with Macrophages by Histomorphometric Method in Central and Peripheral Giant Cell Granuloma: An Immunohistochemical Analysis.

    PubMed

    Kumar, Varsha Vimal; Krishanappa, Savita Jangal; Prakash, Smitha Gowdra; Channabasaviah, Girish Hemdal; Murgod, Sanjay; Pujari, Ravikumar; Kamat, Mamata Sharad

    2016-03-01

    Angiogenesis is a fundamental process that affects physiologic reactions and pathological processes such as tumour development and metastasis. It is the process of formation of new microvessels from preexisting vessels. The purpose of this study was to evaluate angiogenesis and the macrophage index, and to correlate the impact of macrophages on angiogenesis, in central and peripheral giant cell granulomas by immunohistochemically evaluating microvessel density, microvessel perimeter and macrophage index. Immunohistochemical analysis was carried out on 20 cases each of central and peripheral giant cell granulomas for CD34 and CD68 protein expression. Inferential statistical analysis was performed using the independent Student's t-test to assess microvessel density, microvessel perimeter and macrophage index on a continuous scale between Group I and Group II. The level of significance was set at 5%. Further bivariate analysis using Pearson's correlation test was carried out to examine the relationship between microvessel density and macrophage index in each group. Microvessel density, microvessel perimeter and macrophage index were higher in central giant cell granuloma than in peripheral giant cell granuloma. The correlation between microvessel density and macrophage index in these two lesions was statistically insignificant. Angiogenesis as well as the number of macrophages appeared to increase in central giant cell granuloma in the present study. These findings suggest that macrophages may upregulate angiogenesis in these giant cell granulomas and that angiogenesis does have a role in clinical behaviour. However, we could not establish a positive correlation between microvessel density and macrophage index as the values were statistically insignificant, presumably due to the small number of samples studied.
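
    The bivariate step can be illustrated with SciPy's Pearson correlation on hypothetical per-case values; the numbers below are simulated stand-ins, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical per-case measurements (simulated, not study data):
# microvessel density (e.g. CD34-positive vessels per field) and a
# macrophage index (e.g. CD68-positive cell fraction), built to correlate.
mvd = rng.normal(40.0, 8.0, size=20)
mi = 0.8 * mvd + rng.normal(0.0, 5.0, size=20)

r, p = stats.pearsonr(mvd, mi)  # r in [-1, 1], two-sided p-value
```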

  3. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N; White, Devin A; Urban, Marie L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
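
    The paper's elicitation algorithm maps answers through a bivariate Gaussian onto the Beta parameter space; as a much simpler stand-in, a method-of-moments encoding shows how an elicited mean and uncertainty can determine a Beta prior, together with the feasibility constraint on the variance.

```python
def beta_from_mean_sd(mean, sd):
    """Method-of-moments Beta(a, b) from an elicited mean and standard deviation.

    This is an illustrative encoding, not the bivariate-Gaussian algorithm
    described in the paper.  A feasible (positive) solution requires
    sd**2 < mean * (1 - mean)."""
    var = sd ** 2
    if not (0.0 < mean < 1.0) or var >= mean * (1.0 - mean):
        raise ValueError("elicited moments lie outside the Beta feasibility region")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# e.g. a contributor believes roughly 20% of a population engages in an
# activity, with an uncertainty of 10 percentage points
a, b = beta_from_mean_sd(0.2, 0.1)  # Beta(3, 12); prior mean a/(a+b) = 0.2
```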

  4. Brain serotonin transporter density and aggression in abstinent methamphetamine abusers.

    PubMed

    Sekine, Yoshimoto; Ouchi, Yasuomi; Takei, Nori; Yoshikawa, Etsuji; Nakamura, Kazuhiko; Futatsubashi, Masami; Okada, Hiroyuki; Minabe, Yoshio; Suzuki, Katsuaki; Iwata, Yasuhide; Tsuchiya, Kenji J; Tsukada, Hideo; Iyo, Masaomi; Mori, Norio

    2006-01-01

    In animals, methamphetamine is known to have a neurotoxic effect on serotonin neurons, which have been implicated in the regulation of mood, anxiety, and aggression. It remains unknown whether methamphetamine damages serotonin neurons in humans. To investigate the status of brain serotonin neurons and their possible relationship with clinical characteristics in currently abstinent methamphetamine abusers. Case-control analysis. A hospital research center. Twelve currently abstinent former methamphetamine abusers (5 women and 7 men) and 12 age-, sex-, and education-matched control subjects recruited from the community. The brain regional density of the serotonin transporter, a structural component of serotonin neurons, was estimated using positron emission tomography and trans-1,2,3,5,6,10-beta-hexahydro-6-[4-(methylthio)phenyl]pyrrolo-[2,1-a]isoquinoline ([(11)C](+)McN-5652). Estimates were derived from region-of-interest and statistical parametric mapping methods, followed by within-case analysis using the measures of clinical variables. The duration of methamphetamine use, the magnitude of aggression and depressive symptoms, and changes in serotonin transporter density represented by the [(11)C](+)McN-5652 distribution volume. Methamphetamine abusers showed increased levels of aggression compared with controls. Region-of-interest and statistical parametric mapping analyses revealed that the serotonin transporter density in global brain regions (eg, the midbrain, thalamus, caudate, putamen, cerebral cortex, and cerebellum) was significantly lower in methamphetamine abusers than in control subjects, and this reduction was significantly inversely correlated with the duration of methamphetamine use. Furthermore, statistical parametric mapping analyses indicated that the density in the orbitofrontal, temporal, and anterior cingulate areas was closely associated with the magnitude of aggression in methamphetamine abusers. 
Protracted abuse of methamphetamine may reduce the density of the serotonin transporter in the brain, leading to elevated aggression, even in currently abstinent abusers.

  5. Visual classification of very fine-grained sediments: Evaluation through univariate and multivariate statistics

    USGS Publications Warehouse

    Hohn, M. Ed; Nuhfer, E.B.; Vinopal, R.J.; Klanderman, D.S.

    1980-01-01

    Classifying very fine-grained rocks through fabric elements provides information about depositional environments, but is subject to the biases of visual taxonomy. To evaluate the statistical significance of an empirical classification of very fine-grained rocks, samples from Devonian shales in four cored wells in West Virginia and Virginia were measured for 15 variables: quartz, illite, pyrite and expandable clays determined by X-ray diffraction; total sulfur, organic content, inorganic carbon, matrix density, bulk density, porosity, silt, as well as density, sonic travel time, resistivity, and γ-ray response measured from well logs. The four lithologic types comprised: (1) sharply banded shale, (2) thinly laminated shale, (3) lenticularly laminated shale, and (4) nonbanded shale. Univariate and multivariate analyses of variance showed that the lithologic classification reflects significant differences for the variables measured, differences that can be detected independently of stratigraphic effects. Little-known statistical methods found useful in this work included: the multivariate analysis of variance with more than one effect, simultaneous plotting of samples and variables on canonical variates, and the use of parametric ANOVA and MANOVA on ranked data. © 1980 Plenum Publishing Corporation.
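
    The "parametric ANOVA on ranked data" mentioned above can be sketched with SciPy: pool all observations, replace them by their joint ranks, and run the ordinary one-way ANOVA on the ranks. The simulated groups below are stand-ins for the lithologic types, not the shale data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated stand-ins for one variable measured on three lithologic types,
# with different group locations so the effect is detectable.
groups = [rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.5, 2.0)]

# Rank all observations jointly, split the ranks back into their groups,
# then apply the parametric one-way ANOVA to the ranks.
ranks = stats.rankdata(np.concatenate(groups))
splits = np.cumsum([len(g) for g in groups])[:-1]
F, p = stats.f_oneway(*np.split(ranks, splits))
```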

  6. Statistical estimation of femur micro-architecture using optimal shape and density predictors.

    PubMed

    Lekadir, Karim; Hazrati-Marangalou, Javad; Hoogendoorn, Corné; Taylor, Zeike; van Rietbergen, Bert; Frangi, Alejandro F

    2015-02-26

    The personalization of trabecular micro-architecture has been recently shown to be important in patient-specific biomechanical models of the femur. However, high-resolution in vivo imaging of bone micro-architecture using existing modalities is still infeasible in practice due to the associated acquisition times, costs, and X-ray radiation exposure. In this study, we describe a statistical approach for the prediction of the femur micro-architecture based on the more easily extracted subject-specific bone shape and mineral density information. To this end, a training sample of ex vivo micro-CT images is used to learn the existing statistical relationships within the low and high resolution image data. More specifically, optimal bone shape and mineral density features are selected based on their predictive power and used within a partial least square regression model to estimate the unknown trabecular micro-architecture within the anatomical models of new subjects. The experimental results demonstrate the accuracy of the proposed approach, with average errors of 0.07 for both the degree of anisotropy and tensor norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
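
    A minimal sketch of the regression step, using a hand-rolled one-component PLS (NIPALS-style) on synthetic features; the study's actual feature selection and multi-component model are not reproduced here.

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS1: weight vector from the X-y covariance, plus the
    least-squares regression of y on the resulting latent score."""
    w = X.T @ y
    w /= np.linalg.norm(w)
    t = X @ w                    # latent score
    q = (y @ t) / (t @ t)        # regression of y on the score
    return w, q

def pls1_predict(X, w, q):
    return q * (X @ w)

# Synthetic stand-ins: 5 "shape/density" features predicting one
# micro-architecture descriptor (here driven by the first feature).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X[:, 0]

Xc, yc = X - X.mean(axis=0), y - y.mean()
w, q = pls1_fit(Xc, yc)
y_hat = pls1_predict(Xc, w, q) + y.mean()
r = np.corrcoef(y, y_hat)[0, 1]   # close to 1 on this easy synthetic case
```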

  7. K-SPAN: A lexical database of Korean surface phonetic forms and phonological neighborhood density statistics.

    PubMed

    Holliday, Jeffrey J; Turnbull, Rory; Eychenne, Julien

    2017-10-01

    This article presents K-SPAN (Korean Surface Phonetics and Neighborhoods), a database of surface phonetic forms and several measures of phonological neighborhood density for 63,836 Korean words. Currently publicly available Korean corpora are limited by the fact that they only provide orthographic representations in Hangeul, which is problematic since phonetic forms in Korean cannot be reliably predicted from orthographic forms. We describe the method used to derive the surface phonetic forms from a publicly available orthographic corpus of Korean, and report on several statistics calculated using this database; namely, segment unigram frequencies, which are compared to previously reported results, along with segment-based and syllable-based neighborhood density statistics for three types of representation: an "orthographic" form, which is a quasi-phonological representation, a "conservative" form, which maintains all known contrasts, and a "modern" form, which represents the pronunciation of contemporary Seoul Korean. These representations are rendered in an ASCII-encoded scheme, which allows users to query the corpus without having to read Korean orthography, and permits the calculation of a wide range of phonological measures.
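
    Segment-based neighborhood density of the kind reported in such a database can be sketched as a count of lexicon entries at edit distance one (one segment substituted, deleted, or inserted). The toy lexicon of romanized segment strings below is invented for illustration.

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one segment
    (substitution, deletion, or insertion)."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):
        a, b = b, a                        # make a the shorter string
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

def neighborhood_density(word, lexicon):
    return sum(is_neighbor(word, w) for w in lexicon)

# Toy lexicon (invented): neighbors of "kat" are bat, kit, kats, at.
lexicon = ["kat", "bat", "kit", "kats", "at", "dog"]
n = neighborhood_density("kat", lexicon)
```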

  8. Formal Operations and Learning Style Predict Success in Statistics and Computer Science Courses.

    ERIC Educational Resources Information Center

    Hudak, Mary A.; Anderson, David E.

    1990-01-01

    Studies 94 undergraduate students in introductory statistics and computer science courses. Applies the Formal Operations Reasoning Test (FORT) and Kolb's Learning Style Inventory (LSI). Finds that substantial numbers of students have not achieved the formal operations level of cognitive maturity. Emphasizes the need to examine students' learning styles and…

  9. Application of an Online Reference for Reviewing Basic Statistical Principles of Operating Room Management

    ERIC Educational Resources Information Center

    Dexter, Franklin; Masursky, Danielle; Wachtel, Ruth E.; Nussmeier, Nancy A.

    2010-01-01

    Operating room (OR) management differs from clinical anesthesia in that statistical literacy is needed daily to make good decisions. Two of the authors teach a course in operations research for surgical services to anesthesiologists, anesthesia residents, OR nursing directors, hospital administration students, and analysts to provide them with the…

  10. Collision statistics, thermodynamics, and transport coefficients of hard hyperspheres in three, four, and five dimensions

    NASA Astrophysics Data System (ADS)

    Lue, L.

    2005-01-01

    The collision statistics of hard hyperspheres are investigated. An exact, analytical formula is developed for the distribution of speeds of a sphere on collision, which is shown to be related to the average time between collisions for a sphere with a particular velocity. In addition, the relationship between the collision rate and the compressibility factor is generalized to arbitrary dimensions. Molecular dynamics simulations are performed for d=3, 4, and 5 dimensional hard-hypersphere fluids. From these simulations, the equation of state of these systems, the self-diffusion coefficient, the shear viscosity, and the thermal conductivity are determined as a function of density. Various aspects of the collision statistics and their dependence on the density and dimensionality of the system are also studied.

  11. Understanding Short-Term Nonmigrating Tidal Variability in the Ionospheric Dynamo Region from SABER Using Information Theory and Bayesian Statistics

    NASA Astrophysics Data System (ADS)

    Kumari, K.; Oberheide, J.

    2017-12-01

    Nonmigrating tidal diagnostics of SABER temperature observations in the ionospheric dynamo region reveal a large amount of variability on time-scales of a few days to weeks. In this paper, we discuss the physical reasons for the observed short-term tidal variability using a novel approach based on Information theory and Bayesian statistics. We diagnose short-term tidal variability as a function of season, QBO, ENSO, and solar cycle and other drivers using time dependent probability density functions, Shannon entropy and Kullback-Leibler divergence. The statistical significance of the approach and its predictive capability is exemplified using SABER tidal diagnostics with emphasis on the responses to the QBO and solar cycle. Implications for F-region plasma density will be discussed.
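
    The two information-theoretic quantifiers named above have short direct implementations; the sketch below computes Shannon entropy and the Kullback-Leibler divergence for discrete probability densities (synthetic bins, not SABER diagnostics).

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability density."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p > 0
    return -(p[nz] * np.log2(p[nz])).sum()

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; q must be nonzero
    wherever p is."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    nz = p > 0
    return (p[nz] * np.log2(p[nz] / q[nz])).sum()

# Synthetic 8-bin densities: a flat one and a peaked one.
uniform = np.ones(8)
peaked = np.array([5, 1, 1, 1, 0, 0, 0, 0], dtype=float)

H_u = shannon_entropy(uniform)      # 3 bits for 8 equiprobable bins
D = kl_divergence(peaked, uniform)  # positive: peaked diverges from flat
```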

  12. Statistical Irreversible Thermodynamics in the Framework of Zubarev's Nonequilibrium Statistical Operator Method

    NASA Astrophysics Data System (ADS)

    Luzzi, R.; Vasconcellos, A. R.; Ramos, J. G.; Rodrigues, C. G.

    2018-01-01

    We describe the formalism of statistical irreversible thermodynamics constructed based on Zubarev's nonequilibrium statistical operator (NSO) method, which is a powerful and universal tool for investigating the most varied physical phenomena. We present brief overviews of the statistical ensemble formalism and statistical irreversible thermodynamics. The first can be constructed either based on a heuristic approach or in the framework of information theory in the Jeffreys-Jaynes scheme of scientific inference; Zubarev and his school used both approaches in formulating the NSO method. We describe the main characteristics of statistical irreversible thermodynamics and discuss some particular considerations of several authors. We briefly describe how Rosenfeld, Bohr, and Prigogine proposed to derive a thermodynamic uncertainty principle.

  13. Density of Diadema antillarum (Echinodermata: Echinoidea) on live coral patch reefs and dead Acropora cervicornis rubble patches near Loggerhead Key, Dry Tortugas National Park, Florida, USA

    EPA Science Inventory

    Density of adult Diadema antillarum was assessed on live coral patch reefs and dead Acropora cervicornis rubble patches next to Loggerhead Key, Dry Tortugas National Park, Florida, USA in June 2009. Mean density on live coral patch reefs (0.49 individuals m-2) was not statistical...

  14. Mammographic Breast Density in a Cohort of Medically Underserved Women

    DTIC Science & Technology

    2012-10-01

    training, faculty from MMC and VUMC will conduct a case-control study of mammographic breast density to investigate its association with obesity and...hormones and growth factors, 4) to perform statistical analyses to determine the associations between obesity and insulin resistance and mammographic...on obesity and insulin resistance as they relate to mammographic breast density. We hypothesize that: 1) obesity and insulin resistance, defined

  15. [Correlative factors related to the density of Meriones unguiculatus in the Meriones unguiculatus plague foci of Hebei province, 2001-2013].

    PubMed

    Niu, Y F; Kang, X P; Yan, D; Zhang, Y H; Liu, G; Kang, D M; Liu, H Z; Shi, X M; Li, Y G

    2016-08-10

    To explore the yearly, monthly and habitat-related distribution and their relations with Meriones unguiculatus density in the Hebei Meriones unguiculatus plague foci from 2001 to 2013. Data related to Meriones unguiculatus were gathered through the monitoring programs set up at the national and provincial Meriones unguiculatus plague foci in Hebei province from 2001 to 2013. According to the yearly density of Meriones unguiculatus, the criteria set for the three study groups were as follows: 'high-risk group' when the rodent density was ≥1.00 rodents/hm², 'warning group' when the density was between 0.20 and 1.00 rodents/hm², and 'standard group' when the density was ≤0.20 rodents/hm². Differences in habitats and monthly distribution among the three groups were compared under the Kruskal-Wallis H rank sum test, while their relations were examined under multiple correspondence analysis. The Meriones unguiculatus densities were higher than 1.00 rodents/hm², far above the set national standards, in the monitoring area between 2001 and 2005. From 2005 the rodent densities began to decrease; however, figures from 2008 to 2013 were still between 0.20 and 1.00 rodents/hm². The distribution of habitats in the three groups showed that the Meriones unguiculatus densities differed by habitat, and the difference was statistically significant (P<0.05). The highest median densities were all in arable land, with the maximum value of the high-risk group (20.50 rodents/hm²) appearing in the wasteland. Monthly distribution showed that the Meriones unguiculatus densities were different and the difference was statistically significant (P<0.05) in the high-risk and standard groups, but not statistically significant in the warning group. Data from the multiple correspondence analysis showed a strong aggregation with wasteland in April and June, while the warning group was associated with weather in July and the arable land. When the density became higher than 1.00 rodents/hm², the risk of animal plague increased in the Hebei Meriones unguiculatus plague foci. Based on the distribution of Meriones unguiculatus, programs should be set up to monitor the rodents in arable land and wasteland in April and June, to reduce the prevalence of animal plague.

  16. Autonomous Aerobraking Development Software: Phase 2 Summary

    NASA Technical Reports Server (NTRS)

    Cianciolo, Alicia D.; Maddock, Robert W.; Prince, Jill L.; Bowes, Angela; Powell, Richard W.; White, Joseph P.; Tolson, Robert; O'Shaughnessy, Daniel; Carrelli, David

    2013-01-01

    NASA has used aerobraking at Mars and Venus to reduce the fuel required to deliver a spacecraft into a desired orbit compared to an all-propulsive solution. Although aerobraking reduces the propellant required, it does so at the expense of mission duration, a large staff, and extensive DSN coverage. These factors make aerobraking a significant cost element in the mission design. By moving the current ground-based tasks of ephemeris determination, atmospheric density estimation, and maneuver sizing and execution on board the spacecraft, a flight project would realize significant cost savings. The NASA Engineering and Safety Center (NESC) sponsored Phases 1 and 2 of the Autonomous Aerobraking Development Software (AADS) study, which demonstrated the initial feasibility of moving these current ground-based functions to the spacecraft. This paper highlights key state-of-the-art advancements made in the Phase 2 effort to verify that the AADS algorithms are accurate, robust and ready to be considered for application on future missions that utilize aerobraking. The advancements discussed herein include both model updates and simulation and benchmark testing. Rigorous testing using observed flight atmospheres, operational environments and statistical analysis characterized the AADS operability in a perturbed environment.

  17. Test Operations Procedure (TOP) 02-2-603A Vehicle Fuel Consumption

    DTIC Science & Technology

    2012-05-10

    (API) Hydrometer. The API Hydrometer is used for accurate determination of the density, relative density (specific gravity), or API gravity of... Hydrometer Method. 5. TOP 02-2-505, Inspection and Preliminary Operation of Vehicles, 4 February 1987. 6. TOP 02-1-003, Hybrid Electric

  18. Comparison of Breast Density Between Synthesized Versus Standard Digital Mammography.

    PubMed

    Haider, Irfanullah; Morgan, Matthew; McGow, Anna; Stein, Matthew; Rezvani, Maryam; Freer, Phoebe; Hu, Nan; Fajardo, Laurie; Winkler, Nicole

    2018-06-12

    To evaluate the perceptual difference in breast density classification using synthesized mammography (SM) compared with standard or full-field digital mammography (FFDM) for screening. This institutional review board-approved, retrospective, multireader study evaluated breast density in 200 patients who underwent a baseline screening mammogram during which both SM and FFDM were obtained contemporaneously from June 1, 2016, through November 30, 2016. Qualitative breast density was independently assigned by seven readers initially evaluating FFDM alone. Then, in a separate session, the same readers assigned breast density using synthetic views alone for the same 200 patients, again blinded to each other's assignments. Qualitative density assessment was based on the BI-RADS fifth edition. Interreader agreement was evaluated with the κ statistic using 95% confidence intervals. Testing for homogeneity in paired proportions was performed using McNemar's test with a level of significance of .05. Across the SM and standard 2-D data sets, McNemar's test (P = 0.32) showed that the minimal density transitions between FFDM and SM were not statistically significant density shifts. Taking clinical significance into account, only 8 of 200 (4%) patients had a clinically significant transition (dense versus not dense). There was substantial interreader agreement, with an overall κ in FFDM of 0.71 (minimum 0.53, maximum 0.81) and an overall SM κ average of 0.63 (minimum 0.56, maximum 0.87). Overall, subjective breast density assignment by radiologists on SM is similar to density assignment on standard 2-D mammograms. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.
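
    The two statistics named in this abstract are easy to reproduce in outline: below, Cohen's κ is computed directly from two raters' labels, and McNemar's test (continuity-corrected χ² form) from the discordant cells of a paired 2×2 table. The labels and cell counts are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import chi2

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                   # observed agreement
    pe = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

def mcnemar_p(b, c):
    """Continuity-corrected McNemar chi-square p-value from the two
    discordant cell counts of a paired 2x2 table."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(stat, df=1)

# Invented example labels for two readers on six cases.
reader1 = ["dense", "dense", "fatty", "fatty", "dense", "fatty"]
reader2 = ["dense", "fatty", "fatty", "fatty", "dense", "fatty"]
k = cohens_kappa(reader1, reader2)

# Invented discordant counts: 8 cases shifted one way, 12 the other.
p = mcnemar_p(b=8, c=12)
```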

  19. The most intense electric currents in turbulent high speed solar wind

    NASA Astrophysics Data System (ADS)

    Podesta, J. J.

    2017-12-01

    Theory and simulations suggest that dissipation of turbulent energy in collisionless astrophysical plasmas occurs most rapidly in spatial regions where the current density is most intense. To advance understanding of plasma heating by turbulent dissipation in the solar corona and solar wind, it is of interest to characterize the properties of plasma regions where the current density takes exceptionally large values and to identify the operative dissipation processes. In the solar wind, the curl of the magnetic field cannot be measured using data from a single spacecraft, however, a suitable proxy for this quantity can be constructed from the spatial derivative of the magnetic field along the flow direction of the plasma. This new approach is used to study the properties of the most intense current carrying structures in a high speed solar wind stream near 1 AU. In this study, based on 11 Hz magnetometer data from the WIND spacecraft, the spatial resolution of the proxy technique is approximately equal to the proton inertial length. Intense current sheets or current carrying structures were identified as events where the magnitude of the current density exceeds μ+5σ, where μ and σ are the mean and standard deviation of the magnitude of the current density (or its proxy), respectively. Statistical studies show (1) the average size of these 5σ events is close to the smallest resolvable scale in the data set, the proton inertial length; (2) the linear distance between neighboring events follows a power law distribution; and (3) the average peak current density of 5σ events is around 1 pA/cm2. The analysis techniques used in these studies have been validated using simulated spacecraft data from three dimensional hybrid simulations which show that results based on the analysis of the proxy are qualitatively and quantitatively similar to results based on the analysis of the true current density.
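
    The μ+5σ event selection described above can be sketched in a few lines of NumPy; the signal below is synthetic Gaussian noise with three injected "current sheets", not WIND magnetometer data.

```python
import numpy as np

rng = np.random.default_rng(1)

def intense_events(j, n_sigma=5.0):
    """Indices where |current density| exceeds mu + n_sigma * sigma,
    with mu and sigma the mean and standard deviation of |j|."""
    mag = np.abs(j)
    threshold = mag.mean() + n_sigma * mag.std()
    return np.flatnonzero(mag > threshold)

# Synthetic proxy signal: Gaussian background with three embedded spikes
# standing in for intense current-carrying structures.
j = rng.standard_normal(100_000)
j[[5000, 42000, 77000]] += 50.0
events = intense_events(j)
```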

  20. The X-ray Power Density Spectrum of the Seyfert 2 Galaxy NGC 4945: Analysis and Application of the Method of Light Curve Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Martin; /SLAC

    2010-12-16

    The study of the power density spectrum (PDS) of fluctuations in the X-ray flux from active galactic nuclei (AGN) complements spectral studies in giving us a view into the processes operating in accreting compact objects. An important line of investigation is the comparison of the PDS from AGN with those from galactic black hole binaries; a related area of focus is the scaling relation between time scales for the variability and the black hole mass. The PDS of AGN is traditionally modeled using segments of power laws joined together at so-called break frequencies; associations of the break time scales, i.e., the inverses of the break frequencies, with time scales of physical processes thought to operate in these sources are then sought. I analyze the Method of Light Curve Simulations that is commonly used to characterize the PDS in AGN with a view to making the method as sensitive as possible to the shape of the PDS. I identify several weaknesses in the current implementation of the method and propose alternatives that can substitute for some of the key steps in the method. I focus on the complications introduced by uneven sampling in the light curve, the development of a fit statistic that is better matched to the distributions of power in the PDS, and the statistical evaluation of the fit between the observed data and the model for the PDS. Using archival data on one AGN, NGC 3516, I validate my changes against previously reported results. I also report new results on the PDS in NGC 4945, a Seyfert 2 galaxy with a well-determined black hole mass. This source provides an opportunity to investigate whether the PDS of Seyfert 1 and Seyfert 2 galaxies differ. It is also an attractive object for placement on the black hole mass-break time scale relation. Unfortunately, with the available data on NGC 4945, significant uncertainties on the break frequency in its PDS remain.
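
    The core of light-curve simulation methods of this kind (in the spirit of Timmer & König 1995) is to draw Fourier coefficients as Gaussians whose variance follows the model PDS and invert them to a time series. A minimal pure power-law sketch, with the slope and length chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_lightcurve(n, dt, slope):
    """Simulate a light curve whose PDS follows S(f) ~ f**(-slope):
    draw independent Gaussian real/imaginary Fourier coefficients with
    variance proportional to S(f), then invert with an inverse real FFT."""
    freqs = np.fft.rfftfreq(n, d=dt)[1:]          # skip the zero frequency
    power = freqs ** (-slope)
    re = rng.standard_normal(len(freqs)) * np.sqrt(power / 2)
    im = rng.standard_normal(len(freqs)) * np.sqrt(power / 2)
    coeffs = np.concatenate(([0.0], re + 1j * im))  # zero mean by construction
    return np.fft.irfft(coeffs, n=n)

lc = simulate_lightcurve(4096, 1.0, slope=2.0)    # red-noise light curve
```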

  1. Statistical properties of the radiation from SASE FEL operating in the linear regime

    NASA Astrophysics Data System (ADS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1998-02-01

    The paper presents a comprehensive analysis of the statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free electron laser operating in the linear regime. The investigation has been performed in a one-dimensional approximation, assuming the electron pulse length to be much larger than the coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied: field correlations, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the photoelectric counting statistics of SASE FEL radiation. It is shown that the radiation from a SASE FEL operating in the linear regime possesses all the features of completely chaotic polarized radiation.

  2. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of a diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while modelling the dependence structure with a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss the results to show how the method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
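
    The AUC itself has a simple empirical form as the Mann-Whitney statistic: the probability that a randomly chosen diseased case scores higher than a non-diseased one. The sketch below computes it for a single test and for a naive sum combination of two tests on simulated Gaussian scores; the NPI lower/upper probabilities and copula estimation of the paper are not reproduced here.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Empirical AUC as the Mann-Whitney probability that a positive case
    scores higher than a negative one (ties count one half)."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

# Simulated scores from two diagnostic tests (independent Gaussians,
# an assumption made purely for illustration).
rng = np.random.default_rng(2)
neg1, neg2 = rng.normal(0, 1, 500), rng.normal(0, 1, 500)
pos1, pos2 = rng.normal(1, 1, 500), rng.normal(1, 1, 500)

auc_single = auc(pos1, neg1)                     # one test alone
auc_combined = auc(pos1 + pos2, neg1 + neg2)     # naive sum combination
```

On independent tests like these, the combined score separates the groups better, so the combined AUC exceeds the single-test AUC.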

  3. A marked correlation function for constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  4. Density-dependence at sea for coho salmon (Oncorhynchus kisutch)

    USGS Publications Warehouse

    Emlen, J.M.; Reisenbichler, R.R.; McGie, A.M.; Nickelson, T.E.

    1990-01-01

    The success of expanded salmon hatchery programs will depend strongly on the degree of density-induced diminishing returns per smolt released. Several authors have addressed the question of density-dependent mortality at sea in coho salmon (Oncorhynchus kisutch), but have come to conflicting conclusions. We believe there are compelling reasons to reinvestigate the data, and have done so for public hatchery fish, using a variety of approaches. The results provide evidence that survival of these public hatchery fish is negatively affected, directly by the number of public hatchery smolts and indirectly by the number of private hatchery smolts. These results are weak, statistically, and should be considered primarily as a caution to those who, on the basis of other published work, believe that density-dependence does not exist. The results reported here also re-emphasize the often overlooked point that inferences drawn from data are strongly biased by investigators' views of how the systems of interest work and by the statistical assumptions they make preparatory to the analysis of those data.

  5. Application of Accelerometer Data to Mars Odyssey Aerobraking and Atmospheric Modeling

    NASA Technical Reports Server (NTRS)

    Tolson, R. H.; Keating, G. M.; George, B. E.; Escalera, P. E.; Werner, M. R.; Dwyer, A. M.; Hanna, J. L.

    2002-01-01

    Aerobraking was an enabling technology for the Mars Odyssey mission, even though it involved risk due primarily to the variability of the Mars upper atmosphere. Consequently, numerous analyses based on various data types were performed during operations to reduce this risk, and among these data were measurements from the spacecraft accelerometers. This paper reports on the use of accelerometer data for determining atmospheric density during Odyssey aerobraking operations. Acceleration was measured along three orthogonal axes, although only data from the component along the axis nominally into the flow were used during operations. For a one-second count time, the RMS noise level varied from 0.07 to 0.5 mm/s², permitting density recovery to between 0.15 and 1.1 kg/km³, or about 2% of the mean density at periapsis during aerobraking. Accelerometer data were analyzed in near real time to provide estimates of density at periapsis, maximum density, density scale height, latitudinal gradients, longitudinal wave variations and the location of the polar vortex. Summaries are given of the aerobraking phase of the mission, the accelerometer data analysis methods and operational procedures, some applications to determining thermospheric properties, and some remaining issues on the interpretation of the data. Pre-flight estimates of natural variability based on Mars Global Surveyor accelerometer measurements proved reliable in the mid-latitudes, but overestimated the variability inside the polar vortex.
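
    Density recovery from the along-track acceleration rests on inverting the drag equation, ρ = 2 m a / (C_D A v²). A sketch follows; the vehicle parameters are entirely hypothetical stand-ins, not Odyssey's actual values.

```python
def density_from_accel(a_drag, v, cd=2.2, area=11.0, mass=380.0):
    """Atmospheric density from measured drag deceleration.

    rho = 2 m a / (Cd A v^2); cd, area (m^2) and mass (kg) are illustrative values.
    a_drag in m/s^2, v (atmosphere-relative speed) in m/s; returns kg/m^3.
    """
    return 2.0 * mass * a_drag / (cd * area * v * v)

rho = density_from_accel(a_drag=0.02, v=4500.0)
print(rho)   # on the order of 1e-8 kg/m^3, i.e. tens of kg/km^3
```

The 2% recovery figure quoted in the abstract then follows directly from the accelerometer noise floor: the density uncertainty scales linearly with the acceleration uncertainty in this relation.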

  6. An Investigation of the Overlap Between the Statistical Discrete Gust and the Power Spectral Density Analysis Methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    The results of a NASA investigation of a claimed overlap between two gust response analysis methods, the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method, are presented. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented for several different airplanes at several different flight conditions indicate that such an overlap does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  7. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  8. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density of the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. The available published data are in the form of summary statistics, namely nominal values and error bars of the rate coefficient of this reaction at a number of temperatures, obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters, and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
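
    The core idea of keeping only parameter draws consistent with published summary statistics can be illustrated with a plain ABC rejection sampler. Everything below (the two-parameter Arrhenius form, the priors, the nominal values and error bars) is invented for the sketch; the paper's MCMC, surrogate, and pooling machinery are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314  # gas constant, J/(mol K)

# Hypothetical summary statistics: nominal ln k and error bars at three temperatures
T = np.array([1000.0, 1500.0, 2000.0])          # K
lnA_true, Ea_true = 30.0, 6.0e4                 # invented "truth" used to build them
lnk_nom = lnA_true - Ea_true / (R * T)          # nominal values of ln k(T)
lnk_err = np.full_like(T, 0.15)                 # error bars on ln k(T)

accepted = []
for _ in range(20000):
    lnA = rng.uniform(28.0, 32.0)               # flat priors, assumed for illustration
    Ea = rng.uniform(4.0e4, 8.0e4)
    lnk = lnA - Ea / (R * T)
    # ABC rejection: keep draws whose predictions fall inside every error bar
    if np.all(np.abs(lnk - lnk_nom) <= lnk_err):
        accepted.append((lnA, Ea))

post = np.array(accepted)
print(len(post), post.mean(axis=0))   # accepted draws concentrate around the truth
```

Plotting the accepted (lnA, Ea) pairs would show the strong correlation between the two Arrhenius parameters that the abstract's auto-ignition study emphasises.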

  10. Statistics of Statisticians: Critical Mass of Statistics and Operational Research Groups

    NASA Astrophysics Data System (ADS)

    Kenna, Ralph; Berche, Bertrand

    Using a recently developed model, inspired by mean field theory in statistical physics, and data from the UK's Research Assessment Exercise, we analyse the relationship between the qualities of statistics and operational research groups and the quantities of researchers in them. Similar to other academic disciplines, we provide evidence for a linear dependency of quality on quantity up to an upper critical mass, which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group. The model also predicts a lower critical mass, which research groups should strive to achieve to avoid extinction. For statistics and operational research, the lower critical mass is estimated to be 9 ± 3. The upper critical mass, beyond which research quality does not significantly depend on group size, is 17 ± 6.

  11. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer-implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from these maps by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each map. Histograms are also created from the selected maps; these relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding the corresponding grid point in each of the standard templates. The standard templates and the histograms are then applied to the grid points of the experimental electron density map to form new estimates of electron density at each grid point.

  12. Atmospheric statistics for aerospace vehicle operations

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Batts, G. W.

    1993-01-01

    Statistical analysis of atmospheric variables was performed for the Shuttle Transportation System (STS) design trade studies and the establishment of launch commit criteria. Atmospheric constraint statistics have been developed for the NASP test flight, the Advanced Launch System, and the National Launch System. The concepts and analysis techniques discussed in the paper are applicable to the design and operations of any future aerospace vehicle.

  13. Library Off-Site Shelving: Guide for High-Density Facilities.

    ERIC Educational Resources Information Center

    Nitecki, Danuta A., Ed.; Kendrick, Curtis L., Ed.

    This collection of essays addresses the planning, construction, and operating issues relating to high-density library shelving facilities. The volume covers essential topics that address issues relating to the building, its operations, and serving the collections. It begins with an introduction by the volume's editors, "The Paradox and…

  14. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and to test whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method computes the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, on average by -5 (± 4.4 SD) for MB and -4.7 (± 5 SD) for ETAR.
    Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods compared with the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
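
    The test sequence described above maps directly onto scipy. The dose values below are simulated stand-ins for the 62 fields, with the density-corrected methods drawn about 5% lower than PBC to mimic the reported effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated monitor-unit doses for 62 fields; corrected methods ~5% below reference
pbc = rng.normal(100.0, 10.0, 62)               # reference, no density correction
mb = pbc * rng.normal(0.950, 0.02, 62)          # Modified Batho (1D correction)
etar = pbc * rng.normal(0.953, 0.02, 62)        # ETAR (3D correction)

# Friedman's test for an overall effect of the calculation method (paired samples)
stat, p = stats.friedmanchisquare(pbc, mb, etar)

# Post-hoc Wilcoxon signed-rank tests against the reference, Bonferroni-corrected
p_mb = stats.wilcoxon(pbc, mb).pvalue * 2
p_etar = stats.wilcoxon(pbc, etar).pvalue * 2
print(p < 0.001, p_mb < 0.05, p_etar < 0.05)
```

Non-parametric tests are the right choice here exactly as the abstract argues: paired per-field doses with no normality guarantee after the Shapiro-Wilk check.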

  15. The insertion torque-depth curve integral as a measure of implant primary stability: An in vitro study on polyurethane foam blocks.

    PubMed

    Di Stefano, Danilo Alessio; Arosio, Paolo; Gastaldi, Giorgio; Gherlone, Enrico

    2017-07-08

    Recent research has shown that dynamic parameters related to insertion energy (that is, the total work needed to place an implant into its site) might convey more reliable information concerning immediate implant primary stability at insertion than the commonly used insertion torque (IT), the reverse torque (RT), or the implant stability quotient (ISQ). Yet knowledge of these dynamic parameters is still limited. The purpose of this in vitro study was to evaluate whether an energy-related parameter, the torque-depth curve integral (I), could be a reliable measure of primary stability. This was done by assessing whether the (I) measurement was operator independent, investigating its correlation with the other known primary stability parameters (IT, RT, and ISQ), quantifying the (I) average error, and correlating (I), IT, RT, and ISQ variations with bone density. Five operators placed 200 implants in polyurethane foam blocks of different densities using a micromotor that calculated (I) during implant placement. Primary implant stability was assessed by measuring the ISQ, IT, and RT. ANOVA tests were used to evaluate whether measurements were operator independent (P>.05 in all cases). A correlation analysis was performed between (I) and IT, ISQ, and RT. The (I) average error was calculated and compared with that of the other parameters by ANOVA. (I)-density, IT-density, ISQ-density, and RT-density plots were drawn, and their slopes were compared by ANCOVA. The (I) measurements were operator independent and correlated with IT, ISQ, and RT. The average error of these parameters was not significantly different (P>.05 in all cases). The (I)-density, IT-density, ISQ-density, and RT-density curves were linear in the 0.16 to 0.49 g/cm³ range, with the (I)-density curves having a significantly greater slope than those of the other parameters (P≤.001 in all cases).
    The torque-depth curve integral (I) provides a reliable assessment of primary stability and shows a greater sensitivity to density variations than the other known primary stability parameters. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
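
    Computing the torque-depth curve integral itself is a simple quadrature over the sampled insertion record. A sketch with a made-up linear torque ramp (real curves come from the micromotor's samples):

```python
import numpy as np

depth = np.linspace(0.0, 10.0, 101)     # insertion depth samples (mm), hypothetical
torque = 5.0 + 2.5 * depth              # torque record (N·cm), toy linear ramp

# Torque-depth curve integral (I): area under the curve via the trapezoidal rule
I = np.sum(0.5 * (torque[1:] + torque[:-1]) * np.diff(depth))
print(I)   # 175.0 for this ramp: 5*10 + 2.5*10^2/2
```

Because (I) accumulates torque over the whole insertion path, it integrates information from every depth rather than reporting a single endpoint value, which is the intuition behind its greater sensitivity to density.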

  16. Stochastic characteristics and Second Law violations of atomic fluids in Couette flow

    NASA Astrophysics Data System (ADS)

    Raghavan, Bharath V.; Karimi, Pouyan; Ostoja-Starzewski, Martin

    2018-04-01

    Using Non-equilibrium Molecular Dynamics (NEMD) simulations, we study the statistical properties of an atomic fluid undergoing planar Couette flow, in which particles interact via a Lennard-Jones potential. We draw a connection between local density contrast and temporal fluctuations in the shear stress, which arise naturally through the equivalence between the dissipation function and entropy production according to the fluctuation theorem. We focus on the shear stress and the spatio-temporal density fluctuations, and study the autocorrelations and spectral densities of the shear stress. The bispectral density of the shear stress is used to measure the degree of departure from a Gaussian model and the degree of nonlinearity induced in the system by the applied strain rate. More evidence is provided by the probability density function of the shear stress. We use information theory to account for the departure from Gaussian statistics and to develop a more general probability distribution function that captures this broad range of effects. By accounting for negative shear stress increments, we show how this distribution preserves the violations of the Second Law of Thermodynamics observed in planar Couette flow of atomic fluids, and also how it captures the non-Gaussian nature of the system by allowing for non-zero higher moments. We also demonstrate how the temperature affects the bandwidth of the shear stress and how the density affects its power spectral density, thus determining the conditions under which the shear stress acts as a narrow-band or wide-band random process. We show that changes in the statistical characteristics of the parameters of interest occur at a critical strain rate at which an ordering transition occurs in the fluid, causing shear thinning and affecting its stability. A critical strain rate of this kind is also predicted by the Loose-Hess stability criterion.

  17. An evaluation of homocysteine, C-reactive protein, lipid levels, neutrophils to lymphocyte ratio in postmenopausal osteopenic women.

    PubMed

    Liu, Wenhua; Huang, Zheren; Tang, Shanshan; Wei, Shuangshuang; Zhang, Zhifen

    2016-06-01

    In the present study, the risk coefficients of serum homocysteine (hcy), lipid levels, C-reactive protein (CRP), and the neutrophil to lymphocyte ratio (NLR) in postmenopausal osteopenic women were determined. We enrolled 269 postmenopausal women from the Hangzhou No.1 Hospital gynecological clinic, aged 45 to 60 years, who had never received menopause hormone therapy. According to the bone mineral density results, subjects were divided into a normal group (n = 128) and an osteopenia group (n = 141). Bone mineral density (BMD) was measured by dual-energy X-ray absorptiometry (DXA). Serum hcy, CRP and lipid indexes were determined by enzyme chemiluminescence immunoassay. The odds ratios (OR) and 95% confidence intervals (CI) of the variables menopausal age, duration of menopause, LDL, CRP, hcy and NLR were significant (p < 0.05). The same variables were also statistically significant in the receiver operating characteristic (ROC) curve analysis. The present study shows that menopausal age, duration of menopause, and serum LDL, CRP, hcy and NLR levels are risk factors for postmenopausal osteopenic women, which may be used as indicators of bone loss in postmenopausal women.

  18. Cross-correlation limit of a SQUID-based noise thermometer of the pMFFT type

    NASA Astrophysics Data System (ADS)

    Kirste, A.; Engert, J.

    2018-03-01

    The primary magnetic field fluctuation thermometer (pMFFT) is a SQUID-based noise thermometer for temperatures below 1 K which complies with metrological requirements. It combines two signal channels in order to apply the cross-correlation technique, but it requires statistically independent noise signals for proper operation. In order to check the limit of the cross-correlation readout, we have performed zero measurements in the millikelvin range in a setup that is identical to the pMFFT except for the removed temperature sensor. We examined the influence of different parameters, such as the SQUID working point or flux-locked loop parameters, on the minimum cross-correlation signal down to 24 mK and below 100 kHz. Depending on the configuration, typical minimum SQUID-referred cross-power spectral densities of 1.5 × 10⁻¹⁵ Φ₀²/Hz or even smaller values were observed. For the pMFFT, considering its thermal noise spectrum, these flux densities correspond to a device noise temperature of ≤2.5 µK, thereby ensuring a negligible uncertainty contribution at the lower end of the PLTS-2000 (0.9 mK).
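
    The benefit of cross-correlating two channels can be sketched numerically: readout noise that is independent between channels averages out of the cross-power spectral density across segments, while the common (thermal) component survives. The signal model and numbers below are arbitrary illustrations, not pMFFT data.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs, n = 10000.0, 100000
common = rng.normal(size=n)              # correlated thermal-noise component
ch1 = common + rng.normal(size=n)        # channel 1: adds independent readout noise
ch2 = common + rng.normal(size=n)        # channel 2: adds independent readout noise

# Cross-power spectral density between the channels vs. one channel's auto-spectrum;
# with enough averaged segments, |Pxy| approaches the common component's PSD alone
f, pxy = signal.csd(ch1, ch2, fs=fs, nperseg=4096)
f, pxx = signal.welch(ch1, fs=fs, nperseg=4096)
print(np.abs(pxy).mean(), pxx.mean())    # |Pxy| sits below Pxx
```

The residual floor of |Pxy| shrinks with the number of averaged segments, which is exactly why the zero-measurement limit reported above can reach such small flux densities.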

  19. Comparative study of the thermal effects of four semiconductor lasers on the enamel and pulp chamber of a human tooth.

    PubMed

    Arrastia, A M; Machida, T; Smith, P W; Matsumoto, K

    1994-01-01

    An in vitro thermometric study was conducted on various GaAlAs semiconductor lasers emitting at wavelengths between 750 nm and 905 nm, to verify whether these lasers produce significant heating during application to tooth structure. Measurements were conducted in vitro using a thermal camera and a thermocouple during 60, 120, and 180 s laser exposures at energy densities between 1.5 and 2,400 J/cm². Mean temperature changes on surface enamel were statistically significant in all groups at P ≤ .05 and P ≤ .01. The higher the energy density applied to a surface area, the greater the temperature rise observed for the same spot size, operation mode, and wavelength. Intrapulpal temperature elevations measured ≥ 3 °C. An in vivo study was also conducted to determine whether perceptible stimuli are experienced by patients during laser treatment and to verify the results of the in vitro study. The in vivo results did not conform well with the in vitro study because of uncontrollable variables. None of the patients who received irradiation treatment described any perceptible stimuli.

  20. Assessment of the ability of the triglyceride to high density lipoprotein cholesterol ratio to discriminate insulin resistance among Caribbean-born black persons with and without Hispanic ethnicity.

    PubMed

    Tull, E S

    2013-02-01

    The objective of this research was to determine if the triglyceride (TG) to high density lipoprotein (HDL) cholesterol (TG/HDL) ratio has similar utility for discriminating insulin resistance in Caribbean-born black persons with and without Hispanic ethnicity. Serum lipids, glucose and insulin were determined and compared for 144 Hispanic blacks and 655 non-Hispanic blacks living in the US Virgin Islands. Area under the receiver operating characteristics (AUROC) curve statistics were used to evaluate the ability of the TG/HDL ratio to discriminate insulin resistance in the two ethnic groups. Hispanic blacks had significantly higher levels of triglycerides and insulin resistance and a lower level of HDL cholesterol than non-Hispanic blacks. The AUROC curve for the ability of the TG/HDL to discriminate insulin resistance was 0.71 (95% CI = 0.62, 0.79) for Hispanic blacks and 0.64 (95% CI = 0.59, 0.69) for non-Hispanic blacks. Among Caribbean-born black persons living in the US Virgin Islands, the TG/HDL ratio is a useful screening measure for discriminating insulin resistance in those with Hispanic ethnicity but not in those without Hispanic ethnicity.
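
    The AUROC statistic used above has a convenient nonparametric form: it equals the Mann-Whitney U statistic normalised by the product of group sizes. A sketch with synthetic TG/HDL ratios (the separation and sample sizes are invented, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical TG/HDL ratios: insulin-resistant cases shifted above controls
cases = rng.lognormal(0.6, 0.4, 80)
controls = rng.lognormal(0.3, 0.4, 200)

# AUROC = P(case score > control score) = U / (n_cases * n_controls)
u, _ = stats.mannwhitneyu(cases, controls, alternative="greater")
auc = u / (len(cases) * len(controls))
print(auc)   # roughly 0.7 for this degree of separation
```

An AUROC near 0.7, as reported for the Hispanic black group, means a randomly chosen insulin-resistant subject outranks a randomly chosen non-resistant subject about 70% of the time.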

  1. An Operator Method for Field Moments from the Extended Parabolic Wave Equation and Analytical Solutions of the First and Second Moments for Atmospheric Electromagnetic Wave Propagation

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2004-01-01

    The extended wide-angle parabolic wave equation applied to electromagnetic wave propagation in random media is considered. A general operator equation is derived which gives the statistical moments of an electric field of a propagating wave. This expression is used to obtain the first and second order moments of the wave field and solutions are found that transcend those which incorporate the full paraxial approximation at the outset. Although these equations can be applied to any propagation scenario that satisfies the conditions of application of the extended parabolic wave equation, the example of propagation through atmospheric turbulence is used. It is shown that in the case of atmospheric wave propagation and under the Markov approximation (i.e., the delta-correlation of the fluctuations in the direction of propagation), the usual parabolic equation in the paraxial approximation is accurate even at millimeter wavelengths. The comprehensive operator solution also allows one to obtain expressions for the longitudinal (generalized) second order moment. This is also considered and the solution for the atmospheric case is obtained and discussed. The methodology developed here can be applied to any qualifying situation involving random propagation through turbid or plasma environments that can be represented by a spectral density of permittivity fluctuations.

  2. Football fever: goal distributions and non-Gaussian statistics

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopic point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows one to understand the observed deviations from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during cold war times, as well as the unified league after 1990, to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
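
    The overdispersion that rules out a plain Poisson model is easy to check, and a negative binomial can be fitted by the method of moments. The goal counts below are synthetic, generated from a negative binomial rather than taken from any league data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy goal counts, overdispersed relative to Poisson (hypothetical data)
goals = rng.negative_binomial(n=4, p=0.7, size=2000)

m, v = goals.mean(), goals.var()
# Method of moments for the negative binomial: v = m + m^2/n  =>  n = m^2/(v - m)
n_hat = m * m / (v - m)
p_hat = n_hat / (n_hat + m)
print(v > m, n_hat, p_hat)   # variance exceeds the mean, unlike a Poisson
```

For a Poisson model the variance equals the mean, so v > m in real goal data is the simplest signature of the correlations (here, self-affirmation) discussed in the abstract.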

  3. Matching-pursuit/split-operator-Fourier-transform computations of thermal correlation functions.

    PubMed

    Chen, Xin; Wu, Yinghua; Batista, Victor S

    2005-02-08

    A rigorous and practical methodology for evaluating thermal-equilibrium density matrices, finite-temperature time-dependent expectation values, and time-correlation functions is described. The method involves an extension of the matching-pursuit/split-operator-Fourier-transform method to the solution of the Bloch equation via imaginary-time propagation of the density matrix and the evaluation of Heisenberg time-evolution operators through real-time propagation in dynamically adaptive coherent-state representations.

  4. Intermittent electron density and temperature fluctuations and associated fluxes in the Alcator C-Mod scrape-off layer

    NASA Astrophysics Data System (ADS)

    Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.

    2018-06-01

    The Alcator C-Mod mirror Langmuir probe system has been used to sample time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities and auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes present large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, by correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation-driven heat flux.
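
    The stochastic model invoked here, a superposition of uncorrelated exponential pulses (a filtered Poisson process), is straightforward to simulate. The pulse duration matches the 15 μs quoted above, but the arrival rate and amplitude scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
tau_d = 15.0                    # pulse duration (μs), as quoted in the abstract
rate = 1.0 / 20.0               # pulse arrival rate per μs (hypothetical)
t = np.arange(100000) * 0.1     # ~10 ms record sampled every 0.1 μs
sig = np.zeros_like(t)

n_pulses = rng.poisson(rate * t[-1])
arrivals = rng.uniform(0.0, t[-1], n_pulses)
amps = rng.exponential(1.0, n_pulses)       # exponentially distributed amplitudes
for t0, a in zip(arrivals, amps):
    live = t >= t0
    sig[live] += a * np.exp(-(t[live] - t0) / tau_d)

# For a filtered Poisson process the mean is rate * <amplitude> * tau_d
print(sig.mean())   # ≈ 0.75 for these parameters
```

Histogramming `sig` would show the strongly skewed, intermittent amplitude statistics that the abstract compares against the measured density and temperature signals.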

  5. The Nosé–Hoover looped chain thermostat for low temperature thawed Gaussian wave-packet dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coughtrie, David J.; Tew, David P.

    2014-05-21

We have used a generalised coherent state resolution of the identity to map the quantum canonical statistical average for a general system onto a phase-space average over the centre and width parameters of a thawed Gaussian wave packet. We also propose an artificial phase-space density that has the same behaviour as the canonical phase-space density in the low-temperature limit, and have constructed a novel Nosé–Hoover looped chain thermostat that generates this density in conjunction with variational thawed Gaussian wave-packet dynamics. This forms a new platform for evaluating statistical properties of quantum condensed-phase systems that has an explicit connection to the time-dependent Schrödinger equation, whilst retaining many of the appealing features of path-integral molecular dynamics.

  6. Weak lensing shear and aperture mass from linear to non-linear scales

    NASA Astrophysics Data System (ADS)

    Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.

    2004-05-01

We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ by the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs <~ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs <~ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provides a very precise probe of the detailed structure of the density field, as it is sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.

  7. The influence of gender and age on the thickness and echo-density of skin.

    PubMed

    Firooz, A; Rajabi-Estarabadi, A; Zartab, H; Pazhohi, N; Fanian, F; Janani, L

    2017-02-01

The more recent use of ultrasound scanning allows a direct measurement on unmodified skin and is considered a reliable method for in vivo measurement of epidermal and dermal thickness. The objective of this study was to assess the influence of gender and age on the thickness and echo-density of skin measured by high-frequency ultrasonography (HFUS). The study was carried out on 30 healthy volunteers (17 female, 13 male) aged 24-61 years. The thickness and echo-density of the dermis, as well as the epidermal entrance echo thickness, at five anatomic sites (cheek, neck, palm, dorsal foot, and sole) were measured using two types of B-mode HFUS, at 22 and 50 MHz. The epidermal entrance echo thickness and dermal thickness were higher in males than in females, a difference that was statistically significant on the neck and dorsum of the foot. The echo-density of the dermis was higher in females at all sites, but was statistically significant only on the neck. The epidermal entrance echo thickness and dermal thickness in the younger age group were significantly greater than in the older group on the sole and dorsum of the foot, respectively. Overall, skin thickness decreased with age. High-frequency ultrasonography provides a simple non-invasive method for evaluating skin thickness and echo-density. Gender and age have significant effects on these parameters. Differences in study method, population, and body site likely account for different results previously reported. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium

    NASA Astrophysics Data System (ADS)

    Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.

    2015-06-01

Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51 200 and 409 600 nucleons from 0.080 fm-3 down to 0.00125 fm-3. Simulations are run for proton fractions YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant-density run. Results: The sizes of nuclei in the final state of the decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models at a temperature of 1 MeV, while the constant-density runs produce nuclei smaller than those obtained with NSE. Our MD simulations produce unphysical results, with large rod-like nuclei in the final state, for the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
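
    The abstract does not specify the cluster algorithm; a common choice in MD post-processing is to link nucleons closer than a cutoff distance and count connected components with union–find. This is a hypothetical sketch of that idea — the positions and cutoff below are invented for illustration, not taken from the simulations.

```python
import numpy as np

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def cluster_sizes(pos, cutoff):
    """Link particles closer than `cutoff`; return cluster sizes, largest first."""
    n = len(pos)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                parent[find(parent, i)] = find(parent, j)
    roots = [find(parent, i) for i in range(n)]
    counts = np.bincount(np.unique(roots, return_inverse=True)[1])
    return sorted((int(c) for c in counts), reverse=True)

# Two pairs and one isolated particle:
pos = np.array([[0, 0, 0], [0.5, 0, 0], [10, 0, 0], [10.4, 0, 0], [20, 0, 0]])
print(cluster_sizes(pos, cutoff=1.0))  # [2, 2, 1]
```

    A production version would use a cell list or k-d tree instead of the O(n²) pair loop, and periodic boundary conditions for the simulation box.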

  9. Modulation Doped GaAs/Al(x)Ga(1-x)As Layered Structures with Applications to Field Effect Transistors.

    DTIC Science & Technology

    1982-02-15

function of the doping density at 300 and 77 K for the classical Boltzmann statistics or depletion approximation (solid line) and for the approximate ... Fermi-Dirac statistics (equation (19), dotted line). This comparison demonstrates that the deviation from Boltzmann statistics is quite noticeable ... tunneling Schottky barriers cannot be obtained at these doping levels. The dotted lines are obtained when Boltzmann statistics are used in the Al Ga

  10. A challenge for probing the statistics of interstellar magnetic fields: beyond the Planck resolution with Herschel

    NASA Astrophysics Data System (ADS)

    Bracco, Andrea; André, Philippe; Boulanger, Francois

    2015-08-01

The recent Planck results in polarization at sub-mm wavelengths allow us to gain insight into the Galactic magnetic field topology, revealing its statistical correlation with matter, from the diffuse interstellar medium (ISM) to molecular clouds (MCs) (Planck intermediate results XXXII, XXXIII, XXXV). This correlation has a lot to tell us about the dynamics of the turbulent ISM, stressing the importance of considering magnetic fields in the formation of structures, some of which eventually undergo gravitational collapse, producing new star-forming cores. Investigating the early phases of star formation has been a fundamental scope of the Herschel Gould Belt survey collaboration (http://gouldbelt-herschel.cea.fr), which, in recent years, has thoroughly characterized, at a resolution of a few tens of arcseconds, the statistics of MCs, such as their filamentary structure, kinematics and column density. Although at lower angular resolution, the Planck maps of dust emission at 353 GHz, in intensity and polarization, show that all MCs are complex environments, where we observe a non-trivial correlation between the magnetic field and their density structure. This result opens new perspectives on their formation and evolution, which we have started to explore. In this talk, I will present first results of a comparative analysis of the Herschel-Planck data, in which we combine the high-resolution Herschel maps of some MCs of the Gould Belt with the Planck polarization data, which sample the structure of the field weighted by the density. In particular, I will discuss the large-scale envelopes of the selected MCs and, given the correlation between magnetic field and matter, I will show how to make use of the high-resolution information on the density structure provided by Herschel to investigate the statistics of interstellar magnetic fields in the Planck data.

  11. Rare-event statistics and modular invariance

    NASA Astrophysics Data System (ADS)

    Nechaev, S. K.; Polovnikov, K.

    2018-01-01

Simple geometric arguments based on constructing the Euclid orchard are presented, which explain the equivalence of various types of distributions that result from rare-event statistics. In particular, the spectral density of the exponentially weighted ensemble of linear polymer chains is examined for its number-theoretic properties. It can be shown that the eigenvalue statistics of the corresponding adjacency matrices in the sparse regime show a peculiar hierarchical structure and are described by the popcorn (Thomae) function, which is discontinuous on the dense set of rational numbers. Moreover, the spectral edge density distribution exhibits Lifshitz tails, reminiscent of 1D Anderson localization. Finally, a continuous approximation for the popcorn function is suggested based on the Dedekind η-function, and the hierarchical ultrametric structure of the popcorn-like distributions is demonstrated to be related to a hidden SL(2,Z) modular symmetry.
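
    As a concrete illustration (not from the paper): the popcorn (Thomae) function takes the value 1/q at a rational p/q in lowest terms and 0 at irrationals, which is what produces the hierarchical structure described above. A minimal sketch of its rational values:

```python
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    """Thomae ('popcorn') function at a rational point: f(p/q) = 1/q."""
    # Fraction reduces p/q to lowest terms, so the denominator is exact.
    return Fraction(1, x.denominator)

# Values on the grid k/8: the peaks sit at the "simplest" rationals.
vals = {Fraction(k, 8): thomae(Fraction(k, 8)) for k in range(1, 8)}
print(vals[Fraction(1, 2)])  # 1/2 — the largest peak in (0, 1)
```

    Plotting `thomae` over all reduced fractions with bounded denominator reproduces the familiar self-similar "popcorn" pattern.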

  12. Statistical analysis and modeling of intermittent transport events in the tokamak scrape-off layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Johan, E-mail: anderson.johan@gmail.com; Halpern, Federico D.; Ricci, Paolo

The turbulence observed in the scrape-off layer of a tokamak is often characterized by intermittent events of bursty nature, a feature which raises concerns about the prediction of heat loads on the physical boundaries of the device. It appears thus necessary to delve into the statistical properties of turbulent physical fields such as density, electrostatic potential, and temperature, focusing on the mathematical expression of tails of the probability distribution functions. The method followed here is to generate statistical information from time-traces of the plasma density stemming from Braginskii-type fluid simulations and check this against a first-principles theoretical model. The analysis of the numerical simulations indicates that the probability distribution function of the intermittent process contains strong exponential tails, as predicted by the analytical theory.

  13. Topology of Neutral Hydrogen within the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Chepurnov, A.; Gordon, J.; Lazarian, A.; Stanimirovic, S.

    2008-12-01

    In this paper, genus statistics have been applied to an H I column density map of the Small Magellanic Cloud in order to study its topology. To learn how topology changes with the scale of the system, we provide topology studies for column density maps at varying resolutions. To evaluate the statistical error of the genus, we randomly reassign the phases of the Fourier modes while keeping the amplitudes. We find that at the smallest scales studied (40 pc <= λ <= 80 pc), the genus shift is negative in all regions, implying a clump topology. At the larger scales (110 pc <= λ <= 250 pc), the topology shift is detected to be negative (a "meatball" topology) in four cases and positive (a "swiss cheese" topology) in two cases. In four regions, there is no statistically significant topology shift at large scales.
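
    The error-estimation step described above — randomly reassigning the Fourier phases of the map while keeping the amplitudes — can be sketched as follows. This is an illustrative sketch, not the authors' code; taking the phases from the FFT of a white-noise field keeps the spectrum Hermitian, so the surrogate map stays real:

```python
import numpy as np

def phase_randomize(field, seed=0):
    """Surrogate map: same Fourier amplitudes, randomized (Hermitian) phases."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=field.shape)
    phases = np.fft.fft2(noise)
    phases /= np.abs(phases)   # unit modulus, Hermitian by construction
    fk = np.fft.fft2(field)
    return np.real(np.fft.ifft2(np.abs(fk) * phases))

field = np.random.default_rng(1).normal(size=(64, 64))
surr = phase_randomize(field)
# Parseval: total power is preserved (up to rounding), so the surrogate
# has the same power spectrum but scrambled morphology.
print(np.allclose((field ** 2).sum(), (surr ** 2).sum()))  # True
```

    Computing the genus of many such surrogates gives the spread quoted as the statistical error of the genus curve.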

  14. Quantum Glass of Interacting Bosons with Off-Diagonal Disorder

    NASA Astrophysics Data System (ADS)

    Piekarska, A. M.; Kopeć, T. K.

    2018-04-01

We study disordered interacting bosons described by the Bose-Hubbard model with Gaussian-distributed random tunneling amplitudes. It is shown that the off-diagonal disorder induces a spin-glass-like ground state, characterized by randomly frozen quantum-mechanical U(1) phases of bosons. To access criticality, we employ the "n-replica trick," as in spin-glass theory, and the Trotter-Suzuki method for decomposition of the statistical density operator, along with numerical calculations. The interplay between disorder, quantum, and thermal fluctuations leads to phase diagrams exhibiting a glassy state of bosons, which are studied as a function of model parameters. The considered system may be relevant for quantum simulators of optical-lattice bosons, where the randomness can be introduced in a controlled way. The latter is supported by a proposition of an experimental realization of the system in question.

  15. Development of the Space Debris Sensor (SDS)

    NASA Technical Reports Server (NTRS)

    Hamilton, J.; Liou, J.-C.; Anz-Meador, P. D.; Corsaro, B.; Giovane, F.; Matney, M.; Christiansen, E.

    2017-01-01

    The Space Debris Sensor (SDS) is a NASA experiment scheduled to fly aboard the International Space Station (ISS) starting in 2018. The SDS is the first flight demonstration of the Debris Resistive/Acoustic Grid Orbital NASA-Navy Sensor (DRAGONS) developed and matured at NASA Johnson Space Center's Orbital Debris Program Office. The DRAGONS concept combines several technologies to characterize the size, speed, direction, and density of small impacting objects. With a minimum two-year operational lifetime, SDS is anticipated to collect statistically significant information on orbital debris ranging from 50 microns to 500 microns in size. This paper describes the features of SDS and how data from the ISS mission may be used to update debris environment models. Results of hypervelocity impact testing during the development of SDS and the potential for improvement on future sensors at higher altitudes will be reviewed.

  16. Operational use of neem oil as an alternative anopheline larvicide. Part A: Laboratory and field efficacy.

    PubMed

    Awad, O M; Shimaila, A

    2003-07-01

We conducted a study to determine the laboratory and field efficacy of neem oil against anopheline larvae. No difference in LC50 was observed between laboratory and field strains for temephos, chlorpyriphos-methyl/fenitrothion and neem oil. No difference in susceptibility was found after 3 months of application every 2 weeks. Water treated with a single application of traditional larvicides was free of larvae after 4 weeks; neem oil-treated water, however, was free after 2 weeks but not at 4 weeks. Application of chlorpyriphos-methyl/fenitrothion and neem oil every 2 weeks for 7 rounds resulted in a dramatic reduction in larval density, with no statistically significant differences between the treatments. An adult survey after larviciding also showed no significant difference. The efficacy of crude neem oil appears to be below that of conventional larvicides.

  17. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulations show good agreement with the experimental results. Based on the received response characteristics, a heuristic check-matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.

  18. Development of the Space Debris Sensor

    NASA Technical Reports Server (NTRS)

    Hamilton, J.; Liou, J.-C.; Anz-Meador, P. D.; Corsaro, B.; Giovane, F.; Matney, M.; Christiansen, E.

    2017-01-01

The Space Debris Sensor (SDS) is a NASA experiment scheduled to fly aboard the International Space Station (ISS) starting in 2017. The SDS is the first flight demonstration of the Debris Resistive/Acoustic Grid Orbital NASA-Navy Sensor (DRAGONS) developed and matured by the NASA Orbital Debris Program Office. The DRAGONS concept combines several technologies to characterize the size, speed, direction, and density of small impacting objects. With a minimum two-year operational lifetime, SDS is anticipated to collect statistically significant information on orbital debris ranging from 50 microns to 500 microns in size. This paper describes the SDS features and how data from the ISS mission may be used to update debris environment models. Results of hypervelocity impact testing during the development of SDS and the potential for improvement on future sensors at higher altitudes will be reviewed.

  19. Integrating prior knowledge in multiple testing under dependence with applications to detecting differential DNA methylation.

    PubMed

    Kuan, Pei Fen; Chiang, Derek Y

    2012-09-01

DNA methylation has emerged as an important hallmark of epigenetics. Numerous platforms, including tiling arrays and next-generation sequencing, and experimental protocols are available for profiling DNA methylation. Similar to other tiling array data, DNA methylation data share the characteristic of an inherent correlation structure among nearby probes. However, unlike gene expression or protein-DNA binding data, the varying CpG density, which gives rise to the CpG island, shore and shelf definitions, provides exogenous information for detecting differential methylation. This article aims to introduce a robust testing and probe ranking procedure based on a nonhomogeneous hidden Markov model that incorporates the above-mentioned features for detecting differential methylation. We revisit the seminal work of Sun and Cai (2009, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 71, 393-424) and propose modeling the nonnull using a nonparametric symmetric distribution in two-sided hypothesis testing. We show that this model improves probe ranking and is robust to model misspecification based on extensive simulation studies. We further illustrate that our proposed framework achieves good operating characteristics compared to commonly used methods on real DNA methylation data aimed at detecting differential methylation sites. © 2012, The International Biometric Society.

  20. Communication Limits Due to Photon-Detector Jitter

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Farr, William H.

    2008-01-01

    A theoretical and experimental study was conducted of the limit imposed by photon-detector jitter on the capacity of a pulse-position-modulated optical communication system in which the receiver operates in a photon-counting (weak-signal) regime. Photon-detector jitter is a random delay between impingement of a photon and generation of an electrical pulse by the detector. In the study, jitter statistics were computed from jitter measurements made on several photon detectors. The probability density of jitter was mathematically modeled by use of a weighted sum of Gaussian functions. Parameters of the model were adjusted to fit histograms representing the measured-jitter statistics. Likelihoods of assigning detector-output pulses to correct pulse time slots in the presence of jitter were derived and used to compute channel capacities and corresponding losses due to jitter. It was found that the loss, expressed as the ratio between the signal power needed to achieve a specified capacity in the presence of jitter and that needed to obtain the same capacity in the absence of jitter, is well approximated as a quadratic function of the standard deviation of the jitter in units of pulse-time-slot duration.
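
    The jitter model described above — a probability density written as a weighted sum of Gaussians fitted to measured histograms — can be sketched like this. The weights, means, and widths below are invented placeholders, not the fitted values from the study:

```python
import numpy as np

def jitter_pdf(t, weights, means, sigmas):
    """Gaussian-mixture model of a detector-jitter probability density."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                         # normalize the mixture weights
    means = np.asarray(means, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    t = np.asarray(t, dtype=float)[:, None]
    comps = (w * np.exp(-0.5 * ((t - means) / sigmas) ** 2)
             / (sigmas * np.sqrt(2.0 * np.pi)))
    return comps.sum(axis=1)

t = np.linspace(-5.0, 5.0, 2001)
p = jitter_pdf(t, weights=[0.7, 0.3], means=[0.0, 1.0], sigmas=[0.3, 1.0])
dt = t[1] - t[0]
print(round(float(p.sum() * dt), 3))  # ≈ 1.0 — the density is normalized
```

    In the study, parameters of such a model were adjusted to fit jitter histograms, and the fitted density was then used to compute slot-assignment likelihoods and channel capacity.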

  1. Spatial analysis of electricity demand patterns in Greece: Application of a GIS-based methodological framework

    NASA Astrophysics Data System (ADS)

    Tyralis, Hristos; Mamassis, Nikos; Photis, Yorgos N.

    2016-04-01

We investigate various uses of electricity demand in Greece (agricultural, commercial, domestic, industrial use as well as use for public and municipal authorities and street lighting) and examine their relation with variables such as population, total area, population density and the Gross Domestic Product. The analysis is performed on data which span from 2008 to 2012 and have annual temporal resolution and spatial resolution down to the prefecture level. We visualize the results of the analysis, perform cluster and outlier analysis using the Anselin local Moran's I statistic, and carry out hot spot analysis using the Getis-Ord Gi* statistic. The definition of the spatial patterns and relationships of the aforementioned variables in a GIS environment provides meaningful insight and a better understanding of the regional development model in Greece and justifies the basis for an energy demand forecasting methodology. Acknowledgement: This research has been partly financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: ARISTEIA II: Reinforcement of the interdisciplinary and/ or inter-institutional research and innovation (CRESSENDO project; grant number 5145).
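
    For the cluster/outlier statistic named above: Anselin's local Moran's I for region i is I_i = z_i · Σ_j w_ij z_j / m2, where z are mean-deviations, w is a row-standardized spatial weight matrix, and m2 is the variance. A toy sketch (the four regions and their values are invented, not the Greek prefecture data):

```python
import numpy as np

def local_morans_i(x, W):
    """Local Moran's I with row-standardized binary weights."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    m2 = (z ** 2).mean()                      # population variance
    Wr = W / W.sum(axis=1, keepdims=True)     # row-standardize the weights
    return z * (Wr @ z) / m2

# Four regions on a line with 1-2-3-4 adjacency:
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = [10.0, 12.0, 50.0, 55.0]   # a high-value cluster in regions 3-4
I = local_morans_i(x, W)
print(np.round(I, 3))  # positive values flag regions similar to their neighbors
```

    Significance in practice is assessed by conditional permutation of the values across regions; GIS packages do this automatically.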

  2. Unleashing the Power and Energy of LiFePO4-Based Redox Flow Lithium Battery with a Bifunctional Redox Mediator.

    PubMed

    Zhu, Yun Guang; Du, Yonghua; Jia, Chuankun; Zhou, Mingyue; Fan, Li; Wang, Xingzhu; Wang, Qing

    2017-05-10

Redox flow batteries, despite great operational flexibility and scalability for large-scale energy storage, suffer from low energy density and relatively high cost compared to state-of-the-art Li-ion batteries. Here we report a redox flow lithium battery which operates via the redox targeting reactions of LiFePO4 with a bifunctional redox mediator, 2,3,5,6-tetramethyl-p-phenylenediamine, and combines the superb energy density of a Li-ion battery with the system flexibility of a redox flow battery. The battery has achieved a tank energy density as high as 1023 Wh/L, a power density of 61 mW/cm2, and a voltage efficiency of 91%. Operando X-ray absorption near-edge structure measurements were conducted to monitor the evolution of LiFePO4, providing insightful information on the redox targeting process, critical to device operation and optimization.

  3. High density operation for reactor-relevant power exhaust

    NASA Astrophysics Data System (ADS)

    Wischmeier, M.; ASDEX Upgrade Team; Jet Efda Contributors

    2015-08-01

With increasing size of a tokamak device and the associated fusion power gain, an increasing power flux density towards the divertor needs to be handled. A solution for handling this power flux is crucial for safe and economic operation. Using purely geometric arguments, in an ITER-like divertor this power flux can be reduced by approximately a factor of 100. Based on a conservative extrapolation of current technology for an integrated engineering approach to removing the power deposited on plasma-facing components, a further reduction of the power flux density by up to a factor of 50 via volumetric processes in the plasma is required. Our current ability to interpret existing power exhaust scenarios using numerical transport codes is analyzed, and an operational scenario is presented as a potential solution for ITER-like divertors under high-density, highly radiating, reactor-relevant conditions. Alternative concepts for risk mitigation as well as strategies for moving forward are outlined.

  4. Are breast density and bone mineral density independent risk factors for breast cancer?

    PubMed

    Kerlikowske, Karla; Shepherd, John; Creasman, Jennifer; Tice, Jeffrey A; Ziv, Elad; Cummings, Steve R

    2005-03-02

    Mammographic breast density and bone mineral density (BMD) are markers of cumulative exposure to estrogen. Previous studies have suggested that women with high mammographic breast density or high BMD are at increased risk of breast cancer. We determined whether mammographic breast density and BMD of the hip and spine are correlated and independently associated with breast cancer risk. We conducted a cross-sectional study (N = 15,254) and a nested case-control study (of 208 women with breast cancer and 436 control subjects) among women aged 28 years or older who had a screening mammography examination and hip BMD measurement within 2 years. Breast density for 3105 of the women was classified using the American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) categories, and percentage mammographic breast density among the case patients and control subjects was quantified with a computer-based threshold method. Spearman rank partial correlation coefficient and Pearson's correlation coefficient were used to examine correlations between BI-RADS breast density and BMD and between percentage mammographic breast density and BMD, respectively, in women without breast cancer. Logistic regression was used to examine the association of breast cancer with percentage mammographic breast density and BMD. All statistical tests were two-sided. Neither BI-RADS breast density nor percentage breast density was correlated with hip or spine BMD (correlation coefficient = -.02 and -.01 for BI-RADS, respectively, and -.06 and .01 for percentage breast density, respectively). Neither hip BMD nor spine BMD had a statistically significant relationship with breast cancer risk. 
Women with breast density in the highest sextile had an approximately threefold increased risk of breast cancer compared with women in the lowest sextile (odds ratio = 2.7, 95% confidence interval = 1.4 to 5.4); adjusting for hip or spine BMD did not change the association between breast density and breast cancer risk. Breast density is strongly associated with increased risk of breast cancer, even after taking into account reproductive and hormonal risk factors, whereas BMD, although a possible marker of lifetime exposure to estrogen, is not. Thus, a component of breast density that is independent of estrogen-mediated effects may contribute to breast cancer risk.
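
    For the odds ratio and confidence interval of the kind reported above: a Woolf-type 95% CI is exp(ln OR ± 1.96·SE) with SE = sqrt(1/a + 1/b + 1/c + 1/d) for a 2×2 table. The counts below are invented for illustration; the study's OR of 2.7 came from logistic regression on its own case-control data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

or_, (lo, hi) = odds_ratio_ci(40, 20, 60, 80)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # OR ≈ 2.67, 95% CI ≈ (1.42, 5.02)
```

    Logistic regression generalizes this by adjusting the OR for covariates such as BMD, reproductive and hormonal factors, as in the study.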

  5. The Statistical Consulting Center for Astronomy (SCCA)

    NASA Technical Reports Server (NTRS)

    Akritas, Michael

    2001-01-01

The process by which raw astronomical data acquisition is transformed into scientifically meaningful results and interpretation typically involves many statistical steps. Traditional astronomy limits itself to a narrow range of old and familiar statistical methods: means and standard deviations; least-squares methods like chi(sup 2) minimization; and simple nonparametric procedures such as the Kolmogorov-Smirnov tests. These tools are often inadequate for the complex problems and datasets under investigation, and recent years have witnessed increased usage of maximum-likelihood, survival analysis, multivariate analysis, wavelet and advanced time-series methods. The Statistical Consulting Center for Astronomy (SCCA) assisted astronomers with the use of sophisticated tools and matched these tools with specific problems. The SCCA operated with two professors of statistics and a professor of astronomy working together. Questions were received by e-mail and discussed in detail with the questioner. Summaries of those questions and answers leading to new approaches were posted on the Web (www.state.psu.edu/mga/SCCA). In addition to serving individual astronomers, the SCCA established a Web site for general use that provides hypertext links to selected on-line public-domain statistical software and services. The StatCodes site (www.astro.psu.edu/statcodes) provides over 200 links in the areas of: Bayesian statistics; censored and truncated data; correlation and regression; density estimation and smoothing; general statistics packages and information; image analysis; interactive Web tools; multivariate analysis; multivariate clustering and classification; nonparametric analysis; software written by astronomers; spatial statistics; statistical distributions; time series analysis; and visualization tools. StatCodes has received a remarkably high and constant hit rate of 250 hits/week (over 10,000/year) since its inception in mid-1997. 
It is of interest to scientists both within and outside of astronomy. The most popular sections are multivariate techniques, image analysis, and time series analysis. Hundreds of copies of the ASURV, SLOPES and CENS-TAU codes developed by SCCA scientists were also downloaded from the StatCodes site. In addition to formal SCCA duties, SCCA scientists continued a variety of related activities in astrostatistics, including refereeing statistically oriented papers submitted to the Astrophysical Journal, talks at meetings, including Feigelson's talk to science journalists entitled "The reemergence of astrostatistics" at the American Association for the Advancement of Science meeting, and published papers of astrostatistical content.

  6. Fast Radio Bursts’ Recipes for the Distributions of Dispersion Measures, Flux Densities, and Fluences

    NASA Astrophysics Data System (ADS)

    Niino, Yuu

    2018-05-01

We investigate how the statistical properties of the dispersion measure (DM) and apparent flux density/fluence of (nonrepeating) fast radio bursts (FRBs) are determined by the unknown cosmic rate density history [ρFRB(z)] and luminosity function (LF) of the transient events. We predict the distributions of DMs, flux densities, and fluences of FRBs taking into account the variation of the receiver efficiency within its beam, using analytical models of ρFRB(z) and the LF. Comparing the predictions with the observations, we show that the cumulative distribution of apparent fluences suggests that FRBs originate at cosmological distances and that ρFRB increases with redshift, resembling the cosmic star formation history (CSFH). We also show that an LF model with a bright-end cutoff at log10 Lν (erg s‑1 Hz‑1) ∼ 34 is favored to reproduce the observed DM distribution if ρFRB(z) ∝ CSFH, although the statistical significance of the constraints obtained with the current size of the observed sample is not high. Finally, we find that the correlation between the DM and flux density of FRBs is potentially a powerful tool for distinguishing more robustly, with future observations, whether FRBs are at cosmological distances or in the local universe.

  7. Effects of rearing temperature and density on growth, survival and development of sea cucumber larvae, Apostichopus japonicus (Selenka)

    NASA Astrophysics Data System (ADS)

    Liu, Guangbin; Yang, Hongsheng; Liu, Shilin

    2010-07-01

Under laboratory conditions, the effects of rearing temperature and stocking density on the hatching of fertilized eggs and the growth of auricularia larvae of Apostichopus japonicus were examined. Data series such as larval length and density, metamorphosis time, and larval survival rate were recorded. The statistics showed that for A. japonicus, the survival rate (from fertilized egg to late auricularia) decreased significantly with increasing rearing temperature (P<0.05). SGR also differed significantly among temperatures from day 1 onwards (P<0.05), and the maximal SGR was found on day 9 at 24°C (159.26±3.28). The study clearly indicated that at low temperatures (<24°C) the metamorphosis rate was remarkably higher than at higher temperatures (>26°C). The hatching rate differed significantly between the 0.2-5 ind./ml groups and the 20-50 ind./ml groups. Larvae reared at higher densities reached smaller maximal lengths and needed longer to complete metamorphosis. This study suggests that 21°C and 0.4 ind./ml are the most suitable rearing temperature and stocking density for large-scale artificial breeding of A. japonicus larvae.

  8. Bisphosphonate therapy for osteogenesis imperfecta.

    PubMed

    Dwan, Kerry; Phillipi, Carrie A; Steiner, Robert D; Basel, Donald

    2016-10-19

    Osteogenesis imperfecta is caused by a genetic defect resulting in an abnormal type I collagen bone matrix which typically results in multiple fractures with little or no trauma. Bisphosphonates are used in an attempt to increase bone mineral density and reduce these fractures in people with osteogenesis imperfecta. This is an update of a previously published Cochrane Review. To assess the effectiveness and safety of bisphosphonates in increasing bone mineral density, reducing fractures and improving clinical function in people with osteogenesis imperfecta. We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Inborn Errors of Metabolism Trials Register which comprises references identified from comprehensive electronic database searches, handsearches of journals and conference proceedings. We additionally searched PubMed and major conference proceedings. Date of the most recent search of the Cochrane Cystic Fibrosis and Genetic Disorders Group's Inborn Errors of Metabolism Register: 28 April 2016. Randomised and quasi-randomised controlled trials comparing bisphosphonates to placebo, no treatment, or comparator interventions in all types of osteogenesis imperfecta. Two authors independently extracted data and assessed the risk of bias of the included trials. Fourteen trials (819 participants) were included. Overall, the trials were mainly at a low risk of bias, although selective reporting was an issue in several of the trials. Data for oral bisphosphonates versus placebo could not be aggregated; a statistically significant difference favouring oral bisphosphonates in fracture risk reduction and number of fractures was noted in two trials. No differences were reported in the remaining three trials which commented on fracture incidence. Five trials reported data for spine bone mineral density; all found statistically significant increased lumbar spine density z scores for at least one time point studied. 
For intravenous bisphosphonates versus placebo, aggregated data from two trials showed no statistically significant difference for the number of participants with at least one fracture, risk ratio 0.56 (95% confidence interval 0.30 to 1.06). In the remaining trial no statistically significant difference was noted in fracture incidence. For spine bone mineral density, no statistically significant difference was noted in the aggregated data from two trials, mean difference 9.96 (95% confidence interval -2.51 to 22.43). In the remaining trial a statistically significant difference in mean per cent change in spine bone mineral density z score favoured intravenous bisphosphonates at six and 12 months. Data describing growth, bone pain, and functional outcomes after oral or intravenous bisphosphonate therapy, or both, as compared to placebo were incomplete among all studies, but do not show consistent improvements in these outcomes. Two studies compared different doses of bisphosphonates. No differences were found between doses when bone mineral density, fractures, and height or length z score were assessed. One trial compared oral versus intravenous bisphosphonates and found no differences in primary outcomes. Two studies compared the intravenous bisphosphonates zoledronic acid and pamidronate. There were no significant differences in primary outcomes. However, the studies were at odds as to the relative benefit of zoledronic acid over pamidronate for lumbosacral bone mineral density at 12 months. Bisphosphonates are commonly prescribed to individuals with osteogenesis imperfecta. Current evidence, albeit limited, demonstrates oral or intravenous bisphosphonates increase bone mineral density in children and adults with this condition. Oral and intravenous bisphosphonates were not shown to differ in their ability to increase bone mineral density. 
It is unclear whether oral or intravenous bisphosphonate treatment consistently decreases fractures, though multiple studies report this independently and no studies report an increased fracture rate with treatment. The studies included here do not show bisphosphonates conclusively improve clinical status (reduce pain; improve growth and functional mobility) in people with osteogenesis imperfecta. Given their current widespread and expected continued use, the optimal method, duration of therapy and long-term safety of bisphosphonate therapy require further investigation. In addition, attention should be given to long-term fracture reduction and improvement in quality of life indicators.

  9. Bisphosphonate therapy for osteogenesis imperfecta.

    PubMed

    Dwan, Kerry; Phillipi, Carrie A; Steiner, Robert D; Basel, Donald

    2014-07-23

    Osteogenesis imperfecta is caused by a genetic defect resulting in an abnormal type I collagen bone matrix which typically results in multiple fractures with little or no trauma. Bisphosphonates are used in an attempt to increase bone mineral density and reduce these fractures in people with osteogenesis imperfecta. To assess the effectiveness and safety of bisphosphonates in increasing bone mineral density, reducing fractures and improving clinical function in people with osteogenesis imperfecta. We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Inborn Errors of Metabolism Trials Register which comprises references identified from comprehensive electronic database searches, handsearches of journals and conference proceedings. We additionally searched PubMed and major conference proceedings. Date of the most recent search: 07 April 2014. Randomised and quasi-randomised controlled trials comparing bisphosphonates to placebo, no treatment, or comparator interventions in all types of osteogenesis imperfecta. Two authors independently extracted data and assessed the risk of bias of the included trials. Fourteen trials (819 participants) were included. Overall, the trials were mainly at a low risk of bias, although selective reporting was an issue in several of the trials. Data for oral bisphosphonates versus placebo could not be aggregated; a statistically significant difference favouring oral bisphosphonates in fracture risk reduction and number of fractures was noted in two trials. No differences were reported in the remaining three trials which commented on fracture incidence. Five trials reported data for spine bone mineral density; all found statistically significant increased lumbar spine density z scores for at least one time point studied. 
For intravenous bisphosphonates versus placebo, aggregated data from two trials showed no statistically significant difference for the number of participants with at least one fracture, risk ratio 0.56 (95% confidence interval 0.30 to 1.06). In the remaining trial no statistically significant difference was noted in fracture incidence. For spine bone mineral density, no statistically significant difference was noted in the aggregated data from two trials, mean difference 9.96 (95% confidence interval -2.51 to 22.43). In the remaining trial a statistically significant difference in mean per cent change in spine bone mineral density z score favoured intravenous bisphosphonates at six and 12 months. Data describing growth, bone pain, and functional outcomes after oral or intravenous bisphosphonate therapy, or both, as compared to placebo were incomplete among all studies, but do not show consistent improvements in these outcomes. Two studies compared different doses of bisphosphonates. No differences were found between doses when bone mineral density, fractures, and height or length z score were assessed. One study compared oral versus intravenous bisphosphonates and found no differences in primary outcomes. Two studies compared the intravenous bisphosphonates zoledronic acid and pamidronate. There were no significant differences in primary outcomes. However, the studies were at odds as to the relative benefit of zoledronic acid over pamidronate for lumbosacral bone mineral density at 12 months. Bisphosphonates are commonly prescribed to individuals with osteogenesis imperfecta. Current evidence, albeit limited, demonstrates oral or intravenous bisphosphonates increase bone mineral density in children and adults with this condition. Oral and intravenous bisphosphonates were not shown to differ in their ability to increase bone mineral density. 
It is unclear whether oral or intravenous bisphosphonate treatment consistently decreases fractures, though multiple studies report this independently and no studies report an increased fracture rate with treatment. The studies included here do not show bisphosphonates conclusively improve clinical status (reduce pain; improve growth and functional mobility) in people with osteogenesis imperfecta. Given their current widespread and expected continued use, the optimal method, duration of therapy and long-term safety of bisphosphonate therapy require further investigation. In addition, attention should be given to long-term fracture reduction and improvement in quality of life indicators.

  10. A stepwedge-based method for measuring breast density: observer variability and comparison with human reading

    NASA Astrophysics Data System (ADS)

    Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan

    2010-04-01

    Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.
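    The intra-observer repeatability quoted in this abstract is expressed as a coefficient of variation (CV) over repeated readings of the same image. A minimal sketch of the statistic; the ten percentage-density readings below are invented:

```python
import statistics

# Coefficient of variation of one operator's repeated density readings of a
# single mammogram: CV = 100 * (sample std. dev.) / mean. Readings are hypothetical.
readings = [27.1, 26.8, 27.5, 26.9, 27.3, 27.0, 26.7, 27.4, 27.2, 26.9]  # % density
cv = 100.0 * statistics.stdev(readings) / statistics.mean(readings)
```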

  11. Determining Usability Versus Cost and Yields of a Regional Transport

    NASA Technical Reports Server (NTRS)

    Gvozdenovic, Slobodan

    1999-01-01

    Regional transports are designed to operate on air networks characterized by short trip distances and low-density passenger/cargo flows, i.e., small numbers of passengers per flight. Regional transports have passenger capacities of 10 to 100 seats and operate on routes of 350 to 1000 nautical miles (nm). An air network operated by regional transports has the following characteristics: (1) connecting regional centers; (2) operating low-density passenger/cargo flow services with a minimum of two frequencies per day; (3) operating high-density passenger/cargo flows with more than two frequencies per day; and (4) operating supplemental services whenever the market demands, to support larger-capacity aircraft already operating the same routes. In order to meet passenger requirements by providing low fares and a high or required number of frequencies, airlines must constantly monitor operational costs and keep them low. Obviously, the cost of operating an aircraft must be lower than the yield obtained by transporting passengers and cargo. The requirement to achieve a favorable yield/cost ratio must provide the answer to the question of which aircraft will best serve a specific air network. An air network is defined by the number of services, the trip distance of each service, and the number of flights (frequencies) per day and week.

  12. The Cosmological Dependence of Galaxy Cluster Morphologies

    NASA Astrophysics Data System (ADS)

    Crone, Mary Margaret

    1995-01-01

    Measuring the density of the universe has been a fundamental problem in cosmology ever since the "Big Bang" model was developed over sixty years ago. In this simple and successful model, the age and eventual fate of the universe are determined by its density, its rate of expansion, and the value of a universal "cosmological constant". Analytic models suggest that many properties of galaxy clusters are sensitive to cosmological parameters. In this thesis, I use N-body simulations to examine cluster density profiles, abundances, and degree of subclustering to test the feasibility of using them as cosmological tests. The dependence on both cosmology and initial density field is examined, using a grid of cosmologies and scale-free initial power spectra P(k) ∝ k^n. Einstein-de Sitter (Ω₀ = 1), open (Ω₀ = 0.2 and 0.1), and flat, low-density (Ω₀ = 0.2, λ₀ = 0.8) models are studied, with initial spectral indices n = -2, -1, and 0. Of particular interest are the results for cluster profiles and substructure. The average density profiles are well fit by a power law ρ(r) ∝ r^(-α) for radii where the local density contrast is between 100 and 3000. There is a clear trend toward steeper slopes with both increasing n and decreasing Ω₀, with profile slopes in the open models consistently higher than the Ω₀ = 1 values for the range of n examined. The amount of substructure in each model is quantified and explained in terms of cluster merger histories and the behavior of substructure statistics. The statistic which best distinguishes models is a very simple measure of deviations from symmetry in the projected mass distribution: the "Center-of-Mass Shift" as a function of overdensity. Some statistics which are quite sensitive to substructure perform relatively poorly as cosmological indicators. Density profiles and the Center-of-Mass test are both well-suited for comparison with weak lensing data and galaxy distributions. 
Such data are currently being collected and should be available within the next few years. At that time the predictions described here can be used to set useful cosmological constraints.
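    The power-law profile fit described in this abstract amounts to a least-squares slope in log-log space. A minimal sketch; the radius/density pairs below are synthetic, generated to follow a slope of exactly 2.4, so the fit simply recovers that value:

```python
import math

# Fit rho(r) ~ r^(-alpha) by ordinary least squares on (log r, log rho).
# The data are synthetic: an exact power law with alpha_true = 2.4.
radii = [0.1 * (1.5 ** i) for i in range(8)]
alpha_true = 2.4
dens = [1000.0 * r ** (-alpha_true) for r in radii]

lx = [math.log(r) for r in radii]
ly = [math.log(d) for d in dens]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
alpha_fit = -slope   # recovered profile slope
```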

  13. Comparisons of thermospheric density data sets and models

    NASA Astrophysics Data System (ADS)

    Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin

    During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models, for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling, applied in the conversion from drag to density measurements, which are the main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels, and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.

  14. Inventory of Electric Utility Power Plants in the United States

    EIA Publications

    2002-01-01

    Final issue of this report. Provides detailed statistics on existing generating units operated by electric utilities as of December 31, 2000, and certain summary statistics about new generators planned for operation by electric utilities during the next 5 years.

  15. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bound formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
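    The hierarchical-quadrature idea in this abstract can be sketched in one dimension: compute a moment statistic on two nested quadrature levels and use their difference as a computable estimate of the quadrature error. The smooth surrogate u below stands in for a CFD output and is an illustrative assumption, not the chapter's actual formulation:

```python
import math

# Mean of u(xi) over a uniform random parameter xi ~ U(0, 1), computed on two
# node-nested trapezoid levels; their difference estimates the quadrature error.
def u(xi):
    return math.exp(xi)   # hypothetical smooth output functional

def trapezoid_mean(f, n):
    # composite trapezoid rule on n + 1 equispaced nodes over [0, 1];
    # doubling n reuses every coarse node (node-nested hierarchy)
    h = 1.0 / n
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(1.0))

coarse = trapezoid_mean(u, 8)            # quadrature level l
fine = trapezoid_mean(u, 16)             # quadrature level l + 1
error_estimate = abs(fine - coarse)      # computable surrogate for quadrature error
```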

  16. Better Than Counting: Density Profiles from Force Sampling

    NASA Astrophysics Data System (ADS)

    de las Heras, Daniel; Schmidt, Matthias

    2018-05-01

    Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting occurrences of particles at (histogram-resolved) space points. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, thereby reducing the computation time.
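    The sum rule underlying this abstract can be illustrated in one dimension: in equilibrium dρ/dx = β f(x), where f is the local force density, so integrating a histogram of forces recovers the density profile. A minimal sketch for a single particle in a 1D harmonic well (the potential, sampler, and parameters are illustrative assumptions, not the paper's systems):

```python
import math
import random

# One particle in V(x) = x^2 / 2 at beta = 1; exact density is the standard
# normal, rho(0) ~ 0.399. Compare counting vs. force-sampling estimates.
random.seed(0)
beta = 1.0
def V(x): return 0.5 * x * x
def F(x): return -x                      # force = -dV/dx

nbins, xlo, xhi = 60, -4.0, 4.0
dx = (xhi - xlo) / nbins
counts = [0] * nbins                     # standard counting histogram
force_hist = [0.0] * nbins               # accumulated force per bin

x, nsamples = 0.0, 200000
for _ in range(nsamples):
    xp = x + random.uniform(-0.5, 0.5)   # Metropolis proposal
    if random.random() < math.exp(-beta * (V(xp) - V(x))):
        x = xp
    b = int((x - xlo) / dx)
    if 0 <= b < nbins:
        counts[b] += 1
        force_hist[b] += F(x)

# counting estimate of the density profile
rho_count = [c / (nsamples * dx) for c in counts]

# force-sampling estimate: integrate the force density from the left edge,
# where rho is effectively zero (exp(-8) is negligible)
f_density = [fh / (nsamples * dx) for fh in force_hist]
rho_force, acc = [], 0.0
for fd in f_density:
    acc += beta * fd * dx
    rho_force.append(acc)
```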

  17. The role of predictive uncertainty in the operational management of reservoirs

    NASA Astrophysics Data System (ADS)

    Todini, E.

    2014-09-01

    The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
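    The central point of this abstract — that minimising the loss *expected over the predictive density* is not the same as plugging a point forecast into the loss — can be sketched with a toy decision problem. The inflow distribution, the asymmetric loss, and all numbers below are illustrative assumptions:

```python
import random

# Toy reservoir-release decision: inflow is uncertain with a lognormal
# predictive density; spilling (inflow above the release) is penalised more
# heavily than releasing water unnecessarily.
random.seed(1)
samples = [random.lognormvariate(3.0, 0.6) for _ in range(20000)]  # predictive density
point_forecast = sum(samples) / len(samples)                       # forecast = predictive mean

def loss(release, inflow):
    short = max(inflow - release, 0.0)   # spilled water -> flood risk (weight 5)
    waste = max(release - inflow, 0.0)   # unnecessary release (weight 1)
    return 5.0 * short + 1.0 * waste

candidates = [r * 0.5 for r in range(20, 160)]
# decision 1: plug the point forecast into the loss
r_point = min(candidates, key=lambda r: loss(r, point_forecast))
# decision 2: minimise the loss expected over the full predictive density
r_full = min(candidates, key=lambda r: sum(loss(r, q) for q in samples) / len(samples))
```

With the asymmetric loss, the full-density decision hedges toward a larger release (it targets the 5/6 quantile of inflow rather than the mean), which is exactly the robustness the abstract argues for.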

  18. Detailed modeling of the statistical uncertainty of Thomson scattering measurements

    NASA Astrophysics Data System (ADS)

    Morton, L. A.; Parke, E.; Den Hartog, D. J.

    2013-11-01

    The uncertainty of electron density and temperature fluctuation measurements is determined by statistical uncertainty introduced by multiple noise sources. In order to quantify these uncertainties precisely, a simple but comprehensive model was made of the noise sources in the MST Thomson scattering system and of the resulting variance in the integrated scattered signals. The model agrees well with experimental and simulated results. The signal uncertainties are then used by our existing Bayesian analysis routine to find the most likely electron temperature and density, with confidence intervals. In the model, photonic noise from scattered light and plasma background light is multiplied by the noise enhancement factor (F) of the avalanche photodiode (APD). Electronic noise from the amplifier and digitizer is added. The amplifier response function shapes the signal and induces correlation in the noise. The data analysis routine fits a characteristic pulse to the digitized signals from the amplifier, giving the integrated scattered signals. A finite digitization rate loses information and can cause numerical integration error. We find a formula for the variance of the scattered signals in terms of the background and pulse amplitudes, and three calibration constants. The constants are measured easily under operating conditions, resulting in accurate estimation of the scattered signals' uncertainty. We measure F ≈ 3 for our APDs, in agreement with other measurements for similar APDs. This value is wavelength-independent, simplifying analysis. The correlated noise we observe is reproduced well using a Gaussian response function. Numerical integration error can be made negligible by using an interpolated characteristic pulse, allowing digitization rates as low as the detector bandwidth. The effect of background noise is also determined.
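    The variance model described in this abstract has the general shape of photonic (shot) noise on scattered-plus-background light, multiplied by the APD excess-noise factor F, plus additive electronic noise. A hedged sketch of that functional form; the calibration constants below are hypothetical, not the MST values:

```python
import math

# Assumed-form APD signal-variance model: Var = k * F * (S + B) + sigma_elec^2.
# F ~ 3 matches the value reported for the APDs; k_phot and sigma_elec are
# invented calibration constants for illustration only.
F = 3.0            # APD excess-noise factor
k_phot = 0.02      # photons-to-counts calibration constant (assumed)
sigma_elec = 1.5   # electronic (amplifier + digitizer) noise, counts (assumed)

def signal_variance(scattered, background):
    """Variance of the integrated scattered signal, in digitizer counts^2."""
    shot = k_phot * F * (scattered + background)   # multiplied photonic noise
    return shot + sigma_elec ** 2                  # plus additive electronic noise

def snr(scattered, background):
    return scattered / math.sqrt(signal_variance(scattered, background))
```

The model reproduces the qualitative behaviour the abstract describes: background light degrades the signal-to-noise ratio, and with no light at all only the electronic noise floor remains.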

  19. The urban heat island in Rio de Janeiro, Brazil, in the last 30 years using remote sensing data

    NASA Astrophysics Data System (ADS)

    Peres, Leonardo de Faria; Lucena, Andrews José de; Rotunno Filho, Otto Corrêa; França, José Ricardo de Almeida

    2018-02-01

    The aim of this work is to study urban heat island (UHI) in Metropolitan Area of Rio de Janeiro (MARJ) based on the analysis of land-surface temperature (LST) and land-use patterns retrieved from Landsat-5/Thematic Mapper (TM), Landsat-7/Enhanced Thematic Mapper Plus (ETM+) and Landsat-8/Operational Land Imager (OLI) and Thermal Infrared Sensors (TIRS) data covering a 32-year period between 1984 and 2015. LST temporal evolution is assessed by comparing the average LST composites for 1984-1999 and 2000-2015 where the parametric Student t-test was conducted at 5% significance level to map the pixels where LST for the more recent period is statistically significantly greater than the previous one. The non-parametric Mann-Whitney-Wilcoxon rank sum test has also confirmed at the same 5% significance level that the more recent period (2000-2015) has higher LST values. UHI intensity between 'urban' and 'rural/urban low density' ('vegetation') areas for 1984-1999 and 2000-2015 was established and confirmed by both parametric and non-parametric tests at 1% significance level as 3.3 °C (5.1 °C) and 4.4 °C (7.1 °C), respectively. LST has statistically significantly (p-value < 0.01) increased over time in two of three land cover classes ('urban' and 'urban low density'), respectively by 1.9 °C and 0.9 °C, except in the 'vegetation' class. A spatial analysis was also performed to identify the urban pixels within MARJ where UHI is more intense by subtracting the LST of these pixels from the LST mean value of the 'vegetation' land-use class.
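    The per-pixel significance test in this abstract can be sketched as a one-sided two-sample t-test on the two periods' LST values. A minimal Welch-style sketch; the two samples below are invented, and the critical value is the one-sided 5% threshold for roughly 14 degrees of freedom:

```python
import math
import statistics

# Welch t statistic testing whether mean LST in 2000-2015 exceeds 1984-1999.
# Both pixel samples are hypothetical illustration data.
lst_early = [24.1, 23.8, 24.5, 23.9, 24.2, 24.0, 23.7, 24.3]   # deg C, 1984-1999
lst_late = [25.9, 26.3, 25.6, 26.1, 25.8, 26.0, 26.4, 25.7]    # deg C, 2000-2015

def welch_t(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(lst_early, lst_late)
significant = t > 1.76   # one-sided 5% critical value at ~14 degrees of freedom
```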

  20. The Longterm Centimeter-band Total Flux and Linear Polarization Properties of the Pearson-Readhead Survey Sources

    NASA Astrophysics Data System (ADS)

    Aller, M. F.; Aller, H. D.; Hughes, P. A.

    2001-12-01

    Using centimeter-band total flux and linear polarization observations of the Pearson-Readhead sample sources systematically obtained with the UMRAO 26-m radio telescope during the past 16 years, we identify the range of variability properties and their temporal changes as functions of both optical and radio morphological classification. We find that our earlier statistical analysis, based on a time window of 6.4 years, did not delineate the full amplitude range of the total flux variability; further, several galaxies exhibit long-term, systematic changes or rather infrequent outbursts requiring long-term observations for detection. Using radio classification as a delineator, we confirm, and find additional evidence, that significant changes in flux density can occur in steep spectrum and lobe-dominated objects as well as in compact, flat-spectrum objects. We find that statistically the time-averaged total flux density spectra steepen when longer time windows are included, which we attribute to a selection effect in the source sample. We have identified preferred orientations of the electric vector of the polarized emission (EVPA) in an unbiased manner in several sources, including several QSOs which have exhibited large variations in total flux while maintaining stable EVPAs, and compared these with orientations of the flow direction indicated by VLB morphology. We have looked for systematic, monotonic changes in EVPA which might be expected in the emission from a precessing jet, but none were identified. A Scargle periodogram analysis found no strong evidence for periodicity in any of the sample sources. We thank the NSF for grants AST-8815678, AST-9120224, AST-9421979, and AST-9900723 which provided partial support for this research. The operation of the 26-meter telescope is supported by the University of Michigan Department of Astronomy.

  1. Estimation of background noise level on seismic station using statistical analysis for improved analysis accuracy

    NASA Astrophysics Data System (ADS)

    Han, S. M.; Hahm, I.

    2015-12-01

    We evaluated the background noise level of seismic stations in order to collect high-quality observation data and produce accurate seismic information. In this study, the background noise level was determined using the PSD (Power Spectral Density) method of McNamara and Buland (2004). Because this method uses long-term data, it is influenced not only by the innate electronic noise of the sensor and pulse waves produced during stabilization, but also by missing data, and it can be dominated at certain frequencies by irregular signals that do not reflect site characteristics. It is difficult and inefficient to implement a process that filters out such abnormal signals within an automated system. To solve these problems, we devised a method that extracts, at each period, the data that are normally distributed within 90 to 99% confidence intervals. The applicability of the method was verified using 62 seismic stations with broadband and short-period sensors operated by the KMA (Korea Meteorological Administration). The evaluation standards were the NHNM (New High Noise Model) and NLNM (New Low Noise Model) published by the USGS (United States Geological Survey). Those models were designed from stations in the western United States. However, the Korean Peninsula, surrounded by the ocean on three sides, has a complicated geological structure and a high population density. We therefore re-designed an appropriate model for the Korean Peninsula from the statistically combined results. An important feature is that the secondary-microseism peak appears at a higher frequency band. Acknowledgements: This research was carried out as a part of "Research for the Meteorological and Earthquake Observation Technology and Its Application" supported by the 2015 National Institute of Meteorological Research (NIMR) in the Korea Meteorological Administration.
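    The screening step described in this abstract — keeping, at each period, only PSD values inside a central confidence interval of a fitted normal distribution — can be sketched as iterative outlier rejection. The 95% interval and the synthetic PSD values below (two of which mimic pulse/gap contamination) are illustrative assumptions:

```python
import statistics

# Iteratively keep only values inside the central confidence interval of a
# normal distribution fitted to the current sample, until nothing is rejected.
def screen_psd(values, confidence=0.95):
    kept = list(values)
    while len(kept) > 2:
        mu = statistics.mean(kept)
        sigma = statistics.stdev(kept)
        z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2.0)
        inside = [v for v in kept if abs(v - mu) <= z * sigma]
        if len(inside) == len(kept):
            break
        kept = inside
    return kept

# synthetic PSD values (dB) at one period; -90 and -200 mimic contamination
psd = [-140.2, -139.8, -141.0, -140.5, -139.9, -140.1, -90.0, -200.0]
clean = screen_psd(psd)
```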

  2. Computational Characterization of Impact Induced Multi-Scale Dissipation in Reactive Solid Composites

    DTIC Science & Technology

    2016-07-01

    Predicted variation in (a) hot-spot number density, (b) hot-spot volume fraction, and (c) hot-spot specific surface area for each ensemble with piston speed...packing density, characterized by its effective solid volume fraction φs,0, affects hot-spot statistics for pressure dominated waves corresponding to...distribution in solid volume fraction within each ensemble was nearly Gaussian, and its standard deviation decreased with increasing density. Analysis of

  3. Active Structural Acoustic Control as an Approach to Acoustic Optimization of Lightweight Structures

    DTIC Science & Technology

    2001-06-01

    appropriate approach based on Statistical Energy Analysis (SEA) would facilitate investigations of the structural behavior at a high modal density. On the way...higher frequency investigations an approach based on the Statistical Energy Analysis (SEA) is recommended to describe the structural dynamic behavior

  4. Correlation Between Bone Density and Instantaneous Torque at Implant Site Preparation: A Validation on Polyurethane Foam Blocks of a Device Assessing Density of Jawbones.

    PubMed

    Di Stefano, Danilo Alessio; Arosio, Paolo

    2016-01-01

    Bone density at implant placement sites is one of the key factors affecting implant primary stability, which is a determinant for implant osseointegration and rehabilitation success. Site-specific bone density assessment is, therefore, of paramount importance. Recently, an implant micromotor endowed with an instantaneous torque-measuring system has been introduced. The aim of this study was to assess the reliability of this system. Five blocks with different densities (0.16, 0.26, 0.33, 0.49, and 0.65 g/cm³) were used. A single trained operator measured the density of one of them (0.33 g/cm³), by means of five different devices (20 measurements/device). The five resulting datasets were analyzed through the analysis of variance (ANOVA) model to investigate interdevice variability. As differences were not significant (P = .41), the five devices were each assigned to a different operator, who collected 20 density measurements for each block, both under irrigation (I) and without irrigation (NI). Measurements were pooled and averaged for each block, and their correlation with the actual block-density values was investigated using linear regression analysis. The possible effect of irrigation on density measurement was additionally assessed. Different devices provided reproducible, homogenous results. No significant interoperator variability was observed. Within the physiologic range of densities (> 0.30 g/cm³), the linear regression analysis showed a significant linear correlation between the mean torque measurements and the actual bone densities under both drilling conditions (r = 0.990 [I], r = 0.999 [NI]). Calibration lines were drawn under both conditions. Values collected under irrigation were lower than those collected without irrigation at all densities. The NI/I mean torque ratio was shown to decrease linearly with density (r = 0.998). The mean error introduced by the device-operator system was less than 10% in the range of normal jawbone density. 
Measurements performed with the device were linearly correlated with the blocks' bone densities. The results validate the device as an objective intraoperative tool for bone-density assessment that may contribute to proper jawbone-density evaluation and implant-insertion planning.

  5. Relative mass distributions of neutron-rich thermally fissile nuclei within a statistical model

    NASA Astrophysics Data System (ADS)

    Kumar, Bharat; Kannan, M. T. Senthil; Balasubramaniam, M.; Agrawal, B. K.; Patra, S. K.

    2017-09-01

    We study the binary mass distribution for the recently predicted thermally fissile neutron-rich uranium and thorium nuclei using a statistical model. The level density parameters needed for the study are evaluated from the excitation energies of the temperature-dependent relativistic mean field formalism. The excitation energy and the level density parameter for a given temperature are employed in the convolution integral method to obtain the probability of a particular fragmentation. As representative cases, we present the results for the binary yields of ²⁵⁰U and ²⁵⁴Th. The relative yields are presented for three different temperatures: T = 1, 2, and 3 MeV.
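    The convolution-integral step can be illustrated with a schematic Fermi-gas level density ρ(E) ∝ exp(2√(aE)), a common textbook form; the level density parameters a₁, a₂ and the excitation energy below are made-up round numbers, not values from the paper:

```python
from math import exp, sqrt

def level_density(a, E):
    """Schematic Fermi-gas-type level density (unnormalized)."""
    return exp(2.0 * sqrt(a * E)) if E > 0 else 0.0

def fragmentation_weight(a1, a2, E_total, steps=2000):
    """Convolution integral over the excitation-energy split between
    the two fragments (simple rectangle rule)."""
    dE = E_total / steps
    return sum(level_density(a1, e) * level_density(a2, E_total - e) * dE
               for e in (i * dE for i in range(1, steps)))

# Relative weight of two hypothetical binary splits at a fixed total
# excitation energy (a1, a2 in MeV⁻¹ are illustrative round numbers).
w_sym = fragmentation_weight(12.0, 12.0, E_total=40.0)
w_asym = fragmentation_weight(8.0, 16.0, E_total=40.0)
rel = w_sym / (w_sym + w_asym)
```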

  6. Automated Reporting of DXA Studies Using a Custom-Built Computer Program.

    PubMed

    England, Joseph R; Colletti, Patrick M

    2018-06-01

    Dual-energy x-ray absorptiometry (DXA) scans are a critical population health tool and relatively simple to interpret but can be time consuming to report, often requiring manual transfer of bone mineral density and associated statistics into commercially available dictation systems. We describe here a custom-built computer program for automated reporting of DXA scans using Pydicom, an open-source package built in the Python computer language, and regular expressions to mine DICOM tags for patient information and bone mineral density statistics. This program, easy to emulate by any novice computer programmer, has doubled our efficiency at reporting DXA scans and has eliminated dictation errors.
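    The tag-mining step described here can be sketched with a regular expression over report text. The field names, layout, and values below are hypothetical, not any vendor's actual format; in the real program the text would first be read from DICOM tags with pydicom:

```python
import re

# Hypothetical report text as it might be pulled from a DICOM tag.
dxa_text = """
Region: AP Spine L1-L4
BMD: 0.982 g/cm2
T-score: -1.3
Z-score: -0.4
"""

pattern = re.compile(
    r"BMD:\s*(?P<bmd>[\d.]+)\s*g/cm2.*?"
    r"T-score:\s*(?P<t>-?[\d.]+).*?"
    r"Z-score:\s*(?P<z>-?[\d.]+)",
    re.DOTALL,
)
m = pattern.search(dxa_text)
bmd = float(m.group("bmd"))
t_score = float(m.group("t"))

# WHO-style classification from the T-score (standard thresholds).
if t_score <= -2.5:
    category = "osteoporosis"
elif t_score < -1.0:
    category = "osteopenia"
else:
    category = "normal"
```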

  7. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  8. An investigation of the 'Overlap' between the Statistical-Discrete-Gust and the Power-Spectral-Density analysis methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    This paper presents the results of a NASA investigation of a claimed 'Overlap' between two gust response analysis methods: the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented in this paper for several different airplanes at several different flight conditions indicate that such an 'Overlap' does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  9. Constraints on cosmological parameters from the analysis of the cosmic lens all sky survey radio-selected gravitational lens statistics.

    PubMed

    Chae, K-H; Biggs, A D; Blandford, R D; Browne, I W A; De Bruyn, A G; Fassnacht, C D; Helbig, P; Jackson, N J; King, L J; Koopmans, L V E; Mao, S; Marlow, D R; McKean, J P; Myers, S T; Norbury, M; Pearson, T J; Phillips, P M; Readhead, A C S; Rusin, D; Sykes, C M; Wilkinson, P N; Xanthopoulos, E; York, T

    2002-10-07

    We derive constraints on cosmological parameters and the properties of the lensing galaxies from gravitational lens statistics based on the final Cosmic Lens All Sky Survey data. For a flat universe with a classical cosmological constant, we find that the present matter fraction of the critical density is Ω_m = 0.31 +0.27/−0.14 (68%) +0.12/−0.10 (syst). For a flat universe with a constant equation of state for dark energy, w = p_x(pressure)/ρ_x(energy density), we find w < −0.55 +0.18/−0.11 (68%).

  10. Pasta Nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium

    NASA Astrophysics Data System (ADS)

    Caplan, Matthew; Horowitz, Charles; da Silva Schneider, Andre; Berry, Donald

    2014-09-01

    We simulate the decompression of cold dense nuclear matter, near the nuclear saturation density, in order to study the role of nuclear pasta in r-process nucleosynthesis in neutron star mergers. Our simulations are performed using a classical molecular dynamics model with 51 200 and 409 600 nucleons, and are run on GPUs. We expand our simulation region to decompress systems from initial densities of 0.080 fm⁻³ down to 0.00125 fm⁻³. We study proton fractions of Y_P = 0.05, 0.10, 0.20, 0.30, and 0.40 at T = 0.5, 0.75, and 1 MeV. We calculate the composition of the resulting systems using a cluster algorithm. This composition is in good agreement with nuclear statistical equilibrium models for temperatures of 0.75 and 1 MeV. However, for proton fractions greater than Y_P = 0.2 at a temperature of T = 0.5 MeV, the MD simulations produce non-equilibrium results with large rod-like nuclei. Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
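    The record does not specify the cluster algorithm; a minimal distance-cutoff clustering of particle positions, as is common in MD post-processing, might look like the following union-find sketch (positions and cutoff are invented):

```python
import math

def clusters(positions, cutoff):
    """Group particles into clusters: two particles belong to the same
    cluster when closer than `cutoff` (naive O(N²) union-find sketch)."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < cutoff:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated "nuclei" (positions in arbitrary length units).
pos = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0), (10, 10, 10), (10.4, 10, 10)]
sizes = sorted(len(c) for c in clusters(pos, cutoff=1.0))  # cluster sizes
```

    A production code would replace the O(N²) pair loop with a cell list so that 400 000-nucleon systems remain tractable.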

  11. MSUSTAT.

    ERIC Educational Resources Information Center

    Mauriello, David

    1984-01-01

    Reviews an interactive statistical analysis package (designed to run on 8- and 16-bit machines that utilize CP/M 80 and MS-DOS operating systems), considering its features and uses, documentation, operation, and performance. The package consists of 40 general purpose statistical procedures derived from the classic textbook "Statistical…

  12. Polymer electrolyte membrane water electrolysis: Restraining degradation in the presence of fluctuating power

    NASA Astrophysics Data System (ADS)

    Rakousky, Christoph; Reimer, Uwe; Wippermann, Klaus; Kuhri, Susanne; Carmo, Marcelo; Lueke, Wiebke; Stolten, Detlef

    2017-02-01

    Polymer electrolyte membrane (PEM) water electrolysis generates 'green' hydrogen when conducted with electricity from renewable - but fluctuating - sources like wind or solar photovoltaic. Unfortunately, the long-term stability of the electrolyzer performance is still not fully understood under these input power profiles. In this study, we contrast the degradation behavior of our PEM water electrolysis single cells that occurs under operation with constant and intermittent power and derive preferable operating states. For this purpose, five different current density profiles are used, of which two were constant and three dynamic. Cells operated at 1 A cm⁻² show no degradation. However, degradation was observed for the remaining four profiles, all of which underwent periods of high current density (2 A cm⁻²). In particular, constant operation at 2 A cm⁻² led to the highest degradation rate (194 μV h⁻¹). Degradation can be greatly reduced when the cells are operated with an intermittent profile. Current density switching has a positive effect on durability, as it causes reversible parts of degradation to recover and results in a substantially reduced degradation per mole of hydrogen produced. Two general degradation phenomena were identified, a decreased anode exchange current density and an increased contact resistance at the titanium porous transport layer (Ti-PTL).
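    The phrase "degradation per mole of hydrogen produced" can be unpacked with Faraday's law (2 electrons per H₂ molecule). Only the 194 μV/h constant-operation rate comes from the record; the cell area and the dynamic-profile numbers below are invented for illustration:

```python
# Back-of-envelope comparison of degradation per mole of hydrogen produced.
F = 96485.0   # Faraday constant, C/mol
AREA = 5.0    # hypothetical active area, cm²
HOURS = 1000.0

def moles_h2(current_density, hours, area=AREA):
    """Faraday's law for water electrolysis: 2 electrons per H2 molecule."""
    charge = current_density * area * hours * 3600.0  # A/cm² * cm² * s -> C
    return charge / (2.0 * F)

# Constant 2 A/cm² at 194 μV/h vs a hypothetical intermittent profile that
# averages 1.2 A/cm² and degrades at only 40 μV/h (illustrative value).
deg_constant = 194e-6 * HOURS / moles_h2(2.0, HOURS)  # V lost per mol H2
deg_dynamic = 40e-6 * HOURS / moles_h2(1.2, HOURS)
```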

  13. Breast Density and Benign Breast Disease: Risk Assessment to Identify Women at High Risk of Breast Cancer.

    PubMed

    Tice, Jeffrey A; Miglioretti, Diana L; Li, Chin-Shang; Vachon, Celine M; Gard, Charlotte C; Kerlikowske, Karla

    2015-10-01

    Women with proliferative breast lesions are candidates for primary prevention, but few risk models incorporate benign findings to assess breast cancer risk. We incorporated benign breast disease (BBD) diagnoses into the Breast Cancer Surveillance Consortium (BCSC) risk model, the only breast cancer risk assessment tool that uses breast density. We developed and validated a competing-risk model using 2000 to 2010 SEER data for breast cancer incidence and 2010 vital statistics to adjust for the competing risk of death. We used Cox proportional hazards regression to estimate the relative hazards for age, race/ethnicity, family history of breast cancer, history of breast biopsy, BBD diagnoses, and breast density in the BCSC. We included 1,135,977 women age 35 to 74 years undergoing mammography with no history of breast cancer; 17% of the women had a prior breast biopsy. During a mean follow-up of 6.9 years, 17,908 women were diagnosed with invasive breast cancer. The BCSC BBD model slightly overpredicted risk (expected-to-observed ratio, 1.04; 95% CI, 1.03 to 1.06) and had modest discriminatory accuracy (area under the receiver operator characteristic curve, 0.665). Among women with proliferative findings, adding BBD to the model increased the proportion of women with an estimated 5-year risk of 3% or higher from 9.3% to 27.8% (P<.001). The BCSC BBD model accurately estimates women's risk for breast cancer using breast density and BBD diagnoses. Greater numbers of high-risk women eligible for primary prevention after BBD diagnosis are identified using the BCSC BBD model. © 2015 by American Society of Clinical Oncology.
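    The discrimination measure quoted here (area under the receiver operator characteristic curve) can be computed directly as a rank statistic, without tracing the ROC curve. A stdlib sketch with hypothetical risk estimates, not BCSC data:

```python
def auc(risk_cases, risk_controls):
    """Area under the ROC curve as a rank (Mann-Whitney) statistic: the
    probability that a randomly chosen case outranks a random control."""
    wins = ties = 0
    for rc in risk_cases:
        for rn in risk_controls:
            if rc > rn:
                wins += 1
            elif rc == rn:
                ties += 1
    return (wins + 0.5 * ties) / (len(risk_cases) * len(risk_controls))

# Hypothetical 5-year risk estimates (%) from a risk model.
cases = [3.2, 2.1, 4.0, 1.8]      # women later diagnosed
controls = [1.0, 2.5, 0.8, 1.5]   # women not diagnosed
discrimination = auc(cases, controls)  # 0.5 = chance, 1.0 = perfect
```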

  14. Breast Density and Benign Breast Disease: Risk Assessment to Identify Women at High Risk of Breast Cancer

    PubMed Central

    Tice, Jeffrey A.; Miglioretti, Diana L.; Li, Chin-Shang; Vachon, Celine M.; Gard, Charlotte C.; Kerlikowske, Karla

    2015-01-01

    Purpose Women with proliferative breast lesions are candidates for primary prevention, but few risk models incorporate benign findings to assess breast cancer risk. We incorporated benign breast disease (BBD) diagnoses into the Breast Cancer Surveillance Consortium (BCSC) risk model, the only breast cancer risk assessment tool that uses breast density. Methods We developed and validated a competing-risk model using 2000 to 2010 SEER data for breast cancer incidence and 2010 vital statistics to adjust for the competing risk of death. We used Cox proportional hazards regression to estimate the relative hazards for age, race/ethnicity, family history of breast cancer, history of breast biopsy, BBD diagnoses, and breast density in the BCSC. Results We included 1,135,977 women age 35 to 74 years undergoing mammography with no history of breast cancer; 17% of the women had a prior breast biopsy. During a mean follow-up of 6.9 years, 17,908 women were diagnosed with invasive breast cancer. The BCSC BBD model slightly overpredicted risk (expected-to-observed ratio, 1.04; 95% CI, 1.03 to 1.06) and had modest discriminatory accuracy (area under the receiver operator characteristic curve, 0.665). Among women with proliferative findings, adding BBD to the model increased the proportion of women with an estimated 5-year risk of 3% or higher from 9.3% to 27.8% (P < .001). Conclusion The BCSC BBD model accurately estimates women's risk for breast cancer using breast density and BBD diagnoses. Greater numbers of high-risk women eligible for primary prevention after BBD diagnosis are identified using the BCSC BBD model. PMID:26282663

  15. Very large database of lipids: rationale and design.

    PubMed

    Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R

    2013-11-01

    Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. 
Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.

  16. Nonequilibrium electromagnetics: Local and macroscopic fields and constitutive relationships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker-Jarvis, James; Kabos, Pavel; Holloway, Christopher L.

    We study the electrodynamics of materials using a Liouville-Hamiltonian-based statistical-mechanical theory. Our goal is to develop electrodynamics from an ensemble-average viewpoint that is valid for microscopic and nonequilibrium systems at molecular to submolecular scales. This approach is not based on a Taylor series expansion of the charge density to obtain the multipoles. Instead, expressions of the molecular multipoles are used in an inverse problem to obtain the averaging statistical-density function that is used to obtain the macroscopic fields. The advantages of this method are that the averaging function is constructed in a self-consistent manner and the molecules can either be treated as point multipoles or contain more microstructure. Expressions for the local and macroscopic fields are obtained, and evolution equations for the constitutive parameters are developed. We derive equations for the local field as functions of the applied, polarization, magnetization, strain density, and macroscopic fields.

  17. The Meyer-Neldel rule and the statistical shift of the Fermi level in amorphous semiconductors

    NASA Astrophysics Data System (ADS)

    Kikuchi, Minoru

    1988-11-01

    The statistical model is used to study the origin of the Meyer-Neldel (MN) rule [σ₀ ∝ exp(A E_σ)] in a tetrahedral amorphous system. It is shown that a deep minimum in the gap density of states spectrum can lead to the linearity of the Fermi energy F(T) to the derivative (dF/dkT), as required by the rule. An expression is derived which relates the constant A in the rule to the gap density of states spectrum. The dispersion ranges of σ₀ and E_σ are found to be related to the constant A. Model calculations show a magnitude of A and a wide dispersion of σ₀ and E_σ in fair agreement with the experimental observations. A discussion is given of the extent to which the MN rule depends on the gap density of states spectrum.

  18. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD) and the similarity of two subbands is computed as the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from source images, new fusion rules are developed to combine the subbands with the varied frequencies. That is, the low frequency subbands are fused by utilizing two activity measures based on the regional standard deviation and Shannon entropy, and the high frequency subbands are merged together via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms the conventional NSCT based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
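    The two ingredients named here, a GGD model and a Jensen-Shannon similarity, can be sketched by discretizing two GGDs on a grid and computing the divergence numerically; the (α, β) parameter pairs below are arbitrary examples, not fitted subband values:

```python
import math

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta."""
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x) / alpha) ** beta))

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

grid = [i * 0.01 - 5.0 for i in range(1001)]  # [-5, 5] in steps of 0.01

def discretize(alpha, beta):
    """Sample a GGD on the grid and normalize to a probability vector."""
    w = [ggd_pdf(x, alpha, beta) for x in grid]
    total = sum(w)
    return [v / total for v in w]

p = discretize(1.0, 2.0)  # Gaussian-like subband model
q = discretize(1.5, 1.0)  # heavier-tailed, Laplacian-like subband
similarity = js_divergence(p, q)  # 0 = identical, ln 2 = maximally different
```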

  19. Bispectral analysis of equatorial spread F density irregularities

    NASA Technical Reports Server (NTRS)

    Labelle, J.; Lund, E. J.

    1992-01-01

    Bispectral analysis has been applied to density irregularities at frequencies 5-30 Hz observed with a sounding rocket launched from Peru in March 1983. Unlike the power spectrum, the bispectrum contains statistical information about the phase relations between the Fourier components which make up the waveform. In the case of spread F data from 475 km, the 5-30 Hz portion of the spectrum displays overall enhanced bicoherence relative to that of the background instrumental noise and to that expected from statistical considerations, implying that the observed f^(-2.5) power-law spectrum has a significant non-Gaussian component. This is consistent with previous qualitative analyses. The bicoherence has also been calculated for simulated equatorial spread F density irregularities in approximately the same wavelength regime, and the resulting bispectrum has some features in common with that of the rocket data. The implications of this analysis for equatorial spread F are discussed, and some future investigations are suggested.
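    The bicoherence statistic described here can be estimated from an ensemble of signal segments. A sketch with a synthetic quadratically phase-coupled signal (the bin numbers, segment counts, and noise level are arbitrary, not the rocket data):

```python
import numpy as np

def bicoherence(segments, f1, f2):
    """Squared bicoherence at (f1, f2), estimated over an ensemble of
    equal-length segments; frequencies are in FFT-bin units."""
    X = np.fft.fft(segments, axis=1)
    triple = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
    num = np.abs(triple.mean()) ** 2
    den = (np.abs(X[:, f1] * X[:, f2]) ** 2).mean() * (np.abs(X[:, f1 + f2]) ** 2).mean()
    return float(num / den)

# Synthetic signal: components at bins 5 and 9 plus a quadratically
# phase-coupled component at bin 14 (its phase locked to p1 + p2).
rng = np.random.default_rng(0)
n, nseg = 256, 200
t = np.arange(n)
segs = np.array([
    np.cos(2 * np.pi * 5 * t / n + p1)
    + np.cos(2 * np.pi * 9 * t / n + p2)
    + np.cos(2 * np.pi * 14 * t / n + p1 + p2)
    + 0.1 * rng.standard_normal(n)
    for p1, p2 in rng.uniform(0, 2 * np.pi, (nseg, 2))
])

b_coupled = bicoherence(segs, 5, 9)  # high: phases are coupled
b_control = bicoherence(segs, 5, 8)  # bin 13 carries only noise
```

    For a Gaussian process the bicoherence is statistically indistinguishable from zero, which is why an enhanced bicoherence implies a non-Gaussian component.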

  20. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve Schrodinger equation for high dimensional system that can be reduced into one dimensional system and represented in lowering and raising operators. Lowering and raising operators can be obtained using relationship between original Hamiltonian equation and the (super) potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy level of the Modified Poschl Teller potential. The graph of wave function equation and probability density is simulated by using Delphi 7.0 programming language. Finally, the expectation value of quantum mechanics operator could be calculated analytically using integral form or probability density graph resulted by the programming.

  1. Analysis of operational requirements for medium density air transportation, volume 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The medium density air travel market is examined and defined in terms of numbers of people transported per route per day and frequency of service. The operational characteristics for aircraft to serve this market are determined and a basepoint aircraft is designed from which tradeoff studies and parametric variations can be conducted. The impact of the operational characteristics on the air travel system is evaluated along with the economic viability of the study aircraft. Research and technology programs for future study consideration are identified.

  2. Energy density of ionospheric and solar wind origin ions in the near-Earth magnetotail during substorms

    NASA Technical Reports Server (NTRS)

    Daglis, Ioannis A.; Livi, Stefano; Sarris, Emmanuel T.; Wilken, Berend

    1994-01-01

    Comprehensive energy density studies provide an important measure of the participation of various sources in energization processes and have been relatively rare in the literature. We present a statistical study of the energy density of the near-Earth magnetotail major ions (H(+), O(+), He(++), He(+)) during substorm expansion phase and discuss its implications for the solar wind/magnetosphere/ionosphere coupling. Our aim is to examine the relation between auroral activity and the particle energization during substorms through the correlation between the AE indices and the energy density of the major magnetospheric ions. The data we used here were collected by the charge-energy-mass (CHEM) spectrometer on board the Active Magnetospheric Particle Tracer Explorer (AMPTE)/Charge Composition Explorer (CCE) satellite in the near-equatorial nightside magnetosphere, at geocentric distances of approximately 7 to 9 R_E. CHEM provided the opportunity to conduct the first statistical study of energy density in the near-Earth magnetotail with multispecies particle data extending into the higher energy range (greater than or equal to 20 keV/E). The use of 1-min AE indices in this study should be emphasized, as the use (in previous statistical studies) of the (3-hour) Kp index or of long-time averages of AE indices essentially smoothed out all the information on substorms. The most distinctive feature of our study is the excellent correlation of O(+) energy density with the AE index, in contrast with the remarkably poor He(++) energy density - AE index correlation. Furthermore, we examined the relation of the ion energy density to the electrojet activity during substorm growth phase. The O(+) energy density is strongly correlated with the pre-onset AU index, that is, the eastward electrojet intensity, which represents the growth phase current system. 
Our investigation shows that the near-Earth magnetotail is increasingly fed with energetic ionospheric ions during periods of enhanced dissipation of auroral currents. The participation of the ionosphere in the substorm energization processes seems to be closely, although not solely, associated with the solar wind/magnetosphere coupling. That is, the ionosphere influences actively the substorm energization processes by responding to the increased solar wind/magnetosphere coupling as well as to the unloading dissipation of stored energy, with the increased feeding of new material into the magnetosphere.

  3. Fire danger rating network density

    Treesearch

    Rudy M. King; R. William Furman

    1976-01-01

    Conventional statistical techniques are used to answer the question, "What is the necessary station density for a fire danger network?" The Burning Index of the National Fire-Danger Rating System is used as an indicator of fire danger. Results are presented as station spacing in tabular form for each of six regions in the western United States.

  4. School Composition and the Black-White Achievement Gap: Methodology Companion. NCES 2015-032

    ERIC Educational Resources Information Center

    Bohrnstedt, G.; Kitmitto, S.; Ogut, B.; Sherman, D.; Chan, D.

    2015-01-01

    The School Composition and the Black-White Achievement Gap study was undertaken by the National Center for Education Statistics to present both descriptive and associative information on the relationships among the percentage of students in a school who were Black (referred to as "Black student density" or "density"), the…

  5. Alternative Fuels Data Center: Biodiesel Related Links

    Science.gov Websites

    Provides links to biodiesel-related resources, including the USDA Commodity Operations Program, which seeks to expand industrial consumption of agricultural commodities, and the USDA National Agricultural Statistics Service.

  6. Statistical process control in Deep Space Network operation

    NASA Technical Reports Server (NTRS)

    Hodder, J. A.

    2002-01-01

    This report describes how the Deep Space Mission System (DSMS) Operations Program Office at the Jet Propulsion Laboratory (JPL) uses Statistical Process Control (SPC) to monitor performance and evaluate initiatives for improving processes on the National Aeronautics and Space Administration's (NASA) Deep Space Network (DSN).
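    A minimal SPC sketch in the spirit of this record: a Shewhart-style individuals chart with 3-sigma limits. The metric and numbers are invented, and a production chart would typically estimate sigma from moving ranges rather than the sample standard deviation:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Shewhart-style individuals chart: center line ± 3 sigma (sketch;
    a production chart would derive sigma from moving ranges)."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m, m + 3 * s

def out_of_control(values, lcl, ucl):
    """Flag points falling outside the control limits."""
    return [v for v in values if v < lcl or v > ucl]

# Hypothetical daily pre-track preparation times (minutes) for a DSN antenna.
baseline = [42, 45, 44, 43, 46, 44, 45, 43, 44, 45]
lcl, center, ucl = control_limits(baseline)
alarms = out_of_control([44, 47, 61, 43], lcl, ucl)  # 61 breaches the UCL
```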

  7. Charge-density analysis of a protein structure at subatomic resolution: the human aldose reductase case.

    PubMed

    Guillot, Benoît; Jelsch, Christian; Podjarny, Alberto; Lecomte, Claude

    2008-05-01

    The valence electron density of the protein human aldose reductase was analyzed at 0.66 Å resolution. The methodological developments in the software MoPro to adapt standard charge-density techniques from small molecules to macromolecular structures are described. The deformation electron density visible in initial residual Fourier difference maps was significantly enhanced after high-order refinement. The protein structure was refined after transfer of the experimental library multipolar atom model (ELMAM). The effects on the crystallographic statistics, on the atomic thermal displacement parameters and on the structure stereochemistry are analyzed. Constrained refinements of the transferred valence populations P_val and multipoles P_lm were performed against the X-ray diffraction data on a selected substructure of the protein with low thermal motion. The resulting charge densities are of good quality, especially for chemical groups with many copies present in the polypeptide chain. To check the effect of the starting point on the result of the constrained multipolar refinement, the same charge-density refinement strategy was applied but using an initial neutral spherical atom model, i.e. without transfer from the ELMAM library. The best starting point for a protein multipolar refinement is the structure with the electron density transferred from the database. This can be assessed by the crystallographic statistical indices, including R_free, and the quality of the static deformation electron-density maps, notably on the oxygen electron lone pairs. The analysis of the main-chain bond lengths suggests that stereochemical dictionaries would benefit from a revision based on recently determined unrestrained atomic resolution protein structures.

  8. Statistical Policy Working Paper 24. Electronic Dissemination of Statistical Data

    DOT National Transportation Integrated Search

    1995-11-01

    The report, Statistical Policy Working Paper 24, Electronic Dissemination of Statistical Data, includes several topics, such as Options and Best Uses for Different Media Operation of Electronic Dissemination Service, Customer Service Programs, Cost a...

  9. Statistics of Point Vortex Turbulence in Non-neutral Flows and in Flows with Translational and Rotational Symmetries

    NASA Astrophysics Data System (ADS)

    Esler, J. G.

    2017-12-01

    A theory (Esler and Ashbee in J Fluid Mech 779:275-308, 2015) describing the statistics of N freely-evolving point vortices in a bounded two-dimensional domain is extended. First, the case of a non-neutral vortex gas is addressed, and it is shown that the density of states function can be identified with the probability density function of an infinite sum of independent non-central chi-squared random variables, the details of which depend only on the shape of the domain. Equations for the equilibrium energy spectrum and other statistical quantities follow, the validity of which is verified against direct numerical simulations of the equations of motion. Second, domains with additional conserved quantities associated with a symmetry (e.g., circle, periodic channel) are investigated, and it is shown that the treatment of the non-neutral case can be modified to account for the additional constraint.

  10. Plasma sheet density dependence on Interplanetary Magnetic Field and Solar Wind properties: statistical study using 9+ year of THEMIS data

    NASA Astrophysics Data System (ADS)

    Nykyri, K.; Chu, C.; Dimmock, A. P.

    2017-12-01

    Previous studies have shown that the plasma sheet is tenuous and hot during southward IMF, whereas northward IMF conditions are associated with cold, dense plasma. The cold, dense plasma sheet (CDPS) has a strong influence on magnetospheric dynamics. Closer to Earth, the CDPS could be formed via double high-latitude reconnection, while at increasing tailward distances reconnection, diffusion, and kinetic Alfvén waves associated with the Kelvin-Helmholtz instability are suggested as the dominant sources of cold, dense plasma sheet formation. In this paper we present a statistical correlation study between solar wind, magnetosheath, and plasma sheet properties using 9+ years of THEMIS data in an aberrated GSM frame, and in a normalized coordinate system that takes into account the changes of the magnetopause and bow shock locations with respect to changing solar wind conditions. We present statistical results on the plasma sheet density dependence on IMF orientation and other solar wind properties.

  11. Eigenvalue statistics for the sum of two complex Wishart matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2014-09-01

    The sum of independent Wishart matrices, taken from distributions with unequal covariance matrices, plays a crucial role in multivariate statistics, and has applications in the fields of quantitative finance and telecommunication. However, analytical results concerning the corresponding eigenvalue statistics have remained unavailable, even for the sum of two Wishart matrices. This can be attributed to the complicated and rotationally noninvariant nature of the matrix distribution that makes extracting the information about eigenvalues a nontrivial task. Using a generalization of the Harish-Chandra-Itzykson-Zuber integral, we find exact solution to this problem for the complex Wishart case when one of the covariance matrices is proportional to the identity matrix, while the other is arbitrary. We derive exact and compact expressions for the joint probability density and marginal density of eigenvalues. The analytical results are compared with numerical simulations and we find perfect agreement.
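    The solvable configuration described (one covariance proportional to the identity, the other arbitrary) is easy to explore by Monte Carlo. A sketch of the eigenvalue sampling for the sum of two complex Wishart matrices; the dimension, degrees of freedom, and covariances are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dof = 4, 10  # matrix dimension and degrees of freedom (illustrative)

def complex_wishart(sigma):
    """One complex Wishart draw W = G G^H with covariance sigma."""
    L = np.linalg.cholesky(sigma)
    G = L @ (rng.standard_normal((n, dof))
             + 1j * rng.standard_normal((n, dof))) / np.sqrt(2)
    return G @ G.conj().T

sigma1 = np.eye(n)                       # proportional to identity (solvable case)
sigma2 = np.diag([1.0, 2.0, 3.0, 4.0])  # the "arbitrary" covariance

# Monte Carlo sample of the eigenvalues of W1 + W2, for comparison
# against an analytical joint/marginal density.
eigs = np.concatenate([
    np.linalg.eigvalsh(complex_wishart(sigma1) + complex_wishart(sigma2))
    for _ in range(200)
])
```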

  12. Stiffening of fluid membranes due to thermal undulations: density-matrix renormalization-group study.

    PubMed

    Nishiyama, Yoshihiro

    2002-12-01

    It has been considered that the effective bending rigidity of fluid membranes should be reduced by thermal undulations. However, a recent thorough investigation by Pinnow and Helfrich revealed the significance of measure factors for the partition sum. Accepting the local curvature as a statistical measure, they found that fluid membranes are stiffened macroscopically. In order to examine this remarkable idea, we performed extensive ab initio simulations for a fluid membrane. We set up a transfer matrix that is diagonalized by means of the density-matrix renormalization group. Our method has an advantage in that it allows us to survey various statistical measures. As a consequence, we found that the effective bending rigidity flows toward strong coupling under the choice of local curvature as a statistical measure. On the contrary, for other measures such as normal displacement and tilt angle, we found a clear tendency toward softening.

  13. A geostatistical state-space model of animal densities for stream networks.

    PubMed

    Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H

    2018-06-21

    Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty underestimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania, collected over 34 years (1981 to 2014). We found the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures, with low temporal autocorrelation and moderately high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY, but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved.
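    The spatial building block of such a model can be sketched with a toy simulation: site densities on a stream are drawn log-normally with an exponential covariance that decays with in-stream distance. The distances, range parameter, and mean density below are invented for illustration and are not the authors' model or data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pairwise in-stream distances between 5 survey sites (km)
d = np.array([[0, 1, 2, 3, 4],
              [1, 0, 1, 2, 3],
              [2, 1, 0, 1, 2],
              [3, 2, 1, 0, 1],
              [4, 3, 2, 1, 0]], dtype=float)

range_km = 2.0                       # assumed spatial range parameter
cov = np.exp(-d / range_km)          # exponential covariance on stream distance
log_density = rng.multivariate_normal(np.full(5, np.log(50.0)), cov)
density = np.exp(log_density)        # spatially correlated, strictly positive densities
print(density)
```

    Nearby sites share more of the latent log-density than distant ones, which is exactly the correlation structure a non-spatial model would ignore.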

  14. Development of Composite Materials with High Passive Damping Properties

    DTIC Science & Technology

    2006-05-15

    frequency response function analysis. Sound transmission through sandwich panels was studied using the statistical energy analysis (SEA). Modal density...2.2.3 Finite element models 14 2.2.4 Statistical energy analysis method 15 CHAPTER 3 ANALYSIS OF DAMPING IN SANDWICH MATERIALS. 24 3.1 Equation of...sheets and the core. 2.2.4 Statistical energy analysis method Finite element models are generally only efficient for problems at low and middle frequencies

  15. Interpolation on the manifold of K component GMMs.

    PubMed

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs, motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
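    To make the closure requirement concrete, here is a naive component-wise interpolation between two 1-D K-component GMMs that at least keeps the result a K-component GMM. This is only a baseline sketch with matched components and made-up parameters; the paper's contribution is precisely that respecting the underlying geometry takes more care than this:

```python
import numpy as np

def interpolate_gmm(w1, mu1, s1, w2, mu2, s2, t):
    """Naive component-wise interpolation of two K-component 1-D GMMs
    (weights w, means mu, scales s), assuming components are matched."""
    w = (1 - t) * w1 + t * w2
    w = w / w.sum()                                     # keep weights on the simplex
    mu = (1 - t) * mu1 + t * mu2                        # linear in the means
    s = np.exp((1 - t) * np.log(s1) + t * np.log(s2))   # geometric in the scales
    return w, mu, s

# Two hypothetical 2-component mixtures
w1, mu1, s1 = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5])
w2, mu2, s2 = np.array([0.3, 0.7]), np.array([0.0, 2.0]), np.array([1.0, 0.25])

w, mu, s = interpolate_gmm(w1, mu1, s1, w2, mu2, s2, 0.5)
print(w, mu, s)  # still a valid 2-component GMM
```

    The output stays a valid K-component GMM by construction; the harder problem the paper addresses is doing this in a way that is consistent across a whole field of mixtures.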

  16. Iron removal, energy consumption and operating cost of electrocoagulation of drinking water using a new flow column reactor.

    PubMed

    Hashim, Khalid S; Shaw, Andy; Al Khaddar, Rafid; Pedrola, Montserrat Ortoneda; Phipps, David

    2017-03-15

    The goal of this project was to remove iron from drinking water using a new electrocoagulation (EC) cell. In this research, a flow column was employed in the design of a new electrocoagulation reactor (FCER), in which the water being treated flows through perforated disc electrodes, effectively mixing and aerating it. As a result, the stirring and aerating devices that until now have been widely used in electrocoagulation reactors are unnecessary. The obtained results indicated that FCER reduced the iron concentration from 20 to 0.3 mg/L within 20 min of electrolysis at an initial pH of 6, an inter-electrode distance (ID) of 5 mm, a current density (CD) of 1.5 mA/cm², and a minimum operating cost of 0.22 US$/m³. Additionally, it was found that FCER produces enough H₂ gas to generate energy of 10.14 kWh/m³. Statistically, it was found that the relationship between iron removal and operating parameters could be modelled with an R² of 0.86, and the influence of the operating parameters on iron removal followed the order C₀ > t > CD > pH. Finally, the SEM (scanning electron microscopy) images showed a large number of irregularities on the surface of the anode due to the generation of aluminium hydroxides. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
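    The coagulant dose an EC cell delivers can be estimated from Faraday's law, m = I t M / (z F), where for an aluminium anode each Al³⁺ ion takes three electrons. The current, time, and volume below are illustrative values, not the paper's operating data:

```python
# Faraday's-law estimate of the aluminium dose released by an EC anode
F = 96485.0        # C/mol, Faraday constant
M_AL = 26.98       # g/mol, molar mass of aluminium
Z = 3              # electrons transferred per Al3+ ion

def al_dose_mg_per_L(current_a, time_s, volume_L):
    """Theoretical aluminium released into the treated volume (mg/L)."""
    grams = current_a * time_s * M_AL / (Z * F)
    return 1000.0 * grams / volume_L

# e.g. 0.5 A applied for 20 minutes to 1 L of water
print(al_dose_mg_per_L(0.5, 20 * 60, 1.0))  # about 55.9 mg/L
```

    Actual doses are usually somewhat higher than this theoretical value because of chemical (non-faradaic) dissolution of the anode.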

  17. Using some results about the Lie evolution of differential operators to obtain the Fokker-Planck equation for non-Hamiltonian dynamical systems of interest

    NASA Astrophysics Data System (ADS)

    Bianucci, Marco

    2018-05-01

    Finding the generalized Fokker-Planck equation (FPE) for the reduced probability density function of a subpart of a given complex system is a classical issue of statistical mechanics. The Zwanzig projection perturbation approach to this issue leads to the problem of resumming a series of commutators of differential operators, which we show corresponds to solving the Lie evolution of first order differential operators along the unperturbed Liouvillian of the dynamical system of interest. In this paper, we develop in a systematic way the procedure to formally solve this problem. In particular, we show which basic assumptions concerning the dynamical system of interest are necessary for the Lie evolution to be a group on the space of first order differential operators, and we obtain the coefficients of the so-evolved operators. It is thus demonstrated that if the Liouvillian of the system of interest is not a first order differential operator, then in general the FPE structure breaks down and the master equation contains all powers of the partial derivatives, up to infinity. This work therefore sheds some light on the problem of the ubiquitous emergence of both thermodynamics from microscopic systems and regular regression laws at macroscopic scales. However, these results are very general and can also be applied in other non-Hamiltonian contexts such as geophysical fluid dynamics, where important events, like El Niño, can be considered as large time scale phenomena emerging from the observation of a few ocean degrees of freedom of a more complex system, including the interaction with the atmosphere.
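    For context, the structure at stake is the standard one-dimensional FPE versus the full Kramers-Moyal-type master equation; the following generic forms are standard textbook expressions, not the paper's specific generalized equation:

```latex
% Fokker-Planck structure: only first and second derivatives survive
\partial_t \rho(x,t)
  = -\partial_x\!\left[D^{(1)}(x)\,\rho(x,t)\right]
    + \partial_x^{2}\!\left[D^{(2)}(x)\,\rho(x,t)\right]

% When the Liouvillian is not a first order differential operator,
% the master equation generically keeps all orders of derivatives:
\partial_t \rho(x,t)
  = \sum_{n=1}^{\infty} (-\partial_x)^{n}\!\left[D^{(n)}(x)\,\rho(x,t)\right]
```

    The paper's result identifies the assumptions under which the infinite series truncates to the first form.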

  18. Plasma density behavior with new graphite limiters in the Hefei Tokamak-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asif, M.; Gao, X.; Li, J.

    A new set of actively cooled toroidal double-ring graphite limiters has been developed in the Hefei Tokamak-7 (HT-7) [X. Gao et al., Phys. Plasmas 7, 2933 (2000)] for long pulse operation. The extension of the operational region and the density behavior with the graphite (C) limiters have been studied in this paper. An extended high-density region at high plasma current and low q_a was obtained. The density profile with the C limiter was studied for comparison with the previous molybdenum (Mo) limiter. The critical density for the onset of the multifaceted asymmetric radiation from the edge (MARFE) is observed in the region Z_eff^(1/2) f_GW = 0.9-1.2, where f_GW = n_e/n_GW. (Here n_e is the maximum line-averaged electron density and n_GW is the Greenwald density.) Under the same injected power, the critical density of MARFE onset with the new C limiter is much higher than with the previous Mo limiter.
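    The normalization used above is the standard Greenwald density limit, n_GW = I_p / (pi a²), with the plasma current I_p in MA and the minor radius a in m, giving n_GW in units of 10²⁰ m⁻³. The numbers below are illustrative values, not HT-7 operating data:

```python
import math

def greenwald_density(ip_ma, a_m):
    """Greenwald density limit in units of 10^20 m^-3
    (plasma current in MA, minor radius in m)."""
    return ip_ma / (math.pi * a_m ** 2)

# e.g. a 0.2 MA plasma with a 0.27 m minor radius
print(greenwald_density(0.2, 0.27))  # about 0.873 (x 10^20 m^-3)
```

    The fraction f_GW = n_e/n_GW in the abstract is the measured line-averaged density divided by this limit.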

  19. Nonuniform continuum model for solvatochromism based on frozen-density embedding theory.

    PubMed

    Shedge, Sapana Vitthal; Wesolowski, Tomasz A

    2014-10-20

    Frozen-density embedding theory (FDET) provides the formal framework for multilevel numerical simulations in which a selected subsystem is described at the quantum mechanical level, whereas its environment is described by means of an electron density (the frozen density, $\rho_{\rm B}(\vec r)$). The frozen density $\rho_{\rm B}(\vec r)$ is usually obtained from some lower-level quantum mechanical method applied to the environment, but FDET is not limited to such choices for $\rho_{\rm B}(\vec r)$. The present work concerns the application of FDET in which $\rho_{\rm B}(\vec r)$ is the statistically averaged electron density of the solvent, $\langle \rho_{\rm B}(\vec r) \rangle$. The specific solute-solvent interactions are represented in a statistical manner in $\langle \rho_{\rm B}(\vec r) \rangle$. A fully self-consistent treatment of the solvated chromophore thus involves a single geometry of the chromophore in a given state and the corresponding $\langle \rho_{\rm B}(\vec r) \rangle$. We show that the coupling between the two descriptors can be made in an approximate manner that is applicable to both absorption and emission. The proposed protocol leads to accurate (errors in the range of 0.05 eV) descriptions of the solvatochromic shifts in both absorption and emission. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. The contributions of breast density and common genetic variation to breast cancer risk.

    PubMed

    Vachon, Celine M; Pankratz, V Shane; Scott, Christopher G; Haeberle, Lothar; Ziv, Elad; Jensen, Matthew R; Brandt, Kathleen R; Whaley, Dana H; Olson, Janet E; Heusinger, Katharina; Hack, Carolin C; Jud, Sebastian M; Beckmann, Matthias W; Schulz-Wendtland, Ruediger; Tice, Jeffrey A; Norman, Aaron D; Cunningham, Julie M; Purrington, Kristen S; Easton, Douglas F; Sellers, Thomas A; Kerlikowske, Karla; Fasching, Peter A; Couch, Fergus J

    2015-05-01

    We evaluated whether a 76-locus polygenic risk score (PRS) and Breast Imaging Reporting and Data System (BI-RADS) breast density were independent risk factors within three studies (1643 case patients, 2397 control patients) using logistic regression models. We incorporated the PRS odds ratio (OR) into the Breast Cancer Surveillance Consortium (BCSC) risk-prediction model while accounting for its attributable risk and compared five-year absolute risk predictions between models using area under the curve (AUC) statistics. All statistical tests were two-sided. BI-RADS density and PRS were independent risk factors across all three studies (P interaction = .23). Relative to those with scattered fibroglandular densities and average PRS (2nd quartile), women with extreme density and highest quartile PRS had 2.7-fold (95% confidence interval [CI] = 1.74 to 4.12) increased risk, while those with low density and PRS had reduced risk (OR = 0.30, 95% CI = 0.18 to 0.51). PRS added independent information (P < .001) to the BCSC model and improved discriminatory accuracy from AUC = 0.66 to AUC = 0.69. Although the BCSC-PRS model was well calibrated in case-control data, independent cohort data are needed to test calibration in the general population. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
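    The logic of "an independent risk factor improves the AUC" can be sketched on synthetic data: simulate case status from two independent factors, then compare the AUC of one factor alone against the combined linear predictor. All coefficients and sample sizes below are made up; this is not the study's data or model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized risk factors standing in for BI-RADS density and PRS
n = 5000
density = rng.normal(size=n)
prs = rng.normal(size=n)
p_case = 1 / (1 + np.exp(-(-2.0 + 0.5 * density + 0.6 * prs)))
y = rng.random(n) < p_case             # simulated case/control status

def auc(score, label):
    """AUC via the Mann-Whitney rank-sum statistic (continuous scores, no ties)."""
    m = len(score)
    ranks = np.empty(m)
    ranks[np.argsort(score)] = np.arange(1, m + 1)
    n_pos = label.sum()
    return (ranks[label].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * (m - n_pos))

print(auc(density, y))                       # density alone
print(auc(0.5 * density + 0.6 * prs, y))     # density + PRS combined
```

    Because the second factor carries independent information about case status, the combined score ranks cases above controls more often, raising the AUC, which is the same qualitative pattern reported in the abstract (0.66 to 0.69).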
