Sample records for streams random matrix

  1. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and the receiver, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and the random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
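
    The confusion/diffusion pipeline described above can be sketched in a few lines. This is only an illustration: Python's `random` module and a plain bit string stand in for the laser-generated physical random bits, and all names here are hypothetical.

```python
import random

def _control_matrix_and_keystream(key_bits, n):
    # Derive both the control permutation (confusion) and a byte keystream
    # (diffusion) from one shared random bit string, as in the abstract.
    rng = random.Random(key_bits[:64])   # first 64 bits seed the permutation
    perm = list(range(n))
    rng.shuffle(perm)
    ks = [int(key_bits[64 + 8 * i: 72 + 8 * i], 2) for i in range(n)]
    return perm, ks

def encrypt(pixels, key_bits):
    perm, ks = _control_matrix_and_keystream(key_bits, len(pixels))
    confused = [pixels[p] for p in perm]           # pixel confusion
    return [c ^ k for c, k in zip(confused, ks)]   # pixel diffusion (XOR)

def decrypt(cipher, key_bits):
    perm, ks = _control_matrix_and_keystream(key_bits, len(cipher))
    confused = [c ^ k for c, k in zip(cipher, ks)]
    plain = [0] * len(cipher)
    for i, p in enumerate(perm):                   # invert the permutation
        plain[p] = confused[i]
    return plain
```

    Both sides regenerate the same permutation and keystream from the shared bits, so decryption is exact.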

  2. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, the Gauss-Jordan elimination used to decode the encoded blocks and to check linear dependency among the coefficient vectors can impose considerable computational complexity on peers. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method that guarantees no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into each encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. Results obtained from simulation in OMNET++ show that it substantially outperforms RNC with Gauss-Jordan elimination, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.

  4. Statistical Methods in AI: Rare Event Learning Using Associative Rules and Higher-Order Statistics

    NASA Astrophysics Data System (ADS)

    Iyer, V.; Shetty, S.; Iyengar, S. S.

    2015-07-01

    Rare-event learning has until recently seen little active research, owing to the unavailability of algorithms that can deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing with an ensemble of trees using bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that temporal-window-based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can then be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node-splitting technique that can be learned iteratively and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show on empirical datasets that the explicit rule-learning computation is linear in time and depends only on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leaves, keeping pre-processing computation to n × t log m, compared to N² for the Gram matrix. We also show that task-based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
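
    The Hoeffding-bound convergence claim corresponds to the standard sample-size calculation; a sketch, under the (unstated) assumption that the windowed samples are bounded in [0, 1]:

```python
import math

def hoeffding_samples(epsilon, delta):
    """Smallest n such that 2 * exp(-2 * n * epsilon**2) <= delta, i.e. the
    mean of n samples bounded in [0, 1] lies within epsilon of its
    expectation with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
```

    For example, estimating a windowed stream statistic to within 0.1 with 95 % confidence needs 185 samples, independent of the total stream length, which is what makes the window-based sampling usable in real time.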

  5. Integrated optic vector-matrix multiplier

    DOEpatents

    Watts, Michael R [Albuquerque, NM]

    2011-09-27

    A vector-matrix multiplier is disclosed which uses N different wavelengths of light that are modulated with amplitudes representing elements of an N×1 vector and combined to form an input wavelength-division multiplexed (WDM) light stream. The input WDM light stream is split into N streamlets from which each wavelength of the light is individually coupled out and modulated for a second time using an input signal representing elements of an M×N matrix, and is then coupled into an output waveguide for each streamlet to form an output WDM light stream which is detected to generate a product of the vector and matrix. The vector-matrix multiplier can be formed as an integrated optical circuit using either waveguide amplitude modulators or ring resonator amplitude modulators.
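
    The optical computation reduces to an ordinary product y = Mx; a sketch of the model, with each row summation standing in for one streamlet's photodetector:

```python
def wdm_vector_matrix_multiply(M, x):
    """Model of the optical multiplier: each of the N wavelengths carries one
    element of x; streamlet m re-modulates wavelength n by M[m][n], and the
    detector's sum over wavelengths yields one element of y = M x."""
    y = []
    for row in M:  # one streamlet / output waveguide / detector per matrix row
        detected = sum(m_mn * x_n for m_mn, x_n in zip(row, x))
        y.append(detected)
    return y
```

    In the device the second modulation and the detector sum happen in parallel across all M streamlets, which is the multiplier's advantage over this serial sketch.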

  6. On the cross-stream spectral method for the Orr-Sommerfeld equation

    NASA Technical Reports Server (NTRS)

    Zorumski, William E.; Hodge, Steven L.

    1993-01-01

    Cross-stream modes are defined as solutions to the Orr-Sommerfeld equation which propagate normal to the flow direction. These modes are utilized as a basis for a Hilbert space to approximate the spectrum of the Orr-Sommerfeld equation with plane Poiseuille flow. The cross-stream basis leads to a standard eigenvalue problem for the frequencies of Poiseuille flow instability waves. The coefficient matrix in the eigenvalue problem is shown to be the sum of a real matrix and a negative-imaginary diagonal matrix which represents the frequencies of the cross-stream modes. The real coefficient matrix is shown to approach a Toeplitz matrix when the row and column indices are large. The Toeplitz matrix is diagonally dominant, and the diagonal elements vary inversely in magnitude with diagonal position. The Poiseuille flow eigenvalues are shown to lie within Gersgorin disks with radii bounded by the product of the average flow speed and the axial wavenumber. It is shown that the eigenvalues approach the Gersgorin disk centers when the mode index is large, so that the method may be used to compute spectra with an essentially unlimited number of elements. When the mode index is large, the real part of the eigenvalue is the product of the axial wavenumber and the average flow speed, and the imaginary part of the eigenvalue is identical to the corresponding cross-stream mode frequency. The cross-stream method is numerically well-conditioned in comparison to Chebyshev-based methods, providing equivalent accuracy for small mode indices and superior accuracy for large indices.
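
    The Gersgorin (Gershgorin) disk localization used above is easy to reproduce for any square matrix; a generic sketch:

```python
def gershgorin_disks(A):
    """Centers and radii of the Gershgorin disks of a square matrix:
    every eigenvalue lies in at least one disk
    |z - A[i][i]| <= sum of |A[i][j]| over j != i."""
    disks = []
    for i, row in enumerate(A):
        radius = sum(abs(a) for j, a in enumerate(row) if j != i)
        disks.append((row[i], radius))
    return disks
```

    For the abstract's coefficient matrix the disk centers carry the cross-stream mode frequencies, so eigenvalues collapsing onto the centers at large mode index is what makes the large-index spectrum essentially free to compute.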

  7. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    PubMed

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and at an associated "modified-random" site on each stream, accessed via the road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites, evaluated against reference-condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  8. A novel image encryption algorithm based on chaos maps with Markov properties

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaos with Markov properties was investigated and such an algorithm is proposed. This kind of chaos has higher complexity than the Logistic and Tent maps while retaining uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space, and it performs better than the original lattice. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which dynamically changes the permutation matrix and the key stream. Experiments show that the key stream passes the SP800-22 test. The novel image encryption resists CPA, CCA and differential attacks. The algorithm is sensitive to the initial key and changes the distribution of the pixel values of the image; the correlation of adjacent pixels is also eliminated. Compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, closer to a true random number, and it is also efficient to implement, which shows its value for common use.
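
    A sketch of a chaos-based key stream generator, using the Logistic map that the paper compares against rather than the authors' Markov-property map; this is an illustration only and is not cryptographically secure:

```python
def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Byte keystream from iterating the Logistic map x -> r*x*(1-x).
    x0 in (0, 1) acts as the key; burn-in iterations discard the transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # quantize the orbit to one byte
    return out
```

    Sensitivity to the initial key, which the abstract emphasizes, shows up directly: keystreams from nearby seeds decorrelate completely after the burn-in.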

  9. Reference Conditions for Streams in the Grand Prairie Natural Division of Illinois

    NASA Astrophysics Data System (ADS)

    Sangunett, B.; Dewalt, R.

    2005-05-01

    As part of the Critical Trends Assessment Program (CTAP) of the Illinois Department of Natural Resources (IDNR), 12 potential reference quality stream sites in the Grand Prairie Natural Division were evaluated in May 2004. This agriculturally dominated region, located in east central Illinois, is the most highly modified in the state. The quality of these sites was assessed using a modified Hilsenhoff Biotic Index and species richness of Ephemeroptera, Plecoptera, and Trichoptera (EPT) insect orders and a 12 parameter Habitat Quality Index (HQI). Illinois EPA high quality fish stations, Illinois Natural History Survey insect collection data, and best professional knowledge were used to choose which streams to evaluate. For analysis, reference quality streams were compared to 37 randomly selected meandering streams and 26 randomly selected channelized streams which were assessed by CTAP between 1997 and 2001. The results showed that the reference streams exceeded both taxa richness and habitat quality of randomly selected streams in the region. Both random meandering sites and reference quality sites increased in taxa richness and HQI as stream width increased. Randomly selected channelized streams had about the same taxa richness and HQI regardless of width.

  10. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
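
    One of the simplest randomness features such a classifier could draw on is byte entropy; a sketch (the paper's actual feature set is not specified here, so this is an illustrative assumption):

```python
import math
from collections import Counter

def byte_entropy(data):
    """Shannon entropy in bits per byte of a byte string. Encrypted or
    compressed streams sit near the 8-bit maximum; plain text sits well
    below it, which makes entropy a usable randomness feature."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

    A feature vector for the regression would combine several such statistics (entropy, chi-square, runs counts) per stream window.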

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garn, Troy G; Law, Jack D; Greenhalgh, Mitchell R

    A composite media including at least one crystalline aluminosilicate material in polyacrylonitrile. A method of forming a composite media is also disclosed. The method comprises dissolving polyacrylonitrile in an organic solvent to form a matrix solution. At least one crystalline aluminosilicate material is combined with the matrix solution to form a composite media solution. The organic solvent present in the composite media solution is diluted. The composite media solution is solidified. In addition, a method of processing a fluid stream is disclosed. The method comprises providing beads of a composite media comprising at least one crystalline aluminosilicate material dispersed in a polyacrylonitrile matrix. The beads of the composite media are contacted with a fluid stream comprising at least one constituent. The at least one constituent is substantially removed from the fluid stream.

  12. Quantum enigma cipher as a generalization of the quantum stream cipher

    NASA Astrophysics Data System (ADS)

    Kato, Kentaro

    2016-09-01

    Various types of randomization for the quantum stream cipher by the Y00 protocol have been developed so far. In particular, the analysis of immunity against correlation attacks with a new type of randomization by Hirota and Kurosawa prompted a new look at the quantum stream cipher by the Y00 protocol (Quant. Inform. Process. 6(2) 2007). From the preceding study of the quantum stream cipher, we recognized that the quantum stream cipher by the Y00 protocol could be generalized to a new type of physical cipher with the potential to exceed the Shannon limit by installing additional randomization mechanisms, in accordance with the laws of quantum mechanics. We call this new type of physical random cipher the quantum enigma cipher. In this article, we introduce recent developments of the quantum stream cipher by the Y00 protocol and future plans toward the quantum enigma cipher.

  13. Blood vessel endothelium-directed tumor cell streaming in breast tumors requires the HGF/C-Met signaling pathway

    PubMed Central

    Leung, E; Xue, A; Wang, Y; Rougerie, P; Sharma, V P; Eddy, R; Cox, D; Condeelis, J

    2017-01-01

    During metastasis to distant sites, tumor cells migrate to blood vessels. In vivo, breast tumor cells utilize a specialized mode of migration known as streaming, where a linear assembly of tumor cells migrates directionally towards blood vessels on fibronectin-collagen I-containing extracellular matrix (ECM) fibers in response to chemotactic signals. We have successfully reconstructed tumor cell streaming in vitro by co-plating tumor cells, macrophages and endothelial cells on 2.5 μm thick ECM-coated micro-patterned substrates. We found that tumor cells and macrophages, when plated together on the micro-patterned substrates, do not demonstrate sustained directional migration in only one direction (sustained directionality) but show random bi-directional walking. Sustained directionality of tumor cells as seen in vivo was established in vitro when beads coated with human umbilical vein endothelial cells were placed at one end of the micro-patterned 'ECM fibers' within the assay. We demonstrated that these endothelial cells supply the hepatocyte growth factor (HGF) required for the chemotactic gradient responsible for sustained directionality. Using this in vitro reconstituted streaming system, we found that directional streaming is dependent on, and most effectively blocked by, inhibiting the HGF/C-Met signaling pathway between endothelial cells and tumor cells. Key observations made with the in vitro reconstituted system implicating C-Met signaling were confirmed in vivo in mammary tumors using the in vivo invasion assay and intravital multiphoton imaging of tumor cell streaming. These results establish HGF/C-Met as a central organizing signal in blood vessel-directed tumor cell migration in vivo and highlight a promising role for C-Met inhibitors in blocking tumor cell streaming and metastasis in vivo, and for use in human trials. PMID:27893712

  14. Regeneration of an aqueous solution from an acid gas absorption process by matrix stripping

    DOEpatents

    Rochelle, Gary T [Austin, TX]; Oyenekan, Babatunde A [Katy, TX]

    2011-03-08

    Carbon dioxide and other acid gases are removed from gaseous streams using aqueous absorption and stripping processes. By replacing the conventional stripper used to regenerate the aqueous solvent and capture the acid gas with a matrix stripping configuration, less energy is consumed. The matrix stripping configuration uses two or more reboiled strippers at different pressures. The rich feed from the absorption equipment is split among the strippers, and partially regenerated solvent from the highest pressure stripper flows to the middle of sequentially lower pressure strippers in a "matrix" pattern. By selecting certain parameters of the matrix stripping configuration such that the total energy required by the strippers to achieve a desired percentage of acid gas removal from the gaseous stream is minimized, further energy savings can be realized.

  15. Optimizing the well pumping rate and its distance from a stream

    NASA Astrophysics Data System (ADS)

    Abdel-Hafez, M. H.; Ogden, F. L.

    2008-12-01

    Both ground water and surface water are very important components of the water resources. Since they are coupled systems in riparian areas, management strategies that neglect interactions between them penalize senior surface water rights to the benefit of junior ground water rights holders in the prior appropriation system. Water rights managers face a problem in deciding which wells need to be shut down, and when, in the case of depleted stream flow. A simulation model representing a combined hypothetical aquifer and stream has been developed using MODFLOW 2000 to capture parameter sensitivity, test management strategies and guide field data collection campaigns to support modeling. An optimization approach has been applied to optimize both the well distance from the stream and the maximum pumping rate that does not affect the stream discharge downstream of the pumping wells. Conjunctive management can be modeled by coupling the numerical simulation model with optimization techniques using the response matrix technique. The response matrix can be obtained by calculating the response coefficient for each well and stream. The main assumption of the response matrix technique is that the amount of water drawn from the stream into the aquifer is linearly proportional to the well pumping rate (Barlow et al. 2003). The results are presented in dimensionless form, which can be used by water managers to resolve conflicts between surface water and ground water rights holders by making the appropriate decision about which wells need to be shut down first.
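
    The linearity assumption behind the response matrix technique can be sketched directly. The coefficients would come from one simulation run per unit-rate well; the uniform-rate helper below is a hypothetical management rule added for illustration, not something from the abstract.

```python
def stream_depletion(response, pumping):
    """Linear response-matrix model: depletion in stream reach i is
    sum over wells j of response[i][j] * pumping[j], by superposition."""
    return [sum(r_ij * q_j for r_ij, q_j in zip(row, pumping)) for row in response]

def max_uniform_rate(response, limit):
    """Largest equal pumping rate across all wells that keeps every
    reach's depletion at or below `limit` (hypothetical rule)."""
    worst = max(sum(row) for row in response)   # most-affected reach per unit rate
    return limit / worst
```

    Because the model is linear, a manager can rank wells by their column of the response matrix to decide which to shut down first.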

  16. A Study on the Stream Cipher Embedded Magic Square of Random Access Files

    NASA Astrophysics Data System (ADS)

    Liu, Chenglian; Zhao, Jian-Ming; Rafsanjani, Marjan Kuchaki; Shen, Yijuan

    2011-09-01

    Magic squares and stream ciphers are both interesting and well-studied topics. In this paper, we propose a new scheme that applies a stream cipher to random access files based on the magic square method. Two thresholds are required to secure the data: decrypting with the stream cipher alone is not sufficient to recover the original source. In addition, we improve the stream cipher model to strengthen its defense efficiently while retaining high speed in most parts of the key stream generator.

  17. Methods of removing a constituent from a feed stream using adsorption media

    DOEpatents

    Tranter, Troy J [Idaho Falls, ID]; Mann, Nicholas R [Rigby, ID]; Todd, Terry A [Aberdeen, ID]; Herbst, Ronald S [Idaho Falls, ID]

    2011-05-24

    A method of producing an adsorption medium to remove at least one constituent from a feed stream. The method comprises dissolving and/or suspending at least one metal compound in a solvent to form a metal solution, dissolving polyacrylonitrile into the metal solution to form a PAN-metal solution, and depositing the PAN-metal solution into a quenching bath to produce the adsorption medium. The at least one constituent, such as arsenic, selenium, or antimony, is removed from the feed stream by passing the feed stream through the adsorption medium. An adsorption medium having an increased metal loading and increased capacity for arresting the at least one constituent to be removed is also disclosed. The adsorption medium includes a polyacrylonitrile matrix and at least one metal hydroxide incorporated into the polyacrylonitrile matrix.

  18. Adjustment of Pesticide Concentrations for Temporal Changes in Analytical Recovery, 1992-2006

    USGS Publications Warehouse

    Martin, Jeffrey D.; Stone, Wesley W.; Wydoski, Duane S.; Sandstrom, Mark W.

    2009-01-01

    Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ('spiked' QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report examines temporal changes in the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as 'pesticides') that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 to 2006 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Temporal changes in pesticide recovery were investigated by calculating robust, locally weighted scatterplot smooths (lowess smooths) for the time series of pesticide recoveries in 5,132 laboratory reagent spikes; 1,234 stream-water matrix spikes; and 863 groundwater matrix spikes. A 10-percent smoothing window was selected to show broad, 6- to 12-month time scale changes in recovery for most of the 52 pesticides. Temporal patterns in recovery were similar (in phase) for laboratory reagent spikes and for matrix spikes for most pesticides. In-phase temporal changes among spike types support the hypothesis that temporal change in method performance is the primary cause of temporal change in recovery.
    Although temporal patterns of recovery were in phase for most pesticides, recovery in matrix spikes was greater than recovery in reagent spikes for nearly every pesticide. Models of recovery based on matrix spikes are deemed more appropriate for adjusting concentrations of pesticides measured in groundwater and stream-water samples than models based on laboratory reagent spikes because (1) matrix spikes are expected to more closely match the matrix of environmental water samples than are reagent spikes and (2) method performance is often matrix dependent, as was shown by higher recovery in matrix spikes for most of the pesticides. Models of recovery, based on lowess smooths of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
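
    The adjustment itself is a one-line calculation; a sketch, assuming the smoothed matrix-spike recovery for the sample date has already been read off the lowess model:

```python
def adjust_concentration(measured, recovery_percent):
    """Adjust a measured pesticide concentration to 100 % recovery by
    dividing by the modeled recovery fraction for the sample date (the
    recovery value itself comes from a lowess smooth of matrix spikes)."""
    return measured / (recovery_percent / 100.0)
```

    For example, a concentration measured at 0.08 µg/L during a period when matrix-spike recovery was modeled at 80 % would be adjusted to 0.10 µg/L.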

  19. MATRIX-ASSISTED LASER DESORPTION IONIZATION OF SIZE AND COMPOSITION SELECTED AEROSOL PARTICLES. (R823980)

    EPA Science Inventory

    Matrix-assisted laser desorption/ionization (MALDI) was performed on individual,
    size-selected aerosol particles in the 2-8 μm diameter range. Monodisperse aerosol droplets
    containing matrix, analyte, and solvent were generated and entrained in a dry stream of air. The dr...

  20. Nanocomposite thin films for optical gas sensing

    DOEpatents

    Ohodnicki, Paul R; Brown, Thomas D

    2014-06-03

    The disclosure relates to a plasmon resonance-based method for gas sensing in a gas stream utilizing a gas sensing material. In an embodiment the gas stream has a temperature greater than about 500 °C. The gas sensing material is comprised of gold nanoparticles having an average nanoparticle diameter of less than about 100 nanometers dispersed in an inert matrix having a bandgap greater than or equal to 5 eV, and an oxygen ion conductivity less than approximately 10⁻⁷ S/cm at a temperature of 700 °C. Exemplary inert matrix materials include SiO₂, Al₂O₃, and Si₃N₄, as well as modifications to the effective refractive indices through combinations and/or doping of such materials. Changes in the chemical composition of the gas stream are detected by changes in the plasmon resonance peak. The method disclosed offers significant advantage over the active and reducible matrix materials typically utilized, such as yttria-stabilized zirconia (YSZ) or TiO₂.

  1. Autonomous Byte Stream Randomizer

    NASA Technical Reports Server (NTRS)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, data transmission has to be efficient, without any redundant or excess metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in the data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability, possessing the ability to generate the same cryptographically secure sequence on different machines and at different time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
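
    The randomize/derandomize cycle described above can be sketched with a seeded Fisher-Yates shuffle. Python's `random.Random` stands in here for the cryptographically secure generator the software actually requires; any seeded PRNG that replays the same sequence on any machine demonstrates the idea.

```python
import random

def randomize_bytes(data, seed):
    """Seeded in-place Fisher-Yates shuffle of a byte stream; the same seed
    regenerates the same swap sequence on any machine."""
    buf = bytearray(data)
    rng = random.Random(seed)
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randrange(i + 1)          # unbiased choice in [0, i]
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def derandomize_bytes(data, seed):
    """Replay the swap sequence from the seed, then undo the swaps in
    reverse order to reconstruct the original byte stream."""
    buf = bytearray(data)
    rng = random.Random(seed)
    swaps = [(i, rng.randrange(i + 1)) for i in range(len(buf) - 1, 0, -1)]
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)
```

    Note the shuffle only permutes bytes; without the seed the data is unreadable in structure but not confidential, which is why the abstract positions this as a layer on top of encryption rather than a replacement for it.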

  2. Streaming PCA with many missing entries.

    DOT National Transportation Integrated Search

    2015-12-01

    This paper considers the problem of matrix completion when some number of the columns are completely and arbitrarily corrupted, potentially by a malicious adversary. It is well-known that standard algorithms for matrix completion can return arbit...

  3. Investigating the use of the dual-polarized and large incident angle of SAR data for mapping the fluvial and aeolian deposits

    NASA Astrophysics Data System (ADS)

    Gaber, Ahmed; Amarah, Bassam A.; Abdelfattah, Mohamed; Ali, Sarah

    2017-12-01

    Mapping the spatial distribution of fluvial deposits in terms of particle size, as well as imaging near-surface features beneath non-vegetated aeolian sand sheets, provides valuable geological information. This work therefore investigates the contribution of dual-polarization SAR data to classifying and mapping surface sediments, and the effect of the radar incident angle on improving images of features hidden under desert sand cover. For mapping the fluvial deposits, covariance matrices ([C2]) were generated from four dual-polarized ALOS/PALSAR-1 scenes covering Wadi El Matulla, East Qena, Egypt. The [C2] matrix was used to generate a supervised classification map with three main classes (gravel, gravel/sand and sand), and the polarimetric scattering response, spectral reflectance and brightness temperature of these three classes were extracted. For the aeolian deposits investigation, two Radarsat-1 and three full-polarimetric ALOS/PALSAR-1 images covering the northwestern sandy part of Sinai, Egypt were calibrated, filtered, geocoded and ingested into a GIS database to image the near-surface features. The fluvial mapping results show that the radar backscattering coefficient (σ°) and the degree of randomness of the three classes increase with grain size. Moreover, the large incident angle (θi = 39.7°) of the Radarsat-1 image revealed a meandering buried stream under the sand sheet of northwestern Sinai. This buried stream does not appear in the other optical, SRTM and SAR datasets. The main reason is the enhanced contrast between the low backscattered return from the revealed meandering stream and its surroundings, a result of the increased backscattering intensity associated with the relatively large incident angle over the undulating surface of the study area. All archaeological observations support the existence of a paleo-freshwater lagoon at the northwestern corner of the study area, which might have been the discharge lagoon of the revealed hidden stream.

  4. Social Noise: Generating Random Numbers from Twitter Streams

    NASA Astrophysics Data System (ADS)

    Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús

    2015-12-01

    Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.) it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomena, etc.) play this role in state-of-the-art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as a randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
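    One hedged sketch of such a social randomness source follows. The bit-extraction and debiasing steps here are illustrative assumptions, not necessarily the extractors defined in the paper: take the least-significant bit of each message's arrival timestamp, then apply von Neumann debiasing to reduce bias before any statistical testing.

```python
def lsb_bits(timestamps_ms):
    """Raw bit source: least-significant bit of each message arrival time
    (milliseconds). A crude, illustrative entropy source."""
    return [t & 1 for t in timestamps_ms]

def von_neumann(bits):
    """Von Neumann debiasing: map pairs 01 -> 0 and 10 -> 1,
    discard 00 and 11. Output is unbiased if pairs are independent."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]
```

    In practice the resulting bit stream would then be fed to the NIST statistical test suite, as the abstract describes.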

  5. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying this problem. But the computation of the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, Gaussian random matrices and 0-1 random matrices, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: direct searching and dual-tree searching. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree searching method had higher efficiency than direct searching, especially for matrices with nearly as many rows as columns.
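    The direct-search method can be sketched as follows: the spark is the size of the smallest linearly dependent subset of columns, so one checks subsets in order of increasing size. This sketch uses exact rational arithmetic for small integer matrices; the dual-tree search is not shown.

```python
from fractions import Fraction
from itertools import combinations

def _rank(vectors):
    """Rank of a list of vectors via exact Gaussian elimination."""
    mat = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    width = len(mat[0]) if mat else 0
    for c in range(width):
        piv = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mat[i][c] / mat[r][c]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def spark(A):
    """Smallest number of linearly dependent columns of A (direct search).
    Returns n + 1 when all n columns are linearly independent."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if _rank([cols[j] for j in subset]) < k:
                return k
    return n + 1
```

    For a 2 x 3 matrix with independent pairs of columns this returns 3, i.e. rows plus one, consistent with the probability-1 result quoted above for Gaussian matrices with fewer rows than columns.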

  6. High capacity adsorption media and method of producing

    DOEpatents

    Tranter, Troy J.; Mann, Nicholas R.; Todd, Terry A.; Herbst, Ronald S.

    2010-10-05

    A method of producing an adsorption medium to remove at least one constituent from a feed stream. The method comprises dissolving and/or suspending at least one metal compound in a solvent to form a metal solution, dissolving polyacrylonitrile into the metal solution to form a PAN-metal solution, and depositing the PAN-metal solution into a quenching bath to produce the adsorption medium. The at least one constituent, such as arsenic, selenium, or antimony, is removed from the feed stream by passing the feed stream through the adsorption medium. An adsorption medium having an increased metal loading and increased capacity for arresting the at least one constituent to be removed is also disclosed. The adsorption medium includes a polyacrylonitrile matrix and at least one metal hydroxide incorporated into the polyacrylonitrile matrix.

  7. High capacity adsorption media and method of producing

    DOEpatents

    Tranter, Troy J [Idaho Falls, ID; Herbst, R Scott [Idaho Falls, ID; Mann, Nicholas R [Blackfoot, ID; Todd, Terry A [Aberdeen, ID

    2008-05-06

    A method of producing an adsorption medium to remove at least one constituent from a feed stream. The method comprises dissolving at least one metal compound in a solvent to form a metal solution, dissolving polyacrylonitrile into the metal solution to form a PAN-metal solution, and depositing the PAN-metal solution into a quenching bath to produce the adsorption medium. The at least one constituent, such as arsenic, selenium, or antimony, is removed from the feed stream by passing the feed stream through the adsorption medium. An adsorption medium having an increased metal loading and increased capacity for arresting the at least one constituent to be removed is also disclosed. The adsorption medium includes a polyacrylonitrile matrix and at least one metal hydroxide incorporated into the polyacrylonitrile matrix.

  8. Nitrogen Removal by Streams and Rivers of the Upper Mississippi River Basin

    EPA Science Inventory

    Our study, based on chemistry and channel dimensions data collected at 893 randomly-selected stream and river sites in the Mississippi River basin, demonstrated the interaction of stream chemistry, stream size, and NO3-N uptake metrics across a range of stream sizes and across re...

  9. The anatomy of a hydrothermal (explosion ) breccia, Abbot Village, central Maine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, D.C.

    1993-03-01

    An apparently intrusive hydrothermal breccia is exposed in a large outcrop along Kingsbury Stream downstream from the Route 6 bridge in Abbot Village. The breccia intrudes the Siluro-Devonian Madrid Formation, which is comprised of thick-bedded metasandstone interbedded with less fine-grained schist and phyllite at regional biotite grade. In the vicinity of the breccia, the bedding attitude in the Madrid is N60E 70SE and the section faces SE. The breccia is a concordant body with respect to bedding, and the exposure shows what appears to be the SW terminus of the intrusion, which extends an unknown distance NE. The main phase of the breccia consists of randomly oriented and angular "clasts" of Madrid metasandstone and schist that are cemented by a quartz-dominated matrix. The random orientation of the clasts persists where this phase is in contact with the country rock. The matrix comprises about 15% of the volume of the breccia and, in addition to quartz, contains biotite, galena, chalcopyrite ( ), pyrite, and an iron-carbonate. In some interstitial matrix, apparently late iron-carbonate fills post-quartz vugs that contain quartz-crystal terminations. The wall phase contains a higher proportion of biotite schist clasts that in places are bent around each other and around metasandstone clasts. Quartz veins extending into the country rock near the breccia follow prominent regional joint directions and suggest hydrofracturing of the Madrid was the principal mechanism for breccia formation. The breccia is interpreted to be of explosive origin, with the main phase of the body representing clasts that fell down within the "vent" following upward transport. The wall phase is taken to have formed by adhesion of breccia clasts to the wall during the eruptive stage.

  10. Magnetic separator having a multilayer matrix, method and apparatus

    DOEpatents

    Kelland, David R.

    1980-01-01

    A magnetic separator having multiple staggered layers of porous magnetic material positioned to intercept a fluid stream carrying magnetic particles and so placed that a bypass of each layer is effected as the pores of the layer become filled with material extracted from the fluid stream.

  11. Role of biofilms in sorptive removal of steroidal hormones and 4-nonylphenol compounds from streams

    USGS Publications Warehouse

    Writer, Jeffrey H.; Ryan, Joseph N.; Barber, Larry B.

    2011-01-01

    Stream biofilms play an important role in geochemical processing of organic matter and nutrients; however, the significance of this matrix in sorbing trace organic contaminants is less understood. This study focused on the role of stream biofilms in sorbing steroidal hormones and 4-nonylphenol compounds from surface waters using biofilms colonized in situ on artificial substrata and subsequently transferred to the laboratory for controlled batch sorption experiments. Steroidal hormones and 4-nonylphenol compounds readily sorb to stream biofilms, as indicated by organic matter partition coefficients (Kom, L kg^-1) for 17β-estradiol (10^2.5–10^2.8 L kg^-1), 17α-ethynylestradiol (10^2.5–10^2.9 L kg^-1), 4-nonylphenol (10^3.4–10^4.6 L kg^-1), 4-nonylphenolmonoethoxylate (10^3.5–10^4.0 L kg^-1), and 4-nonylphenoldiethoxylate (10^3.9–10^4.3 L kg^-1). Experiments using water quality differences to induce changes in the relative composition of periphyton and heterotrophic bacteria in the stream biofilm did not significantly affect the sorptive properties of the stream biofilm, providing additional evidence that stream biofilms will sorb trace organic compounds under a variety of environmental conditions. Because sorption of the target compounds to stream biofilms was linearly correlated with organic matter content, hydrophobic partitioning into organic matter appears to be the dominant mechanism. An analysis of 17β-estradiol and 4-nonylphenol hydrophobic partitioning into water, biofilm, sediment, and dissolved organic matter matrices at mass/volume ratios typical of smaller rivers showed that the relative importance of the stream biofilm as a sorptive matrix was comparable to bed sediments. Therefore, stream biofilms play a primary role in attenuating these compounds in surface waters. Because the stream biofilm represents the base of the stream ecosystem, accumulation of steroidal hormones and 4-nonylphenol compounds in the stream biofilm may be an exposure pathway for organisms in higher trophic levels.
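    The equilibrium partitioning implied by a Kom value can be illustrated with a simple two-compartment mass balance. The helper name and inputs below are hypothetical; the paper's actual mass/volume ratios are not reproduced here.

```python
def biofilm_fraction(log_kom, om_kg_per_litre):
    """Fraction of a compound sorbed to biofilm organic matter at equilibrium.
    log_kom: log10 of the organic-matter partition coefficient (L/kg).
    om_kg_per_litre: organic-matter concentration r (kg OM per L water).
    Two-compartment mass balance: f = Kom * r / (1 + Kom * r)."""
    kom = 10.0 ** log_kom
    r = om_kg_per_litre
    return kom * r / (1.0 + kom * r)
```

    With log Kom = 4 and r = 1e-4 kg/L the sorbed fraction is 0.5, showing how even modest organic-matter loads can hold a large share of a strongly partitioning compound.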

  12. INTERREGIONAL COMPARISONS OF SEDIMENT MICROBIAL RESPIRATION IN STREAMS

    EPA Science Inventory

    The rate of microbial respiration on fine-grained stream sediments was measured at 369 first to fourth-order streams in the Central Appalachians, Colorado's Southern Rockies, and California's Central Valley in 1994 and 1995. Study streams were randomly selected from the USEPA's ...

  13. Ion processing element with composite media

    DOEpatents

    Mann, Nick R.; Tranter, Troy J.; Todd, Terry A.; Sebesta, Ferdinand

    2003-02-04

    An ion processing element employing composite media disposed in a porous substrate, for facilitating removal of selected chemical species from a fluid stream. The ion processing element includes a porous fibrous glass substrate impregnated by composite media having one or more active components supported by a matrix material of polyacrylonitrile. The active components are effective in removing, by various mechanisms, one or more constituents from a fluid stream passing through the ion processing element. Due to the porosity and large surface area of both the composite medium and the substrate in which it is disposed, a high degree of contact is achieved between the active component and the fluid stream being processed. Further, the porosity of the matrix material and the substrate facilitates use of the ion processing element in high volume applications where it is desired to effectively process high volume flows.

  14. Ion processing element with composite media

    DOEpatents

    Mann, Nick R [Blackfoot, ID; Tranter, Troy J [Idaho Falls, ID; Todd, Terry A [Aberdeen, ID; Sebesta, Ferdinand [Prague, CZ

    2009-03-24

    An ion processing element employing composite media disposed in a porous substrate, for facilitating removal of selected chemical species from a fluid stream. The ion processing element includes a porous fibrous glass substrate impregnated by composite media having one or more active components supported by a matrix material of polyacrylonitrile. The active components are effective in removing, by various mechanisms, one or more constituents from a fluid stream passing through the ion processing element. Due to the porosity and large surface area of both the composite medium and the substrate in which it is disposed, a high degree of contact is achieved between the active component and the fluid stream being processed. Further, the porosity of the matrix material and the substrate facilitates use of the ion processing element in high volume applications where it is desired to effectively process high volume flows.

  15. SPATIAL PATTERNS AND ECOLOGICAL DETERMINANTS OF BENTHIC ALGAL ASSEMBLAGES IN MID-ATLANTIC STREAMS, USA

    EPA Science Inventory

    We attempted to identify spatial patterns and determinants for benthic algal assemblages in Mid-Atlantic streams. Periphyton, water chemistry, stream physical habitat, riparian conditions, and land cover/use in watersheds were characterized at 89 randomly selected stream sites i...

  16. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_xy, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x − y| is less than the band width W, and zero otherwise. We strengthen previous results on quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
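    A one-dimensional (d = 1) instance of such an ensemble can be sketched as follows. The uniform entries and lack of variance normalization are illustrative assumptions; the paper's precise distribution and scaling may differ.

```python
import random

def band_matrix(N, W, seed=0):
    """Hermitian N x N band matrix: H[x][y] is a random complex entry
    (uniform real and imaginary parts) when 0 < |x - y| < W, a random
    real number on the diagonal, and exactly 0 when |x - y| >= W."""
    rng = random.Random(seed)
    H = [[0j] * N for _ in range(N)]
    for x in range(N):
        H[x][x] = complex(rng.uniform(-1, 1), 0.0)
        for y in range(x + 1, min(N, x + W)):
            z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
            H[x][y] = z
            H[y][x] = z.conjugate()  # enforce Hermitian symmetry
    return H
```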

  17. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For mobile App recommendation, a weighted Slope One algorithm is combined with item-based collaborative filtering to address the cold-start and data-matrix-sparseness problems of the traditional collaborative filtering algorithm. The recommendation algorithm is then parallelized on the Spark platform, and the Spark Streaming real-time computing framework is introduced to improve the real-time performance of App recommendation.

  18. Streaming Potential Modeling to Understand the Identification of Hydraulically Active Fractures and Fracture-Matrix Fluid Interactions Using the Self-Potential Method

    NASA Astrophysics Data System (ADS)

    Jougnot, D.; Roubinet, D.; Linde, N.; Irving, J.

    2016-12-01

    Quantifying fluid flow in fractured media is a critical challenge in a wide variety of research fields and applications. To this end, geophysics offers a variety of tools that can provide important information on subsurface physical properties in a noninvasive manner. Most geophysical techniques infer fluid flow by data or model differencing in time or space (i.e., they are not directly sensitive to flow occurring at the time of the measurements). An exception is the self-potential (SP) method. When water flows in the subsurface, an excess of charge in the pore water that counterbalances electric charges at the mineral-pore water interface gives rise to a streaming current and an associated streaming potential. The latter can be measured with the SP technique, meaning that the method is directly sensitive to fluid flow. Whereas numerous field experiments suggest that the SP method may allow for the detection of hydraulically active fractures, suitable tools for numerically modeling streaming potentials in fractured media do not exist. Here, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid-flow and associated self-potential problems in fractured domains. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods due to computational limitations. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.
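    The streaming-current mechanism described above is commonly summarized by the Helmholtz-Smoluchowski coupling coefficient relating fluid pressure to electrical potential. The sketch below uses this standard textbook relation with illustrative pore-water parameters; it is an assumption here, not the excess-charge formulation the authors may actually use.

```python
def coupling_coefficient(epsilon, zeta, eta, sigma):
    """Helmholtz-Smoluchowski streaming-potential coupling coefficient,
    C = epsilon * zeta / (eta * sigma), in V/Pa.
    epsilon: pore-water permittivity (F/m), zeta: zeta potential (V),
    eta: dynamic viscosity (Pa s), sigma: electrical conductivity (S/m)."""
    return epsilon * zeta / (eta * sigma)

# Illustrative values: water at 25 C, zeta = -20 mV, sigma = 0.01 S/m.
C = coupling_coefficient(80 * 8.854e-12, -0.02, 1e-3, 0.01)
```

    With these inputs C is roughly -1.4 microvolts per pascal, the order of magnitude at which pumping-induced pressure changes become visible as measurable SP anomalies.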

  19. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  20. On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models

    NASA Astrophysics Data System (ADS)

    Khorunzhiy, O.

    2008-08-01

    Considering the adjacency matrices of n-vertex graphs and the related graph Laplacians, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.

  1. Chaos and random matrices in supersymmetric SYK

    NASA Astrophysics Data System (ADS)

    Hunter-Jones, Nicholas; Liu, Junyu

    2018-05-01

    We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
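    The spectral form factor used above to diagnose chaos can be sketched for a given spectrum. The normalization |Z(t)|^2 / N with Z(t) = sum_j exp(-i E_j t) is a common convention and an assumption here; the paper's convention may differ.

```python
import cmath

def spectral_form_factor(eigenvalues, t):
    """K(t) = |sum_j exp(-i E_j t)|^2 / N for a spectrum {E_j}.
    At t = 0 this equals N; the dip-ramp-plateau structure (or its
    absence, as in the Wishart-Laguerre case) appears at later times."""
    N = len(eigenvalues)
    z = sum(cmath.exp(-1j * E * t) for E in eigenvalues)
    return abs(z) ** 2 / N
```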

  2. EVALUATION OF SPIKING PROCEDURES FOR RECOVERY OF CRYPTOSPORIDIUM IN STREAM WATERS USING USEPA METHOD 1623

    EPA Science Inventory

    U.S. Environmental Protection Agency Method 1623 is widely used to monitor source waters and drinking water supplies for Cryptosporidium oocysts. Analyzing matrix spikes is an important component of Method 1623. Matrix spikes are used to determine the effect of the environmental...

  3. Statistics of time delay and scattering correlation functions in chaotic systems. I. Random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel

    2015-06-15

    We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = −iħ S†dS/dE, where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.
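    The time-delay matrix Q = −iħ S†dS/dE can be illustrated numerically for a toy single-channel scattering matrix with phase shift δ(E) = arctan(E), for which the analytic Wigner-Smith delay is 2ħ δ′(E). This is an illustrative example in natural units (ħ = 1), not the paper's M-channel random-matrix setting.

```python
import cmath
import math

HBAR = 1.0  # natural units

def s_matrix(E):
    """Toy 1x1 scattering matrix with phase shift delta(E) = arctan(E)."""
    return cmath.exp(2j * math.atan(E))

def time_delay(E, h=1e-6):
    """Q = -i hbar S* dS/dE via central finite differences.
    Analytically Q(E) = 2 hbar / (1 + E^2) for this toy model."""
    dS = (s_matrix(E + h) - s_matrix(E - h)) / (2 * h)
    return (-1j * HBAR * s_matrix(E).conjugate() * dS).real
```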

  4. Biodegradation and attenuation of steroidal hormones and alkylphenols by stream biofilms and sediments

    USGS Publications Warehouse

    Writer, Jeffrey; Barber, Larry B.; Ryan, Joseph N.; Bradley, Paul M.

    2011-01-01

    Biodegradation of select endocrine-disrupting compounds (17β-estradiol, estrone, 17α-ethynylestradiol, 4-nonylphenol, 4-nonylphenolmonoethoxylate, and 4-nonylphenoldiethoxylate) was evaluated in stream biofilm, sediment, and water matrices collected from locations upstream and downstream from a wastewater treatment plant effluent discharge. Both biologically mediated transformation to intermediate metabolites and biologically mediated mineralization were evaluated in separate time interval experiments. Initial time intervals (0–7 d) evaluated biodegradation by the microbial community dominant at the time of sampling. Later time intervals (70 and 185 d) evaluated the biodegradation potential as the microbial community adapted to the absence of outside energy sources. The sediment matrix was more effective than the biofilm and water matrices at biodegrading 4-nonylphenol and 17β-estradiol. Biodegradation by the sediment matrix of 17α-ethynylestradiol occurred at later time intervals (70 and 185 d) and was not observed in the biofilm or water matrices. Stream biofilms play an important role in the attenuation of endocrine-disrupting compounds in surface waters due to both biodegradation and sorption processes. Because sorption to stream biofilms and bed sediments occurs on a faster temporal scale (<1 h) than the potential to biodegrade the target compounds (50% mineralization at >185 d), these compounds can accumulate in stream biofilms and sediments.

  5. Random matrix ensembles for many-body quantum systems

    NASA Astrophysics Data System (ADS)

    Vyas, Manan; Seligman, Thomas H.

    2018-04-01

    Classical random matrix ensembles were originally introduced in physics to approximate quantum many-particle nuclear interactions. However, there exists a plethora of quantum systems whose dynamics is explained in terms of few-particle (predominantly two-particle) interactions. The random matrix models incorporating the few-particle nature of interactions are known as embedded random matrix ensembles. In the present paper, we provide a brief overview of these two ensembles and illustrate how the embedded ensembles can be successfully used to study decoherence of a qubit interacting with an environment, both for fermionic and bosonic embedded ensembles. Numerical calculations show the dependence of decoherence on the nature of the environment.

  6. Characterization of PAH matrix with monazite stream containing uranium, gadolinium and iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Sangita, E-mail: sangpal@barc.gov.in; Goswami, D.; Meena, Sher Singh

    2016-05-23

    Uranium (U), gadolinium (Gd) and iron (Fe) containing simulated alkaline waste effluent (relevant to the alkaline effluent of monazite ore) has been treated with a novel amphoteric resin, polyamidehydroxamate (PAH), containing amide and hydroxamic acid groups. The resin has been synthesized in an eco-friendly manner by polymerization and conversion to functional groups, characterized by FT-IR spectra and an architectural overview by SEM. Coloration of the loaded matrix and de-coloration after extraction of uranium is a special characteristic of the matrix. Effluent streams have been analyzed by ICP-AES; U-loaded PAH has been characterized by FT-IR and EXAFS, and Gd and Fe by EDXRF X-ray energy values at 6.053 keV and 6.405 keV, respectively. A remarkable change has been observed in the Mössbauer spectrum of Fe-loaded PAH samples.

  7. Managing salinity in Upper Colorado River Basin streams: Selecting catchments for sediment control efforts using watershed characteristics and random forests models

    USGS Publications Warehouse

    Tillman, Fred; Anning, David W.; Heilman, Julian A.; Buto, Susan G.; Miller, Matthew P.

    2018-01-01

    Elevated concentrations of dissolved-solids (salinity) including calcium, sodium, sulfate, and chloride, among others, in the Colorado River cause substantial problems for its water users. Previous efforts to reduce dissolved solids in upper Colorado River basin (UCRB) streams often focused on reducing suspended-sediment transport to streams, but few studies have investigated the relationship between suspended sediment and salinity, or evaluated which watershed characteristics might be associated with this relationship. Are there catchment properties that may help in identifying areas where control of suspended sediment will also reduce salinity transport to streams? A random forests classification analysis was performed on topographic, climate, land cover, geology, rock chemistry, soil, and hydrologic information in 163 UCRB catchments. Two random forests models were developed in this study: one for exploring stream and catchment characteristics associated with stream sites where dissolved solids increase with increasing suspended-sediment concentration, and the other for predicting where these sites are located in unmonitored reaches. Results of variable importance from the exploratory random forests models indicate that no simple source, geochemical process, or transport mechanism can easily explain the relationship between dissolved solids and suspended sediment concentrations at UCRB monitoring sites. Among the most important watershed characteristics in both models were measures of soil hydraulic conductivity, soil erodibility, minimum catchment elevation, catchment area, and the silt component of soil in the catchment. Predictions at key locations in the basin were combined with observations from selected monitoring sites, and presented in map-form to give a complete understanding of where catchment sediment control practices would also benefit control of dissolved solids in streams.

  8. Habitat type and Permanence determine local aquatic invertebrate community structure in the Madrean Sky Islands

    Treesearch

    Michael T. Bogan; Oscar Gutierrez-Ruacho; J. Andres Alvarado-Castro; David A. Lytle

    2013-01-01

    Aquatic environments in the Madrean Sky Islands (MSI) consist of a matrix of perennial and intermittent stream segments, seasonal ponds, and human-built cattle trough habitats that support a diverse suite of aquatic macroinvertebrates. Although environmental conditions and aquatic communities are generally distinct in lotic and lentic habitats, MSI streams are...

  9. Generation of kth-order random toposequences

    NASA Astrophysics Data System (ADS)

    Odgers, Nathan P.; McBratney, Alex. B.; Minasny, Budiman

    2008-05-01

    The model presented in this paper derives toposequences from a digital elevation model (DEM). It is written in ArcInfo Macro Language (AML). The toposequences are called kth-order random toposequences, because they take a random path uphill to the top of a hill and downhill to a stream or valley bottom from a randomly selected seed point, and they are located in a streamshed of order k according to a particular stream-ordering system. We define a kth-order streamshed as the area of land that drains directly to a stream segment of stream order k. The model attempts to optimise the spatial configuration of a set of derived toposequences iteratively by using simulated annealing to maximise the total sum of distances between each toposequence hilltop in the set. The user is able to select the order, k, of the derived toposequences. Toposequences are useful for determining soil sampling locations for use in collecting soil data for digital soil mapping applications. Sampling locations can be allocated according to equal elevation or equal-distance intervals along the length of the toposequence, for example. We demonstrate the use of this model for a study area in the Hunter Valley of New South Wales, Australia. Of the 64 toposequences derived, 32 were first-order random toposequences according to Strahler's stream-ordering system, and 32 were second-order random toposequences. The model that we present in this paper is an efficient method for sampling soil along soil toposequences. The soils along a toposequence are related to each other by the topography they are found in, so soil data collected by this method is useful for establishing soil-landscape rules for the preparation of digital soil maps.
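    The uphill half of such a random toposequence can be sketched as a random walk on a DEM grid. This sketch assumes 4-connected neighbours and a hypothetical function name; the downhill walk to a stream or valley bottom is symmetric, and the streamshed-order and simulated-annealing steps of the model are not shown.

```python
import random

def uphill_path(dem, start, rng):
    """Random walk from `start` to a local summit: at each step move to a
    uniformly chosen strictly higher 4-neighbour of the current cell;
    stop when the cell is a local maximum. dem is a 2D list of elevations."""
    path = [start]
    r, c = start
    while True:
        higher = [(i, j) for i, j in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                  if 0 <= i < len(dem) and 0 <= j < len(dem[0])
                  and dem[i][j] > dem[r][c]]
        if not higher:
            return path
        r, c = rng.choice(higher)
        path.append((r, c))
```

    Sampling locations could then be allocated along the returned path at equal-elevation or equal-distance intervals, as the abstract suggests.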

  10. Measuring order in disordered systems and disorder in ordered systems: Random matrix theory for isotropic and nematic liquid crystals and its perspective on pseudo-nematic domains

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Stratt, Richard M.

    2018-05-01

    Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
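    A minimal version of the orientational-order diagnostic behind such analyses can be sketched as the scalar order parameter S = ⟨P2(cos θ)⟩ measured against a known director. This is an illustrative simplification: the paper works with fluctuations of the full order-parameter tensor compared against random-matrix predictions, not a fixed director.

```python
import math

def scalar_order_parameter(angles):
    """Nematic scalar order parameter S = <P2(cos theta)>, where theta is
    each molecule's angle to a known director and P2(x) = (3x^2 - 1)/2.
    S = 1 for perfect alignment, 0 for an isotropic distribution."""
    return sum((3.0 * math.cos(t) ** 2 - 1.0) / 2.0 for t in angles) / len(angles)
```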

  11. FROM THE MOUNTAINS TO THE SEA: THE STATE OF MARYLAND'S FRESHWATER STREAMS

    EPA Science Inventory

    The Maryland Biological Stream Survey, conducted by the Maryland Department of Natural Resources, sampled about 1,000 randomly-selected sites on first through third order freshwater streams throughout Maryland from 1995 to 1997. Biota (fish, benthic macroinvertebrates, herpetofau...

  12. Classification of California streams using combined deductive and inductive approaches: Setting the foundation for analysis of hydrologic alteration

    USGS Publications Warehouse

    Pyne, Matthew I.; Carlisle, Daren M.; Konrad, Christopher P.; Stein, Eric D.

    2017-01-01

    Regional classification of streams is an early step in the Ecological Limits of Hydrologic Alteration framework. Many stream classifications are based on an inductive approach using hydrologic data from minimally disturbed basins, but this approach may underrepresent streams from heavily disturbed basins or sparsely gaged arid regions. An alternative is a deductive approach, using watershed climate, land use, and geomorphology to classify streams, but this approach may miss important hydrological characteristics of streams. We classified all stream reaches in California using both approaches. First, we used Bayesian and hierarchical clustering to classify reaches according to watershed characteristics. Streams were clustered into seven classes according to elevation, sedimentary rock, and winter precipitation. Permutation-based analysis of variance and random forest analyses were used to determine which hydrologic variables best separate streams into their respective classes. Stream typology (i.e., the class that a stream reach is assigned to) is shaped mainly by patterns of high and mean flow behavior within the stream's landscape context. Additionally, random forest was used to determine which hydrologic variables best separate minimally disturbed reference streams from non-reference streams in each of the seven classes. In contrast to stream typology, deviation from reference conditions is more difficult to detect and is largely defined by changes in low-flow variables, average daily flow, and duration of flow. Our combined deductive/inductive approach allows us to estimate flow under minimally disturbed conditions based on the deductive analysis and compare to measured flow based on the inductive analysis in order to estimate hydrologic change.
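
    The idea of ranking variables by how well they separate stream classes can be illustrated with permutation importance, the mechanism underlying random forest importance scores. The sketch below is not the authors' analysis: it substitutes a leave-one-out nearest-neighbour classifier for the random forest, and all names and data are hypothetical:

```python
import random
import statistics

def nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier,
    standing in here for a fitted random forest."""
    hits = 0
    for i, xi in enumerate(X):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(xi, X[k])))
        hits += (y[j] == y[i])
    return hits / len(X)

def permutation_importance(X, y, var, rounds=10, seed=0):
    """Mean drop in accuracy when one variable's values are shuffled
    across sites, breaking its association with the classes."""
    rng = random.Random(seed)
    base = nn_accuracy(X, y)
    drops = []
    for _ in range(rounds):
        col = [row[var] for row in X]
        rng.shuffle(col)
        Xp = [row[:var] + [c] + row[var + 1:] for row, c in zip(X, col)]
        drops.append(base - nn_accuracy(Xp, y))
    return statistics.mean(drops)
```

    A variable whose shuffling costs the most accuracy is the one that best separates the classes, which is how "which hydrologic variables best separate streams" is quantified.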

  13. Effects of large woody debris placement on stream channels and benthic macroinvertebrates

    Treesearch

    Robert H. Hilderbrand; A. Dennis Lemly; C. Andrew Dolloff; Kelly L. Harpster

    1997-01-01

    Large woody debris (LWD) was added as an experimental stream restoration technique in two streams in southwest Virginia. Additions were designed to compare human judgement in log placements against a randomized design and an unmanipulated reach, and also to compare effectiveness in a low- and a high-gradient stream. Pool area increased 146% in the systematic placement...

  14. A Watershed-Scale Survey for Stream-Foraging Birds in Northern California

    Treesearch

    Sherri L. Miller; C. John Ralph

    2005-01-01

    Our objective was to develop a survey technique and watershed-scale design to monitor trends of population size and habitat associations in stream-foraging birds. The resulting methods and design will be used to examine the efficacy of quantifying the association of stream and watershed quality with bird abundance. We surveyed 60 randomly selected 2-km stream reaches...

  15. Evaluating physical habitat and water chemistry data from statewide stream monitoring programs to establish least-impacted conditions in Washington State

    USGS Publications Warehouse

    Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad

    2015-01-01

    Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques was used. We describe average observed conditions for a subset of predictor variables and propose statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet them. Establishing these criteria sets a benchmark against which future data can be compared.

  16. Microbial Ecoenzymatic Stoichiometry as an Indicator of Nutrient Limitation in US Streams and Rivers

    EPA Science Inventory

    We compared microbial ecoenzymatic activity at 2122 randomly-selected stream and river sites across the conterminous US. The sites were evenly distributed between wadeable and non-wadeable streams and rivers. Sites were aggregated into nine larger physiographic provinces for stat...

  17. Staggered chiral random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, James C.

    2011-02-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  18. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors

    NASA Astrophysics Data System (ADS)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.

  19. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    PubMed

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
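
    The scaling relation quoted in both versions of this abstract is simple enough to capture directly. A small helper (the names are mine, not the paper's) computing the effective range d of the appropriate FRCG model:

```python
def frcg_effective_range(N, b=None, alpha=None):
    """Effective range d of the finite-range Coulomb gas model:
    d = b^2/N for a banded random matrix of bandwidth b, or
    d = alpha^2/N for a quantum kicked rotor with chaos parameter alpha."""
    if b is not None:
        return b ** 2 / N
    if alpha is not None:
        return alpha ** 2 / N
    raise ValueError("supply bandwidth b or chaos parameter alpha")
```

    Small d corresponds to Poisson-like spacing statistics; as d grows, the statistics cross over to those of the classical random matrix ensembles.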

  20. ALIEN SPECIES IMPORTANCE IN NATIVE VEGETATION ALONG WADEABLE STREAMS, JOHN DAY RIVER BASIN, OREGON, USA

    EPA Science Inventory

    We evaluated the importance of alien species in existing vegetation along wadeable streams of a large, topographically diverse river basin in eastern Oregon, USA; sampling 165 plots (30 × 30 m) across 29 randomly selected 1-km stream reaches. Plots represented eight streamside co...

  1. APPLICATION OF A MULTIPURPOSE UNEQUAL-PROBABILITY STREAM SURVEY IN THE MID-ATLANTIC COASTAL PLAIN

    EPA Science Inventory

    A stratified random sample with unequal-probability selection was used to design a multipurpose survey of headwater streams in the Mid-Atlantic Coastal Plain. Objectives for data from the survey include unbiased estimates of regional stream conditions, and adequate coverage of un...

  2. Random forest models for the probable biological condition of streams and rivers in the USA

    EPA Science Inventory

    The National Rivers and Streams Assessment (NRSA) is a probability based survey conducted by the US Environmental Protection Agency and its state and tribal partners. It provides information on the ecological condition of the rivers and streams in the conterminous USA, and the ex...

  3. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. We previously showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE, one of the major variants of MPKCs, against the Gröbner basis attack. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which, like the linear PH matrix method with random variables, aims to enhance the security of any given MPKC. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which our previous work introduced to explain the notion of the PH matrix method in an illustrative manner, not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. We clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. The second, in particular, has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
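
    The practical task at the heart of this abstract is generating random matrices for the secret key, which must be invertible. The sketch below illustrates the standard rejection-sampling approach in the simplified setting of GF(2); it is not the paper's polynomial-time algorithms, and the choice of field and all names are assumptions of this illustration:

```python
import random

def gf2_rank(rows, n):
    """Rank over GF(2) of an n-column binary matrix whose rows are ints
    (bit i of a row is column i), via Gaussian elimination with XOR."""
    rows = list(rows)
    rank = 0
    for bit in reversed(range(n)):
        pivot = next((i for i in range(rank, len(rows)) if (rows[i] >> bit) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> bit) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def random_invertible_gf2(n, seed=None):
    """Rejection-sample an n x n invertible matrix over GF(2)."""
    rng = random.Random(seed)
    while True:
        rows = [rng.getrandbits(n) for _ in range(n)]
        if gf2_rank(rows, n) == n:
            return rows
```

    Over GF(2) a uniform random square matrix is invertible with probability about 0.289 for large n, so rejection sampling needs only a few draws in expectation and runs in probabilistic polynomial time.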

  4. Unifying model for random matrix theory in arbitrary space dimensions

    NASA Astrophysics Data System (ADS)

    Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio

    2018-03-01

    A sparse random block matrix model suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdos-Renyi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko-Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite size of the blocks, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.

  5. Nanocomposite thin films for optical temperature sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohodnicki, Jr., Paul R.; Brown, Thomas D.; Buric, Michael P.

    2017-02-14

    The disclosure relates to an optical method for temperature sensing utilizing a temperature sensing material. In an embodiment the gas stream, liquid, or solid has a temperature greater than about 500 °C. The temperature sensing material is comprised of metallic nanoparticles dispersed in a dielectric matrix. The metallic nanoparticles have an electronic conductivity greater than approximately 10^-1 S/cm at the temperature of the temperature sensing material. The dielectric matrix has an electronic conductivity at least two orders of magnitude less than the dispersed metallic nanoparticles at the temperature of the temperature sensing material. In some embodiments, the chemical composition of a gas stream or liquid is simultaneously monitored by optical signal shifts through multiple or broadband wavelength interrogation approaches. In some embodiments, the dielectric matrix provides additional functionality due to a temperature dependent band-edge, an optimized chemical sensing response, or an optimized refractive index of the temperature sensing material for integration with optical waveguides.

  6. Trace analysis of antidepressant pharmaceuticals and their select degradates in aquatic matrixes by LC/ESI/MS/MS

    USGS Publications Warehouse

    Schultz, M.M.; Furlong, E.T.

    2008-01-01

    Treated wastewater effluent is a potential environmental point source for antidepressant pharmaceuticals. A quantitative method was developed for the determination of trace levels of antidepressants in environmental aquatic matrixes using solid-phase extraction coupled with liquid chromatography-electrospray ionization tandem mass spectrometry. Recoveries of parent antidepressants from matrix spiking experiments for the individual antidepressants ranged from 72 to 118% at low concentrations (0.5 ng/L) and 70 to 118% at high concentrations (100 ng/L) for the solid-phase extraction method. Method detection limits for the individual antidepressant compounds ranged from 0.19 to 0.45 ng/L. The method was applied to wastewater effluent and samples collected from a wastewater-dominated stream. Venlafaxine was the predominant antidepressant observed in wastewater and river water samples. Individual antidepressant concentrations found in the wastewater effluent ranged from 3 (duloxetine) to 2190 ng/L (venlafaxine), whereas individual concentrations in the waste-dominated stream ranged from 0.72 (norfluoxetine) to 1310 ng/L (venlafaxine). © 2008 American Chemical Society.

  7. Role of streams in myxobacteria aggregate formation

    NASA Astrophysics Data System (ADS)

    Kiskowski, Maria A.; Jiang, Yi; Alber, Mark S.

    2004-10-01

    Cell contact, movement and directionality are important factors in biological development (morphogenesis), and myxobacteria are a model system for studying cell-cell interaction and cell organization preceding differentiation. When starved, thousands of myxobacteria cells align, stream and form aggregates which later develop into round, non-motile spores. Canonically, cell aggregation has been attributed to attractive chemotaxis, a long range interaction, but there is growing evidence that myxobacteria organization depends on contact-mediated cell-cell communication. We present a discrete stochastic model based on contact-mediated signaling that suggests an explanation for the initialization of early aggregates, aggregation dynamics and final aggregate distribution. Our model qualitatively reproduces the unique structures of myxobacteria aggregates and detailed stages which occur during myxobacteria aggregation: first, aggregates initialize in random positions and cells join aggregates by random walk; second, cells redistribute by moving within transient streams connecting aggregates. Streams play a critical role in final aggregate size distribution by redistributing cells among fewer, larger aggregates. The mechanism by which streams redistribute cells depends on aggregate sizes and is enhanced by noise. Our model predicts that with increased internal noise, more streams would form and streams would last longer. Simulation results suggest a series of new experiments.
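
    The first stage described above (aggregates initialize at random positions and cells join by random walk) can be written as a toy lattice model. This sketch omits the contact-mediated signaling and stream dynamics of the authors' model, and every parameter is illustrative:

```python
import random
from collections import Counter

def aggregate(n_cells=200, n_seeds=5, size=40, sweeps=2000, seed=7):
    """Cells random-walk on a periodic grid and irreversibly join the first
    aggregate they touch; returns a Counter of final aggregate sizes."""
    rng = random.Random(seed)
    taken = {}                      # occupied site -> aggregate id
    for a in range(n_seeds):
        taken[(rng.randrange(size), rng.randrange(size))] = a
    sizes = Counter(taken.values())
    walkers = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_cells)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(sweeps):
        if not walkers:
            break
        free = []
        for x, y in walkers:
            dx, dy = rng.choice(moves)
            x, y = (x + dx) % size, (y + dy) % size
            # stick if the new site lands on or touches an aggregate
            hit = next((taken[(x + ex) % size, (y + ey) % size]
                        for ex, ey in [(0, 0)] + moves
                        if ((x + ex) % size, (y + ey) % size) in taken), None)
            if hit is None:
                free.append((x, y))
            else:
                taken[(x, y)] = hit
                sizes[hit] += 1
        walkers = free
    return sizes
```

    Even this toy reproduces the first stage of the dynamics: aggregates nucleate at random positions and grow as free cells are captured; the stream-mediated redistribution among aggregates is the part the authors' model adds.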

  8. Random Number Generation for High Performance Computing

    DTIC Science & Technology

    2015-01-01

    number streams, a quality metric for the parallel random number streams.

  9. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples taken from inflation rates and air pressure data for 95 US cities.
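
    The first step of the abstract, how auto-correlations affect the eigenvalue distribution of the correlation matrix, can be demonstrated numerically. The sketch below generates independent AR(1) series, estimates the largest eigenvalue of their sample correlation matrix by power iteration, and compares it with the white-noise case; it illustrates the effect ARRMT corrects for, not the ARRMT procedure itself, and all names are mine:

```python
import math
import random

def ar1_series(n, t, phi, rng):
    """n independent AR(1) series of length t: x_k = phi * x_{k-1} + noise."""
    out = []
    for _ in range(n):
        x, xs = 0.0, []
        for _ in range(t):
            x = phi * x + rng.gauss(0.0, 1.0)
            xs.append(x)
        out.append(xs)
    return out

def corr_matrix(series):
    """Sample correlation matrix of the rows of `series`."""
    t = len(series[0])
    std = []
    for s in series:
        m = sum(s) / t
        sd = math.sqrt(sum((v - m) ** 2 for v in s) / t)
        std.append([(v - m) / sd for v in s])
    n = len(std)
    return [[sum(a * b for a, b in zip(std[i], std[j])) / t for j in range(n)]
            for i in range(n)]

def largest_eigenvalue(c, iters=200):
    """Power iteration for the top eigenvalue of a symmetric PSD matrix."""
    n = len(c)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(c[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam
```

    For white noise (phi = 0) with N = 30 series of length T = 300, the top eigenvalue sits near the Marchenko-Pastur edge (1 + sqrt(N/T))^2 ≈ 1.73, while strong autocorrelation (phi = 0.9) inflates it well beyond that bound even with no cross-correlation present.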

  10. Unifying research on the fragmentation of terrestrial and aquatic habitats: patches, connectivity and the matrix in riverscapes

    USGS Publications Warehouse

    Eros, Tibor; Grant, Evan H. Campbell

    2015-01-01

    Fragmentation of habitats is a critical issue in the conservation and management of stream networks across spatial scales. Although the effects of individual barriers (e.g. dams) are well documented, we argue that a more comprehensive patch–matrix landscape model will improve our understanding of fragmentation effects and improve management in riverscapes.

  11. SEDIMENT MICROBIAL RESPIRATION IN A SYNOPTIC SURVEY OF MID-ATLANTIC REGION STREAMS

    EPA Science Inventory

    1. The rate of microbial respiration on fine-grained stream sediments was measured at 196 first- to third-order sites in the mid-Atlantic region of the United States. 2. Sample collection took place between April and July in 1993, 1994 and 1995. 3. Study streams were randomly sele...

  12. Comparing spatial regression to random forests for large ...

    EPA Pesticide Factsheets

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and po
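
    The first of the two new methods, estimating an optimal Box-Cox transformation to linearize covariate relationships, can be sketched as a profile log-likelihood grid search. This is a generic textbook version, not the authors' routine; all names are assumptions:

```python
import math
import random

def boxcox(x, lam):
    """Box-Cox transform of a positive value x."""
    return math.log(x) if abs(lam) < 1e-12 else (x ** lam - 1) / lam

def boxcox_loglik(xs, lam):
    """Profile log-likelihood of a normal model for the transformed data."""
    n = len(xs)
    ys = [boxcox(x, lam) for x in xs]
    mu = sum(ys) / n
    var = sum((y - mu) ** 2 for y in ys) / n
    # Jacobian term (lam - 1) * sum(log x) makes likelihoods comparable
    # across different values of lam
    return -n / 2 * math.log(var) + (lam - 1) * sum(math.log(x) for x in xs)

def best_lambda(xs, grid=None):
    """Grid search for the transformation that best normalizes the data."""
    grid = grid if grid is not None else [i / 10 for i in range(-20, 21)]
    return max(grid, key=lambda lam: boxcox_loglik(xs, lam))
```

    For log-normally distributed covariates the search should return a value near 0 (the log transform), which is the kind of automatic linearization the abstract describes.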

  13. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    PubMed

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.

  14. The feasibility and stability of large complex biological networks: a random matrix approach.

    PubMed

    Stone, Lewi

    2018-05-29

    In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law", since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density-dependence operates (i.e., where a species' growth is determined by its density). Instead one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience of a system, depended on the minimum equilibrium population.

  15. Direct generation of all-optical random numbers from optical pulse amplitude chaos.

    PubMed

    Li, Pu; Wang, Yun-Cai; Wang, An-Bang; Yang, Ling-Zhen; Zhang, Ming-Jiang; Zhang, Jian-Zhong

    2012-02-13

    We propose and theoretically demonstrate an all-optical method for directly generating all-optical random numbers from pulse amplitude chaos produced by a mode-locked fiber ring laser. Under an appropriate pump intensity, the mode-locked laser can experience a quasi-periodic route to chaos. Such a chaos consists of a stream of pulses with a fixed repetition frequency but random intensities. In this method, we require neither a sampling procedure nor externally triggered clocks, but directly quantize the chaotic pulse stream into a random number sequence via an all-optical flip-flop. Moreover, our simulation results show that the pulse amplitude chaos has no periodicity and possesses a highly symmetric distribution of amplitude. Thus, in theory, the obtained random number sequence without post-processing has a high-quality randomness verified by industry-standard statistical tests.
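
    The quantization step, turning a pulse stream with random intensities into bits without a sampling clock, can be mimicked in software. The sketch below stands in a logistic map for the chaotic laser output and a median threshold comparison for the all-optical flip-flop; both substitutions are assumptions of the sketch, not the paper's physics:

```python
import math
import statistics

def logistic_amplitudes(n, x=0.4, r=3.99):
    """Stand-in chaotic pulse-amplitude stream (logistic map), used here
    in place of a mode-locked fiber laser model."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def amplitudes_to_bits(amps, threshold=None):
    """Quantize each pulse amplitude to one bit by threshold comparison,
    mimicking the role of the flip-flop; a median threshold balances the
    output stream by construction."""
    threshold = statistics.median(amps) if threshold is None else threshold
    return [1 if a > threshold else 0 for a in amps]

def monobit_ok(bits, z=3.0):
    """Crude frequency (monobit) sanity check, in the spirit of the
    industry-standard statistical tests mentioned in the abstract."""
    s = abs(sum(2 * b - 1 for b in bits))
    return s / math.sqrt(len(bits)) <= z
```

    The highly symmetric amplitude distribution noted in the abstract is what lets a single threshold yield balanced bits without post-processing.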

  16. Fine Sediment Residency in Streambeds in Southeastern Australia.

    NASA Astrophysics Data System (ADS)

    Croke, J. C.; Thompson, C. J.; Rhodes, E.

    2007-12-01

    A detailed understanding of channel forming and maintenance processes in streams requires some measurement and/or prediction of bed load transport and sediment mobility. Traditional field-based measurements of such processes are often problematic due to the high discharge characteristics of upland streams. In part to compensate for such difficulties, empirical flow competence equations have been developed to predict armour or bedform-stabilising grain mobility. These equations have been applied to individual reaches to predict the entrainment of a threshold grain size and the vertical extent of flushing. In cobble- and boulder-bed channels the threshold grain size relates to the size of the bedform-stabilising grains (e.g. D84, D90). This then allows some prediction of when transport of the matrix material occurs. The application of Optically Stimulated Luminescence (OSL) dating is considered here as an alternative and innovative way to determine fine sediment residency times in stream beds. Age estimates derived from the technique are used to assist in calibrating sediment entrainment models to specific channel types and hydrological regimes. The results from a one-dimensional HEC-RAS model indicate that floods with recurrence intervals of up to 13 years, exceeding bankfull, are competent to mobilise the maximum overlying surface grain sizes at the sites. OSL minimum age model results of well-bleached quartz in the fine matrix particles are in general agreement with selected competence equation predictions. The apparent long (100-1400 y) burial age of most of the mineral quartz suggests that competent flows are not able to flush all subsurface fine-bed material. Maximum bed load exchange (flushing) depth was limited to twice the depth of the overlying D90 grain size. Application of OSL in this study provides important insight into the nature of matrix material storage and flushing in mountain streams.

  17. Two-dimensional lattice Boltzmann model for magnetohydrodynamics.

    PubMed

    Schaffenberger, Werner; Hanslmeier, Arnold

    2002-10-01

    We present a lattice Boltzmann model for the simulation of two-dimensional magnetohydrodynamic (MHD) flows. The model is an extension of a hydrodynamic lattice Boltzmann model with 9 velocities on a square lattice, resulting in a model with 17 velocities. Earlier lattice Boltzmann models for two-dimensional MHD used a bidirectional streaming rule. However, the use of such a bidirectional streaming rule is not necessary. In our model, the standard streaming rule is used, allowing smaller viscosities. To control the viscosity and the resistivity independently, a matrix collision operator is used. The model is then applied to the Hartmann flow, giving reasonable results.
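
    The standard streaming rule mentioned above is just a shift of each velocity population along its lattice vector. A sketch for the 9 hydrodynamic velocities on a periodic square lattice (the paper's full model carries 17 velocities and a matrix collision operator, both omitted here):

```python
# D2Q9 velocity set: rest particle plus nearest and diagonal neighbours.
D2Q9 = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
        (1, 1), (-1, -1), (1, -1), (-1, 1)]

def stream(f, velocities, nx, ny):
    """Standard streaming step: population f[k][x][y] moves one lattice
    site along its velocity (cx, cy), with periodic wrap-around. No
    bidirectional rule is needed, mirroring the abstract's point."""
    out = [[[0.0] * ny for _ in range(nx)] for _ in velocities]
    for k, (cx, cy) in enumerate(velocities):
        for x in range(nx):
            for y in range(ny):
                out[k][(x + cx) % nx][(y + cy) % ny] = f[k][x][y]
    return out
```

    Streaming is exact and conservative; all the physics (and the independent tuning of viscosity and resistivity) lives in the collision operator applied between streaming steps.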

  18. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-12-01

    In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  19. Urine sampling techniques in symptomatic primary-care patients: a diagnostic accuracy review.

    PubMed

    Holm, Anne; Aabenhus, Rune

    2016-06-08

    Choice of urine sampling technique in urinary tract infection may impact diagnostic accuracy and thus lead to possible over- or undertreatment. Currently no evidence-based consensus exists regarding the correct sampling technique for urine from women with symptoms of urinary tract infection in primary care. The aim of this study was to determine the accuracy of urine culture from different sampling techniques in symptomatic non-pregnant women in primary care. A systematic review was conducted by searching Medline and Embase for clinical studies conducted in primary care using a randomized or paired design to compare the result of urine culture obtained with two or more collection techniques in adult, female, non-pregnant patients with symptoms of urinary tract infection. We evaluated the quality of the studies and compared accuracy based on dichotomized outcomes. We included seven studies investigating urine sampling technique in 1062 symptomatic patients in primary care. Mid-stream clean-catch had a positive predictive value of 0.79 to 0.95 and a negative predictive value close to 1 compared to sterile techniques. Two randomized controlled trials found no difference in infection rate between mid-stream clean-catch, mid-stream urine and random samples. At present, no evidence suggests that sampling technique affects the accuracy of the microbiological diagnosis in non-pregnant women with symptoms of urinary tract infection in primary care. However, the evidence presented is indirect, and the difference between mid-stream clean-catch, mid-stream urine and random samples remains to be investigated in a paired design to verify the present findings.
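
    The positive and negative predictive values reported above come from a standard 2x2 comparison of an index sampling technique against a reference standard. A small helper; the counts in the test are made-up illustrative numbers, not the review's data:

```python
def predictive_values(tp, fp, fn, tn):
    """PPV and NPV of an index urine-sampling technique, counted against
    a reference-standard culture in a 2x2 table."""
    ppv = tp / (tp + fp)   # P(true infection | index test positive)
    npv = tn / (tn + fn)   # P(no infection  | index test negative)
    return ppv, npv
```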

  20. Building spatially-explicit model predictions for ecological condition of streams in the Pacific Northwest: An assessment of landscape variables, models, endpoints and prediction scale

    EPA Science Inventory

    While large-scale, randomized surveys estimate the percentage of a region’s streams in poor ecological condition, identifying particular stream reaches or watersheds in poor condition is an equally important goal for monitoring and management. We built predictive models of strea...

  1. The Estimated Likelihood of Nutrients and Pesticides in Nontidal Headwater Streams of the Maryland Coastal Plain During Base Flow

    EPA Science Inventory

    Water quality in nontidal headwater (first-order) streams of the Coastal Plain during base flow in the late winter and spring is related to land use, hydrogeology, and other natural or human influences in contributing watersheds. A random survey of 174 headwater streams of the Mi...

  2. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    PubMed

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of three-dimensional parallel collagen scaffold on the cell shape, arrangement and extracellular matrix formation of tendon stem cells. Parallel collagen scaffold was fabricated by unidirectional freezing technique, while random collagen scaffold was fabricated by freeze-drying technique. The effects of two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. Parallel collagen scaffold was more akin to tendon than random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly orientated in parallel collagen scaffold, while cells on random collagen scaffold had a disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In parallel collagen scaffold, cells had parallel arrangement, and more spindly cells were observed. By contrast, cells in random collagen scaffold were disordered. Parallel collagen scaffold can induce cells to be in spindly and parallel arrangement, and promote parallel extracellular matrix formation; while random collagen scaffold can induce cells in random arrangement. The results indicate that parallel collagen scaffold is an ideal structure to promote tendon repairing.

  3. Spectrum of walk matrix for Koch network and its application

    NASA Astrophysics Data System (ADS)

    Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi

    2015-06-01

Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectrum of the walk matrix of the Koch network, which displays the prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker to go from an arbitrary node to another node selected randomly according to the steady-state distribution, we then derive an explicit solution for the random target access time of random walks on the Koch network. Finally, we corroborate our computation of the eigenvalues by enumerating spanning trees in the Koch network, using the connection between eigenvalues and spanning trees, where a spanning tree of a network is a subgraph of the network, that is, a tree containing all the nodes.
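The spectral machinery described here can be illustrated on any small graph: the walk matrix W = D⁻¹A shares its (real) spectrum with the symmetric matrix D^(-1/2) A D^(-1/2), its largest eigenvalue is 1, and the random target access time (Kemeny's constant) follows from the nontrivial eigenvalues. A minimal numpy sketch on a hypothetical 4-node graph (not the Koch network, whose recursive construction is omitted here):

```python
import numpy as np

# Hypothetical 4-node graph: a triangle with one pendant node.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
deg = A.sum(axis=1)

# The walk matrix W = D^-1 A is similar to S = D^-1/2 A D^-1/2,
# so its spectrum is real and can be found with a symmetric solver.
S = A / np.sqrt(np.outer(deg, deg))
lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # descending; lam[0] = 1

# Random target access time (Kemeny's constant) from the nontrivial spectrum.
kemeny = float(np.sum(1.0 / (1.0 - lam[1:])))
print(lam, kemeny)
```

For the Koch network the paper derives these eigenvalues in closed form; here the solver simply computes them numerically.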

  4. Experimental study on cesium immobilization in struvite structures.

    PubMed

    Wagh, Arun S; Sayenko, S Y; Shkuropatenko, V A; Tarasov, R V; Dykiy, M P; Svitlychniy, Y O; Virych, V D; Ulybkina, Е А

    2016-01-25

Ceramicrete, a chemically bonded phosphate ceramic, was developed for nuclear waste immobilization and nuclear radiation shielding. Ceramicrete products are fabricated by an acid-base reaction between magnesium oxide and monopotassium phosphate, which yields a struvite-K mineral structure. In this study, we demonstrate that this crystalline structure is ideal for incorporating radioactive Cs into a Ceramicrete matrix. This is accomplished by partially replacing K with Cs in the struvite-K structure, thus forming struvite-(K, Cs) mineral. X-ray diffraction and thermo-gravimetric analyses are used to confirm such a replacement. The resulting product is non-leachable and stable at high temperatures, and hence is an ideal matrix for immobilizing Cs found in high-activity nuclear waste streams. The product can also be used for immobilizing secondary waste streams generated during glass vitrification of spent fuel, or the method described in this article can be used as a pretreatment method during glass vitrification of high-level radioactive waste streams. Furthermore, it suggests a method of producing safe commercial radioactive Cs sources.

  5. PREDICTING THE OCCURRENCE OF NUTRIENTS AND PESTICIDES DURING BASE FLOW IN STREAMS: STATUS OF MID-ATLANTIC COASTAL PLAIN AND MIDWEST CORN BELT STUDIES

    EPA Science Inventory

Random surveys of 174 headwater streams of the Mid-Atlantic Coastal Plain (MACP) and 110 third-order streams in the Midwest Corn Belt (MCB) were conducted in 2000 and 2004, respectively, in two cooperative research studies by the U.S. Environmental Protection Agency and U.S. Geolo...

  6. The upcycling of post-industrial PP/PET waste streams through in-situ microfibrillar preparation

    NASA Astrophysics Data System (ADS)

    Delva, Laurens; Ragaert, Kim; Cardon, Ludwig

    2015-12-01

Post-industrial plastic waste streams can be re-used as secondary material streams for polymer processing by extrusion or injection moulding. One major commercially available waste stream contains polypropylene (PP) contaminated with polyesters (mostly polyethylene terephthalate, PET). An important practical hurdle for the direct implementation of this waste stream is the immiscibility of PP and PET in the melt, which leads to segregation within the polymer structure and adversely affects the reproducibility and mechanical properties of the manufactured parts. It has been indicated in the literature that the creation of PET microfibrils in the PP matrix could undo these drawbacks and upcycle the PP/PET combination. Within the current research, a commercially available virgin PP/PET blend was evaluated for the microfibrillar preparation. The mechanical (tensile and impact) properties, thermal properties and morphology of the composites were characterized at different stages of the microfibrillar preparation.

  7. Gravitational lensing by eigenvalue distributions of random matrix models

    NASA Astrophysics Data System (ADS)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.

  8. Graphic matching based on shape contexts and reweighted random walks

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun

    2018-04-01

Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. Built on the shape context local descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. The main idea is to use the shape context descriptors to steer the random walk at each iteration by controlling its transition probability matrix: a bias matrix is computed from the descriptors and used during the iterations to improve the accuracy of the random walks and random jumps, and the one-to-one registration result is finally obtained by discretizing the resulting matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also inherits the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, including comparisons with other algorithms, confirm that the new method produces excellent graphic matching results.
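The reweighting idea can be sketched as a random walk with biased restarts: a bias vector (standing in here for the shape-context term; the actual reweighted-random-walks affinity construction is not reproduced) is mixed into each walk step, and matching scores are read off the converged distribution. A minimal sketch with random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # number of candidate correspondences

# Placeholder affinity matrix, normalized to be column-stochastic
# (in the real algorithm this encodes pairwise geometric consistency).
M = rng.random((n, n))
W = M / M.sum(axis=0, keepdims=True)

# Bias vector standing in for the unary shape-context similarity term.
b = rng.random(n)
b /= b.sum()

alpha = 0.8                              # mixing (reweighting) parameter
x = np.full(n, 1.0 / n)                  # uniform start
for _ in range(200):
    x = alpha * (W @ x) + (1 - alpha) * b   # walk step plus biased jump

print(x[:5])
```

The iteration is a contraction for alpha < 1, so x converges to a unique score vector; a discretization step (e.g. greedy or Hungarian assignment) would then yield the one-to-one matching.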

  9. Streaming potential modeling in fractured rock: Insights into the identification of hydraulically active fractures

    NASA Astrophysics Data System (ADS)

    Roubinet, D.; Linde, N.; Jougnot, D.; Irving, J.

    2016-05-01

    Numerous field experiments suggest that the self-potential (SP) geophysical method may allow for the detection of hydraulically active fractures and provide information about fracture properties. However, a lack of suitable numerical tools for modeling streaming potentials in fractured media prevents quantitative interpretation and limits our understanding of how the SP method can be used in this regard. To address this issue, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid flow and associated self-potential problems in fractured rock. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.

  10. Microwave off-gas treatment apparatus and process

    DOEpatents

    Schulz, Rebecca L.; Clark, David E.; Wicks, George G.

    2003-01-01

    The invention discloses a microwave off-gas system in which microwave energy is used to treat gaseous waste. A treatment chamber is used to remediate off-gases from an emission source by passing the off-gases through a susceptor matrix, the matrix being exposed to microwave radiation. The microwave radiation and elevated temperatures within the combustion chamber provide for significant reductions in the qualitative and quantitative emissions of the gas waste stream.

  11. A new simple technique for improving the random properties of chaos-based cryptosystems

    NASA Astrophysics Data System (ADS)

    Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.

    2018-03-01

A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique improves the randomness properties of the generated keystream by preventing the system from falling into the short-period cycles caused by digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have shown that our method can considerably improve the randomness of the generated keystreams. Incorporating our randomness-enhancement technique required only 41 extra slices, proving that, apart from being effective, this method is also efficient in terms of area and hardware resources.
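A skew-tent-map keystream generator can be sketched in a few lines. This is a floating-point toy with hypothetical seed and break-point values, not the paper's fixed-point FPGA design, and it omits the proposed cycle-avoidance stage:

```python
# Floating-point toy of a skew-tent-map stream cipher (illustrative only).

def skew_tent(x, p):
    """One iteration of the skew tent map on (0, 1) with break point p."""
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def keystream(seed, p, nbytes):
    x, out = seed, bytearray()
    for _ in range(nbytes):
        for _ in range(8):                 # several iterations per output byte
            x = skew_tent(x, p)
        out.append(int(x * 256) & 0xFF)    # crude quantization to one byte
    return bytes(out)

def xor_cipher(data, ks):
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(msg, keystream(seed=0.37, p=0.499, nbytes=len(msg)))
pt = xor_cipher(ct, keystream(seed=0.37, p=0.499, nbytes=len(msg)))
print(pt)
```

Because XOR with the same deterministic keystream is its own inverse, decryption recovers the plaintext; a finite-precision implementation like this is exactly where short digitization cycles arise, which is what the paper's technique addresses.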

  12. A Framework to Debug Diagnostic Matrices

    NASA Technical Reports Server (NTRS)

    Kodal, Anuradha; Robinson, Peter; Patterson-Hine, Ann

    2013-01-01

Diagnostics is an important concept in system health and monitoring of space operations. Many existing diagnostic algorithms utilize system knowledge in the form of a diagnostic matrix (D-matrix, also known as a diagnostic dictionary, fault signature matrix, or reachability matrix) gleaned from physical models. Sometimes, however, this matrix is not coherent enough to obtain high diagnostic performance. In such a case, it is important to modify the D-matrix based on knowledge obtained from other sources, such as time-series data streams (simulated or maintenance data), within the context of a framework that includes the diagnostic/inference algorithm. A systematic and sequential update procedure, the diagnostic modeling evaluator (DME), is proposed to modify the D-matrix and wrapper logic, considering the least expensive solution first. This iterative procedure includes conditions ranging from modifying 0s and 1s in the matrix to adding or removing rows (failure sources) and columns (tests). We experiment with this framework on datasets from the DX Challenge 2009.
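The D-matrix debugging loop can be illustrated with a toy example: diagnosis matches an observed test-outcome vector against row signatures, and a DME-style update flips the single cheapest entry that restores coherence with the data. All values below are hypothetical:

```python
import numpy as np

# Toy D-matrix: rows = failure sources, columns = tests
# (1 = that failure causes that test to fail). Hypothetical values.
D = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])

def diagnose(D, observed):
    """Failure sources whose test signature matches the observed outcomes."""
    return [i for i, row in enumerate(D) if np.array_equal(row, observed)]

# Maintenance data says failure source 0 actually produced the pattern [1, 0, 0].
observed = np.array([1, 0, 0])
before = diagnose(D, observed)     # no signature matches -> empty diagnosis

# DME-style least-expensive fix: flip the single entry restoring coherence.
D[0, 1] = 0
after = diagnose(D, observed)
print(before, after)
```

A real DME run would score many candidate edits (bit flips, row/column additions and removals) against the full data stream rather than a single observation.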

  13. Redesigning Urban Carbon Cycles: from Waste Stream to Commodity

    NASA Astrophysics Data System (ADS)

    Brabander, D. J.; Fitzstevens, M. G.

    2013-12-01

While there has been extensive research on the global scale to quantify the fluxes and reservoirs of carbon for predictive climate change models, comparatively little attention has been focused on carbon cycles in the built environment. The current management of urban carbon cycles presents a major irony: while cities produce tremendous fluxes of organic carbon waste, their populations are dependent on imported carbon because most urban areas have limited access to locally sourced carbon. The persistence of outdated management schemes is in part due to the fact that reimagining the handling of urban carbon waste streams requires a transdisciplinary approach. Since the end of the 19th century, U.S. cities have generally relied on the same three options for managing organic carbon waste streams: burn it, bury it, or dilute it. These options still underpin the framework for today's design and management strategies for handling urban carbon waste. We contend that urban carbon management systems for the 21st century need to be scalable, must acknowledge how climate modulates the biogeochemical cycling of urban carbon, and should carefully factor in local political and cultural values. Urban waste carbon is a complex matrix ranging from wastewater biosolids to municipal compost. Our first goal in designing targeted and efficient urban carbon management schemes has been examining approaches for categorizing and geochemically fingerprinting these matrices. To date we have used a combination of major and trace element ratio analysis and bulk matrix characteristics, such as pH, density, and loss on ignition, to feed multivariable statistical analysis in order to identify variables that are effective tracers for each waste stream.
This approach was initially developed for Boston, MA, US, in the context of identifying components of municipal compost streams that were responsible for increasing the lead inventory in the final product to concentrations that no longer permitted its use in supporting urban agriculture. We are now extending this approach to additional large U.S. and European urban centers where different philosophical and technological approaches to managing urban waste carbon have resulted in a range of infrastructures, from highly distributed systems (Germany) to centralized mega facilities (London). Ultimately, this research will lead to a decision-making matrix model that will permit cities to customize their urban carbon waste stream facilities and transform this waste into a usable commodity.

  14. Random matrices and the New York City subway system

    NASA Astrophysics Data System (ADS)

    Jagannath, Aukosh; Trogdon, Thomas

    2017-09-01

    We analyze subway arrival times in the New York City subway system. We find regimes where the gaps between trains are well modeled by (unitarily invariant) random matrix statistics and Poisson statistics. The departure from random matrix statistics is captured by the value of the Coulomb potential along the subway route. This departure becomes more pronounced as trains make more stops.
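The random-matrix-versus-Poisson distinction used in this study can be reproduced numerically with the spacing-ratio statistic, which needs no unfolding: its mean is about 0.60 for GUE-like (unitarily invariant) spectra and about 0.39 for Poisson statistics. A sketch with synthetic data (not subway arrival times):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000

# GUE spectrum: eigenvalues of a random Hermitian matrix.
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
gue = np.sort(np.linalg.eigvalsh((X + X.conj().T) / 2))

# Poisson reference: iid points on a line.
poisson = np.sort(rng.random(N))

def mean_gap_ratio(levels):
    """Mean of min(r, 1/r) over ratios r of consecutive level spacings."""
    s = np.diff(levels)
    r = s[1:] / s[:-1]
    return float(np.mean(np.minimum(r, 1.0 / r)))

gue_r, poisson_r = mean_gap_ratio(gue), mean_gap_ratio(poisson)
print(gue_r, poisson_r)     # roughly 0.60 vs 0.39: level repulsion vs none
```

Applied to measured train gaps, the same statistic would indicate whether a route sits closer to the random-matrix or the Poisson regime.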

  15. The persistence of the attentional bias to regularities in a changing environment.

    PubMed

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  16. Predicting alpine headwater stream intermittency: a case study in the northern Rocky Mountains

    USGS Publications Warehouse

    Sando, Thomas R.; Blasch, Kyle W.

    2015-01-01

    This investigation used climatic, geological, and environmental data coupled with observational stream intermittency data to predict alpine headwater stream intermittency. Prediction was made using a random forest classification model. Results showed that the most important variables in the prediction model were snowpack persistence, represented by average snow extent from March through July, mean annual mean monthly minimum temperature, and surface geology types. For stream catchments with intermittent headwater streams, snowpack, on average, persisted until early June, whereas for stream catchments with perennial headwater streams, snowpack, on average, persisted until early July. Additionally, on average, stream catchments with intermittent headwater streams were about 0.7 °C warmer than stream catchments with perennial headwater streams. Finally, headwater stream catchments primarily underlain by coarse, permeable sediment are significantly more likely to have intermittent headwater streams than those primarily underlain by impermeable bedrock. Comparison of the predicted streamflow classification with observed stream status indicated a four percent classification error for first-order streams and a 21 percent classification error for all stream orders in the study area.
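A random forest is an ensemble of trees fit on bootstrap resamples. As a library-free stand-in, the sketch below bags single-split decision stumps on synthetic catchment features loosely mimicking the paper's predictors (snow-off timing, minimum temperature, permeable geology); all numbers and the labeling rule are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Invented catchment features: [day of year snow disappears,
# annual mean monthly minimum temperature (deg C), permeable-geology fraction].
X = np.column_stack([rng.normal(170, 15, n),
                     rng.normal(-3.0, 1.0, n),
                     rng.random(n)])
# Invented rule mimicking the findings: earlier snow-off AND warmer,
# OR highly permeable geology -> intermittent (label 1).
y = (((X[:, 0] < 170) & (X[:, 1] > -3.0)) | (X[:, 2] > 0.8)).astype(int)

def fit_stump(X, y):
    """Best single-feature threshold split by training accuracy."""
    best, best_acc = (0, 0.0, 1), 0.0
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)):
            for sign in (1, -1):
                acc = np.mean((sign * (X[:, j] - t) > 0).astype(int) == y)
                if acc > best_acc:
                    best, best_acc = (j, t, sign), acc
    return best

def predict(stumps, X):
    votes = np.mean([s * (X[:, j] - t) > 0 for j, t, s in stumps], axis=0)
    return (votes > 0.5).astype(int)

# Bagging: each stump is trained on its own bootstrap resample.
stumps = [fit_stump(X[idx], y[idx])
          for idx in (rng.integers(0, n, n) for _ in range(25))]
acc = float(np.mean(predict(stumps, X) == y))
print(acc)
```

A production model (as in the paper) would use full decision trees with random feature subsets and held-out validation rather than training accuracy.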

  17. Complementary fMRI and EEG evidence for more efficient neural processing of rhythmic vs. unpredictably timed sounds

    PubMed Central

    van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles

    2015-01-01

The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044

  18. Can an aquatic macrophyte bioaccumulate glyphosate? A watershed scale study using a non-target hydrophyte Ludwigia peploides

    NASA Astrophysics Data System (ADS)

    Perez, Debora; Okada, Elena; Menone, Mirta; Aparicio, Virginia; Costa, Jose Luis

    2017-04-01

The hydrophyte Ludwigia peploides is widely distributed in South American streams and can therefore be used as a biomonitor for pesticides used in agricultural production. Glyphosate is one of the main pesticides used in Argentina, which has resulted in its occurrence in non-target wetland ecosystems. The objectives of this study were to: 1) establish and validate an extraction and quantification methodology for glyphosate in L. peploides plants, and 2) evaluate the role of this species as a glyphosate biomonitor in the agricultural watershed of the El Crespo stream. For the first objective, we collected plant material in the field. The leaves were dissected, oven-dried at 60 °C, ground, and sieved through a 0.5 mm mesh. Different solutions were tested for the extraction step. Labeled glyphosate was used as an internal standard to evaluate the recovery rate and the matrix effect of the different extraction methods. Glyphosate was derivatized with FMOC-Cl and then quantified by ultra-performance liquid chromatography (UPLC) coupled to a tandem mass spectrometer (MS/MS). The method based on an aqueous-phase extraction step with 0.01 mg/mL of activated carbon as a clean-up to decrease the matrix interference had a recovery of 117 ± 20% and a matrix effect of less than 20%. This method was used to analyze the glyphosate levels in L. peploides in the El Crespo stream. For the second objective, plants of L. peploides were collected in March 2016 at eight monitoring sites along the stream from the headwaters to the stream mouth. Surface water and sediment samples were collected at the same time to calculate the bioconcentration factors (BCFs) and biota-sediment accumulation factors (BSAFs). The BCFs ranged between 28.57 and 280 L/kg and the BSAFs ranged between 2.52 and 30.66 at the different sites.
These results indicate that L. peploides can bioaccumulate glyphosate in its leaves, and that bioavailability is driven mainly by the herbicide molecules present in surface water rather than in sediment. In this sense, L. peploides could be used as a biomonitor organism to evaluate glyphosate levels in freshwater aquatic ecosystems.

  19. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines the advantages of randomized column subspaces and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, the RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., the background component), a noisy matrix (i.e., the noise component), and a sparse anomaly matrix (i.e., the anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all pixels are projected onto the complement of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
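The core geometric idea, projecting pixels onto the complement of a randomized column subspace of the background, can be sketched with synthetic data. The column-wise RPCA purification step is omitted here, so the toy simply plants the anomaly in a column that was not sampled:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "HSI" matrix: 50 bands x 500 pixels with a rank-3 background.
B = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 500))

# Random column sample sketches the background subspace (step 1 in miniature).
cols = rng.choice(500, size=40, replace=False)
sampled = set(cols.tolist())
anomaly_idx = next(i for i in range(500) if i not in sampled)

X = B + 0.01 * rng.normal(size=B.shape)
X[:, anomaly_idx] += 5.0 * rng.normal(size=50)   # plant one anomalous pixel

U, _, _ = np.linalg.svd(X[:, cols], full_matrices=False)
Q = U[:, :3]                    # randomized column subspace of the background

# Residual after projecting onto the complement of the background subspace;
# anomalies stand out as large residuals.
resid = np.linalg.norm(X - Q @ (Q.T @ X), axis=0)
print(int(np.argmax(resid)), anomaly_idx)
```

In the full algorithm the CWRPCA step removes any anomaly columns that do land in the sample, which is what makes the subspace trustworthy on real data.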

  20. Data-driven probability concentration and sampling on manifold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2016-09-15

A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
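The first ingredient, generating new realizations from a kernel-density estimate of data concentrated near a manifold, reduces to resampling a data point and adding kernel noise. A sketch on a synthetic circle-shaped dataset (the diffusion-maps reduction and MCMC steps of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(5)

# Data concentrated near a 1-D manifold (a noisy circle) in R^2.
theta = rng.uniform(0.0, 2.0 * np.pi, 300)
data = np.column_stack([np.cos(theta), np.sin(theta)])
data += 0.05 * rng.normal(size=data.shape)

# Kernel-density resampling: a new realization is a randomly chosen data
# point plus Gaussian kernel noise with a hand-picked bandwidth h.
h = 0.1
samples = data[rng.integers(0, 300, 1000)] + h * rng.normal(size=(1000, 2))

radii = np.linalg.norm(samples, axis=1)
print(radii.mean())                # close to 1: samples stay near the circle
```

Plain KDE sampling like this already concentrates new points near the data; the paper's diffusion-map reduction keeps that concentration sharp in high dimensions, where naive kernel noise would smear mass off the manifold.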

  1. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
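The setting can be reproduced in a few lines: a sparse symmetric random matrix on an Erdős-Rényi graph with small mean degree has a spectrum mixing a continuous part from the giant component with delta peaks from finite clusters, e.g. the exact zeros contributed by isolated nodes. A small numerical sketch (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 600, 2.0                       # nodes and mean degree (sparse regime)

# Sparse symmetric random matrix on an Erdos-Renyi graph with +/-1 weights.
mask = np.triu(rng.random((n, n)) < c / n, k=1)
A = np.where(mask, rng.choice([-1.0, 1.0], size=(n, n)), 0.0)
A = A + A.T

eig = np.linalg.eigvalsh(A)

# Each isolated node (a size-1 "finite cluster") contributes an exact zero
# eigenvalue; other small clusters add further discrete peaks.
isolated = int(np.sum((A != 0).sum(axis=1) == 0))
zeros = int(np.sum(np.abs(eig) < 1e-10))
print(isolated, zeros)
```

At mean degree 2 a fraction of roughly e^(-2) of the nodes is isolated, so the zero peak alone is macroscopic; the disentangling method of the paper separates such finite-cluster contributions from the giant-component part analytically.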

  2. The breast reconstruction evaluation of acellular dermal matrix as a sling trial (BREASTrial): design and methods of a prospective randomized trial.

    PubMed

    Agarwal, Jayant P; Mendenhall, Shaun D; Anderson, Layla A; Ying, Jian; Boucher, Kenneth M; Liu, Ting; Neumayer, Leigh A

    2015-01-01

    Recent literature has focused on the advantages and disadvantages of using acellular dermal matrix in breast reconstruction. Many of the reported data are from low level-of-evidence studies, leaving many questions incompletely answered. The present randomized trial provides high-level data on the incidence and severity of complications in acellular dermal matrix breast reconstruction between two commonly used types of acellular dermal matrix. A prospective randomized trial was conducted to compare outcomes of immediate staged tissue expander breast reconstruction using either AlloDerm or DermaMatrix. The impact of body mass index, smoking, diabetes, mastectomy type, radiation therapy, and chemotherapy on outcomes was analyzed. Acellular dermal matrix biointegration was analyzed clinically and histologically. Patient satisfaction was assessed by means of preoperative and postoperative surveys. Logistic regression models were used to identify predictors of complications. This article reports on the study design, surgical technique, patient characteristics, and preoperative survey results, with outcomes data in a separate report. After 2.5 years, we successfully enrolled and randomized 128 patients (199 breasts). The majority of patients were healthy nonsmokers, with 41 percent of patients receiving radiation therapy and 49 percent receiving chemotherapy. Half of the mastectomies were prophylactic, with nipple-sparing mastectomy common in both cancer and prophylactic cases. Preoperative survey results indicate that patients were satisfied with their premastectomy breast reconstruction education. Results from the Breast Reconstruction Evaluation Using Acellular Dermal Matrix as a Sling Trial will assist plastic surgeons in making evidence-based decisions regarding acellular dermal matrix-assisted tissue expander breast reconstruction. Therapeutic, II.

  3. Random numbers from vacuum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com; Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
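A linear feedback shift register of the kind used for such extraction cycles through all 2^16 − 1 nonzero states when its taps form a primitive polynomial; the sketch below checks this for the standard 16-bit example x^16 + x^14 + x^13 + x^11 + 1 (a generic illustration, not the paper's actual extractor):

```python
# 16-bit Fibonacci LFSR with taps 16, 14, 13, 11 (a standard maximal-length
# register); the raw digitized noise would be mixed through such a register
# to whiten it.

def lfsr16_step(state):
    # Feedback bit = XOR of the tap positions (bit 1 = least significant).
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

seed = 0xACE1                      # any nonzero 16-bit seed works
s, period = lfsr16_step(seed), 1
while s != seed:
    s = lfsr16_step(s)
    period += 1
print(period)                      # 65535 = 2**16 - 1, the full nonzero cycle
```

The maximal period is what makes an LFSR a cheap, well-understood building block for real-time randomness extraction at rates like the 480 Mbit/s reported here.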

  4. Partial transpose of random quantum states: Exact formulas and meanders

    NASA Astrophysics Data System (ADS)

    Fukuda, Motohisa; Śniady, Piotr

    2013-04-01

    We investigate the asymptotic behavior of the empirical eigenvalues distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show large deviation bound for the latter.

  5. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    NASA Astrophysics Data System (ADS)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

    RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  6. QCD-inspired spectra from Blue's functions

    NASA Astrophysics Data System (ADS)

    Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail

    1996-02-01

    We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.

  7. Universality in chaos: Lyapunov spectrum and random matrix theory.

    PubMed

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.

  8. Universality in chaos: Lyapunov spectrum and random matrix theory

    NASA Astrophysics Data System (ADS)

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
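
    The closing claim, random-matrix behaviour in products of random matrices, is easy to reproduce. The sketch below (illustrative size and product length, not the paper's matrix model) accumulates finite-time Lyapunov exponents of a product of i.i.d. Gaussian matrices through repeated QR decompositions, the standard numerically stable recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 400   # matrix size and number of factors (illustrative choices)

# Finite-time Lyapunov exponents of a product of i.i.d. random matrices,
# accumulated via repeated QR decomposition so the factors never overflow:
# the log of |diag(R)| tracks the growth rates along orthogonal directions.
Q = np.eye(N)
log_r = np.zeros(N)
for _ in range(T):
    A = rng.standard_normal((N, N)) / np.sqrt(N)
    Q, R = np.linalg.qr(A @ Q)
    log_r += np.log(np.abs(np.diag(R)))

lyap = np.sort(log_r / T)[::-1]   # finite-time Lyapunov spectrum, descending
print(lyap)
```

From here, the statistical test in the abstract amounts to unfolding this spectrum and comparing its level spacings with random-matrix predictions.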

  9. Social patterns revealed through random matrix theory

    NASA Astrophysics Data System (ADS)

    Sarkar, Camellia; Jalan, Sarika

    2014-11-01

    Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights of interactions, which emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, granting them more freedom of choice in situations of financial security. While the structural organization of networks remains the same throughout all datasets, random matrix theory provides insight into the interaction patterns of individuals in the society in situations of crisis. It has also been contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.

  10. Performance of the goulden large-sample extractor in multiclass pesticide isolation and preconcentration from stream water

    USGS Publications Warehouse

    Foster, G.D.; Foreman, W.T.; Gates, Paul M.

    1991-01-01

    The reliability of the Goulden large-sample extractor in preconcentrating pesticides from water was evaluated from the recoveries of 35 pesticides amended to filtered stream waters. Recoveries greater than 90% were observed for many of the pesticides in each major chemical class, but recoveries for some of the individual pesticides varied in seemingly unpredictable ways. Corrections cannot yet be factored into liquid-liquid extraction theory to account for matrix effects, which were apparent between the two stream waters tested. The Goulden large-sample extractor appears to be well suited for rapid chemical screening applications, with quantitative analysis requiring special quality control considerations. © 1991 American Chemical Society.

  11. Composite media for ion processing

    DOEpatents

    Mann, Nick R [Blackfoot, ID]; Wood, Donald J [Peshastin, WA]; Todd, Terry A [Aberdeen, ID]; Sebesta, Ferdinand [Prague, CZ]

    2009-12-08

    Composite media, systems, and devices for substantially removing, or otherwise processing, one or more constituents of a fluid stream. The composite media comprise a plurality of beads, each having a matrix substantially comprising polyacrylonitrile (PAN) and supporting one or more active components which are effective in removing, by various mechanisms, one or more constituents from a fluid stream. Due to the porosity and large surface area of the beads, a high level of contact is achieved between composite media of the present invention and the fluid stream being processed. Further, the homogeneity of the beads facilitates use of the beads in high volume applications where it is desired to effectively process a large volume of flow per unit of time.

  12. Accurate Quasiparticle Spectra from the T-Matrix Self-Energy and the Particle-Particle Random Phase Approximation.

    PubMed

    Zhang, Du; Su, Neil Qiang; Yang, Weitao

    2017-07-20

    The GW self-energy, especially G0W0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.

  13. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    NASA Astrophysics Data System (ADS)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen-plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter-mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that generates all the random entries of the measurement matrix anew, our scheme offers superior efficiency while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
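
    The counter-mode refresh idea can be sketched independently of the paper's reality-preserving fractional cosine construction. The `measurement_matrix` helper below is hypothetical: it derives each Gaussian measurement matrix from a shared key and a per-image counter, so a matrix is never reused:

```python
import hashlib
import numpy as np

def measurement_matrix(key: bytes, counter: int, m: int, n: int) -> np.ndarray:
    """Hypothetical helper for illustration only: derive a fresh Gaussian
    measurement matrix from (key, counter). The counter plays the same role
    as in block-cipher counter mode, guaranteeing a new matrix per image
    (the paper instead refreshes the secret sparse basis)."""
    digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.standard_normal((m, n)) / np.sqrt(m)

Phi_0 = measurement_matrix(b"shared-secret", 0, 64, 256)
Phi_1 = measurement_matrix(b"shared-secret", 1, 64, 256)

# Same key and counter reproduce the matrix (receiver side); a bumped
# counter yields an unrelated matrix, blocking chosen-plaintext reuse.
print(np.allclose(Phi_0, measurement_matrix(b"shared-secret", 0, 64, 256)))
print(np.allclose(Phi_0, Phi_1))
```

Because both sides can regenerate Φ from the shared key and the counter, no per-image matrix ever needs to be transmitted.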

  14. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with 'big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
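
    A minimal sketch of the idea (illustrative only, not the authors' single-pass or out-of-core algorithms): project the snapshot matrix onto a random low-dimensional subspace, optionally refine with power iterations, then run standard exact DMD on the small projected problem.

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2, seed=0):
    """Sketch of a randomized DMD. X, Y are snapshot matrices with
    Y ~ A X; returns approximate DMD eigenvalues and modes of A."""
    rng = np.random.default_rng(seed)
    # Randomized range finder: sample the column space of X with a random
    # test matrix, then refine the subspace with a few power iterations.
    Q = np.linalg.qr(X @ rng.standard_normal((X.shape[1], rank + oversample)))[0]
    for _ in range(n_power):
        Q = np.linalg.qr(X @ (X.T @ Q))[0]
    # Standard (exact) DMD on the small projected matrices.
    Xs, Ys = Q.T @ X, Q.T @ Y
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = (U.T @ Ys @ Vt.T) / s          # projected operator
    evals, W = np.linalg.eig(Atilde)
    modes = Q @ ((Ys @ Vt.T / s) @ W)       # modes lifted back to full space
    return evals, modes

# Toy check: snapshots generated by a rank-3 linear map with known eigenvalues.
rng = np.random.default_rng(1)
P = np.linalg.qr(rng.standard_normal((100, 3)))[0]
A = P @ np.diag([0.9, 0.8, 0.5]) @ P.T
S = np.empty((100, 31))
S[:, 0] = P.sum(axis=1)
for t in range(30):
    S[:, t + 1] = A @ S[:, t]
evals, _ = randomized_dmd(S[:, :-1], S[:, 1:], rank=3)
print(np.sort(evals.real)[::-1])
```

The `oversample` and `n_power` parameters are the two knobs the abstract mentions for controlling approximation quality.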

  15. CEM V based special cementitious materials investigated by means of SANS method. Preliminary results

    NASA Astrophysics Data System (ADS)

    Dragolici, A. C.; Balasoiu, M.; Orelovich, O. L.; Ionascu, L.; Nicu, M.; Soloviov, D. V.; Kuklin, A. I.; Lizunov, E. I.; Dragolici, F.

    2017-05-01

    The management of radioactive waste assumes conditioning in a cement matrix as a stable embedding and disposal material. The cement matrix is the first and most important engineered barrier against migration into the environment of the radionuclides contained in the waste packages. Knowing how the microstructure develops is therefore desirable in order to assess the compatibility of radioactive streams with cement and predict waste form performance during storage and disposal. For conditioning wastes containing radioactive aluminum, new formulas of low-basicity cements are required, using coatings as a barrier between the metal and the conditioning environment or introducing a corrosion inhibitor into the matrix system. A preliminary microstructure investigation of such an improved CEM V based cement matrix is reported.

  16. Evaluation of USEPA method 1622 for detection of Cryptosporidium oocysts in stream waters

    USGS Publications Warehouse

    Simmons, O. D.; Sobsey, M.D.; Schaefer, F. W.; Francy, D.S.; Nally, R.A.; Heaney, C.D.

    2001-01-01

    To improve surveillance for Cryptosporidium oocysts in water, the US Environmental Protection Agency developed method 1622, which consists of filtration, concentration, immunomagnetic separation, fluorescent antibody and 4′,6-diamidino-2-phenylindole (DAPI) counter-staining, and microscopic evaluation. Two filters were compared for analysis of 11 stream water samples collected throughout the United States. Replicate 10-L stream water samples (unspiked and spiked with 100-250 oocysts) were tested to evaluate matrix effects. Oocyst recoveries from the stream water samples averaged 22% (standard deviation [SD] = ±17%) with a membrane disk and 12% (SD = ±6%) with a capsule filter. Oocyst recoveries from reagent water precision and recovery samples averaged 39% (SD = ±13%) with a membrane disk and 47% (SD = ±19%) with a capsule filter. These results demonstrate that Cryptosporidium oocysts can be recovered from stream waters using method 1622, but recoveries are lower than those from reagent-grade water. This research also evaluated concentrations of indicator bacteria in the stream water samples. Because few samples were oocyst-positive, relationships between detections of oocysts and concentrations of indicator organisms could not be determined.

  17. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equations solver with immersed boundaries capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).

  18. C-13 dynamics in benthic algae: Effects of light, phosphorus, and biomass development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Walter; Fanta, S.E.; Roberts, Brian J

    2008-07-01

    We performed three experiments in indoor streams and one experiment in a natural stream to investigate the effects of growth factors on δ13C levels in benthic microalgae. In the indoor streams, algae grown under conditions of high light and high phosphorus had δ13C values that were 16‰ higher than those in algae grown under conditions of low light and low phosphorus. Light effects were much stronger than phosphorus effects. The effects of both factors increased in strength as algal biomass accrued, and by the end of the experiments, algal δ13C and biomass were highly correlated. In the natural stream, algae exposed to direct sunlight were enriched 15‰ over shaded algae, corroborating the strong effect of light in the indoor streams. Growth factors such as light and nutrients probably reduce discrimination against 13C (raising δ13C values) in benthic microalgae by causing CO2 depletion both within individual cells and within the assemblage matrix. However, because the most marked fractionation occurred in older and thicker assemblages, CO2 depletion within the assemblage matrix appeared to be more important than depletion within individual cells. In the absence of carbon-concentrating mechanisms, elevated δ13C suggests that inorganic carbon may limit the growth of benthic algae. The extensive range of δ13C values (−14‰ to −36‰) created by light and nutrient manipulations in this study easily encompassed the mean δ13C values of both C3 and C4 terrestrial plants, indicating the challenge aquatic ecologists face in identifying carbon sources for higher trophic levels when light and nutrient conditions vary.

  19. Fast and Accurate Hybrid Stream PCRTM-SOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10⁻⁴ mW/(cm²·sr·cm⁻¹). The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.

  20. DOE Waste Treatability Group Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkpatrick, T.D.

    1995-01-01

    This guidance presents a method and definitions for aggregating U.S. Department of Energy (DOE) waste into streams and treatability groups based on characteristic parameters that influence waste management technology needs. Adaptable to all DOE waste types (i.e., radioactive waste, hazardous waste, mixed waste, sanitary waste), the guidance establishes categories and definitions that reflect variations within the radiological, matrix (e.g., bulk physical/chemical form), and regulated contaminant characteristics of DOE waste. Beginning at the waste container level, the guidance presents a logical approach to implementing the characteristic parameter categories as part of the basis for defining waste streams and as the sole basis for assigning streams to treatability groups. Implementation of this guidance at each DOE site will facilitate the development of technically defined, site-specific waste stream data sets to support waste management planning and reporting activities. Consistent implementation at all of the sites will enable aggregation of the site-specific waste stream data sets into comparable national data sets to support these activities at a DOE complex-wide level.

  1. State-dependent and odour-mediated anemotactic responses of the predatory mite Phytoseiulus persimilis in a wind tunnel.

    PubMed

    Van Tilborg, Merijn; Sabelis, Maurice W; Roessingh, Peter

    2004-01-01

    Anemotaxis in the predatory mite Phytoseiulus persimilis (both well-fed and starved) has previously been studied on a wire grid under slightly turbulent airflow conditions yielding weak, yet distinct, gradients in wind speed and odour concentration (Sabelis and Van der Weel 1993). Such conditions might have critically influenced the outcome of the study. We repeated these experiments under laminar airflow conditions on a flat surface in a wind tunnel, thereby avoiding variation in wind speed and odour concentration. Treatments for starved and well-fed mites were (1) still air without herbivore-induced plant volatiles (HIPV) (well-fed mites only), (2) an HIPV-free air stream, and (3) an air stream with HIPV (originating from lima bean plants infested by two-spotted spider mites, Tetranychus urticae). Well-fed mites oriented in random directions in still air without HIPV. In an air stream, starved mites always oriented upwind, whether plant odours were present or not. Well-fed mites oriented downwind in an HIPV-free air stream, but in random directions in an air stream with HIPV. Only under the last treatment did our results differ from those of Sabelis and Van der Weel (1993).

  2. Modeled streamflow metrics on small, ungaged stream reaches in the Upper Colorado River Basin

    USGS Publications Warehouse

    Reynolds, Lindsay V.; Shafroth, Patrick B.

    2016-01-20

    Modeling streamflow is an important approach for understanding landscape-scale drivers of flow and estimating flows where there are no streamgage records. In this study, conducted by the U.S. Geological Survey in cooperation with Colorado State University, the objectives were to model streamflow metrics on small, ungaged streams in the Upper Colorado River Basin and to identify streams that are potentially threatened with becoming intermittent under drier climate conditions. The Upper Colorado River Basin is a region that is critical for water resources and is also projected to experience large future shifts toward a drier climate. A random forest modeling approach was used to model the relationship between streamflow metrics and environmental variables. Flow metrics were then projected to ungaged reaches in the Upper Colorado River Basin using environmental variables for each stream, represented as raster cells, in the basin. Last, the projected random forest models of minimum flow coefficient of variation and specific mean daily flow were used to highlight streams with a minimum flow coefficient of variation greater than 61.84 percent and a specific mean daily flow less than 0.096, suggesting that these streams will be the most threatened with shifting to intermittent flow regimes under drier climate conditions. Map projection products can help scientists, land managers, and policymakers understand current hydrology in the Upper Colorado River Basin and make informed decisions regarding water resources. With knowledge of which streams are likely to undergo significant drying in the future, managers and scientists can plan for stream-dependent ecosystems and human water users.
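
    The modeling chain, fitting a random forest to flow metrics and thresholding the projections, can be sketched with synthetic data. Everything below except the 61.84 percent threshold is invented for illustration (covariates, coefficients, sample sizes):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for gaged reaches: three invented environmental
# covariates and a noisy minimum-flow coefficient of variation.
env = rng.uniform(size=(500, 3))          # e.g. precipitation, elevation, drainage area
flow_cv = 40 + 50 * env[:, 0] - 30 * env[:, 1] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(env, flow_cv)

# "Project" the fitted model onto ungaged reaches (more synthetic
# covariates), then flag reaches above the study's 61.84 percent threshold.
ungaged = rng.uniform(size=(1000, 3))
at_risk = model.predict(ungaged) > 61.84
print(f"{at_risk.sum()} of 1000 reaches flagged as drought-vulnerable")
```

In the actual study the covariates are raster cells of landscape and climate variables, and the second threshold (specific mean daily flow < 0.096) is applied the same way.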

  3. Spontaneous osteosarcoma of the femur in a non-obese diabetic mouse

    PubMed Central

    Hong, Sunhwa; Lee, Hyun-A; Choe, Ohmok; Chung, Youngho

    2011-01-01

    An abnormal swelling was identified in the distal portion of the right femur in a 1-year-old non-obese diabetic (NOD) mouse. Grossly, a large mass was observed in the distal right femur. Lesions were poorly marginated, associated with destruction of the cancellous and cortical elements of the bone, and showed ossification within the soft tissue component. Histologically, the tumor was identified as a poorly differentiated sarcoma. Histopathologic examination of the bone masses revealed invasive proliferation of poorly differentiated neoplastic mesenchymal cells forming streams, bundles, and nests, which resulted in destruction of normal bone. Neoplastic cells exhibited random variation in cellular appearance and arrangement, as well as matrix composition and abundance. Haphazard and often intermingling patterns of osteogenic, chondroblastic, lipoblastic, and angiogenic tissues were present. Larger areas of neoplastic bone and hyaline cartilage contained multiple large areas of hemorrhage and necrosis bordered by neoplastic cells. The mass was diagnosed as an osteosarcoma. To our knowledge, this is the first spontaneous osteosarcoma in an NOD mouse. PMID:21998615

  4. Stellar Stream and Halo Structure in the Andromeda Galaxy from a Subaru/Hyper Suprime-Cam Survey

    NASA Astrophysics Data System (ADS)

    Komiyama, Yutaka; Chiba, Masashi; Tanaka, Mikito; Tanaka, Masayuki; Kirihara, Takanobu; Miki, Yohei; Mori, Masao; Lupton, Robert H.; Guhathakurta, Puragra; Kalirai, Jason S.; Gilbert, Karoline; Kirby, Evan; Lee, Myung Gyoon; Jang, In Sung; Sharma, Sanjib; Hayashi, Kohei

    2018-01-01

    We present wide and deep photometry of the northwestern part of the halo of the Andromeda galaxy (M31) using Hyper Suprime-Cam on the Subaru Telescope. The survey covers a 9.2 deg² field in the g, i, and NB515 bands and shows a clear red giant branch (RGB) of M31's halo stars and a pronounced red clump (RC) feature. The spatial distribution of RC stars shows a prominent stream feature, the Northwestern (NW) Stream, and a diffuse substructure in the southern part of our survey field. We estimate the distances based on the RC method and obtain (m − M) = 24.63 ± 0.191 (random) ± 0.057 (systematic) and 24.29 ± 0.211 (random) ± 0.057 (systematic) mag for the NW Stream and diffuse substructure, respectively, implying that the NW Stream is located behind M31, whereas the diffuse substructure is located in front of it. We also estimate line-of-sight distances along the NW Stream and find that the southern part of the stream is ∼20 kpc closer to us relative to the northern part. The distance to the NW Stream inferred from the isochrone fitting to the color–magnitude diagram favors the RC-based distance, but the tip of the RGB (TRGB)-based distance estimated for NB515-selected RGB stars does not agree with it. The surface number density distribution of RC stars across the NW Stream is found to be approximately Gaussian with an FWHM of ∼25 arcmin (5.7 kpc), with a slight skew to the southwest side. That along the NW Stream shows a complicated structure, including variations in number density and a significant gap in the stream. Based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
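
    The quoted moduli convert to line-of-sight distances via d[pc] = 10^((m − M + 5)/5); a quick check against a commonly quoted M31 distance of roughly 785 kpc reproduces the behind/in-front conclusion:

```python
# Distance modulus to physical distance: d [pc] = 10^((m - M + 5) / 5).
def modulus_to_kpc(mu: float) -> float:
    return 10 ** ((mu + 5.0) / 5.0) / 1000.0

d_stream = modulus_to_kpc(24.63)   # NW Stream
d_diffuse = modulus_to_kpc(24.29)  # diffuse substructure
d_m31 = 785.0                      # kpc, a commonly quoted M31 distance

print(f"NW Stream: {d_stream:.0f} kpc, diffuse substructure: {d_diffuse:.0f} kpc")
print(d_stream > d_m31, d_diffuse < d_m31)  # behind M31, in front of M31
```

The quoted random and systematic errors of ~0.2 mag translate to roughly ±10% in distance, so the ordering relative to M31 is robust.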

  5. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To ensure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to operationally retrieve cloud parameters from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of the normalized right and left eigenvectors, the telescoping technique, the Padé approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.

  6. Stoichiometry of hydrological C, N, and P losses across climate and geology: An environmental matrix approach across New Zealand primary forests

    NASA Astrophysics Data System (ADS)

    McGroddy, M. E.; Baisden, W. T.; Hedin, L. O.

    2008-03-01

    Hydrologic losses can play a key role in regulating ecosystem nutrient balances, particularly in regions where baseline nutrient cycles are not augmented by industrial deposition. We used first-order streams to integrate hydrologic losses at the watershed scale across unpolluted old-growth forests in New Zealand. We employed a matrix approach to resolve how stream water concentrations of dissolved organic carbon (DOC), organic and inorganic nitrogen (DON and DIN), and organic and inorganic phosphorus (DOP and DIP) varied as a function of landscape differences in climate and geology. We found stream water total dissolved nitrogen (TDN) to be dominated by organic forms (medians for DON, 81.3%, nitrate-N, 12.6%, and ammonium-N, 3.9%). The median stream water DOC:TDN:TDP molar ratio of 1050:21:1 favored C slightly over N and P when compared to typical temperate forest foliage ratios. Using the full set of variables in a multiple regression approach explained approximately half of the variability in DON, DOC, and TDP concentrations. Building on this approach we combined a simplified set of variables with a simple water balance model in a regression designed to predict DON export at larger spatial scales. Incorporating the effects of climate and geologic variables on nutrient exports will greatly aid the development of integrated Earth-climate biogeochemical models which are able to take into account multiple element dynamics and complex natural landscapes.

  7. Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites

    NASA Astrophysics Data System (ADS)

    Olekhno, N. A.; Beltukov, Y. M.

    2018-05-01

    Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions 0

  8. Spectral statistics of random geometric graphs

    NASA Astrophysics Data System (ADS)

    Dettmann, C. P.; Georgiou, O.; Knight, G.

    2017-04-01

    We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and at long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, the level of community structure, and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
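
    The Poisson-versus-GOE spacing diagnostic reduces to a short computation. The sketch below applies it to a GOE matrix; the same recipe applies to a graph's adjacency or Laplacian spectrum, and the crude bulk-only unfolding used here is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400

# Nearest-neighbour level spacings of a GOE matrix, rescaled ("unfolded")
# so the mean spacing is 1.
H = rng.standard_normal((N, N))
H = (H + H.T) / np.sqrt(2 * N)              # GOE normalization, spectrum in [-2, 2]
levels = np.sort(np.linalg.eigvalsh(H))
bulk = levels[N // 4 : 3 * N // 4]          # keep the bulk, drop the edges
s = np.diff(bulk)
s /= s.mean()                                # crude unfolding by the bulk mean

# GOE (Wigner surmise) predicts P(s) = (pi/2) s exp(-pi s^2 / 4): strong
# level repulsion, so very few spacings fall near zero. Poisson statistics
# (P(s) = exp(-s)) would put roughly 10% of spacings below 0.1.
frac_small = np.mean(s < 0.1)
print(frac_small)
```

A proper analysis would unfold with a local density estimate rather than one global mean, but even this crude version makes the repulsion near s = 0 obvious.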

  9. Optimal sampling design for estimating spatial distribution and abundance of a freshwater mussel population

    USGS Publications Warehouse

    Pooler, P.S.; Smith, D.R.

    2005-01-01

    We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25 m² gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m² sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: the Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and up stream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
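
    Of the three clustering measures compared, Morisita's index is straightforward to compute from quadrat counts. The sketch below uses synthetic counts (not the Cacapon River census) to show how it separates random from clustered populations:

```python
import numpy as np

def morisita_index(counts):
    """Morisita's index of dispersion for quadrat counts:
    I_d = Q * sum n_i (n_i - 1) / (N (N - 1)), with Q quadrats and N total
    individuals. I_d is ~1 for a spatially random population and > 1 for a
    clustered one."""
    counts = np.asarray(counts)
    Q, N = counts.size, counts.sum()
    return Q * np.sum(counts * (counts - 1)) / (N * (N - 1))

rng = np.random.default_rng(0)
random_counts = rng.poisson(5, size=100)   # spatially random population
# Clustered population: same total density concentrated in half the quadrats.
clustered = rng.poisson(5, size=100) * rng.integers(0, 2, size=100) * 2

print(morisita_index(random_counts))   # close to 1
print(morisita_index(clustered))       # well above 1
```

This also illustrates why the abstract favours it: the index depends only on counts per quadrat, not on distances between individuals, so it is insensitive to the edge effects that trouble the Hopkins Statistic.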

  10. Alternating Renewal Process Models for Behavioral Observation: Simulation Methods, Software, and Validity Illustrations

    ERIC Educational Resources Information Center

    Pustejovsky, James E.; Runyon, Christopher

    2014-01-01

    Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…

  11. ENVIRONMENTAL MONITORING AND ASSESSMENT PROGRAM (EMAP): WESTERN STREAMS AND RIVERS STATISTICAL SUMMARY

    EPA Science Inventory

    This statistical summary reports data from the Environmental Monitoring and Assessment Program (EMAP) Western Pilot (EMAP-W). EMAP-W was a sample survey (or probability survey, often simply called 'random') of streams and rivers in 12 states of the western U.S. (Arizona, Californ...

  12. MID-ATLANTIC COASTAL STREAMS STUDY: STATISTICAL DESIGN FOR REGIONAL ASSESSMENT AND LANDSCAPE MODEL DEVELOPMENT

    EPA Science Inventory

    A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of a collaborative study between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal weighting was u...

  13. MID-ATLANTIC COASTAL STREAMS STUDY: STATISTICAL DESIGN FOR REGIONAL ASSESSMENT AND LANDSCAPE MODEL DEVELOPMENT

    EPA Science Inventory

    A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of collaborative research between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal wei...

  14. Quantifying Urban Watershed Stressor Gradients and Evaluating How Different Land Cover Datasets Affect Stream Management

    EPA Science Inventory

    We used a gradient (divided into impervious cover categories), spatially-balanced, random design (1) to sample streams along an impervious cover gradient in a large coastal watershed, (2) to characterize relationships between water chemistry and land cover, and (3) to document di...

  15. Near-optimal matrix recovery from random linear measurements.

    PubMed

    Romanov, Elad; Gavish, Matan

    2018-06-25

    In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text] We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
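
    The central step of such an approach, shrinking the singular values of the current iterate, can be sketched as follows. Soft thresholding is used here as a stand-in for the paper's optimal nonconvex shrinker, and all names and parameters are illustrative:

```python
import numpy as np

def shrink_singular_values(Y, tau):
    """One shrinkage step: SVD the current estimate and damp its
    singular values (soft thresholding shown; the paper applies an
    optimal nonconvex shrinker instead)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 signal plus noise: shrinkage suppresses the noise directions.
rng = np.random.default_rng(0)
X = np.outer(rng.standard_normal(20), rng.standard_normal(15))
Y = X + 0.1 * rng.standard_normal((20, 15))
X_hat = shrink_singular_values(Y, tau=1.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-8))  # low rank, vs. 15 for Y
```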

  16. Role of vertex corrections in the matrix formulation of the random phase approximation for the multiorbital Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.

    2016-12-21

    In the framework of a multiorbital Hubbard model description of superconductivity, a widely used matrix formulation of the superconducting pairing interaction is designed to treat spin, charge, and orbital fluctuations within a random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. Furthermore, we examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.

  17. Constructing acoustic timefronts using random matrix theory.

    PubMed

    Hegewisch, Katherine C; Tomsovic, Steven

    2013-10-01

    In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.

  18. Effects of groundwater pumping in the lower Apalachicola-Chattahoochee-Flint River basin

    USGS Publications Warehouse

    Jones, L. Elliott

    2012-01-01

    The USGS developed a groundwater-flow model of the Upper Floridan aquifer in the lower Apalachicola-Chattahoochee-Flint River basin in southwest Georgia and adjacent parts of Alabama and Florida to determine the effect of agricultural groundwater pumping on aquifer/stream flow within the basin. Aquifer/stream flow is the sum of groundwater outflow to and inflow from streams, and is an important consideration for water managers in the development of water-allocation and operating plans. Specifically, the model was used to evaluate how agricultural pumping relates to the 7Q10 low streamflow, a statistical low flow indicative of drought conditions that would occur during seven consecutive days, on average, once every 10 years. Argus ONE™, a software package that combines a geographic information system (GIS) and numerical modeling in an Open Numerical Environment, facilitated the design of a detailed finite-element mesh to represent the complex geometry of the stream system in the lower basin as a groundwater-model boundary. To determine the effects on aquifer/stream flow of pumping at different locations within the model area, a pumping rate equivalent to that of a typical center-pivot irrigation system (50,000 ft3/d) was applied individually at each of the 18,951 model nodes in repeated steady-state simulations that were compared to a base case representing drought conditions during October 1999. Effects of nodal pumping on aquifer/stream flow and other boundary flows, as compared with the base-case simulation, were computed and stored in a response matrix. Queries to the response matrix were designed to determine the sensitivity of targeted stream reaches to agricultural pumping. Argus ONE enabled creation of contour plots of query results to illustrate the spatial variation across the model area of simulated aquifer/streamflow reductions, expressed as a percentage of the long-term 7Q10 low streamflow at key USGS gaging stations in the basin. These results would enable water managers to assess the relative impact of agricultural pumping and drought conditions on streamflow throughout the basin, and to develop mitigation strategies to conserve water resources and preserve aquatic habitat.
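
    The 7Q10 statistic referenced above can be estimated from a daily flow record by taking the annual minima of the 7-day moving-average flow and then finding the value with a 10-year recurrence interval. A simplified sketch with synthetic data (a plotting-position percentile is used here for brevity; agencies typically fit a log-Pearson Type III distribution instead):

```python
import numpy as np

def annual_7day_minima(daily_flow_by_year):
    """Minimum 7-day moving-average flow for each year of record."""
    minima = []
    for flows in daily_flow_by_year:
        smoothed = np.convolve(np.asarray(flows, float),
                               np.ones(7) / 7.0, mode="valid")
        minima.append(smoothed.min())
    return np.array(minima)

def q7_10(daily_flow_by_year):
    """7Q10: 7-day low flow with a 10-year recurrence interval,
    estimated here as the 10th percentile of the annual minima."""
    return np.percentile(annual_7day_minima(daily_flow_by_year), 10.0)

# Synthetic 20-year record of daily flows, for illustration only.
rng = np.random.default_rng(1)
record = [rng.lognormal(mean=11.0, sigma=0.5, size=365) for _ in range(20)]
print(q7_10(record))
```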

  19. Random walks with long-range steps generated by functions of Laplacian matrices

    NASA Astrophysics Data System (ADS)

    Riascos, A. P.; Michelitsch, T. M.; Collet, B. A.; Nowakowski, A. F.; Nicolleau, F. C. G. A.

    2018-04-01

    In this paper, we explore different Markovian random walk strategies on networks with transition probabilities between nodes defined in terms of functions of the Laplacian matrix. We generalize random walk strategies with local information in the Laplacian matrix, which describes the connections of a network, to a dynamic determined by functions of this matrix. The resulting processes are non-local, allowing transitions of the random walker from one node to nodes beyond its nearest neighbors. We find that only two types of Laplacian functions are admissible, with distinct behaviors for long-range steps in the infinite network limit: type (i) functions generate Brownian motions, while type (ii) functions generate Lévy flights. For this asymptotic long-range step behavior, only the lowest non-vanishing order of the Laplacian function is relevant, namely first order for type (i) functions and fractional order for type (ii) functions. In the first part, we discuss spectral properties of the Laplacian matrix and a series of relations that are maintained by a particular type of function, which allow random walks to be defined on any type of undirected connected network. Having described these general properties, we explore the characteristics of random walk strategies that emerge in particular cases with functions defined in terms of exponentials, logarithms, and powers of the Laplacian, as well as relations of these dynamics with non-local strategies like Lévy flights and fractional transport. Finally, we analyze the global capacity of these random walk strategies to explore networks such as lattices, trees, and different types of random and complex networks.
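
    The construction can be illustrated numerically: for an admissible function g of the Laplacian with g(0) = 0, the off-diagonal entries of g(L) supply (after a sign flip and row normalization) transition probabilities that reach beyond nearest neighbours. A minimal sketch for the fractional power g(λ) = λ^(1/2), with all details simplified relative to the paper:

```python
import numpy as np

def transition_matrix(A, g):
    """Random-walk transition matrix built from a function g of the
    graph Laplacian L = D - A. Off-diagonal weights are w_ij =
    -g(L)_ij (non-negative for admissible g), normalized per row."""
    L = np.diag(A.sum(axis=1)) - A
    evals, evecs = np.linalg.eigh(L)
    gL = evecs @ np.diag(g(evals)) @ evecs.T
    W = -gL
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, None)          # guard against rounding noise
    return W / W.sum(axis=1, keepdims=True)

# Ring of 6 nodes; the fractional Laplacian L^(1/2) produces long-range
# steps: node 0 can jump directly to non-neighbour nodes.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = transition_matrix(A, lambda lam: np.sqrt(np.clip(lam, 0.0, None)))
print(P[0])
```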

  20. Multi-site Field Verification of Laboratory Derived FDOM Sensor Corrections: The Good, the Bad and the Ugly

    NASA Astrophysics Data System (ADS)

    Saraceno, J.; Shanley, J. B.; Aulenbach, B. T.

    2014-12-01

    Fluorescent dissolved organic matter (FDOM) is an excellent proxy for dissolved organic carbon (DOC) in natural waters. Through this relationship, in situ FDOM can be utilized to capture both high-frequency time series and long-term fluxes of DOC in small streams. However, in order to calculate accurate DOC fluxes for comparison across sites, in situ FDOM data must be compensated for matrix effects. Key matrix effects include temperature, turbidity, and the inner filter effect due to color. These interferences must be compensated for to develop a reasonable relationship between FDOM and DOC. In this study, we applied laboratory-derived correction factors to real-time data from the five USGS WEBB headwater streams in order to gauge their effectiveness across a range of matrix effects. The good news is that laboratory-derived correction factors improved the predictive relationship (higher r2) between DOC and FDOM when compared to uncorrected data. The relative importance of each matrix effect (e.g., temperature) varied by site and by time, implying that each and every matrix effect should be compensated for whenever corrections are available. In general, temperature effects were more important on longer time scales, while corrections for turbidity and DOC inner filter effects were most prevalent during hydrologic events, when the highest instantaneous flux of DOC occurred. Unfortunately, even when corrected for matrix effects, in situ FDOM is a weaker predictor of DOC than A254, a common surrogate for DOC, implying either that DOC fluoresces to varying degrees (though this should average out over time), that some matrix effects (e.g., pH) are unaccounted for, or that laboratory-derived correction factors do not encompass the site variability of particles and organics. The least impressive finding is that the inherent dependence on three variables in the FDOM correction algorithm increases the likelihood of data-record gaps, which increases the uncertainty in calculated DOC flux values.

  1. Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei

    2018-04-01

    We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise vector with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix, and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e., $\mu(\mathbf{U})\sim1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $\ell_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers randomly sub-sampled orthogonal matrices (e.g., the random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse in a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.

  2. The role of penetrating gas streams in setting the dynamical state of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Zinger, E.; Dekel, A.; Birnboim, Y.; Kravtsov, A.; Nagai, D.

    2016-09-01

    We utilize cosmological simulations of 16 galaxy clusters at redshifts z = 0 and z = 0.6 to study the effect of inflowing streams on the properties of the X-ray emitting intracluster medium. We find that the mass accretion occurs predominantly along streams that originate from the cosmic web and consist of heated gas. Clusters that are unrelaxed in terms of their X-ray morphology are characterized by higher mass inflow rates and deeper penetration of the streams, typically into the inner third of the virial radius. The penetrating streams generate elevated random motions, bulk flows and cold fronts. The degree of penetration of the streams may change over time such that clusters can switch from being unrelaxed to relaxed over a time-scale of several gigayears.

  3. Drainage basin characteristics from ERTS data

    NASA Technical Reports Server (NTRS)

    Hollyday, E. F. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. ERTS-derived measurements of forests, riparian vegetation, open water, and combined agricultural and urban land use were added to an available matrix of map-derived basin characteristics. The matrix of basin characteristics was correlated with 40 stream flow characteristics by multiple regression techniques. Fifteen out of the 40 equations were improved. If the technique can be transferred to other physiographic regions in the nation, the opportunity exists for a potential annual savings in operations of about $250,000.

  4. Methods of using adsorption media for separating or removing constituents

    DOEpatents

    Tranter, Troy J [Idaho Falls, ID; Herbst, R Scott [Idaho Falls, ID; Mann, Nicholas R [Blackfoot, ID; Todd, Terry A [Aberdeen, ID

    2011-10-25

    Methods of using an adsorption medium to remove at least one constituent from a feed stream. The method comprises contacting an adsorption medium with a feed stream comprising at least one constituent and removing the at least one constituent from the feed stream. The adsorption medium comprises a polyacrylonitrile (PAN) matrix and at least one metal hydroxide homogeneously dispersed therein. The adsorption medium may comprise from approximately 15 wt % to approximately 90 wt % of the PAN and from approximately 10 wt % to approximately 85 wt % of the at least one metal hydroxide. The at least one metal hydroxide may be selected from the group consisting of ferric hydroxide, zirconium hydroxide, lanthanum hydroxide, cerium hydroxide, titanium hydroxide, copper hydroxide, antimony hydroxide, and molybdenum hydroxide.

  5. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  6. Tendon Functional Extracellular Matrix

    PubMed Central

    Screen, H.R.C.; Birk, D.E.; Kadler, K.E.; Ramirez, F; Young, M.F.

    2015-01-01

    This article is one of a series, summarising views expressed at the Orthopaedic Research Society New Frontiers in Tendon Research Conference. This particular article reviews the three workshops held under the “Functional Extracellular Matrix” stream. The workshops focused on the roles of the tendon extracellular matrix, such as performing the mechanical functions of tendon, creating the local cell environment and providing cellular cues. Tendon is a complex network of matrix and cells, and its biological functions are influenced by widely-varying extrinsic and intrinsic factors such as age, nutrition, exercise levels and biomechanics. Consequently, tendon adapts dynamically during development, ageing and injury. The workshop discussions identified research directions associated with understanding cell-matrix interactions to be of prime importance for developing novel strategies to target tendon healing or repair. PMID:25640030

  7. On the equilibrium state of a small system with random matrix coupling to its environment

    NASA Astrophysics Data System (ADS)

    Lebowitz, J. L.; Pastur, L.

    2015-07-01

    We consider a random matrix model of interaction between a small n-level system, S, and its environment, an N-level heat reservoir, R. The interaction between S and R is modeled by a tensor product of a fixed n×n matrix and an N×N Hermitian random matrix. We show that under certain 'macroscopicity' conditions on R, the reduced density matrix of the system, $\rho_S = \mathrm{Tr}_R\, \rho^{(\mathrm{eq})}_{S\cup R}$, is given by $\rho_S^{(c)} \sim \exp\{-\beta H_S\}$, where $H_S$ is the Hamiltonian of the isolated system. This holds for all strengths of the interaction and thus gives some justification for using $\rho_S^{(c)}$ to describe some nano-systems, like biopolymers, in equilibrium with their environment (Seifert 2012 Rep. Prog. Phys. 75 126001). Our results extend those obtained previously in (Lebowitz and Pastur 2004 J. Phys. A: Math. Gen. 37 1517-34) and (Lebowitz et al 2007 Contemporary Mathematics (Providence, RI: American Mathematical Society) pp 199-218) for a special two-level system.

  8. Coupling GIS and multivariate approaches to reference site selection for wadeable stream monitoring.

    PubMed

    Collier, Kevin J; Haigh, Andy; Kelly, Johlene

    2007-04-01

    A Geographic Information System (GIS) was used to identify potential reference sites for wadeable stream monitoring, and multivariate analyses were applied to test whether invertebrate communities reflected a priori spatial and stream type classifications. We identified potential reference sites in segments with unmodified vegetation cover adjacent to the stream and in >85% of the upstream catchment. We then used various landcover, amenity and environmental impact databases to eliminate sites that had potential anthropogenic influences upstream and that fell into a range of access classes. Each site identified by this process was coded by four dominant stream classes and seven zones, and 119 candidate sites were randomly selected for follow-up assessment. This process yielded 16 sites conforming to reference site criteria using a conditional-probabilistic design, and these were augmented by an additional 14 existing or special interest reference sites. Non-metric multidimensional scaling (NMS) analysis of percent abundance invertebrate data indicated significant differences in community composition among some of the zones and stream classes identified a priori, providing qualified support for this framework in reference site selection. NMS analysis of a range of standardised condition and diversity metrics derived from the invertebrate data indicated a core set of 26 closely related sites, and four outliers that were considered atypical of reference site conditions and subsequently dropped from the network. Use of GIS linked to stream typology, available spatial databases and aerial photography greatly enhanced the objectivity and efficiency of reference site selection. The multi-metric ordination approach reduced variability among stream types and bias associated with non-random site selection, and provided an effective way to identify representative reference sites.

  9. Matrix and Tensor Completion on a Human Activity Recognition Framework.

    PubMed

    Savvaki, Sofia; Tsagkatakis, Grigorios; Panousopoulou, Athanasia; Tsakalides, Panagiotis

    2017-11-01

    Sensor-based activity recognition is encountered in innumerable applications in pervasive healthcare and plays a crucial role in biomedical research. Nonetheless, the frequent situation of unobserved measurements impairs the ability of machine learning algorithms to efficiently extract context from raw streams of data. In this paper, we study the problem of accurate estimation of missing multimodal inertial data and we propose a classification framework that considers the reconstruction of subsampled data during the test phase. We introduce the concept of forming the available data streams into low-rank two-dimensional (2-D) and 3-D Hankel structures, and we exploit data redundancies using sophisticated imputation techniques, namely matrix and tensor completion. Moreover, we examine the impact of reconstruction on the classification performance by experimenting with several state-of-the-art classifiers. The system is evaluated with respect to different data structuring scenarios, the volume of data available for reconstruction, and various levels of missing values per device. Finally, the tradeoff between subsampling accuracy and energy conservation in wearable platforms is examined. Our analysis relies on two public datasets containing inertial data, which extend to numerous activities, multiple sensing parameters, and body locations. The results highlight that robust classification accuracy can be achieved through recovery, even for extremely subsampled data streams.
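
    The Hankel structuring mentioned above can be sketched in a few lines: a 1-D stream is arranged so that each anti-diagonal is constant, and smooth signals then yield an approximately low-rank matrix, which is the property matrix completion exploits when imputing missing samples. Names below are illustrative:

```python
import numpy as np

def hankel_from_stream(x, rows):
    """Arrange a 1-D stream into a Hankel matrix H[i, j] = x[i + j]."""
    x = np.asarray(x, dtype=float)
    # Each row of sliding_window_view is a length-`rows` window;
    # transposing yields the Hankel layout.
    return np.lib.stride_tricks.sliding_window_view(x, rows).T

# A pure sinusoid gives a Hankel matrix of numerical rank 2.
t = np.arange(40)
H = hankel_from_stream(np.sin(0.3 * t), rows=8)
print(H.shape, np.linalg.matrix_rank(H, tol=1e-8))  # (8, 33) 2
```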

  10. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  11. Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h

    2010-11-01

    In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.

  12. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    NASA Astrophysics Data System (ADS)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric-key cryptographic algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that works by an XOR operation between the plaintext and a key. To improve the security of the symmetric stream cipher algorithm, an improvement is made using a Triple Transposition Key, developed from the Transposition Cipher, with the Base64 algorithm used for the final step of encryption; experiments show that the resulting ciphertext is of good quality and highly random.
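
    The ingredients named above (an XOR key stream, keyed columnar transpositions, and a Base64 finishing step) can be composed as follows. This is an illustrative sketch, not the paper's exact construction; its key schedule and the decryption routine are omitted, and all keys shown are made up:

```python
import base64
from itertools import cycle

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Core stream-cipher step: XOR each byte with the repeating key.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def columnar_transpose(data: bytes, key: str) -> bytes:
    # One keyed columnar transposition; applying three of these with
    # different keys gives the "Triple Transposition" idea.
    order = sorted(range(len(key)), key=lambda i: key[i])
    columns = [data[i::len(key)] for i in range(len(key))]
    return b"".join(columns[i] for i in order)

def encrypt(plaintext: bytes, xor_key: bytes, t_keys) -> str:
    out = xor_stream(plaintext, xor_key)
    for k in t_keys:                       # three transposition passes
        out = columnar_transpose(out, k)
    return base64.b64encode(out).decode()  # Base64 finishing step

ct = encrypt(b"attack at dawn", b"secret", ["zebra", "cat", "dog"])
print(ct)
```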

  13. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.

  14. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast speed of random number generations. This pseudo-random number generator system can be used as ideal synchronous and self-synchronizing stream cipher systems for secure communications.
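
    The idea can be sketched with a ring of one-way coupled logistic maps, reading a bit off a lattice site at each step. The parameters and the bit-extraction rule below are illustrative, not those of the paper:

```python
def cml_prng_bits(n_bits, size=8, eps=0.95, a=3.99, seed=0.123):
    """Pseudo-random bits from a one-way coupled map lattice: each
    site is driven by its left neighbour (periodic boundary), and a
    bit is extracted from the last site at every iteration."""
    f = lambda x: a * x * (1.0 - x)              # chaotic logistic map
    x = [(seed + 0.618 * i) % 1.0 for i in range(size)]
    bits = []
    for _ in range(n_bits):
        x = [(1.0 - eps) * f(x[i]) + eps * f(x[i - 1]) for i in range(size)]
        bits.append(1 if x[-1] > 0.5 else 0)
    return bits

bits = cml_prng_bits(1000)
print(sum(bits) / len(bits))  # fraction of ones, roughly balanced
```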

  15. Describing spatial pattern in stream networks: A practical approach

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
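
    The empirical variogram itself is simple to compute once pairwise distances are in hand; the only network-specific ingredient is that the distance matrix holds pathway (in-network) distances rather than Euclidean ones. A minimal sketch with made-up values:

```python
import numpy as np

def empirical_variogram(values, dist, bins):
    """Empirical semivariogram: for each distance bin, the average of
    (z_i - z_j)^2 / 2 over site pairs whose separation falls in the
    bin. `dist` may contain network pathway distances."""
    gamma = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1, dtype=int)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.searchsorted(bins, dist[i, j]) - 1
            if 0 <= k < len(gamma):
                gamma[k] += 0.5 * (values[i] - values[j]) ** 2
                counts[k] += 1
    return gamma / np.maximum(counts, 1)

# Three sites on a line: pair distances 1, 1, 2; bins centred on 1 and 2.
values = np.array([0.0, 1.0, 2.0])
dist = np.array([[0.0, 1.0, 2.0],
                 [1.0, 0.0, 1.0],
                 [2.0, 1.0, 0.0]])
print(empirical_variogram(values, dist, bins=np.array([0.5, 1.5, 2.5])))
```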

  17. Linking Stream Dissolved Oxygen with the Dynamic Environmental Drivers across the Pacific Coast of U.S.A.

    NASA Astrophysics Data System (ADS)

    Araya, F. Z.; Abdul-Aziz, O. I.

    2017-12-01

    This study utilized a systematic data analytics approach to determine the relative linkages of stream dissolved oxygen (DO) with hydro-climatic and biogeochemical drivers across the U.S. Pacific Coast. Multivariate statistical techniques, namely the Pearson correlation matrix, principal component analysis, and factor analysis, were applied to a complex water quality dataset (1998-2015) from 35 water quality monitoring stations of USGS NWIS and EPA STORET. Power-law based partial least squares regression (PLSR) models with a bootstrap Monte Carlo procedure (1000 iterations) were developed to reliably estimate the relative linkages by resolving multicollinearity (Nash-Sutcliffe Efficiency, NSE = 0.50-0.94). Based on the dominant drivers, four environmental regimes were identified that adequately described the variance in the system data. In the Pacific Northwest and Southern California, water temperature was the most dominant driver of DO in the majority of the streams. However, in Central and Northern California, stream DO was controlled by multiple drivers (i.e., water temperature, pH, stream flow, and total phosphorus), exhibiting a transitional environmental regime. Further, total phosphorus (TP) appeared to be the limiting nutrient for most streams. The estimated linkages and insights would be useful for identifying management priorities to achieve healthy coastal stream ecosystems across the Pacific Coast of the U.S.A. and in similar regions around the world. Keywords: Data analytics, water quality, coastal streams, dissolved oxygen, environmental regimes, Pacific Coast, United States.

  18. Quality of dissolved organic matter affects planktonic but not biofilm bacterial production in streams.

    PubMed

    Kamjunke, Norbert; Herzsprung, Peter; Neu, Thomas R

    2015-02-15

    Streams and rivers are important sites of organic carbon mineralization which is dependent on the land use within river catchments. Here we tested whether planktonic and epilithic biofilm bacteria differ in their response to the quality of dissolved organic carbon (DOC). Thus, planktonic and biofilm bacterial production was compared with patterns of DOC along a land-use gradient in the Bode catchment area (Germany). The freshness index of DOC was positively related to the proportion of agricultural area in the catchment. The humification index correlated with the proportion of forest area. Abundance and production of planktonic bacteria were lower in headwaters than at downstream sites. Planktonic production was weakly correlated with the total concentration of DOC but more strongly with quality measures as revealed by spectral indices, i.e. positively with the freshness index and negatively with the humification index. In contrast to planktonic bacteria, abundance and production of biofilm bacteria were independent of DOC quality. This finding may be explained by the association of biofilm bacteria with benthic algae and an extracellular matrix, which represent additional substrate sources. The data show that planktonic bacteria seem to be regulated at a landscape scale controlled by land use, whereas biofilm bacteria are regulated at a biofilm matrix scale controlled by autochthonous production. Thus, the effects of catchment-scale land use changes on ecosystem processes are likely lower in small streams dominated by biofilm bacteria than in larger streams dominated by planktonic bacteria.

  19. Group identification in Indonesian stock market

    NASA Astrophysics Data System (ADS)

    Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong

    2016-08-01

    The characteristics of the Indonesian stock market are interesting, especially because it represents developing countries. We investigate its dynamics and structure by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing prices of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The eigenvalue distribution of the correlation matrix contains noise, which is filtered out using the random matrix as a control. The bulk of the eigenvalue distribution conforms to the random matrix, allowing the separation of random noise from the original data, which is retained in the deviating eigenvalues. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning based on qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect, which has a predominantly common influence on all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors shows the sector or background of the correlated groups. Interestingly, the result shows that there are mainly two clusters within IDX, natural and non-natural resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that IDX is still driven mainly by the market-wide effect.
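    The RMT filtering step described above separates the noisy bulk from information-bearing eigenvalues by comparing against the Marčenko-Pastur bounds of a purely random correlation matrix. A minimal sketch, with synthetic returns (one common "market" factor) standing in for IDX data:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 1000, 50                          # trading days, stocks
market = rng.standard_normal(T)          # common "market mode"
returns = 0.4 * market[:, None] + rng.standard_normal((T, N))

# Correlation matrix of the return series and its spectrum
C = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(C)

# Marchenko-Pastur bounds for a purely random correlation matrix with Q = T/N:
# eigenvalues inside [lam_minus, lam_plus] are indistinguishable from noise.
Q = T / N
lam_minus = (1 - np.sqrt(1 / Q)) ** 2
lam_plus = (1 + np.sqrt(1 / Q)) ** 2
deviating = eigvals[eigvals > lam_plus]  # information-bearing modes
```

    With a genuine market factor present, the largest eigenvalue sits far above `lam_plus`, which is the signature of the market-wide mode the abstract describes.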

  20. Tensor manifold-based extreme learning machine for 2.5-D face recognition

    NASA Astrophysics Data System (ADS)

    Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin

    2018-01-01

    We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap of the GRCM and extreme learning machine (ELM), a vector-based classifier for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.

  1. Relative Linkages of Stream Dissolved Oxygen with the Hydroclimatic and Biogeochemical Drivers across the Gulf Coast of U.S.A.

    NASA Astrophysics Data System (ADS)

    Gebreslase, A. K.; Abdul-Aziz, O. I.

    2017-12-01

    The dynamics of coastal stream water quality are influenced by a multitude of interacting environmental drivers. A systematic data analytics approach was employed to determine the relative linkages of stream dissolved oxygen (DO) with the hydroclimatic and biogeochemical variables across the Gulf Coast of U.S.A. Multivariate pattern recognition techniques of PCA and FA, alongside Pearson's correlation matrix, were utilized to examine the interrelation of variables at 36 water quality monitoring stations from USGS NWIS and EPA STORET databases. Power-law based partial least squares regression models with a bootstrap Monte Carlo procedure (1000 iterations) were developed to estimate the relative linkages of dissolved oxygen with the hydroclimatic and biogeochemical variables by appropriately resolving multicollinearity (Nash-Sutcliffe efficiency = 0.58-0.94). Based on the dominant drivers, stations were divided into four environmental regimes. Water temperature was the dominant driver of DO in the majority of streams, representing most of the northern Gulf Coast states. However, streams in the southern parts of Texas and Florida showed a dominant pH control on stream DO. Further, streams representing the transition zone of the two environmental regimes showed notable controls of multiple drivers (i.e., water temperature, stream flow, and specific conductance) on stream DO. The data analytics research provided insight into the dynamics of stream DO with the hydroclimatic and biogeochemical variables. The knowledge can help water quality managers in formulating plans for effective stream water quality and watershed management in the U.S. Gulf Coast. Keywords: Data analytics, coastal streams, relative linkages, dissolved oxygen, environmental regimes, Gulf Coast, United States.

  2. Assessing the use of existing data to compare plains fish assemblages collected from random and fixed sites in Colorado

    USGS Publications Warehouse

    Zuellig, Robert E.; Crockett, Harry J.

    2013-01-01

    The U.S. Geological Survey, in cooperation with Colorado Parks and Wildlife, assessed the potential use of combining recently (2007 to 2010) and formerly (1992 to 1996) collected data to compare plains fish assemblages sampled from random and fixed sites located in the South Platte and Arkansas River Basins in Colorado. The first step was to determine if fish assemblages collected between 1992 and 1996 were comparable to samples collected at the same sites between 2007 and 2010. If samples from the two time periods were comparable, then it was considered reasonable that the combined time-period data could be used to make comparisons between random and fixed sites. In contrast, if differences were found between the two time periods, then it was considered unreasonable to use these data to make comparisons between random and fixed sites. One hundred samples collected during the 1990s and 2000s from 50 sites dispersed among 19 streams in both basins were compiled from a database maintained by Colorado Parks and Wildlife. Nonparametric multivariate two-way analysis of similarities was used to test for fish-assemblage differences between time periods while accounting for stream-to-stream differences. Results indicated relatively weak but significant time-period differences in fish assemblages. Weak time-period differences in this case possibly were related to changes in fish assemblages associated with environmental factors; however, it is difficult to separate other possible explanations such as limited replication of paired time-period samples in many of the streams or perhaps differences in sampling efficiency and effort between the time periods. Regardless, using the 1990s data to fill data gaps to compare random and fixed-site fish-assemblage data is ill-advised based on the significant separation in fish assemblages between time periods and the inability to determine conclusive explanations for these results.
These findings indicated that additional sampling will be necessary before unbiased comparisons can be made between fish assemblages collected from random and fixed sites in the South Platte and Arkansas River Basins.

  3. Unsteady solute-transport simulation in streamflow using a finite-difference model

    USGS Publications Warehouse

    Land, Larry F.

    1978-01-01

    This report documents a rather simple, general purpose, one-dimensional, one-parameter mass-transport model for field use. The model assumes a well-mixed conservative solute that may be coming from an unsteady source and is moving in unsteady streamflow. The quantity of solute being transported is in units of concentration, and results are reported as such. An implicit finite-difference technique is used to solve the mass-transport equation: it consists of creating a tridiagonal matrix and using the Thomas algorithm to solve the matrix for the unknown concentrations at the new time step. The computer program presented is designed to compute the concentration of a water-quality constituent at any point and at any preselected time in a one-dimensional stream. The model is driven by the inflowing concentration of solute at the upstream boundary and is influenced by the solute entering the stream from tributaries and lateral ground-water inflow and from a source or sink. (Woodard-USGS)
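    The solution method named above, a tridiagonal system solved by the Thomas algorithm, can be sketched as follows. The diagonals and right-hand side below are random stand-ins for the coefficients an implicit finite-difference discretization of the transport equation would produce:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main, c = super-diagonal,
    d = right-hand side. Forward elimination followed by back substitution.
    a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Stand-in coefficients: diagonally dominant, as an implicit scheme would yield
n = 6
rng = np.random.default_rng(2)
a = rng.uniform(0.1, 1, n); a[0] = 0.0
c = rng.uniform(0.1, 1, n); c[-1] = 0.0
b = 2.0 + a + c                         # dominance keeps the elimination stable
d = rng.uniform(-1, 1, n)
x = thomas_solve(a, b, c, d)

# Dense reconstruction of the same system, for verification only
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
```

    In the model, one such solve per time step advances the concentration field, which is why the O(n) Thomas algorithm, rather than a general dense solver, is the natural choice.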

  4. [Research on partial least squares for determination of impurities in the presence of high concentration of matrix by ICP-AES].

    PubMed

    Wang, Yan-peng; Gong, Qi; Yu, Sheng-rong; Liu, You-yan

    2012-04-01

    A method for detecting trace impurities in a high-concentration matrix by ICP-AES based on partial least squares (PLS) was established. The research showed that PLS could effectively correct the spectral interference caused by high matrix concentrations and could withstand higher matrix concentrations than multicomponent spectral fitting (MSF). When the mass ratios of matrix to impurities were from 1 000 : 1 to 20 000 : 1, the recoveries of standard addition were between 95% and 105% by PLS. For systems in which the interference effect has a nonlinear correlation with the matrix concentration, the prediction accuracy of the normal PLS method was poor, but it could be improved greatly by using LIN-PPLS, which is based on a matrix transformation of the sample concentration. The contents of Co, Pb and Ga in stream sediment (GBW07312) were determined by MSF, PLS and LIN-PPLS respectively. The results showed that the prediction accuracy of LIN-PPLS was better than that of PLS, and the prediction accuracy of PLS was better than that of MSF.

  5. Eigenvalue density of cross-correlations in Sri Lankan financial market

    NASA Astrophysics Data System (ADS)

    Nilantha, K. G. D. R.; Ranasinghe; Malmini, P. K. C.

    2007-05-01

    We apply the universal properties of the Gaussian orthogonal ensemble (GOE) of random matrices predicted by random matrix theory (RMT), namely the spectral properties, the distribution of eigenvalues, and the eigenvalue spacings, to compare cross-correlation matrix estimators from emerging market data. The daily stock prices of the Sri Lankan All Share Price Index and Milanka Price Index from August 2004 to March 2005 were analyzed. Most eigenvalues in the spectrum of the cross-correlation matrix of stock price changes agree with the universal predictions of RMT. We find that the cross-correlation matrix satisfies the universal properties of the GOE of real symmetric random matrices. The eigenvalue distribution follows the RMT predictions in the bulk, but there are some deviations at the large eigenvalues. The nearest-neighbor spacing and the next-nearest-neighbor spacing of the eigenvalues were examined and found to follow the universality of the GOE. For RMT with deterministic correlations, each eigenvalue arising from the deterministic correlations is observed at values repelled from the bulk distribution.
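    A quick numerical check of GOE spacing universality. This sketch uses consecutive-spacing ratios rather than the raw spacings examined in the abstract, so no spectral unfolding is needed; the GOE mean ratio is known to be approximately 0.53, versus approximately 0.39 for uncorrelated (Poisson) levels:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2)        # GOE member: real symmetric Gaussian matrix
ev = np.sort(np.linalg.eigvalsh(H))

# Consecutive-spacing ratios r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}).
# The ratio is insensitive to the local level density, so the unfolding step
# required for raw spacing statistics can be skipped.
s = np.diff(ev)
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
mean_r = r.mean()
```

    The same statistic applied to an empirical cross-correlation matrix gives a one-number test of whether its bulk behaves like the GOE.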

  6. An overview of the Columbia Habitat Monitoring Program's (CHaMP) spatial-temporal design framework

    EPA Science Inventory

    We briefly review the concept of a master sample applied to stream networks in which a randomized set of stream sites is selected across a broad region to serve as a list of sites from which a subset of sites is selected to achieve multiple objectives of specific designs. The Col...

  7. THE RELATIONSHIP BETWEEN TEMPERATURE, PHYSICAL HABITAT AND FISH ASSEMBLAGE DATA IN A STATE WIDE PROBABILITY SURVEY OF OREGON STREAMS

    EPA Science Inventory

    To assess the ecological condition of streams and rivers in Oregon, we sampled 146 sites
    in summer, 1997 as part of the U.S. EPA's Environmental Monitoring and Assessment Program.
    Sample reaches were selected using a systematic, randomized sample design from the blue-line n...

  8. MEASURING BASE-FLOW CHEMISTRY AS AN INDICATOR OF REGIONAL GROUND-WATER QUALITY IN THE MID-ATLANTIC COASTAL PLAIN

    EPA Science Inventory

    Water quality in headwater (first-order) streams of the Mid-Atlantic Coastal Plain during base flow in the winter and spring is related to land use, hydrogeology, and other natural and human influences. A random survey of water quality in 174 headwater streams in the Mid-Atlantic...

  9. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2 / (∑_{j=1}^n x_j^2 / n), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of this support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for the Gaussian unitary ensemble (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n, and analyse their behaviour.
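    A Monte Carlo illustration of the claimed limit: even though w is formally defined on [0, n], its samples concentrate on the Marčenko-Pastur support [0, 4]. The sketch uses β = 1 (GOE) so that real arithmetic suffices; the stated limit holds for arbitrary β:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 200, 50
w_all = []
for _ in range(trials):
    A = rng.standard_normal((n, n))
    H = (A + A.T) / np.sqrt(2)           # beta = 1 (GOE) member of the family
    x = np.linalg.eigvalsh(H)
    # The eigenvalues are exchangeable, so computing w_j = x_j^2 / (sum_k x_k^2 / n)
    # for every j at once samples the same distribution as one random pick.
    w_all.append(x**2 / np.mean(x**2))
w = np.concatenate(w_all)
```

    By construction the w_j of a single matrix average to exactly 1, and for large n the largest value sits near the edge of the support at w = 4, consistent with the √((4 − w)/w) limit.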

  10. Exploiting the MODIS albedos with the Two-stream Inversion Package (JRC-TIP): 2. Fractions of transmitted and absorbed fluxes in the vegetation and soil layers

    NASA Astrophysics Data System (ADS)

    Pinty, B.; Clerici, M.; Andredakis, I.; Kaminski, T.; Taberner, M.; Verstraete, M. M.; Gobron, N.; Plummer, S.; Widlowski, J.-L.

    2011-05-01

    The two-stream model parameters and associated uncertainties retrieved by inversion against MODIS broadband visible and near-infrared white sky surface albedos were discussed in a companion paper. The present paper concentrates on the partitioning of the solar radiation fluxes delivered by the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The estimation of the various flux fractions related to the vegetation and the background layers separately capitalizes on the probability density functions of the model parameters discussed in the companion paper. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties. This matrix is propagated to the radiation fluxes via the model's Jacobian matrix of first derivatives. Results exhibit a rather good spatiotemporal consistency given that the prior values on the model parameters are not specified as a function of land cover type and/or vegetation phenological states. A specific investigation based on a scenario imposing stringent conditions of leaf absorbing and scattering properties highlights the impact of such constraints that are, as a matter of fact, currently adopted in vegetation index approaches. Special attention is also given to snow-covered and snow-contaminated areas since these regions encompass significant reflectance changes that strongly affect land surface processes. A definite asset of the JRC-TIP lies in its capability to control and ultimately relax a number of assumptions that are often implicit in traditional approaches. These features greatly help us understand the discrepancies between the different data sets of land surface properties and fluxes that are currently available. Through a series of selected examples, the inverse procedure implemented in the JRC-TIP is shown to be robust, reliable, and compliant with large-scale processing requirements. 
Furthermore, this package ensures the physical consistency between the set of observations, the two-stream model parameters, and radiation fluxes. It also documents the retrieval of associated uncertainties.

  11. A random matrix approach to credit risk.

    PubMed

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  12. A Random Matrix Approach to Credit Risk

    PubMed Central

    Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864

  13. Electrostatic dry powder prepregging of carbon fiber

    NASA Technical Reports Server (NTRS)

    Throne, James L.; Sohn, Min-Seok

    1990-01-01

    Ultrafine, 5-10 micron polymer-matrix resin powders are directly applied to carbon fiber tows by passing them in an air or nitrogen stream through an electrostatic potential; the particles thus charged will strongly adhere to grounded carbon fibers, and can be subsequently fused to the fiber in a continuously-fed radiant oven. This electrostatic technique yields significant end-use mechanical property advantages by obviating solvents, binders, and other adulterants. Matrix resins used to produce prepregs to date have been PMR-15, Torlon 40000, and LaRC TPI.

  14. Vectorization of linear discrete filtering algorithms

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1977-01-01

    Linear filters, including the conventional Kalman filter and versions of square root filters devised by Potter and Carlson, are studied for potential application on streaming computers. The square root filters are known to maintain a positive definite covariance matrix in cases in which the Kalman filter diverges due to ill-conditioning of the matrix. Vectorization of the filters is discussed, and comparisons are made of the number of operations and storage locations required by each filter. The Carlson filter is shown to be the most efficient of the filters on the Control Data STAR-100 computer.
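    Potter's scalar-measurement square-root update, one of the filters compared above, can be sketched as follows; the state dimension and measurement model below are arbitrary stand-ins. The final comparison confirms the defining property: the updated factor reproduces the conventional Kalman covariance update while P = S Sᵀ stays positive definite by construction:

```python
import numpy as np

def potter_update(S, h, r):
    """Potter square-root measurement update for a scalar observation
    z = h @ x + v with Var(v) = r. The covariance is carried as P = S @ S.T."""
    f = S.T @ h                              # transformed measurement vector
    alpha = f @ f + r                        # innovation variance
    K = (S @ f) / alpha                      # Kalman gain
    gamma = 1.0 / (1.0 + np.sqrt(r / alpha))
    S_new = S - gamma * np.outer(K, f)       # updated square-root factor
    return S_new, K

rng = np.random.default_rng(5)
n = 4
G = rng.standard_normal((n, n))
S = np.linalg.cholesky(G @ G.T + n * np.eye(n))   # any square root of the prior P
h = rng.standard_normal(n)                        # stand-in measurement vector
r = 0.5
S_new, K = potter_update(S, h, r)

P = S @ S.T
P_kalman = (np.eye(n) - np.outer(K, h)) @ P       # conventional covariance update
```

    The point of the square-root form is numerical: S Sᵀ cannot lose positive definiteness to round-off, which is exactly the failure mode of the conventional update in ill-conditioned problems.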

  15. Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model

    NASA Astrophysics Data System (ADS)

    Kanazawa, Takuya; Kieburg, Mario

    2018-06-01

    We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.

  16. Assessing Anthropogenic Influence and Edge Effect Influence on Forested Riparian Buffer Spatial Configuration and Structure: An Example Using Lidar Remote Sensing Methods

    NASA Astrophysics Data System (ADS)

    Wasser, L. A.; Chasmer, L. E.

    2012-12-01

    Forested riparian buffers (FRB) perform numerous critical ecosystem services. However, globally, FRB spatial configuration and structure have been modified by anthropogenic development, resulting in widespread ecological degradation as seen in the Gulf of Mexico and the Chesapeake Bay. Riparian corridors within developed areas are particularly vulnerable to disturbance given two edges: the naturally occurring stream edge and the matrix edge. Increased edge length predisposes riparian vegetation to "edge effects", characterized by modified physical and environmental conditions at the interface between the forested buffer and the adjacent landuse (the matrix), and by degradation of the forest fragment. The magnitude and distance of edge influence may be further influenced by adjacent landuse type and the width of the buffer corridor at any given location. There is a need to quantify riparian buffer spatial configuration and structure over broad geographic extents and within multiple riparian systems in support of ecologically sound management and landuse decisions. This study thus assesses the influence of varying landuse types (agriculture, suburban development and undeveloped) on forested riparian buffer 3-dimensional structure and spatial configuration using high resolution Light Detection and Ranging (LiDAR) data collected within a headwater watershed. Few studies have assessed riparian buffer structure and width contiguously for an entire watershed, an integral component of watershed planning and restoration efforts such as those conducted throughout the Chesapeake Bay.
    The objectives of the study are to 1) quantify differences in vegetation structure at the stream and matrix influenced riparian buffer edges, compared to the forested interior, and 2) assess continuous patterns of changes in vegetation structure throughout the buffer corridor beginning at the matrix edge and ending at the stream within buffers a) of varying width and b) that are adjacent to varying landuse types. Results suggest that 1) the spatial configuration of riparian forests has a strong influence on forest structure compared to a weaker association with adjacent landuse type; 2) developed landuse types are often associated with increased understory vegetation density; 3) riparian vegetation canopy cover is dense regardless of corridor width or adjacent landuse type; and 4) the degree to which edge effects propagate into the buffer corridor is most influenced by corridor width. The study further demonstrates the utility of automated algorithms that sample lidar data in watershed-wide ecological analysis. Results suggest that landuse regulations should encourage wider buffers, which will in turn support a greater range of ecosystem services including improved wildlife habitat, stream shading and detrital inputs.

  17. Random matrix theory and portfolio optimization in Moroccan stock exchange

    NASA Astrophysics Data System (ADS)

    El Alaoui, Marwane

    2015-09-01

    In this work, we use random matrix theory to analyze the eigenvalues and see whether pertinent information is present by using the Marčenko-Pastur distribution. Thus, we study the cross-correlation among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see if the gap between predicted risk and realized risk is thereby reduced. We also analyze the distributions of eigenvector components and their degree of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure among stocks of the Casablanca Stock Exchange portfolio.
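    Two of the ingredients mentioned above, "clipping" the noisy eigenvalue bulk and the inverse participation ratio (IPR), can be sketched as follows, with synthetic one-factor returns standing in for the Casablanca Stock Exchange data:

```python
import numpy as np

def clip_eigenvalues(C, T):
    """Replace eigenvalues inside the Marchenko-Pastur bulk by their average
    ("clipping"), then rescale to restore the unit diagonal of a correlation matrix."""
    N = C.shape[0]
    lam_plus = (1 + np.sqrt(N / T)) ** 2          # upper edge of the random bulk
    vals, vecs = np.linalg.eigh(C)
    noise = vals <= lam_plus
    vals_clean = vals.copy()
    vals_clean[noise] = vals[noise].mean()        # flatten the noisy bulk
    C_clean = vecs @ np.diag(vals_clean) @ vecs.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)

def ipr(vec):
    """Inverse participation ratio: ~1/N for a delocalized (market-wide)
    eigenvector, ~1 for a vector localized on a single stock."""
    return np.sum(vec ** 4)

rng = np.random.default_rng(6)
T, N = 500, 40
market = rng.standard_normal(T)                   # common factor
R = 0.5 * market[:, None] + rng.standard_normal((T, N))
C = np.corrcoef(R, rowvar=False)
C_clean = clip_eigenvalues(C, T)
vals, vecs = np.linalg.eigh(C)
top = vecs[:, -1]                                 # eigenvector of the largest eigenvalue
```

    Cleaning leaves the deviating (informative) modes intact while removing the noise-driven structure that inflates the gap between predicted and realized portfolio risk.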

  18. The supersymmetric method in random matrix theory and applications to QCD

    NASA Astrophysics Data System (ADS)

    Verbaarschot, Jacobus

    2004-12-01

    The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction of the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the penultimate lecture is recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the nonhermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case.
However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.

  19. Measurement Matrix Design for Phase Retrieval Based on Mutual Information

    NASA Astrophysics Data System (ADS)

    Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.

    2018-01-01

    In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
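    The measurement model and the inherent phase ambiguity mentioned above can be stated in a few lines; the dimensions below are arbitrary, and the complex Gaussian matrix is the random baseline the paper compares against:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 16, 64                          # SOI dimension, number of measurements
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Magnitude-only (here noiseless) observations: y = |A x|
y = np.abs(A @ x)

# The inherent global-phase ambiguity: x and exp(1j*theta) * x produce
# identical observations, so recovery is only possible up to that phase.
theta = 1.234
y_rotated = np.abs(A @ (np.exp(1j * theta) * x))
```

    A deterministic, physically constrained measurement matrix replaces `A` in practice; the design question the paper addresses is which such matrix maximizes the mutual information between `x` and `y`.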

  20. Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective

    NASA Astrophysics Data System (ADS)

    Jamali, Tayeb; Jafari, G. R.

    2015-07-01

    We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. In order to provide a precise conclusion based on the information extracted from the autocorrelation matrix, the results must first be evaluated; in other words, they need to be compared with some sort of criterion to provide a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is selected to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity in the returns of the stock markets, a remarkable agreement with the fGn is achieved.
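    Constructing the autocorrelation matrix from a series is the first step of the approach above. A minimal sketch, with white noise standing in for the fGn benchmark (white noise is fGn with Hurst exponent 1/2):

```python
import numpy as np

def autocorr_matrix(x, m):
    """Build the m x m autocorrelation (Toeplitz) matrix of series x from the
    (biased) sample autocorrelation function at lags 0..m-1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.array([np.dot(x[: n - k], x[k:]) for k in range(m)]) / np.dot(x, x)
    i = np.arange(m)
    return acf[np.abs(i[:, None] - i[None, :])]   # Toeplitz: entry (i,j) = acf(|i-j|)

rng = np.random.default_rng(8)
series = rng.standard_normal(2000)    # white noise = fGn with H = 1/2
M = autocorr_matrix(series, 20)
eigvals = np.linalg.eigvalsh(M)
```

    The spectrum of `M` for an empirical series is then compared against the spectrum obtained from the fGn criterion, which is where the RMT-inspired analysis enters.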

  1. Recurrence of random walks with long-range steps generated by fractional Laplacian matrices on regular networks and simple cubic lattices

    NASA Astrophysics Data System (ADS)

    Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.

    2017-12-01

    We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^(α/2), where L indicates a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks', with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green's functions) for the fractional random walk. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for lattice dimensions d > α (recurrent for d ≤ α). As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, … and, in the range 1 ≤ α < 2, for dimensions d ≥ 2. Finally, for α = 2, Polya's classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed form expressions for the fractional lattice Green's function matrix containing the escape and ever-passage probabilities. The ever-passage probabilities (fractional lattice Green's functions) in the transient regime fulfil a Riesz potential power-law decay asymptotic behavior for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves.
This non-local and asymptotic behavior of the fractional random walk introduces small-world properties with the emergence of Lévy flights on large (infinite) lattices.
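    On a ring the ‘simple’ Laplacian is circulant, so the matrix power L^{α/2} can be evaluated directly from its Fourier eigenvalues λ_k = 2 − 2cos(2πk/N). A minimal pure-Python sketch (the ring size N and exponent α are illustrative choices, not taken from the paper):

```python
import math

# Fractional random walk on a ring of N nodes: the ring Laplacian is
# circulant, so L^{alpha/2} follows from its eigenvalues
# lambda_k = 2 - 2*cos(2*pi*k/N) and Fourier modes.
N, alpha = 16, 1.5

lam = [2.0 - 2.0 * math.cos(2.0 * math.pi * k / N) for k in range(N)]

def L_frac(i, j):
    """Entry (i, j) of the fractional Laplacian L^{alpha/2}."""
    return sum(lam[k] ** (alpha / 2.0) * math.cos(2.0 * math.pi * k * (i - j) / N)
               for k in range(N)) / N

# One-step transition probabilities of the fractional walk:
# P_ij = delta_ij - (L^{alpha/2})_ij / (L^{alpha/2})_ii.
diag = L_frac(0, 0)                      # the same for every node on a ring
P = [[(1.0 if i == j else 0.0) - L_frac(i, j) / diag for j in range(N)]
     for i in range(N)]

# Rows are stochastic, and every off-diagonal entry is positive: the walk
# makes long-range moves with power-law-decaying probability.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)
assert all(P[0][j] > 0.0 for j in range(1, N))
```

    The positive entries connecting every pair of nodes are the non-locality described above: for 0 < α < 2 the walker can hop any distance in one step, with heavy-tailed probability.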

  2. Finding a Hadamard matrix by simulated annealing of spin vectors

    NASA Astrophysics Data System (ADS)

    Bayu Suksmono, Andriyan

    2017-05-01

    Reformulation of a combinatorial problem as the optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is a unit vector and whose remaining columns, called SH-vectors, have equal numbers of -1 and +1 entries. We define SH spin vectors as representations of the SH-vectors, which play a role similar to the spins in an Ising model. The topology of the lattice is generalized into a graph whose edges represent the orthogonality relationship among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix but without imposed orthogonality, we perform the SA. The transitions of Q are conducted by random exchange of {+, -} spin-pairs within the SH-spin vectors, following the Metropolis update rule. Upon transition toward zero energy, the Q-matrix evolves along a Markov chain toward an orthogonal matrix, at which point the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
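    The scheme can be sketched at the smallest nontrivial order. The following is an illustrative implementation for n = 4 (the temperature schedule and step cap are our choices, not the paper's): the first column is fixed to all ones, the other columns stay balanced (two +1, two -1), and a move swaps a {+, -} pair inside one randomly chosen column.

```python
import math
import random

random.seed(1)
n = 4

def energy(cols):
    # Sum of squared inner products over distinct column pairs; zero exactly
    # when the columns are mutually orthogonal (balance already makes each
    # column orthogonal to the all-ones first column).
    return sum(sum(x * y for x, y in zip(cols[a], cols[b])) ** 2
               for a in range(len(cols)) for b in range(a + 1, len(cols)))

def balanced_vector():
    v = [1, 1, -1, -1]
    random.shuffle(v)
    return v

cols = [balanced_vector() for _ in range(n - 1)]
T = 16.0
for _ in range(20000):
    old = energy(cols)
    if old == 0:
        break
    c = random.randrange(n - 1)
    i = random.choice([k for k in range(n) if cols[c][k] == 1])
    j = random.choice([k for k in range(n) if cols[c][k] == -1])
    cols[c][i], cols[c][j] = -1, 1           # propose the spin-pair swap
    new = energy(cols)
    if new > old and random.random() >= math.exp(-(new - old) / T):
        cols[c][i], cols[c][j] = 1, -1       # reject: undo the swap
    T = max(1.0, T * 0.999)                  # geometric cooling with a floor

# Zero energy certifies orthogonality, i.e. H H^T = n I for the assembled H.
H = [[1] + [cols[c][r] for c in range(n - 1)] for r in range(n)]
assert energy(cols) == 0
assert all(sum(H[r][k] * H[s][k] for k in range(n)) == (n if r == s else 0)
           for r in range(n) for s in range(n))
```

    Because the swap move preserves the balance of each column, the search space is exactly the set of quasi H-matrices, and the ground state of the energy is an H-matrix.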

  3. Summary of Environmental Monitoring and Assessment Program (EMAP) activities in South Dakota, 2000-2004

    USGS Publications Warehouse

    Heakin, Allen J.; Neitzert, Kathleen M.; Shearer, Jeffrey S.

    2006-01-01

    The U.S. Environmental Protection Agency (USEPA) initiated data-collection activities for the Environmental Monitoring and Assessment Program-West (EMAP-West) in South Dakota during 2000. The objectives of the study were to develop the monitoring tools necessary to produce unbiased estimates of the ecological condition of surface waters across a large geographic area of the western United States, and to demonstrate the effectiveness of those tools in a large-scale assessment. In 2001, the U.S. Geological Survey (USGS) and the South Dakota Department of Game, Fish and Parks (GF&P) established a cooperative agreement and assumed responsibility for completing the remaining assessments for the perennial, wadable streams of the EMAP-West in the State. Stream assessment sites were divided into two broad categories. The first category of sites was randomly selected and assigned by the USEPA for South Dakota. The second category consisted of sites that were specifically selected because they appeared to have reasonable potential for representing the best available physical, chemical, and biological conditions in the State; these 'reference' sites were selected following a detailed evaluation process. Candidate reference site data will serve as a standard or benchmark for assessing the overall ecological condition of the randomly selected sites. During 2000, the USEPA completed 22 statewide stream assessments in South Dakota. During 2001-2003, the USGS and GF&P completed another 42 stream assessments, bringing the total of randomly selected stream assessments within South Dakota to 64. In addition, 18 repeat assessments designed to meet established quality-assurance/quality-control requirements were completed at 12 of these 64 sites. During 2002-2004, the USGS in cooperation with GF&P completed stream assessments at 45 candidate reference sites.
Thus, 109 sites had stream assessments completed in South Dakota for EMAP-West (2000-2004). Relatively early in the EMAP-West stream-assessment process, it became apparent that for some streams in south-central South Dakota, in-stream conditions varied considerably over relatively short distances of only a few miles. These changes appeared to be a result of geomorphic changes associated with changes in the underlying geology. For these streams, moving stream assessment sites short distances upstream or downstream had the potential to provide substantially different bioassessment data. In order to obtain a better understanding of how geology influences stream conditions, two streams located in south-central South Dakota were chosen for multiple stream sampling at sites located along their longitudinal profile at points where notable changes in geomorphology were observed. Subsequently, three sites on Bear-in-the-Lodge Creek and three sites on Black Pipe Creek were selected for multiple stream sampling using EMAP-West protocols so that more could be learned about geologic influences on stream conditions. Values for dissolved oxygen and specific conductance generally increased from upstream to downstream locations on Bear-in-the-Lodge Creek. Values for pH and water temperature generally decreased from upstream to downstream locations. Decreasing water temperature could be indicative of ground-water inflows. Values for dissolved oxygen, pH, and water temperature generally increased from upstream to downstream locations on Black Pipe Creek. The increase in temperature at the lower sites is a result of less dense riparian cover, and the warmer water also could account for the lower concentrations of dissolved oxygen found in the lower reaches of Black Pipe Creek. Values for specific conductance were more than three times greater at the lower site (1,342 microsiemens per centimeter (µS/cm)) than at the upper site (434 µS/cm).
The increase probably occurs when the stream transitions from contacting the underlying Ar

  4. Portfolio optimization and the random magnet problem

    NASA Astrophysics Data System (ADS)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
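    The RMT recipe alluded to here is usually implemented in two textbook steps (the formulas below are the standard versions, not necessarily the authors' exact estimator). For T observations of N uncorrelated assets, Marchenko-Pastur theory predicts that the eigenvalues of the empirical correlation matrix C fall inside a noise bulk with edges λ±; eigenvalues in that band carry little genuine correlation information and are filtered before the minimum-risk weights are formed:

```latex
\lambda_{\pm} = \left(1 \pm \sqrt{N/T}\right)^{2}, \qquad
\mathbf{w}^{*} = \frac{C^{-1}\mathbf{1}}{\mathbf{1}^{\mathsf{T}} C^{-1}\, \mathbf{1}},
\qquad \Omega^{2} = \mathbf{w}^{*\mathsf{T}}\, C\, \mathbf{w}^{*}
```

    Here w* is the minimum-variance portfolio built from the cleaned matrix and Ω² its predicted risk; filtering the noise band is what makes the RMT estimate outperform the standard one.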

  5. The Stream-Catchment (StreamCat) and Lake-Catchment ...

    EPA Pesticide Factsheets

    Background/Question/Methods: Lake and stream conditions respond to both natural and human-related landscape features. Characterizing these features within contributing areas (i.e., delineated watersheds) of streams and lakes could improve our understanding of how biological conditions vary spatially and improve the use, management, and restoration of these aquatic resources. However, the specialized geospatial techniques required to define and characterize stream and lake watersheds have limited their widespread use in both scientific and management efforts at large spatial scales. We developed the StreamCat and LakeCat Datasets to model, predict, and map the probable biological conditions of streams and lakes across the conterminous US (CONUS). StreamCat and LakeCat contain watershed-level characterizations of several hundred natural (e.g., soils, geology, climate, and land cover) and anthropogenic (e.g., urbanization, agriculture, mining, and forest management) landscape features for ca. 2.6 million stream segments and 376,000 lakes across the CONUS, respectively. These datasets can be paired with field samples to provide independent variables for modeling and other analyses. We paired 1,380 stream and 1,073 lake samples from the USEPA's National Aquatic Resource Surveys with StreamCat and LakeCat and used random forest (RF) to model and then map an invertebrate condition index and chlorophyll a concentration, respectively. Results/Conclusions: The invertebrate

  6. Universal shocks in the Wishart random-matrix ensemble.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.

  7. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L¹-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
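    For orientation, the limiting object can be written in the standard normalized-exponential form of Gaussian multiplicative chaos (a generic definition; the paper's precise regularization and normalization may differ). For a log-correlated field X on the unit circle and γ in the subcritical phase 0 < γ < √2 in one dimension,

```latex
\mu_{\gamma}(\mathrm{d}\theta) \;=\; \lim_{\varepsilon \to 0}
\exp\!\left( \gamma X_{\varepsilon}(\theta) \;-\; \frac{\gamma^{2}}{2}\,
\mathbb{E}\!\left[ X_{\varepsilon}(\theta)^{2} \right] \right) \mathrm{d}\theta
```

    where X_ε is the field regularized at scale ε. The subtraction of the variance is what renormalizes the diverging exponential so that a nontrivial random measure survives the limit.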

  8. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  9. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  10. Intermediate quantum maps for quantum computation

    NASA Astrophysics Data System (ADS)

    Giraud, O.; Georgeot, B.

    2005-10-01

    We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.

  11. Scattering and transport statistics at the metal-insulator transition: A numerical study of the power-law banded random-matrix model

    NASA Astrophysics Data System (ADS)

    Méndez-Bermúdez, J. A.; Gopar, Victor A.; Varga, Imre

    2010-09-01

    We study numerically scattering and transport statistical properties of the one-dimensional Anderson model at the metal-insulator transition described by the power-law banded random matrix (PBRM) model at criticality. Within a scattering approach to electronic transport, we concentrate on the case of a small number of single-channel attached leads. We observe a smooth crossover from localized to delocalized behavior in the average scattering-matrix elements, the conductance probability distribution, the variance of the conductance, and the shot noise power by varying b (the effective bandwidth of the PBRM model) from small (b ≪ 1) to large (b > 1) values. We contrast our results with analytic random matrix theory predictions which are expected to be recovered in the limit b → ∞. We also compare our results for the PBRM model with those for the three-dimensional (3D) Anderson model at criticality, finding that the PBRM model with b ∈ [0.2, 0.4] reproduces well the scattering and transport properties of the 3D Anderson model.
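    A minimal sketch of one common (non-periodic) PBRM variant at criticality: a real symmetric matrix with independent Gaussian entries whose variance decays as 1/(1 + ((i − j)/b)²). The size N and bandwidth b below are illustrative choices, not the paper's parameters.

```python
import random

random.seed(3)
N, b = 64, 0.3
H = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        # Power-law banded variance profile with the critical exponent.
        sigma = (1.0 / (1.0 + ((j - i) / b) ** 2)) ** 0.5
        H[i][j] = H[j][i] = random.gauss(0.0, sigma)

# The banding strongly suppresses far-off-diagonal entries when b << 1,
# which is the small-b side of the crossover studied in the paper.
diag_var = sum(H[i][i] ** 2 for i in range(N)) / N
far_var = sum(H[i][i + 40] ** 2 for i in range(N - 40)) / (N - 40)
assert all(H[i][j] == H[j][i] for i in range(N) for j in range(N))
assert far_var < diag_var
```

    Varying b interpolates between nearly diagonal (localized) and nearly full (delocalized) matrices, which is what drives the crossover in the transport statistics.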

  12. Streaming weekly soap opera video episodes to smartphones in a randomized controlled trial to reduce HIV risk in young urban African American/black women.

    PubMed

    Jones, Rachel; Lacroix, Lorraine J

    2012-07-01

    Love, Sex, and Choices is a 12-episode soap opera video series created as an intervention to reduce HIV sex risk. The effect on women's HIV risk behavior was evaluated in a randomized controlled trial in 238 high-risk, predominately African American young adult women in the urban Northeast. To facilitate on-demand access and privacy, the episodes were streamed to study-provided smartphones. Here, we discuss the development of a mobile platform to deliver the 12 weekly video episodes or weekly HIV risk reduction written messages to smartphones, including the technical requirements, development, and evaluation. Popularity of the smartphone and use of the Internet for multimedia offer a new channel to address health disparities in traditionally underserved populations. This is the first study to report on streaming a serialized video-based intervention to a smartphone. The approach described here may provide useful insights in assessing advantages and disadvantages of smartphones to implement a video-based intervention.

  13. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. 
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
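    The fixed-marginals corner of the null space that the Tuning Peg relaxes is reached by the classic checkerboard swap, which the modified procedure generalizes with its two discrepancy parameters. A minimal sketch of that baseline move (the matrix size, fill, and swap count are arbitrary choices):

```python
import random

random.seed(7)
R, C = 6, 8
M = [[1 if random.random() < 0.4 else 0 for _ in range(C)] for _ in range(R)]
row0 = [sum(r) for r in M]
col0 = [sum(M[i][j] for i in range(R)) for j in range(C)]

for _ in range(5000):
    r1, r2 = random.sample(range(R), 2)
    c1, c2 = random.sample(range(C), 2)
    # A 2x2 checkerboard submatrix [[1,0],[0,1]] can be flipped to
    # [[0,1],[1,0]] without changing any row or column total.
    if M[r1][c1] == M[r2][c2] == 1 and M[r1][c2] == M[r2][c1] == 0:
        M[r1][c1] = M[r2][c2] = 0
        M[r1][c2] = M[r2][c1] = 1

# Both marginals are exactly preserved under the swap null model.
assert [sum(r) for r in M] == row0
assert [sum(M[i][j] for i in range(R)) for j in range(C)] == col0
```

    The Tuning Peg's contribution is to let the randomized marginals drift away from row0 and col0 by controlled amounts, tracing a continuous path between this fully constrained model and the fully equiprobable one.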

  14. Sediment Yield From First-Order Streams in Managed Redwood Forests: Effects of Recent Harvests and Legacy Management Practices

    Treesearch

    M.D. O' Connor; C.H. Perry; W. McDavitt

    2007-01-01

    According to the State of California, most of North Coast’s watersheds are impaired by sediment. This study quantified sediment yield from watersheds under different management conditions. Temporary sedimentation basins were installed in 30 randomly chosen first-order streams in two watersheds in Humboldt County, California. Most treatment sites were clearcuts, but two...

  15. Aspects géométriques et intégrables des modèles de matrices aléatoires

    NASA Astrophysics Data System (ADS)

    Marchal, Olivier

    2010-12-01

    This thesis deals with the geometric and integrable aspects associated with random matrix models. Its purpose is to provide various applications of random matrix theory, from algebraic geometry to partial differential equations of integrable systems. The variety of these applications shows why matrix models are important from a mathematical point of view. First, the thesis will focus on the study of the merging of two intervals of the eigenvalues density near a singular point. Specifically, we will show why this special limit gives universal equations from the Painlevé II hierarchy of integrable systems theory. Then, following the approach of (bi) orthogonal polynomials introduced by Mehta to compute partition functions, we will find Riemann-Hilbert and isomonodromic problems connected to matrix models, making the link with the theory of Jimbo, Miwa and Ueno. In particular, we will describe how the hermitian two-matrix models provide a degenerate case of Jimbo-Miwa-Ueno's theory that we will generalize in this context. Furthermore, the loop equations method, with its central notions of spectral curve and topological expansion, will lead to the symplectic invariants of algebraic geometry recently proposed by Eynard and Orantin. This last point will be generalized to the case of non-hermitian matrix models (arbitrary beta) paving the way to "quantum algebraic geometry" and to the generalization of symplectic invariants to "quantum curves". Finally, this set up will be applied to combinatorics in the context of topological string theory, with the explicit computation of an hermitian random matrix model enumerating the Gromov-Witten invariants of a toric Calabi-Yau threefold.

  16. Stream salamander species richness and abundance in relation to environmental factors in Shenandoah National Park, Virginia

    USGS Publications Warehouse

    Campbell Grant, Evan H.; Jung, Robin E.; Rice, Karen C.

    2005-01-01

    Stream salamanders are sensitive to acid mine drainage and may be sensitive to acidification and low acid neutralizing capacity (ANC) of a watershed. Streams in Shenandoah National Park, Virginia, are subject to episodic acidification from precipitation events. We surveyed 25 m by 2 m transects located on the stream bank adjacent to the water channel in Shenandoah National Park for salamanders using a stratified random sampling design based on elevation, aspect and bedrock geology. We investigated the relationships of four species (Eurycea bislineata, Desmognathus fuscus, D. monticola and Gyrinophilus porphyriticus) to habitat and water quality variables. We did not find overwhelming evidence that stream salamanders are affected by the acid-base status of streams in Shenandoah National Park. Desmognathus fuscus and D. monticola abundance was greater both in streams that had a higher potential to neutralize acidification, and in higher elevation (>700 m) streams. Neither abundance of E. bislineata nor species richness were related to any of the habitat variables. Our sampling method preferentially detected the adult age class of the study species and did not allow us to estimate population sizes. We suggest that continued monitoring of stream salamander populations in SNP will determine the effects of stream acidification on these taxa.

  17. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding the optimal ratio of quality to volume in video encoding is one of the most pressing problems, owing to the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the stream required for transmission and storage. When television measuring systems are used, however, it is important to take into account the uncertainties caused by compression of the video signal. Many digital compression methods exist. The aim of the proposed work is to study the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of a television measuring system; accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system are one source of error in television-system measurements; the method used to process the received video signal is another. With compression at a constant data-stream rate, errors lead to large distortions; with constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other. Entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero. 
Excluding these zero coefficients reduces the digital stream further. The discrete cosine transformation is the most widely used of the possible orthogonal transformations. This paper analyzes the errors of television measuring systems and of data compression protocols. The main characteristics of measuring systems are described and the sources of their error identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is investigated. The results obtained will increase the accuracy of measuring systems. In a television image-quality measuring system, the distortions include both those identical to distortions in analog systems and distortions specific to the coding/decoding of the digital video signal and to errors in the transmission channel. Distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. Video compression algorithms used in television measuring systems are based on encoding the image with intra- and inter-prediction of individual fragments. The encoding/decoding process is nonlinear in space and time, because the playback quality of a frame at the receiver depends on its pre- and post-history, i.e., on the preceding and succeeding frames, which can lead to inadequate distortion of a sub-picture and of the corresponding measuring signal.
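    The decorrelation argument can be seen in a few lines: an orthonormal 8-point DCT-II applied to a smooth (strongly correlated) signal concentrates almost all of the energy in the leading coefficients. This is a generic illustration of the principle, not the paper's measurement code:

```python
import math

N = 8
def dct_row(k):
    # Row k of the orthonormal DCT-II matrix.
    s = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return [s * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)]

C = [dct_row(k) for k in range(N)]
x = [float(n) for n in range(N)]        # smooth ramp: strongly correlated samples
X = [sum(C[k][n] * x[n] for n in range(N)) for k in range(N)]

# Energy compaction: more than 99% of the energy sits in 2 of 8 coefficients,
# so the near-zero tail can be entropy coded (or dropped) very cheaply.
total = sum(v * v for v in X)
tail = sum(X[k] * X[k] for k in range(2, N))
assert tail < 0.01 * total

# The transform is orthonormal, so keeping all coefficients reconstructs exactly.
x_rec = [sum(C[k][n] * X[k] for k in range(N)) for n in range(N)]
assert max(abs(a - b) for a, b in zip(x, x_rec)) < 1e-9
```

    Quantizing or discarding the small trailing coefficients is precisely where the compression gain, and the measurement error it introduces, comes from.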

  18. Quantification of the multi-streaming effect in redshift space distortion

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Oh, Minji

    2017-05-01

    Both multi-streaming (random motion) and bulk motion cause the Finger-of-God (FoG) effect in redshift space distortion (RSD). We directly measure the multi-streaming effect in RSD from simulations, proving that it induces an additional, non-negligible FoG damping of the redshift space density power spectrum. We show that including the multi-streaming effect significantly improves the RSD modelling. We also provide a theoretical explanation, based on the halo model, for the measured effect, including a fitting formula with one to two free parameters. The improved understanding of FoG helps break the fσ8-σv degeneracy in RSD cosmology, and has the potential of significantly improving cosmological constraints.
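    A common phenomenological way to write the two RSD ingredients separated here is the dispersion model, shown with a Gaussian damping kernel (the kernel choice is one standard option, not the paper's fitting formula): the Kaiser prefactor captures coherent infall, while the σ_v term encodes the FoG damping,

```latex
P^{s}(k,\mu) \;=\; \left(b + f\mu^{2}\right)^{2} P_{\delta}(k)\,
\exp\!\left(-k^{2}\mu^{2}\sigma_{v}^{2}\right)
```

    Because f and σ_v both modulate the μ-dependence of P^s, they are degenerate unless the damping is modelled accurately, which is the fσ8-σv degeneracy referred to in the abstract.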

  19. A matrix contraction process

    NASA Astrophysics Data System (ADS)

    Wilkinson, Michael; Grant, John

    2018-03-01

    We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
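    A minimal simulation of the process under stated assumptions (2 × 2 Gaussian matrices, Frobenius norm, entry scale 1.2, and reset to 0.5·I are all illustrative choices, not taken from the paper):

```python
import math
import random

random.seed(5)

def rand_mat():
    # Entry scale chosen so the top Lyapunov exponent is positive,
    # i.e. the product norm grows on average.
    return [[random.gauss(0.0, 1.2) for _ in range(2)] for _ in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm(A):
    return math.sqrt(sum(A[i][j] ** 2 for i in range(2) for j in range(2)))

P = [[0.5, 0.0], [0.0, 0.5]]        # start from a multiple of the identity
norms, resets = [], 0
for _ in range(5000):
    P = matmul(rand_mat(), P)
    if norm(P) >= 1.0:
        P = [[0.5, 0.0], [0.0, 0.5]]  # reset and continue multiplying
        resets += 1
    else:
        norms.append(norm(P))         # samples of the norm below unity

# A positive Lyapunov exponent drives the norm upward, so resets occur and
# the recorded norms all lie strictly inside (0, 1).
assert resets > 0
assert all(0.0 < e < 1.0 for e in norms)
```

    The histogram of `norms` is a finite-sample estimate of the stationary density of ɛ that the paper studies analytically.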

  20. Derivatives of random matrix characteristic polynomials with applications to elliptic curves

    NASA Astrophysics Data System (ADS)

    Snaith, N. C.

    2005-12-01

    The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.

  1. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.

    2018-03-01

    In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  2. Automated MALDI Matrix Coating System for Multiple Tissue Samples for Imaging Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Mounfield, William P.; Garrett, Timothy J.

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully-automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  3. Automated MALDI matrix coating system for multiple tissue samples for imaging mass spectrometry.

    PubMed

    Mounfield, William P; Garrett, Timothy J

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully-automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  4. Finite-time stability of neutral-type neural networks with random time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

    This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The random time-varying delays are characterised by a Bernoulli stochastic variable, and the results can be extended to the analysis and design of neutral-type neural networks with random time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional, we establish a set of sufficient conditions in terms of linear matrix inequalities that guarantee the finite-time stability of the system concerned. The conditions are derived by employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, and two numerical examples are presented to demonstrate the effectiveness of the developed techniques.
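As a rough illustration of the delay model (the dynamics and all parameters below are invented for the sketch, not taken from the paper), a scalar system whose delay switches between two values according to a Bernoulli variable can be simulated and checked against a finite-time bound:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): a contractive scalar system
# x[k+1] = a*x[k] + b*x[k - tau_k], where the delay tau_k switches between
# tau1 and tau2 according to a Bernoulli variable with success probability p.
a, b = 0.5, 0.3
tau1, tau2, p = 1, 3, 0.4
T = 200

x = np.zeros(T)
x[:tau2 + 1] = 1.0                    # initial history within the bound c1
for k in range(tau2, T - 1):
    tau_k = tau1 if rng.random() < p else tau2   # Bernoulli-switched delay
    x[k + 1] = a * x[k] + b * x[k - tau_k]

# Finite-time stability check: the state stays within a bound c2 over the
# whole horizon, given it started within c1 (bounds chosen for illustration).
c1, c2 = 1.0, 2.0
print(bool(np.all(np.abs(x) <= c2)))
```

Because |a| + |b| < 1 here, the trajectory never leaves the initial bound, so the finite-time condition holds trivially; the interest of the paper's LMI machinery is certifying this without simulation.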

  5. Statistical properties of MHD fluctuations associated with high speed streams from HELIOS 2 observations

    NASA Technical Reports Server (NTRS)

    Bavassano, B.; Dobrowolny, H.; Fanfoni, G.; Mariani, F.; Ness, N. F.

    1981-01-01

    Helios 2 magnetic data were used to obtain several statistical properties of MHD fluctuations associated with the trailing edge of a given stream observed in different solar rotations. Eigenvalues and eigenvectors of the variance matrix, total power and degree of compressibility of the fluctuations were derived and discussed, both as a function of distance from the Sun and as a function of the frequency range included in the sample. The results obtained add new information to the picture of MHD turbulence in the solar wind. In particular, the radial gradients of various statistical quantities are found to depend on the frequency range.
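The variance-matrix analysis described above can be sketched in a few lines. The data below are synthetic stand-ins for three magnetic-field components, and the compressibility estimate follows one common convention (magnitude-fluctuation power over total power) rather than the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for three field components B = (Bx, By, Bz), made
# anisotropic so the variance matrix has clearly distinct eigenvalues.
N = 5000
B = rng.normal(size=(N, 3)) * np.array([3.0, 2.0, 1.0])

dB = B - B.mean(axis=0)                  # fluctuations about the mean field
M = dB.T @ dB / N                        # 3x3 variance (covariance) matrix

eigvals, eigvecs = np.linalg.eigh(M)     # eigenvalues in ascending order
total_power = eigvals.sum()              # trace = total fluctuation power

# One convention for the degree of compressibility: power in |B| fluctuations
# relative to the total power (illustrative only).
dBmag = np.linalg.norm(B, axis=1) - np.linalg.norm(B, axis=1).mean()
compressibility = (dBmag**2).mean() / total_power
print(eigvals, total_power)
```

The eigenvector of the smallest eigenvalue is the minimum-variance direction, the quantity usually tracked against heliocentric distance in studies of this kind.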

  6. Data Stream Mining Based Dynamic Link Anomaly Analysis Using Paired Sliding Time Window Data

    DTIC Science & Technology

    2014-11-01

    Conference on Knowledge Discovery and Data Mining, PAKDD’10, Hyderabad, India (2010). [2] Almansoori, W., Gao, S., Jarada, T. N., Elsheikh, A. M...F., Greif, C., and Lakshmanan, L. V., “Fast Matrix Computations for Pairwise and Columnwise Commute Times and Katz Scores,” Internet Mathematics, Vol

  7. Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)

    PubMed Central

    Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.

    2015-01-01

    An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663

  8. Clutch sizes and nests of tailed frogs from the Olympic Peninsula, Washington

    USGS Publications Warehouse

    Bury, R. Bruce; Loafman, P.; Rofkar, D.; Mike, K.

    2001-01-01

    In the summers 1995-1998, we sampled 168 streams (1,714 m of randomly selected 1-m bands) to determine distribution and abundance of stream amphibians in Olympic National Park, Washington. We found six nests (two in one stream) of the tailed frog, compared to only two nests with clutch sizes reported earlier for coastal regions. This represents only one nest per 286 m searched and one nest per 34 streams sampled. Tailed frogs occurred in only 94 (60%) of the streams and, for these waters, we found one nest per 171 m searched or one nest per 20 streams sampled. The numbers of eggs for four masses (mean = 48.3, range 40-55) were low, but one single strand in a fifth nest had 96 eggs. One nest with 185 eggs likely represented communal egg deposition. Current evidence indicates a geographic trend, with yearly clutches of relatively few eggs in coastal tailed frogs compared to biennial nesting with larger clutches for inland populations in the Rocky Mountains.

  9. Time series, correlation matrices and random matrix models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinayak; Seligman, Thomas H.

    2014-01-08

    In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role to describe a null hypothesis or a minimum information hypothesis for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. By consequence random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
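A minimal sketch of the null-hypothesis idea, assuming Gaussian data: the correlation matrix of finite, mutually uncorrelated time series has eigenvalues spread over the Marchenko-Pastur interval rather than concentrated at 1, which is exactly the noise floor against which real correlations are judged:

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlation matrix of M independent time series of finite length T.
# Finite length introduces noise, so the eigenvalue spectrum is not a delta
# at 1 but spreads over the Marchenko-Pastur interval [l_min, l_max].
M, T = 100, 400
series = rng.normal(size=(M, T))
C = np.corrcoef(series)                  # M x M random correlation matrix
eigvals = np.linalg.eigvalsh(C)

q = T / M
l_min = (1 - 1 / np.sqrt(q))**2          # Marchenko-Pastur bounds for the
l_max = (1 + 1 / np.sqrt(q))**2          # null (no true correlations)
inside = np.mean((eigvals >= l_min) & (eigvals <= l_max))
print(l_min, l_max, inside)
```

Eigenvalues escaping well above `l_max` in empirical data signal genuine correlations rather than finite-sample noise.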

  10. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.

  11. Large-scale variation in subsurface stream biofilms: a cross-regional comparison of metabolic function and community similarity.

    PubMed

    Findlay, S; Sinsabaugh, R L

    2006-10-01

    We examined bacterial metabolic activity and community similarity in shallow subsurface stream sediments distributed across three regions of the eastern United States to assess whether there were parallel changes in functional and structural attributes at this large scale. Bacterial growth, oxygen consumption, and a suite of extracellular enzyme activities were assayed to describe functional variability. Community similarity was assessed using randomly amplified polymorphic DNA (RAPD) patterns. There were significant differences in streamwater chemistry, metabolic activity, and bacterial growth among regions with, for instance, twofold higher bacterial production in streams near Baltimore, MD, compared to Hubbard Brook, NH. Five of eight extracellular enzymes showed significant differences among regions. Cluster analyses of individual streams by metabolic variables showed clear groups with significant differences in representation of sites from different regions among groups. Clustering of sites based on randomly amplified polymorphic DNA banding resulted in groups with generally less internal similarity although there were still differences in distribution of regional sites. There was a marginally significant (p = 0.09) association between patterns based on functional and structural variables. There were statistically significant but weak (r2 approximately 30%) associations between landcover and measures of both structure and function. These patterns imply a large-scale organization of biofilm communities and this structure may be imposed by factor(s) such as landcover and covariates such as nutrient concentrations, which are known to also cause differences in macrobiota of stream ecosystems.

  12. P-Type Factor Analyses of Individuals' Thought Sampling Data.

    ERIC Educational Resources Information Center

    Hurlburt, Russell T.; Melancon, Susan M.

    Recently, interest in research measuring stream of consciousness or thought has increased. A study was conducted, based on a previous study by Hurlburt, Lech, and Saltman, in which subjects were randomly interrupted to rate their thoughts and moods on a Likert-type scale. Thought samples were collected from 27 subjects who carried random-tone…

  13. Biomechanical and biophysical environment of bone from the macroscopic to the pericellular and molecular level.

    PubMed

    Ren, Li; Yang, Pengfei; Wang, Zhe; Zhang, Jian; Ding, Chong; Shang, Peng

    2015-10-01

    Bones with complicated hierarchical configuration and microstructures constitute the load-bearing system. Mechanical loading plays an essential role in maintaining bone health and regulating bone mechanical adaptation (modeling and remodeling). The whole-bone or sub-region (macroscopic) mechanical signals, including locomotion-induced loading and external actuator-generated vibration, ultrasound, oscillatory skeletal muscle stimulation, etc., give rise to sophisticated and distinct biomechanical and biophysical environments at the pericellular (microscopic) and collagen/mineral molecular (nanoscopic) levels, which are the direct stimulations that positively influence bone adaptation. While under microgravity, the stimulations decrease or even disappear, which exerts a negative influence on bone adaptation. A full understanding of the biomechanical and biophysical environment at different levels is necessary for exploring bone biomechanical properties and mechanical adaptation. In this review, the mechanical transferring theories from the macroscopic to the microscopic and nanoscopic levels are elucidated. First, detailed information of the hierarchical structures and biochemical composition of bone, which are the foundations for mechanical signal propagation, are presented. Second, the deformation feature of load-bearing bone during locomotion is clarified as a combination of bending and torsion rather than simplex bending. The bone matrix strains at microscopic and nanoscopic levels directly induced by bone deformation are critically discussed, and the strain concentration mechanism due to the complicated microstructures is highlighted. 
Third, the biomechanical and biophysical environments at microscopic and nanoscopic levels positively generated during bone matrix deformation or by dynamic mechanical loadings induced by external actuators, as well as those negatively affected under microgravity, are systematically discussed, including the interstitial fluid flow (IFF) within the lacunar-canalicular system and at the endosteum, the piezoelectricity at the deformed bone surface, and the streaming potential accompanying the IFF. Their generation mechanisms and the regulation effect on bone adaptation are presented. The IFF-induced chemotransport effect, shear stress, and fluid drag on the pericellular matrix are meaningful and noteworthy. Furthermore, we firmly believe that bone adaptation is regulated by the combination of bone biomechanical and biophysical environment, not only the commonly considered matrix strain, fluid shear stress, and hydrostatic pressure, but also the piezoelectricity and streaming potential. Especially, it is necessary to incorporate bone matrix piezoelectricity and streaming potential to explain how osteoblasts (bone formation cells) and osteoclasts (bone resorption cells) can differentiate among different types of loads. Specifically, the regulation effects and the related mechanisms of the biomechanical and biophysical environments on bone need further exploration, and the incorporation of experimental research with theoretical simulations is essential. Copyright © 2015. Published by Elsevier Ltd.

  14. THE 300 km s⁻¹ STELLAR STREAM NEAR SEGUE 1: INSIGHTS FROM HIGH-RESOLUTION SPECTROSCOPY OF ITS BRIGHTEST STAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frebel, Anna; Casey, Andrew R.; Lunnan, Ragnhild

    2013-07-01

    We present a chemical abundance analysis of 300S-1, the brightest likely member star of the 300 km s⁻¹ stream near the faint satellite galaxy Segue 1. From a high-resolution Magellan/MIKE spectrum, we determine a metallicity of [Fe/H] = -1.46 ± 0.05 ± 0.23 (random and systematic uncertainties) for star 300S-1, and find an abundance pattern similar to typical halo stars at this metallicity. Comparing our stellar parameters to theoretical isochrones, we estimate a distance of 18 ± 7 kpc. Both the metallicity and distance estimates are in good agreement with what can be inferred from comparing the Sloan Digital Sky Survey photometric data of the stream stars to globular cluster sequences. While several other structures overlap with the stream in this part of the sky, the combination of kinematic, chemical, and distance information makes it unlikely that these stars are associated with either the Segue 1 galaxy, the Sagittarius Stream, or the Orphan Stream. Streams with halo-like abundance signatures, such as the 300 km s⁻¹ stream, present another observational piece for understanding the accretion history of the Galactic halo.

  15. Assessing the Vulnerability of Streams to Increased Frequency and Severity of Low Flows in the Southeastern United States

    NASA Astrophysics Data System (ADS)

    Konrad, C. P.

    2014-12-01

    A changing climate poses risks to the availability and quality of water resources. Among the risks, increased frequency and severity of low flow periods in streams would be significant for many in-stream and out-of-stream uses of water. While down-scaled climate projections serve as the basis for understanding impacts of climate change on hydrologic systems, a robust framework for risk assessment incorporates multiple dimensions of risks including the vulnerability of hydrologic systems to climate change impacts. Streamflow records from the southeastern US were examined to assess the vulnerability of streams to increased frequency and severity of low flows. Long-term (>50 years) records provide evidence of more frequent and severe low flows in more streams than would be expected from random chance. Trends in low flows appear to be a result of changes in the temporal distribution, rather than the annual amount, of precipitation and/or evaporation. Base flow recession provides an indicator of a stream's vulnerability to such changes. Linkages between streamflow patterns across temporal scales can be used for understanding and assessing stream responses to the various possible expressions of a changing climate.

  16. Online neural monitoring of statistical learning

    PubMed Central

    Batterink, Laura J.; Paller, Ken A.

    2017-01-01

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the RT task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. PMID:28324696
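A toy version of the entrainment index might look like the following, with assumed syllable (3 Hz) and word (1 Hz) rates and signal-to-noise levels chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy entrainment measure: syllables at 3 Hz, trisyllabic words at 1 Hz.
# In the structured stream a 1 Hz component is present; in the random
# stream it is not. (Frequencies and SNR are assumptions, not the study's.)
fs, dur = 100, 60                        # sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
syllable = np.sin(2 * np.pi * 3 * t)
word = np.sin(2 * np.pi * 1 * t)
structured = syllable + 0.5 * word + rng.normal(scale=0.5, size=t.size)
random_stream = syllable + rng.normal(scale=0.5, size=t.size)

def power_at(signal, freq, fs):
    """Spectral power in the FFT bin nearest the requested frequency."""
    spec = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Word-rate power relative to syllable-rate power, per condition.
wsr_structured = power_at(structured, 1, fs) / power_at(structured, 3, fs)
wsr_random = power_at(random_stream, 1, fs) / power_at(random_stream, 3, fs)
print(wsr_structured > wsr_random)
```

In the study this ratio is computed from EEG rather than from the stimulus itself, so its growth over exposure tracks the listener's perceptual binding of syllables into words.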

  17. Statistically extracted fundamental watershed variables for estimating the loads of total nitrogen in small streams

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.; Terziotti, Silvia

    2016-01-01

    Accurate estimation of total nitrogen loads is essential for evaluating conditions in the aquatic environment. Extrapolation of estimates beyond measured streams will greatly expand our understanding of total nitrogen loading to streams. Recursive partitioning and random forest regression were used to assess 85 geospatial, environmental, and watershed variables across 636 small (<585 km²) watersheds to determine which variables are fundamentally important to the estimation of annual loads of total nitrogen. Initial analysis led to the splitting of watersheds into three groups based on predominant land use (agricultural, developed, and undeveloped). Nitrogen application, agricultural and developed land area, and impervious or developed land in the 100-m stream buffer were commonly extracted variables by both recursive partitioning and random forest regression. A series of multiple linear regression equations utilizing the extracted variables were created and applied to the watersheds. As few as three variables explained as much as 76% of the variability in total nitrogen loads for watersheds with predominantly agricultural land use. Catchment-scale national maps were generated to visualize the total nitrogen loads and yields across the USA. The estimates provided by these models can inform water managers and help identify areas where more in-depth monitoring may be beneficial.
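A sketch of the final regression step, with synthetic data and hypothetical variable names standing in for the extracted watershed variables (these are not the USGS field names):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative stand-in for the workflow's last step: after variable
# extraction, fit a multiple linear regression of total-nitrogen load on a
# few watershed variables. Data and coefficients here are invented.
n = 300
nitrogen_application = rng.uniform(0, 100, n)
agricultural_area = rng.uniform(0, 50, n)
buffer_impervious = rng.uniform(0, 1, n)
load = (0.8 * nitrogen_application + 1.5 * agricultural_area
        + 20 * buffer_impervious + rng.normal(scale=10, size=n))

X = np.column_stack([np.ones(n), nitrogen_application,
                     agricultural_area, buffer_impervious])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)   # OLS fit

pred = X @ coef
r2 = 1 - np.sum((load - pred)**2) / np.sum((load - load.mean())**2)
print(coef, r2)
```

The random-forest and recursive-partitioning steps serve only to pick which columns enter `X`; once chosen, a plain least-squares fit like this yields the reported explained-variance figures.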

  18. Using Isotopic Age of Water as a Constraint on Model Identification at a Critical Zone Observatory

    NASA Astrophysics Data System (ADS)

    Duffy, C.; Thomas, E.; Bhatt, G.; George, H.; Boyer, E. W.; Sullivan, P. L.

    2016-12-01

    This paper presents an ecohydrologic model constrained by comprehensive space and time observations of water and stable isotopes of oxygen and hydrogen for an upland catchment, the Susquehanna/Shale Hills Critical Zone Observatory (SSH_CZO). The paper first develops the theoretical basis for simulation of flow, isotope ratios and "age" as water moves through the canopy, to the unsaturated and saturated zones and finally to an intermittent stream. The model formulation demonstrates that the residence time and age of environmental tracers can be directly simulated without knowledge of the form of the underlying residence time distribution function and without the addition of any new physical parameters. The model is used to explore the observed rapid attenuation of event and seasonal isotopic ratios in precipitation over the depth of the soil zone and the impact of decreasing hydraulic conductivity with depth on the dynamics of streamflow and stream isotope ratios. The results suggest the importance of mobile macropore flow on recharge to groundwater during the non-growing cold-wet season. The soil matrix is also recharged during this season with a cold-season isotope signature. During the growing-dry season, root uptake and evaporation from the soil matrix along with a declining water table provides the main source of water for plants and determines the growing season signature. Flow path changes during storm events and transient overland flow is inferred by comparing the frequency distribution of groundwater and stream isotope histories with model results. Model uncertainty is evaluated for conditions of matrix-macropore partitioning and heterogeneous variations in conductivity with depth. The paper concludes by comparing the fully dynamical model with the simplified mixing model form in dynamic equilibrium. 
The comparison illustrates the importance of system memory on the time scales for flow and mixing processes and the limitations of the dynamic equilibrium assumption on estimated age and residence time.

  19. Quasiparticle random phase approximation uncertainties and their correlations in the analysis of 0νββ decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faessler, Amand; Rodin, V.; Fogli, G. L.

    2009-03-01

    The variances and covariances associated to the nuclear matrix elements of neutrinoless double beta decay (0νββ) are estimated within the quasiparticle random phase approximation. It is shown that correlated nuclear matrix element uncertainties play an important role in the comparison of 0νββ decay rates for different nuclei, and that they are degenerate with the uncertainty in the reconstructed Majorana neutrino mass.
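A toy Monte Carlo illustration of why the correlations matter (all numbers below are invented, not QRPA results): strongly correlated matrix-element uncertainties largely cancel in the ratio of decay rates for two nuclei:

```python
import numpy as np

# Invented central nuclear-matrix-element (NME) values for nuclei A and B,
# with a strongly positively correlated covariance matrix.
M = np.array([4.0, 3.0])
cov = np.array([[0.36, 0.27],
                [0.27, 0.25]])           # correlation ~0.9

rng = np.random.default_rng(10)
samples = rng.multivariate_normal(M, cov, size=100000)
ratio = (samples[:, 0] / samples[:, 1])**2   # rate ratio scales as (M_A/M_B)^2

# For comparison: the same marginal uncertainties treated as independent.
samples_ind = np.column_stack([rng.normal(M[0], 0.6, 100000),
                               rng.normal(M[1], 0.5, 100000)])
ratio_ind = (samples_ind[:, 0] / samples_ind[:, 1])**2

# Positive correlation shrinks the spread of the ratio substantially.
print(ratio.std(), ratio_ind.std())
```

This is the mechanism behind the abstract's point: ignoring the covariances misstates how well rate comparisons between nuclei constrain the Majorana mass.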

  20. Localized motion in random matrix decomposition of complex financial systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    Using random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
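The eigenmode decomposition can be sketched as follows; the synthetic returns carry one global factor and one localized sector factor, so the top eigenvector plays the role of the market mode and the next one the sector mode:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic returns with a global "market" factor plus a localized "sector"
# factor on a subset of stocks (all loadings are illustrative).
N, T = 50, 1000
market = rng.normal(size=T)
sector = rng.normal(size=T)
returns = rng.normal(size=(N, T))
returns += 0.8 * market                   # every stock loads on the market
returns[:10] += 0.9 * sector              # first 10 stocks form a sector

C = np.corrcoef(returns)
eigvals, eigvecs = np.linalg.eigh(C)      # ascending eigenvalues

market_mode = eigvecs[:, -1]              # largest eigenvalue: market mode
sector_mode = eigvecs[:, -2]              # next largest: localized sector mode

# The market mode is delocalized; the sector mode concentrates its weight
# on the sector stocks, which is the "localized motion" of the abstract.
sector_weight = np.sum(sector_mode[:10]**2)
print(eigvals[-1], eigvals[-2], sector_weight)
```

Projecting the returns onto each eigenvector yields the eigenmode time series whose correlations the paper studies; the remaining bulk eigenvectors constitute the random mode.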

  1. Quantifying Nitrogen Transport from Riparian Groundwater Seeps to a Headwater Stream in an Agricultural Watershed

    NASA Astrophysics Data System (ADS)

    Redder, B.; Buda, A. R.; Kennedy, C. D.; Folmar, G.; DeWalle, D. R.; Boyer, E. W.

    2017-12-01

    Headwater streams in the Northeast region of the United States typically receive more than 50% of their base flow from groundwater, either by diffuse discharge through the streambed or by localized discharge through riparian seeps. It is very difficult to separate the individual contributions of these two groundwater fluxes to streamflow. Furthermore, riparian seeps show significant variability in discharge and nutrient concentration, adding uncertainty to estimates of groundwater-based nitrogen inputs to streams. In this study, we combined stream measurements at two different scales to quantify groundwater discharge by matrix flow through the streambed and by macropore flow through the riparian zone. The study site was a 175-m stream reach located in a heavily cultivated 45-hectare watershed in east-central Pennsylvania. Differential streamflow gauging and streambed measurements of hydraulic head gradient, hydraulic conductivity, and groundwater chemistry were used to solve for the riparian groundwater flux in a reach mass balance equation. Riparian groundwater fluxes ranged from 115-205 m³ d⁻¹, transporting 2-4 kg N d⁻¹ of nitrate from the fractured bedrock aquifer to the stream. Air-water manometer readings from short-screened piezometers installed in the shallow streambed (30 cm) indicated slightly losing head gradients between the stream and groundwater, despite substantial (36-66%) increases in stream flow along the stream reach. Preliminary chemical data for the stream, streambed, and shallow ground water suggest that the stream is partially disconnected from the underlying aquifer and that riparian groundwater seeps supply essentially all water and nitrogen to the system. 
These results, along with the comparison of shallow and deep aquifer water with seep chemistry, provide insight into sources of water to riparian groundwater seeps and allow us to determine the transport and fate of nitrogen in a fractured aquifer system. The use of water isotopes and hydrometric data will be used to further test the hypothesis that this is a perched system disconnected from the aquifer below.

  2. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
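For a single right-hand side, the SVD-based ‘closed form’ solution mentioned above reduces to the classical total least-squares recipe; the following sketch uses synthetic data (the multivariate case partitions the right singular vectors the same way):

```python
import numpy as np

rng = np.random.default_rng(7)

# Total least-squares via the SVD: both X and y carry random errors, as in
# the MTLS model. True parameters and noise levels are illustrative.
n, m = 200, 2
Xi_true = np.array([2.0, -1.0])
X_clean = rng.normal(size=(n, m))
y_clean = X_clean @ Xi_true
X = X_clean + rng.normal(scale=0.05, size=(n, m))   # errors in X (E_X)
y = y_clean + rng.normal(scale=0.05, size=n)        # errors in y (E_Y)

# Stack the augmented matrix [X | y] and take its SVD; the TLS solution
# comes from the right singular vector of the smallest singular value.
Z = np.column_stack([X, y])
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T
Xi_tls = -V[:m, m] / V[m, m]
print(Xi_tls)
```

Unlike ordinary least squares, this treats the two coordinate sets symmetrically, which is precisely the “symmetry” issue the paper raises for datum transformations.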

  3. Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel, E-mail: marcel.novaes@gmail.com

    2015-10-15

    We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.

  4. A study protocol of a three-group randomized feasibility trial of an online yoga intervention for mothers after stillbirth (The Mindful Health Study).

    PubMed

    Huberty, Jennifer; Matthews, Jeni; Leiferman, Jenn; Cacciatore, Joanne; Gold, Katherine J

    2018-01-01

    In the USA, stillbirth (in utero fetal death ≥20 weeks gestation) is a major public health issue. Women who experience stillbirth, compared to women with live birth, have a nearly sevenfold increased risk of a positive screen for post-traumatic stress disorder (PTSD) and a fourfold increased risk of depressive symptoms. Because the majority of women who have experienced the death of their baby become pregnant within 12-18 months, and because of the lack of intervention studies conducted within this population, novel approaches targeting physical and mental health, specific to the needs of this population, are critical. Evidence suggests that yoga is efficacious, safe, acceptable, and cost-effective for improving mental health in a variety of populations, including pregnant and postpartum women. To date, there are no known studies examining online-streaming yoga as a strategy to help mothers cope with PTSD symptoms after stillbirth. The present study is a two-phase randomized controlled trial. Phase 1 will involve (1) an iterative design process to develop the online yoga prescription for phase 2 and (2) qualitative interviews to identify cultural barriers to recruitment in non-Caucasian women (i.e., predominately Hispanic and/or African American) who have experienced stillbirth (N = 5). Phase 2 is a three-group randomized feasibility trial with assessments at baseline, and at 12 and 20 weeks post-intervention. Ninety women who have experienced a stillbirth within 6 weeks to 24 months will be randomized into one of the following three arms for 12 weeks: (1) intervention low dose (LD) = 60 min/week online-streaming yoga (n = 30), (2) intervention moderate dose (MD) = 150 min/week online-streaming yoga (n = 30), or (3) stretch and tone control (STC) group = 60 min/week of stretching/toning exercises (n = 30). 
This study will explore the feasibility and acceptability of a 12-week, home-based, online-streamed yoga intervention, with varying doses among mothers after a stillbirth. If feasible, the findings from this study will inform a full-scale trial to determine the effectiveness of home-based online-streamed yoga to improve PTSD. Long-term, health care providers could use online yoga as a non-pharmaceutical, inexpensive resource for stillbirth aftercare. NCT02925481.

  5. Random Matrix Theory and Econophysics

    NASA Astrophysics Data System (ADS)

    Rosenow, Bernd

    2000-03-01

    Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes the generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system-specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system-specific property, i.e. as containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of results from localization theory. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g., R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.
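
    The spectral test described in the abstract can be sketched in a few lines of NumPy. The returns below are pure synthetic noise (so the whole spectrum should sit inside the Marchenko-Pastur bulk), and N, T are illustrative values, not the study's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic returns for N hypothetical stocks over T days (pure noise).
    N, T = 100, 500
    returns = rng.standard_normal((T, N))
    # Normalize each return series to zero mean and unit variance.
    returns = (returns - returns.mean(axis=0)) / returns.std(axis=0)

    # Cross-correlation matrix C and its eigendecomposition.
    C = returns.T @ returns / T
    eigvals, eigvecs = np.linalg.eigh(C)

    # RMT (Marchenko-Pastur) bulk edges for the noise-only spectrum.
    q = N / T
    lam_min, lam_max = (1 - q**0.5) ** 2, (1 + q**0.5) ** 2

    # Inverse participation ratio of each eigenvector: ~3/N for extended
    # (random) eigenvectors, O(1) for localized ones.
    ipr = (eigvecs ** 4).sum(axis=0)

    print(f"RMT bulk: [{lam_min:.3f}, {lam_max:.3f}], "
          f"observed: [{eigvals[0]:.3f}, {eigvals[-1]:.3f}], "
          f"mean IPR: {ipr.mean():.4f}")
    ```

    For real market data, eigenvalues escaping the [lam_min, lam_max] band and eigenvectors with unusually large IPR are the candidates for genuine, system-specific correlations.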

  6. Quantification of the multi-streaming effect in redshift space distortion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Oh, Minji; Zhang, Pengjie, E-mail: yizheng@kasi.re.kr, E-mail: zhangpj@sjtu.edu.cn, E-mail: minjioh@kasi.re.kr

    Both multi-streaming (random motion) and bulk motion cause the Finger-of-God (FoG) effect in redshift space distortion (RSD). We perform a direct measurement of the multi-streaming effect in RSD from simulations, proving that it induces an additional, non-negligible FoG damping of the redshift space density power spectrum. We show that including the multi-streaming effect significantly improves the RSD modelling. We also provide a theoretical explanation, based on the halo model, for the measured effect, including a fitting formula with one to two free parameters. The improved understanding of FoG helps break the fσ_8-σ_v degeneracy in RSD cosmology, and has the potential of significantly improving cosmological constraints.

  7. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focusing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
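
    A hedged toy version of such a stability analysis (not the paper's model: a linear Jacobian with Gaussian couplings placed on the edges of an Erdős-Rényi graph) illustrates how coupling variability destabilizes the system:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def is_stable(n=200, p=0.05, sigma=0.5, d=1.0):
        """Linearized dynamics x' = J x with J = -d*I + A, where A holds
        i.i.d. Gaussian couplings of std sigma on the edges of an
        Erdos-Renyi graph. Stable iff every eigenvalue of J has
        negative real part."""
        mask = rng.random((n, n)) < p
        np.fill_diagonal(mask, False)
        A = np.where(mask, rng.normal(0.0, sigma, (n, n)), 0.0)
        J = -d * np.eye(n) + A
        return bool(np.max(np.linalg.eigvals(J).real) < 0)

    # May-style rule of thumb: instability sets in when sigma*sqrt(n*p) > d.
    stable_weak = is_stable(sigma=0.05)   # sigma*sqrt(n*p) ~ 0.16 << 1
    stable_strong = is_stable(sigma=0.5)  # sigma*sqrt(n*p) ~ 1.6  >  1
    print(stable_weak, stable_strong)
    ```

    The crossover follows the circular law: the eigenvalues of A fill a disk of radius roughly sigma*sqrt(n*p), so the shift -d*I can no longer pull the whole spectrum into the stable half-plane once that radius exceeds d.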

  8. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.

  9. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  10. Correlation and volatility in an Indian stock market: A random matrix approach

    NASA Astrophysics Data System (ADS)

    Kulkarni, Varsha; Deo, Nivedita

    2007-11-01

    We examine the volatility of an Indian stock market in terms of correlations between stocks and quantify the volatility using the random matrix approach. First we discuss trends observed in the pattern of stock prices in the Bombay Stock Exchange for the three-year period 2000-2002. Random matrix analysis is then applied to study the relationship between the coupling of stocks and volatility. The study uses daily returns of 70 stocks for successive time windows of length 85 days for the year 2001. We compare the properties of the matrix C of correlations between price fluctuations in time regimes characterized by different volatilities. Our analyses reveal that (i) the largest (deviating) eigenvalue of C correlates highly with the volatility of the index, (ii) there is a shift in the distribution of the components of the eigenvector corresponding to the largest eigenvalue across regimes of different volatilities, (iii) the inverse participation ratio for this eigenvector anti-correlates significantly with the market fluctuations and, finally, (iv) this eigenvector of C can be used to set up a Correlation Index (CI) whose temporal evolution is significantly correlated with the volatility of the overall market index.

  11. Estimation of river pollution index in a tidal stream using kriging analysis.

    PubMed

    Chen, Yen-Chang; Yeh, Hui-Chung; Wei, Chiang

    2012-08-29

    Tidal streams are complex watercourses that represent a transitional zone between riverine and marine systems; they occur where fresh and marine waters converge. Because tidal circulation processes cause substantial turbulence in these highly dynamic zones, tidal streams are the most productive of water bodies. Their rich biological diversity, combined with the convenience of land and water transport, provides sites for concentrated populations that evolve into large cities. Domestic wastewater is generally discharged directly into tidal streams in Taiwan, necessitating regular evaluation of the water quality of these streams. Given the complex flow dynamics of tidal streams, only a few models can effectively evaluate and identify pollution levels. This study evaluates the river pollution index (RPI) in tidal streams by using kriging analysis, a geostatistical method for interpolating random spatial variation to estimate linear grid points in two or three dimensions. A kriging-based method is developed to evaluate RPI in tidal streams, which are typically treated as one-dimensional in hydraulic engineering. The proposed method efficiently evaluates RPI in tidal streams with a minimum amount of water quality data. Data from the downstream reach of the Tanshui River, an estuarine area, validate the accuracy and reliability of the proposed method. Results of this study demonstrate that this simple yet reliable method can effectively estimate RPI in tidal streams.
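
    Ordinary kriging along a one-dimensional stream is compact enough to sketch. The stations and RPI values below are hypothetical, and the exponential variogram parameters are illustrative, not fitted to the Tanshui River data:

    ```python
    import numpy as np

    def ordinary_kriging(x_obs, z_obs, x_new, range_=5.0, sill=1.0):
        """Ordinary kriging in 1D with an exponential variogram
        gamma(h) = sill * (1 - exp(-|h|/range_)). Illustrative only."""
        gamma = lambda h: sill * (1.0 - np.exp(-np.abs(h) / range_))
        n = len(x_obs)
        # Kriging system: [[Gamma, 1], [1^T, 0]] [w, mu]^T = [gamma0, 1],
        # where the Lagrange multiplier mu enforces sum(w) = 1.
        K = np.empty((n + 1, n + 1))
        K[:n, :n] = gamma(x_obs[:, None] - x_obs[None, :])
        K[n, :n] = K[:n, n] = 1.0
        K[n, n] = 0.0
        preds = []
        for x0 in np.atleast_1d(x_new):
            rhs = np.append(gamma(x_obs - x0), 1.0)
            w = np.linalg.solve(K, rhs)[:n]
            preds.append(w @ z_obs)
        return np.array(preds)

    # Hypothetical RPI values at stations along a stream (km from the mouth).
    x_obs = np.array([0.0, 2.0, 5.0, 9.0, 14.0])
    rpi = np.array([6.0, 5.1, 3.8, 2.5, 1.7])
    print(ordinary_kriging(x_obs, rpi, [1.0, 7.0, 12.0]))
    ```

    Kriging is an exact interpolator: predicting at a monitoring station returns the observed value there, which makes for a quick sanity check of the implementation.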

  12. Statistical analysis of effective singular values in matrix rank determination

    NASA Technical Reports Server (NTRS)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance testing. Threshold bounds for perturbations due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds, and comparisons to other previously known approaches, are given.
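
    The thresholding idea can be illustrated with a short NumPy sketch; the 3x safety factor below is an illustrative stand-in for the paper's statistically derived confidence bounds:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical data: a rank-3 matrix observed through i.i.d. Gaussian noise.
    m, n, true_rank, noise_std = 60, 40, 3, 0.01
    A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
    A_noisy = A + noise_std * rng.standard_normal((m, n))

    s = np.linalg.svd(A_noisy, compute_uv=False)

    # Singular values produced by noise alone scale like
    # noise_std * sqrt(max(m, n)); the constant 3 is an illustrative
    # safety factor, not the paper's bound.
    threshold = 3.0 * noise_std * np.sqrt(max(m, n))
    effective_rank = int(np.sum(s > threshold))
    print("leading singular values:", np.round(s[:6], 3))
    print("effective rank:", effective_rank)
    ```

    Everything above the threshold is treated as signal, everything below as perturbation; the paper replaces the ad hoc factor with confidence regions at a chosen significance level.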

  13. Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators

    NASA Astrophysics Data System (ADS)

    Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.

    2018-05-01

    Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.
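
    A sketch of the bit-extraction and randomness-testing pipeline, with a simulated two-level telegraph signal standing in for the measured resistance trace (the resistance levels and switching statistics are illustrative, not the device's):

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in for the time-resolved resistance of a superparamagnetic
    # junction: random switching between two resistance levels (ohms).
    n_bits = 100_000
    resistance = np.where(rng.random(n_bits) < 0.5, 1.2e3, 2.4e3)

    # Threshold the trace into a bit stream.
    bits = (resistance > 1.8e3).astype(int)

    # NIST SP 800-22 style monobit (frequency) test: for random bits the
    # normalized sum of +/-1 values is approximately standard normal.
    s_obs = abs(int((2 * bits - 1).sum())) / math.sqrt(n_bits)
    p_value = math.erfc(s_obs / math.sqrt(2))
    print(f"ones fraction {bits.mean():.4f}, monobit p-value {p_value:.3f}")
    ```

    A p-value above the chosen significance level (commonly 0.01) means the stream passes this particular test; real device characterization runs the full battery of statistical tests, and field tuning of the state probability would shift the ones fraction away from 0.5.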

  14. Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers

    NASA Astrophysics Data System (ADS)

    Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos

    2017-01-01

    We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.

  15. Long-term Patterns of Microhabitat Use by Fish in a Southern Appalachian Stream from 1983 to 1992: Effects of Hydrologic Period, Season and Fish Length

    Treesearch

    Gary D. Grossman; Robert E. Ratajczak

    1998-01-01

    We quantified microhabitat use by members of a southern Appalachian stream fish assemblage over a ten-year period that included both floods and droughts. Our study site (37 m in length) encompassed riffle, run and pool habitats. Previous research indicated that species belonged to either benthic or water-column microhabitat guilds. Most species exhibited non-random...

  16. Semistochastic approach to many electron systems

    NASA Astrophysics Data System (ADS)

    Grossjean, M. K.; Grossjean, M. F.; Schulten, K.; Tavan, P.

    1992-08-01

    A Pariser-Parr-Pople (PPP) Hamiltonian of the 8π-electron system of the molecule octatetraene, represented in a configuration-interaction (CI) basis, is analyzed with respect to the statistical properties of its matrix elements. Based on this analysis we develop an effective Hamiltonian, which represents virtual excitations by a Gaussian orthogonal ensemble (GOE). We also examine numerical approaches which replace the original Hamiltonian by a semistochastically generated CI matrix. In that CI matrix, the matrix elements of high-energy excitations are chosen randomly according to distributions reflecting the statistics of the original CI matrix.

  17. EML, VEGA, ODM, LTER, GLEON - considerations and technologies for building a buoy information system at an LTER site

    NASA Astrophysics Data System (ADS)

    Gries, C.; Winslow, L.; Shin, P.; Hanson, P. C.; Barseghian, D.

    2010-12-01

    At the North Temperate Lakes Long Term Ecological Research (NTL LTER) site, six buoys and one met station are maintained, each equipped with up to 20 sensors producing up to 45 separate data streams at a 1 or 10 minute frequency. Traditionally, this data volume has been managed in many matrix-type tables, each described in the Ecological Metadata Language (EML) and accessed online by a query system based on the provided metadata. To develop a more flexible information system, several technologies are currently being experimented with. We will review, compare and evaluate these technologies and discuss constraints and advantages of network memberships and implementation of standards. A Data Turbine server is employed to stream data from data logger files into a database, with the Real-time Data Viewer being used for monitoring sensor health. The Kepler workflow processor is being explored to introduce quality control routines into this data stream, taking advantage of the Data Turbine actor. Kepler could replace traditional database triggers while adding visualization and advanced data access functionality for downstream modeling or other analytical applications. The data are currently streamed into the traditional matrix-type tables and into an Observation Data Model (ODM) following the CUAHSI ODM 1.1 specifications. In parallel, these sensor data are managed within the Global Lake Ecological Observatory Network (GLEON), where the software package Ziggy streams the data into a database of the VEGA data model. Contributing data to a network implies compliance with established standards for data delivery and data documentation. ODM- or VEGA-type data models are not easily described in EML, the metadata exchange standard for LTER sites, but provide many advantages from an archival standpoint.
Both GLEON and CUAHSI have developed advanced data access capabilities based on their respective data models and data exchange standards while LTER is currently in a phase of intense technology developments which will eventually provide standardized data access that includes ecological data set types currently not covered by either ODM or VEGA.

  18. Network trending; leadership, followership and neutrality among companies: A random matrix approach

    NASA Astrophysics Data System (ADS)

    Mobarhan, N. S. Safavi; Saeedi, A.; Roodposhti, F. Rahnamay; Jafari, G. R.

    2016-11-01

    In this article, we analyze the cross-correlation between returns of different stocks to answer the following important questions. The first is: if there exists collective behavior in a financial market, how can we detect it? And the second: is there a particular company among the companies of a market that leads the collective behavior, or is there no specified leadership governing the system, as in some complex systems? We use the method of random matrix theory to answer these questions. The cross-correlation matrix of index returns of four different markets is analyzed. The participation ratio related to each matrix's eigenvectors and the eigenvalue spectrum is calculated. We introduce a shuffled matrix, created from the cross-correlation matrix by randomly displacing its elements. Comparing the participation ratios obtained from a market's correlation matrix and from its shuffled counterpart over the bulk region of the eigenvalue distribution, we detect a meaningful deviation between these quantities, indicating the collective behavior of the companies forming the market. By calculating the relative deviation of participation ratios, we obtain a measure for comparing markets according to their collective behavior. Answering the second question, we show that there are three groups of companies: the first group, called leaders, has a strong impact on the market trend; the second group consists of followers; and the third comprises companies that play no considerable role in the trend. The results can be utilized in portfolio construction.
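
    The shuffled-matrix comparison can be sketched as follows; the one-factor returns model and all parameters are illustrative stand-ins for real market data:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # One-factor toy market: every stock partly follows a common trend,
    # a minimal stand-in for collective behavior.
    N, T, beta = 80, 400, 0.4
    market = rng.standard_normal(T)
    returns = beta * market[:, None] + rng.standard_normal((T, N))
    returns = (returns - returns.mean(0)) / returns.std(0)
    C = returns.T @ returns / T

    def participation_ratio(M):
        """PR of each eigenvector: ~N for extended modes, small if localized."""
        _, vecs = np.linalg.eigh(M)
        return 1.0 / (vecs ** 4).sum(axis=0)

    # Shuffled matrix: same off-diagonal elements, randomly displaced,
    # which destroys collective structure but keeps element statistics.
    S = np.eye(N)
    iu = np.triu_indices(N, k=1)
    vals = rng.permutation(C[iu])
    S[iu] = vals
    S[iu[1], iu[0]] = vals

    pr_c, pr_s = participation_ratio(C), participation_ratio(S)
    print(f"largest-mode PR: real {pr_c[-1]:.1f}, shuffled {pr_s[-1]:.1f}")
    ```

    The market mode of the real matrix is close to uniform across stocks (PR near N), while the shuffled matrix's top eigenvector looks random (PR near N/3); the gap between the two is the kind of deviation the abstract uses to quantify collective behavior.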

  19. Concurrent assessment of fish and habitat in warmwater streams in Wyoming

    USGS Publications Warehouse

    Quist, M.C.; Hubert, W.A.; Rahel, F.J.

    2006-01-01

    Fisheries research and management in North America have focused largely on sport fishes, but native non-game fishes have attracted increased attention due to their declines. The Warmwater Stream Assessment (WSA) was developed to evaluate simultaneously both fish and habitat in Wyoming streams by a process that includes three major components: (1) stream-reach selection and accumulation of existing information, (2) fish and habitat sampling and (3) summarisation and evaluation of fish and habitat information. Fish are sampled by electric fishing or seining and habitat is measured at reach and channel-unit (i.e. pool, run, riffle, side channel, or backwater) scales. Fish and habitat data are subsequently summarised using a data-matrix approach. Hierarchical decision trees are used to assess critical habitat requirements for each fish species expected or found in the reach. Combined measurements of available habitat and the ecology of individual species contribute to the evaluation of the observed fish assemblage. The WSA incorporates knowledge of the fish assemblage and habitat features to enable inferences of factors likely influencing both the fish assemblage and their habitat. The WSA was developed for warmwater streams in Wyoming, but its philosophy, process and conceptual basis may be applied to environmental assessments in other geographical areas. © 2006 Blackwell Publishing Ltd.

  20. A Deep Stochastic Model for Detecting Community in Complex Networks

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    2017-01-01

    Discovering community structures is an important step towards understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose elements in a row give the probabilities that the given node belongs to each of the given number of communities in our model; the other is the community-community connection matrix, whose element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule whose convergence can be guaranteed. The community-community connection matrix in our model is more precise than that in traditional non-negative matrix factorization methods. Furthermore, the method known as symmetric non-negative matrix factorization is a special case of our model. Finally, experiments on both synthetic and real-world network data demonstrate that our algorithm is highly effective in detecting communities.
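
    As a concrete toy, the symmetric NMF special case mentioned in the abstract can be run on a two-community network. The damped multiplicative update used here is in the style of Ding et al., and the toy graph and parameters are illustrative, not the paper's full deep model:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def symmetric_nmf(A, k, n_iter=500, beta=0.5):
        """Symmetric NMF A ~ H H^T with damped multiplicative updates;
        the row-wise argmax of H gives a community assignment."""
        H = rng.random((A.shape[0], k))
        for _ in range(n_iter):
            ratio = (A @ H) / np.maximum(H @ (H.T @ H), 1e-12)
            H *= (1.0 - beta) + beta * ratio
        return H

    # Toy network: two 10-node cliques joined by one bridging edge.
    n = 20
    A = np.zeros((n, n))
    A[:10, :10] = A[10:, 10:] = 1.0
    np.fill_diagonal(A, 0.0)
    A[9, 10] = A[10, 9] = 1.0

    H = symmetric_nmf(A, k=2)
    labels = H.argmax(axis=1)   # community assignment per node
    print(labels)
    ```

    The paper's full model additionally factors in the community-community connection matrix B (A ~ H B H^T), which is what lets it represent inter-community edge probabilities explicitly rather than implicitly through H alone.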

  1. A Distributed-Memory Package for Dense Hierarchically Semi-Separable Matrix Computations Using Randomization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter

    In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
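
    The core randomized-sampling idea (compressing a numerically low-rank block from a few random matrix-vector products) can be sketched independently of STRUMPACK; the kernel, sizes, and fixed rank below are illustrative, and the library's adaptive mechanism grows the sample count on the fly instead:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def randomized_low_rank(B, rank, oversample=10):
        """Randomized range finder (Halko et al. style): B ~ Q (Q^T B)."""
        Omega = rng.standard_normal((B.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(B @ Omega)   # orthonormal basis for range(B)
        return Q, Q.T @ B

    # Off-diagonal blocks of smooth kernel matrices are numerically
    # low-rank; use a 1/(1+|x-y|) kernel between two separated clusters.
    x = np.linspace(0.0, 1.0, 300)
    y = np.linspace(5.0, 6.0, 300)
    B = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

    Q, C = randomized_low_rank(B, rank=8)
    err = np.linalg.norm(B - Q @ C) / np.linalg.norm(B)
    print(f"relative error with {Q.shape[1]} sampled columns: {err:.2e}")
    ```

    The payoff is that only products of B with a handful of random vectors are needed, which is exactly the access pattern that makes the approach attractive inside a distributed-memory HSS compression.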

  2. A Distributed-Memory Package for Dense Hierarchically Semi-Separable Matrix Computations Using Randomization

    DOE PAGES

    Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter; ...

    2016-06-30

    In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.

  3. RANDOM MATRIX DIAGONALIZATION--A COMPUTER PROGRAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuchel, K.; Greibach, R.J.; Porter, C.E.

    A computer program is described which generates random matrices, diagonalizes them and sorts appropriately the resulting eigenvalues and eigenvector components. FAP and FORTRAN listings for the IBM 7090 computer are included. (auth)
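
    A modern NumPy equivalent of what the program does (generate, diagonalize, sort) might look like this; the GOE normalization used here is one common convention, not necessarily the report's:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def goe_eigensystem(n):
        """Generate one GOE random matrix, diagonalize it, and return the
        eigenvalues in ascending order with matching eigenvectors."""
        G = rng.standard_normal((n, n))
        H = (G + G.T) / 2.0                   # real symmetric (GOE)
        eigvals, eigvecs = np.linalg.eigh(H)  # eigh returns sorted values
        return eigvals, eigvecs

    vals, vecs = goe_eigensystem(500)
    # With this normalization (off-diagonal variance 1/2) the eigenvalue
    # density approaches a Wigner semicircle of radius ~ sqrt(2*n).
    print(f"spectral edges: {vals[0]:.1f}, {vals[-1]:.1f}")
    ```

    For n = 500 the semicircle radius is about 31.6, so the extreme eigenvalues should land close to ±31.6; the eigenvector matrix returned by eigh is orthogonal, which is the property the original program's sorting step had to preserve.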

  4. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    Using optical image processing techniques, a novel text encryption and hiding method based on the double-random-phase-encoding technique is proposed in this paper. In the first step, the secret message is transformed into a two-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits are set to specific values. Then, the transformed array is encoded by the double-random-phase-encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with the hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.

  5. A Numerical Examination of the Long-Term Coherency of Meteoroid Streams in Near-Earth Orbit

    NASA Astrophysics Data System (ADS)

    Grazier, K. R.; Lipschutz, M. E.

    2000-05-01

    The statement that some small bodies in the Solar System--asteroids, comets, meteors (of cometary origin)--travel in co-orbital streams, would be accepted by planetary scientists without argument. After all, streams have been observed of fragments of at least one comet (Scotti and Melosh, 1993; Weaver et al., 1993), asteroids (Drummond, 1991; Rabinowitz et al., 1993; Binzel and Xu, 1993) and meteoroids of asteroidal origin, like Innisfree (Halliday et al., 1990; cf. Drummond, 1991). Whether members of a stream can be recognized from compositional studies of meteorites recovered on Earth and linked to a common source is more controversial since such linkage would imply variations in the Earth's sampling of extraterrestrial material that persist for tens of Myr. The dates of fall of H chondrites show that many - including Clusters in May, 1855-1895, September, 1812-1831 and Sept.-Oct., 1843-1992 -- apparently derive from specific meteoroids (Lipschutz et al., 1997). Contents of highly volatile elements in these 3 Clusters (selected by one criterion, fall circumstances), when analyzed using multivariate statistical techniques demonstrate that members of each Cluster (i.e. stream) are recognizable by a totally different characteristic criterion: a thermal history distinguishable from those of random H chondrite falls (cf. Lipschutz et al., 1997, for specific references). Antarctic H chondrites with terrestrial ages 50 Myr (Michlovich et al., 1995) also show this. Metallographic and thermoluminescence data for these H chondrites also reflect their thermal histories, and support the existence of such meteoroid streams (Sears et al., 1991; Benoit and Sears, 1993), but cosmogenic noble gas contents do not (Loeken et al., 1993; Schultz and Weber, 1996). Important unanswered orbital dynamic questions are how long a meteoroid stream should be recognizable and what dynamic conditions are implied by Clusters, whose members have cosmic ray exposure ages of some Myr. 
To begin to address these open issues, we simulate the trajectories of several near-Earth meteoroid streams--some with orbital elements corresponding to suspected streams, others randomly chosen. To integrate the trajectories as accurately as possible, we use an error-optimized modified 13th order Störmer integration scheme, capable of handling close planet/meteoroid approaches (Grazier et al., 1998). Using Drummond's (1979) d' criteria to determine stream membership and coherency as a function of time, we find that stream coherency beyond 100 Ky--certainly beyond 1 My--exists but is rare.

  6. Determination of impurities in uranium matrices by time-of-flight ICP-MS using matrix-matched method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buerger, Stefan; Riciputi, Lee R; Bostick, Debra A

    2007-01-01

    The analysis of impurities in uranium matrices is performed in a variety of fields, e.g. for quality control in the production stream converting uranium ores to fuels, as element signatures in nuclear forensics and safeguards, and for non-proliferation control. We have investigated the capabilities of time-of-flight ICP-MS for the analysis of impurities in uranium matrices using a matrix-matched method. The method was applied to the New Brunswick Laboratory CRM 124(1-7) series. For the seven certified reference materials, an overall precision and accuracy of approximately 5% and 14%, respectively, were obtained for 18 analyzed elements.

  7. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE PAGES

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...

    2018-03-06

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  8. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  9. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in United States with 1383 individuals.
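
    The data-generating mechanism the paper models can be sketched by simulation; every parameter below is illustrative, not a COMBINE estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Longitudinal zero-inflated Poisson counts where the random-intercept
    # variance is heterogeneous: it depends on a binary subject-level
    # covariate (say, treatment arm).
    n_subj, n_obs = 500, 6
    treat = rng.integers(0, 2, n_subj)        # covariate per subject
    re_sd = np.where(treat == 1, 0.3, 1.0)    # covariate-dependent RE std
    b = rng.normal(0.0, re_sd)                # random intercepts

    pi_zero = 0.4                             # structural-zero probability
    lam = np.exp(0.5 + b)[:, None].repeat(n_obs, axis=1)
    structural = rng.random((n_subj, n_obs)) < pi_zero
    counts = np.where(structural, 0, rng.poisson(lam))

    # The heterogeneity shows up as a much larger between-subject spread
    # of mean counts in the high-variance arm.
    var0 = counts[treat == 0].mean(axis=1).var()
    var1 = counts[treat == 1].mean(axis=1).var()
    print(f"zero fraction {(counts == 0).mean():.2f}, "
          f"between-subject variance: arm0 {var0:.2f}, arm1 {var1:.2f}")
    ```

    A model that forces a single homogeneous random-effects variance onto data like these has to compromise between the two arms, which is the source of the bias the simulation study documents.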

  10. Water chemistry in 179 randomly selected Swedish headwater streams related to forest production, clear-felling and climate.

    PubMed

    Löfgren, Stefan; Fröberg, Mats; Yu, Jun; Nisell, Jakob; Ranneby, Bo

    2014-12-01

    From a policy perspective, it is important to understand forestry effects on surface waters from a landscape perspective. The EU Water Framework Directive demands remedial actions if good ecological status is not achieved. In Sweden, 44 % of the surface water bodies have moderate ecological status or worse. Many of these drain catchments with a mosaic of managed forests. It is important for the forestry sector and water authorities to be able to identify where, in the forested landscape, special precautions are necessary. The aim of this study was to quantify the relations between forestry parameters and headwater stream concentrations of nutrients, organic matter and acid-base chemistry. The results are put into the context of regional climate, sulphur and nitrogen deposition, as well as marine influences. Water chemistry was measured in 179 randomly selected headwater streams from two regions in southwest and central Sweden, corresponding to 10 % of the Swedish land area. Forest status was determined from satellite images and Swedish National Forest Inventory data using the probabilistic classifier method, which was used to model stream water chemistry with Bayesian model averaging. The results indicate that concentrations of e.g. nitrogen, phosphorus and organic matter are related to factors associated with forest production, but that it is not forestry per se that causes the excess losses. Instead, factors simultaneously affecting forest production and stream water chemistry, such as climate, extensive soil pools and nitrogen deposition, are the most likely candidates. The relationships with clear-felled and wetland areas are likely to be direct effects.

  11. Streaming current for particle-covered surfaces: simulations and experiments

    NASA Astrophysics Data System (ADS)

    Blawzdziewicz, Jerzy; Adamczyk, Zbigniew; Ekiel-Jezewska, Maria L.

    2017-11-01

    Developing in situ methods for assessment of surface coverage by adsorbed nanoparticles is crucial for numerous technological processes, including controlling protein deposition and fabricating diverse microstructured materials (e.g., antibacterial coatings, catalytic surfaces, and particle-based optical systems). For charged surfaces and particles, promising techniques for evaluating surface coverage are based on measurements of the electrokinetic streaming current associated with ion convection in the double-layer region. We have investigated the dependence of the streaming current on the area fraction of adsorbed particles for equilibrium and random-sequential-adsorption (RSA) distributions of spherical particles, and for periodic square and hexagonal sphere arrays. The RSA results have been verified experimentally. Our numerical results indicate that the streaming current weakly depends on the microstructure of the particle monolayer. Combining simulations with the virial expansion, we provide convenient fitting formulas for the particle and surface contributions to the streaming current as functions of area fractions. For particles that have the same ζ-potential as the surface, we find that surface roughness reduces the streaming current. Supported by NSF Award No. 1603627.

  12. On efficient randomized algorithms for finding the PageRank vector

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game built from P and the identity matrix I on the unit simplex S_n(1) in ℝ^n. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
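The two ingredients of the first method, a random walk over the graph of P and the iterative process p_{t+1}^T = p_t^T P, can be sketched on a toy matrix. The 3 × 3 P below is purely illustrative (the paper targets n ~ 10^7-10^9, where neither dense iteration nor exhaustive walks are feasible):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3x3 row-stochastic matrix P standing in for the huge web matrix
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])

def pagerank_mcmc(P, n_steps=50_000, burn_in=1_000):
    """Estimate the stationary distribution p (p^T = p^T P) by counting
    visits of a long random walk driven by the rows of P."""
    n = P.shape[0]
    counts = np.zeros(n)
    state = 0
    for t in range(n_steps):
        state = rng.choice(n, p=P[state])
        if t >= burn_in:
            counts[state] += 1
    return counts / counts.sum()

p_mcmc = pagerank_mcmc(P)

# Reference: the iterative process p_{t+1}^T = p_t^T P (power iteration)
p = np.full(3, 1.0 / 3.0)
for _ in range(100):
    p = p @ P
```

For this small chain the visit frequencies of the walk agree with the power-iteration fixed point to within Monte Carlo error.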

  13. Full Equations (FEQ) model for the solution of the full, dynamic equations of motion for one-dimensional unsteady flow in open channels and through control structures

    USGS Publications Warehouse

    Franz, Delbert D.; Melching, Charles S.

    1997-01-01

    The Full EQuations (FEQ) model is a computer program for solution of the full, dynamic equations of motion for one-dimensional unsteady flow in open channels and through control structures. A stream system that is simulated by application of FEQ is subdivided into stream reaches (branches), parts of the stream system for which complete information on flow and depth is not required (dummy branches), and level-pool reservoirs. These components are connected by special features; that is, hydraulic control structures, including junctions, bridges, culverts, dams, waterfalls, spillways, weirs, side weirs, and pumps. The principles of conservation of mass and conservation of momentum are used to calculate the flow and depth throughout the stream system resulting from known initial and boundary conditions by means of an implicit finite-difference approximation at fixed points (computational nodes). The hydraulic characteristics of (1) branches, including top width, area, first moment of area with respect to the water surface, conveyance, and flux coefficients, and (2) special features (relations between flow and headwater and (or) tail-water elevations, including the operation of variable-geometry structures) are stored in function tables calculated in the companion program, Full EQuations UTiLities (FEQUTL). Function tables containing other information used in unsteady-flow simulation (boundary conditions, tributary inflows or outflows, gate settings, correction factors, characteristics of dummy branches and level-pool reservoirs, and wind speed and direction) are prepared by the user as detailed in this report. In the iterative solution scheme for flow and depth throughout the stream system, the model interpolates the function tables at the computational nodes. FEQ can be applied in the simulation of a wide range of stream configurations (including loops), lateral-inflow conditions, and special features. 
The accuracy and convergence of the numerical routines in the model are demonstrated for the case of laboratory measurements of unsteady flow in a sewer pipe. Verification of the routines in the model for field data on the Fox River in northeastern Illinois also is briefly discussed. The basic principles of unsteady-flow modeling and the relation between steady flow and unsteady flow are presented. Assumptions and the limitations of the model also are presented. The schematization of the stream system and the conversion of the physical characteristics of the stream reaches and a wide range of special features into function tables for model applications are described. The modified dynamic-wave equation used in FEQ for unsteady flow in curvilinear channels with drag on minor hydraulic structures and channel constrictions determined from an equivalent energy slope is developed. The matrix equation relating flows and depths at computational nodes throughout the stream system by the continuity (conservation of mass) and modified dynamic-wave equations is illustrated for four sequential examples. The solution of the matrix equation by Newton's method is discussed. Finally, the input for FEQ and the error messages and warnings issued are presented.
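The solution step, Newton's method applied to the nonlinear matrix equation relating flows and depths, can be sketched generically. The residual below is a toy two-unknown system, not the actual FEQ continuity and dynamic-wave residuals:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for a nonlinear system F(x) = 0: at each step,
    solve the linearized matrix equation J(x) dx = -F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(J(x), r)
    return x

# Toy residuals: x^2 + y^2 = 4 and x*y = 1 (two unknowns, two equations)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1], v[0]]])

sol = newton(F, J, [2.0, 0.3])
```

In FEQ the analogue of J is the large sparse Jacobian of the continuity and modified dynamic-wave equations over all computational nodes, so the linear solve dominates the cost of each iteration.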

  14. Entanglement spectrum of random-singlet quantum critical points

    NASA Astrophysics Data System (ADS)

    Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.

    2011-01-01

    The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different from that for conformally invariant critical points.

  15. Exploring multicollinearity using a random matrix theory approach.

    PubMed

    Feher, Kristen; Whelan, James; Müller, Samuel

    2012-01-01

    Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but instead characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby 'low' correlation between a pair of genes may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
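The central construction, a one-dimensional signal projected into many dimensions plus noise, is easy to simulate. The sizes, loadings, and noise level below are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes, sigma = 200, 30, 0.5

# One-dimensional latent signal projected into n_genes dimensions, plus noise
s = rng.normal(size=n_samples)
loadings = rng.uniform(0.5, 1.5, size=n_genes)   # per-gene response strengths
X = np.outer(s, loadings) + sigma * rng.normal(size=(n_samples, n_genes))

# Eigenspectrum of the gene-gene correlation matrix: a single dominant
# eigenvalue signals an effectively one-dimensional cluster despite the
# 30 nominal dimensions
C = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
```

Note that individual pairwise correlations in C still fluctuate with the noise, which is the abstract's warning: the collective eigenspectrum, not any single pair, reveals the dimensionality.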

  16. Random matrix theory and fund of funds portfolio optimisation

    NASA Astrophysics Data System (ADS)

    Conlon, T.; Ruskin, H. J.; Crane, M.

    2007-08-01

    The proprietary nature of Hedge Fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix which often has to be estimated using a relatively small sample of monthly returns data, which induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C, constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
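The cleaning step can be sketched with the standard Marchenko-Pastur bound from RMT. The sizes and the pure-noise returns below are illustrative, not hedge fund data, and the flatten-the-bulk rule is one common cleaning recipe rather than necessarily the exact procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 40          # months of returns, number of funds (illustrative)
q = N / T

# Pure-noise returns: RMT predicts the eigenvalues of C fall (asymptotically)
# inside the Marchenko-Pastur band [l_minus, l_plus]
returns = rng.normal(size=(T, N))
C = np.corrcoef(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)

l_plus = (1.0 + np.sqrt(q)) ** 2
l_minus = (1.0 - np.sqrt(q)) ** 2

# "Cleaning": keep eigenvalues above the RMT edge as signal and flatten the
# noise bulk to its mean, which preserves the trace of C
noise = eigvals <= l_plus
cleaned = eigvals.copy()
cleaned[noise] = eigvals[noise].mean()
C_clean = (eigvecs * cleaned) @ eigvecs.T
```

With real returns, the eigenvalues deviating above l_plus carry the strategy-group structure the abstract describes; here, with pure noise, essentially the whole spectrum sits inside the band.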

  17. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or workstation with an NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
    Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
    Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
    RAM: 512-732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
    Classification: 4.13, 6.5.
    Nature of problem: Many computational science applications consume large numbers of random numbers. For example, Monte Carlo simulations can consume limitless random numbers as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
    Running time: The tests provided take a few minutes to run.

  18. Application of a multipurpose unequal probability stream survey in the Mid-Atlantic Coastal Plain

    USGS Publications Warehouse

    Ator, S.W.; Olsen, A.R.; Pitchford, A.M.; Denver, J.M.

    2003-01-01

    A stratified, spatially balanced sample with unequal probability selection was used to design a multipurpose survey of headwater streams in the Mid-Atlantic Coastal Plain. Objectives for the survey include unbiased estimates of regional stream conditions, and adequate coverage of unusual but significant environmental settings to support empirical modeling of the factors affecting those conditions. The design and field application of the survey are discussed in light of these multiple objectives. A probability (random) sample of 175 first-order nontidal streams was selected for synoptic sampling of water chemistry and benthic and riparian ecology during late winter and spring 2000. Twenty-five streams were selected within each of seven hydrogeologic subregions (strata) that were delineated on the basis of physiography and surficial geology. In each subregion, unequal inclusion probabilities were used to provide an approximately even distribution of streams along a gradient of forested to developed (agricultural or urban) land in the contributing watershed. Alternate streams were also selected. Alternates were included in groups of five in each subregion when field reconnaissance demonstrated that primary streams were inaccessible or otherwise unusable. Despite the rejection and replacement of a considerable number of primary streams during reconnaissance (up to 40 percent in one subregion), the desired land use distribution was maintained within each hydrogeologic subregion without sacrificing the probabilistic design.
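The unequal-inclusion-probability idea alone can be sketched as follows. Everything here is hypothetical: the frame, the Beta-distributed land-use covariate, and the inverse-bin-density weighting are illustrative stand-ins, and the actual survey additionally used stratification by hydrogeologic subregion and spatial balancing, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical frame: 1000 candidate streams with a developed-land fraction,
# skewed toward forested watersheds (low values)
frame = rng.beta(0.5, 2.0, size=1000)

# Unequal inclusion probabilities chosen to flatten the land-use gradient:
# weight each stream inversely to how crowded its land-use bin is
bins = np.minimum((frame * 10).astype(int), 9)
counts = np.bincount(bins, minlength=10)
weights = 1.0 / counts[bins]
weights /= weights.sum()

sample_idx = rng.choice(1000, size=175, replace=False, p=weights)
sample = frame[sample_idx]
```

The resulting sample over-represents the rare developed watersheds relative to a simple random sample, which is the design goal of spreading sites evenly along the forested-to-developed gradient.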

  19. Communication Optimal Parallel Multiplication of Sparse Random Matrices

    DTIC Science & Technology

    2013-02-21

    Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity...structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix...where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That

  20. Online neural monitoring of statistical learning.

    PubMed

    Batterink, Laura J; Paller, Ken A

    2017-05-01

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. The causes of recurrent geomagnetic storms

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Lepping, R. P.

    1976-01-01

    The causes of recurrent geomagnetic activity were studied by analyzing interplanetary magnetic field and plasma data from earth-orbiting spacecraft in the interval from November 1973 to February 1974. This interval included the start of two long sequences of geomagnetic activity and two corresponding corotating interplanetary streams. In general, the geomagnetic activity was related to an electric field which was due to two factors: (1) the ordered, mesoscale pattern of the stream itself, and (2) random, smaller-scale fluctuations in the southward component of the interplanetary magnetic field Bz. The geomagnetic activity in each recurrent sequence consisted of two successive stages. The first stage was usually the most intense, and it occurred during the passage of the interaction region at the front of a stream. These large amplitudes of Bz were primarily produced in the interplanetary medium by compression of ambient fluctuations as the stream steepened in transit to 1 A.U. The second stage of geomagnetic activity immediately following the first was associated with the highest speeds in the stream.

  2. Occurrence and Nonoccurrence of Random Sequences: Comment on Hahn and Warren (2009)

    ERIC Educational Resources Information Center

    Sun, Yanlong; Tweney, Ryan D.; Wang, Hongbin

    2010-01-01

    On the basis of the statistical concept of waiting time and on computer simulations of the "probabilities of nonoccurrence" (p. 457) for random sequences, Hahn and Warren (2009) proposed that given people's experience of a finite data stream from the environment, the gambler's fallacy is not as gross an error as it might seem. We deal with two…

  3. Composition for absorbing hydrogen from gas mixtures

    DOEpatents

    Heung, Leung K.; Wicks, George G.; Lee, Myung W.

    1999-01-01

    A hydrogen storage composition is provided which defines a physical sol-gel matrix having an average pore size of less than 3.5 angstroms which effectively excludes gaseous metal hydride poisons while permitting hydrogen gas to enter. The composition is useful for separating hydrogen gas from diverse gas streams which may have contaminants that would otherwise render the hydrogen absorbing material inactive.

  4. Platelets and cancer: a casual or causal relationship: revisited

    PubMed Central

    Menter, David G.; Tucker, Stephanie C.; Kopetz, Scott; Sood, Anil K.; Crissman, John D.; Honn, Kenneth V.

    2014-01-01

    Human platelets arise as subcellular fragments of megakaryocytes in bone marrow. Physiologic demand, the presence of disease such as cancer, or drug effects can regulate the production of circulating platelets. Platelet biology is essential to hemostasis, vascular integrity, angiogenesis, inflammation, innate immunity, wound healing, and cancer biology. The most critical biological platelet response is serving as “First Responders” during the wounding process. The exposure of extracellular matrix proteins and intracellular components occurs after wounding. Numerous platelet receptors recognize matrix proteins that trigger platelet activation, adhesion, aggregation, and stabilization. Once activated, platelets change shape and degranulate to release growth factors and bioactive lipids into the blood stream. This cyclic process recruits and aggregates platelets and promotes thrombogenesis. This process facilitates wound closure or can recognize circulating pathologic bodies. Cancer cell entry into the blood stream triggers platelet-mediated recognition and is amplified by cell surface receptors, cellular products, extracellular factors, and immune cells. In some cases, these interactions suppress immune recognition and elimination of cancer cells, or promote arrest at the endothelium or entrapment in the microvasculature, and survival. This supports survival and spread of cancer cells and the establishment of secondary lesions to serve as important targets for prevention and therapy. PMID:24696047

  5. Biotreatment of refinery spent-sulfidic caustic using an enrichment culture immobilized in a novel support matrix.

    PubMed

    Conner, J A; Beitle, R R; Duncan, K; Kolhatkar, R; Sublette, K L

    2000-01-01

    Sodium hydroxide solutions are used in petroleum refining to remove hydrogen sulfide (H2S) and mercaptans from various hydrocarbon streams. The resulting sulfide-laden waste stream is called spent-sulfidic caustic. An aerobic enrichment culture was previously developed using a gas mixture of H2S and methyl-mercaptan (MeSH) as the sole energy source. This culture has now been immobilized in a novel support matrix, DuPont BIO-SEP beads, and is used to bio-treat a refinery spent-sulfidic caustic containing both inorganic sulfide and mercaptans in a continuous flow, fluidized-bed column bioreactor. Complete oxidation of both inorganic and organic sulfur to sulfate was observed with no breakthrough of H2S and < 2 ppmv of MeSH produced in the bioreactor outlet gas. Excessive buildup of sulfate (> 12 g/L) in the bioreactor medium resulted in an upset condition evidenced by excessive MeSH breakthrough. Therefore, bioreactor performance was limited by the steady-state sulfate concentration. Further improvement in volumetric productivity of a bioreactor system based on this enrichment culture will be dependent on maintenance of sulfate concentrations below inhibitory levels.

  6. Tensor Decompositions for Learning Latent Variable Models

    DTIC Science & Technology

    2012-12-08

    and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and...The reduction First, let W ∈ R^{d×k} be a linear transformation such that M2(W, W) = W^T M2 W = I, where I is the k × k identity matrix (i.e., W whitens M2)...approximate the whitening matrix W ∈ R^{d×k} from the second-moment matrix M2 ∈ R^{d×d}. To do this, one first multiplies M2 by a random matrix R ∈ R^{d×k′} for some k′ ≥ k
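The whitening transform quoted in the snippet can be sketched directly from an eigendecomposition. The low-rank M2 below is synthetic; the snippet's random-projection variant (multiplying M2 by a random R ∈ R^{d×k′} first) is an approximation used when d is large, and is omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 8, 3

# Synthetic rank-k second-moment matrix M2 = A A^T
A = rng.normal(size=(d, k))
M2 = A @ A.T

# Whitening transform W in R^{d x k} built from the top-k eigenpairs of M2,
# so that M2(W, W) = W^T M2 W equals the k x k identity
eigvals, eigvecs = np.linalg.eigh(M2)
top = np.argsort(eigvals)[-k:]
W = eigvecs[:, top] / np.sqrt(eigvals[top])
```

Applying the same W to the higher-order moment tensors is what reduces the tensor decomposition to an orthogonal problem in the whitened coordinates.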

  7. Structure of local interactions in complex financial dynamics

    PubMed Central

    Jiang, X. F.; Chen, T. T.; Zheng, B.

    2014-01-01

    With the network methods and random matrix theory, we investigate the interaction structure of communities in financial markets. In particular, based on the random matrix decomposition, we clarify that the local interactions between the business sectors (subsectors) are mainly contained in the sector mode. In the sector mode, the average correlation inside the sectors is positive, while that between the sectors is negative. Further, we explore the time evolution of the interaction structure of the business sectors, and observe that the local interaction structure changes dramatically during a financial bubble or crisis. PMID:24936906

  8. Conditional random matrix ensembles and the stability of dynamical systems

    NASA Astrophysics Data System (ADS)

    Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.

    2015-08-01

    Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
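The baseline i.i.d. setting the entry starts from can be sketched numerically; the sizes and the threshold behavior below follow the circular-law argument (spectrum of X confined to a disc of radius sigma·sqrt(n)), and the paper's point is precisely that real systems may violate this i.i.d. assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def stability_probability(n, sigma, trials=200):
    """Fraction of random Jacobians J = -I + X, with X_ij i.i.d. N(0, sigma^2),
    whose eigenvalues all have negative real part. The circular law confines
    the spectrum of X to a disc of radius sigma * sqrt(n) about the origin."""
    stable = 0
    for _ in range(trials):
        J = -np.eye(n) + sigma * rng.normal(size=(n, n))
        if np.linalg.eigvals(J).real.max() < 0.0:
            stable += 1
    return stable / trials

p_weak = stability_probability(n=50, sigma=0.05)    # radius ~0.35 < 1
p_strong = stability_probability(n=50, sigma=0.30)  # radius ~2.1  > 1
```

The sharp transition as sigma·sqrt(n) crosses 1 is the classic diversity-stability result; conditioning on realistic Jacobian structure, as the paper advocates, can shift or blur this threshold.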

  9. The effect of stochastic technique on estimates of population viability from transition matrix models

    USGS Publications Warehouse

    Kaye, T.N.; Pyke, David A.

    2003-01-01

    Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤ 100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. 
Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
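The matrix-selection method, drawing a whole observed matrix at each time step, can be sketched as follows. The three annual transition matrices are hypothetical stand-ins for field data, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical observed annual transition matrices for a 3-stage perennial
# plant (stages: seedling, juvenile, adult; top-right entries are fecundity)
matrices = [
    np.array([[0.0, 0.0, 2.0],
              [0.3, 0.4, 0.0],
              [0.0, 0.3, 0.85]]),
    np.array([[0.0, 0.0, 1.2],
              [0.2, 0.3, 0.0],
              [0.0, 0.2, 0.80]]),
    np.array([[0.0, 0.0, 3.1],
              [0.4, 0.5, 0.0],
              [0.0, 0.4, 0.90]]),
]

def stochastic_growth_rate(matrices, years=5000):
    """Matrix-selection method: draw a whole observed matrix at each time
    step and average the log of the yearly growth increments."""
    n = np.ones(3)
    log_growth = 0.0
    for _ in range(years):
        A = matrices[rng.integers(len(matrices))]
        n = A @ n
        total = n.sum()
        log_growth += np.log(total)
        n /= total          # renormalize to avoid overflow/underflow
    return np.exp(log_growth / years)

lam_s = stochastic_growth_rate(matrices)
```

Because whole observed matrices are resampled, this method needs no distributional assumption on individual transition elements, which is exactly the assumption the element-drawing methods compared in the paper must make.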

  10. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed. On this basis, a simulation is done to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, whereas the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
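The core reconstruction, preset DFT-structured illumination plus pseudo-inverse recovery, can be sketched in one dimension. The 64-pixel object and full sampling below are illustrative simplifications (the paper's setting is 2-D imaging, and physical illumination would use real-valued cosine patterns rather than complex DFT rows):

```python
import numpy as np

n = 64
obj = np.zeros(n)
obj[20:30] = 1.0                      # simple 1-D "object"

# Preset illumination: rows of the discrete Fourier transform matrix
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
Phi = F[:n]                           # full sampling: measurements = pixels
y = Phi @ obj                         # bucket measurements

# Pseudo-inverse reconstruction
recon = np.real(np.linalg.pinv(Phi) @ y)
err = np.linalg.norm(recon - obj) / np.linalg.norm(obj)
```

With the number of measurements equal to the number of pixels, the DFT matrix has full rank and the pseudo-inverse recovers the object essentially exactly; the abstract's comparisons concern how gracefully this degrades as measurements are reduced.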

  11. Evaluation of a standardized treatment regimen of anti-tuberculosis drugs for patients with multi-drug-resistant tuberculosis (STREAM): study protocol for a randomized controlled trial.

    PubMed

    Nunn, Andrew J; Rusen, I D; Van Deun, Armand; Torrea, Gabriela; Phillips, Patrick P J; Chiang, Chen-Yuan; Squire, S Bertel; Madan, Jason; Meredith, Sarah K

    2014-09-09

    In contrast to drug-sensitive tuberculosis, the guidelines for the treatment of multi-drug-resistant tuberculosis (MDR-TB) have a very poor evidence base; current recommendations, based on expert opinion, are that patients should be treated for a minimum of 20 months. A series of cohort studies conducted in Bangladesh identified a nine-month regimen with very promising results. There is a need to evaluate this regimen in comparison with the currently recommended regimen in a randomized controlled trial in a variety of settings, including patients with HIV-coinfection. STREAM is a multi-centre randomized trial of non-inferiority design comparing a nine-month regimen to the treatment currently recommended by the World Health Organization in patients with MDR pulmonary TB with no evidence on line probe assay of fluoroquinolone or kanamycin resistance. The nine-month regimen includes clofazimine and high-dose moxifloxacin and can be extended to 11 months in the event of delay in smear conversion. The primary outcome is based on the bacteriological status of the patients at 27 months post-randomization. Based on the assumption that the nine-month regimen will be slightly more effective than the control regimen and, given a 10% margin of non-inferiority, a total of 400 patients are required to be enrolled. Health economics data are being collected on all patients in selected sites. The results from the study in Bangladesh and cohorts in progress elsewhere are encouraging, but for this regimen to be recommended more widely than in a research setting, robust evidence is needed from a randomized clinical trial. Results from the STREAM trial together with data from ongoing cohorts should provide the evidence necessary to revise current recommendations for the treatment of MDR-TB. This trial was registered with clinicaltrials.gov (registration number: ISRCTN78372190) on 14 October 2010.

  12. Copolymers For Capillary Gel Electrophoresis

    DOEpatents

    Liu, Changsheng; Li, Qingbo

    2005-08-09

    This invention relates to an electrophoresis separation medium having a gel matrix of at least one random, linear copolymer comprising a primary comonomer and at least one secondary comonomer, wherein the comonomers are randomly distributed along the copolymer chain. The primary comonomer is an acrylamide or an acrylamide derivative that provides the primary physical, chemical, and sieving properties of the gel matrix. The at least one secondary comonomer imparts an inherent physical, chemical, or sieving property to the copolymer chain. The primary and secondary comonomers are present in a ratio sufficient to induce desired properties that optimize electrophoresis performance. The invention also relates to a method of separating a mixture of biological molecules using this gel matrix, a method of preparing the novel electrophoresis separation medium, and a capillary tube filled with the electrophoresis separation medium.

  13. Basin characteristics, history of stream gaging, and statistical summary of selected streamflow records for the Rapid Creek basin, western South Dakota

    USGS Publications Warehouse

    Driscoll, Daniel G.; Zogorski, John S.

    1990-01-01

    The report presents a summary of basin characteristics affecting streamflow, a history of the U.S. Geological Survey's stream-gaging program, and a compilation of discharge records and statistical summaries for selected sites within the Rapid Creek basin. It is the first in a series that will investigate surface-water/groundwater relations along Rapid Creek. The summary of basin characteristics includes descriptions of the geology and hydrogeology, physiography and climate, land use and vegetation, reservoirs, and water use within the basin. A recounting of the U.S. Geological Survey's stream-gaging program and a tabulation of historic stream-gaging stations within the basin are furnished. A compilation of monthly and annual mean discharge values for nine currently operated, long-term, continuous-record, streamflow-gaging stations on Rapid Creek is presented. The statistical summary for each site includes summary statistics on monthly and annual mean values, a correlation matrix for monthly values, serial correlation at a 1-year lag for monthly values, percentile rankings for monthly and annual mean values, low- and high-value tables, duration curves, and peak-discharge tables. Records of month-end contents for two reservoirs within the basin are also presented. (USGS)

  14. Identifying high energy density stream-reaches through refined geospatial resolution in hydropower resource assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasha, M. Fayzul K.; Yang, Majntxov; Yeasmin, Dilruba

    Benefiting from the rapid development of multiple geospatial data sets on topography, hydrology, and existing energy-water infrastructure, reconnaissance-level hydropower resource assessment can now be conducted using geospatial models in all regions of the US. Furthermore, the updated techniques can be used to estimate the total undeveloped hydropower potential across all regions, and may eventually help identify hydropower opportunities that were previously overlooked. To enhance the characterization of higher-energy-density stream-reaches, this paper explored the sensitivity of geospatial resolution in the identification of hydropower stream-reaches using the geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. GMM-HRA model simulations were conducted at eight different spatial resolutions on six U.S. Geological Survey (USGS) 8-digit hydrologic units (HUC8) located in three different terrains: Flat, Mild, and Steep. The results showed that more hydropower potential from higher-energy-density stream-reaches can be identified with increasing spatial resolution. Both Flat and Mild terrains exhibited lower impacts compared to the Steep terrain. Consequently, greater attention should be applied when selecting the discretization resolution for future hydropower resource assessments.

  15. Identifying high energy density stream-reaches through refined geospatial resolution in hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yang, Majntxov; Yeasmin, Dilruba; ...

    2016-01-07

    Benefiting from the rapid development of multiple geospatial data sets on topography, hydrology, and existing energy-water infrastructure, reconnaissance-level hydropower resource assessment can now be conducted using geospatial models in all regions of the US. Furthermore, the updated techniques can be used to estimate the total undeveloped hydropower potential across all regions, and may eventually help identify hydropower opportunities that were previously overlooked. To enhance the characterization of higher-energy-density stream-reaches, this paper explored the sensitivity of geospatial resolution in the identification of hydropower stream-reaches using the geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. GMM-HRA model simulations were conducted at eight different spatial resolutions on six U.S. Geological Survey (USGS) 8-digit hydrologic units (HUC8) located in three different terrains: Flat, Mild, and Steep. The results showed that more hydropower potential from higher-energy-density stream-reaches can be identified with increasing spatial resolution. Both Flat and Mild terrains exhibited lower impacts compared to the Steep terrain. Consequently, greater attention should be applied when selecting the discretization resolution for future hydropower resource assessments.

  16. Removal of Stationary Sinusoidal Noise from Random Vibration Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian; Cap, Jerome S.

    In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios, down to 0.02, only the matrix inversion technique can remove the tones, but the metrics used to evaluate its effectiveness also degrade. We also found that tonal noise was best identified using fast Fourier transforms, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
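
    A minimal sketch of the two-step procedure described above, under our own assumptions (the abstract does not publish the actual algorithms): the tone frequency is identified from the FFT magnitude peak, and the "matrix inversion" removal is represented by a least-squares fit of a sine/cosine pair at that frequency, subtracted from the signal.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1024.0, 4096
t = np.arange(n) / fs

# Prefabricated data: white random vibration plus a stationary 100 Hz tone
# (sine-to-random ratio 0.5, above the 0.25 threshold reported above).
random_part = rng.standard_normal(n)
signal = random_part + 0.5 * np.sin(2 * np.pi * 100.0 * t + 0.3)

# Step 1: identify the tonal frequency from the FFT magnitude peak.
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f_est = freqs[1:][np.argmax(spec[1:])]   # skip the DC bin

# Step 2: least-squares ("matrix inversion") fit of a sine/cosine pair at
# the identified frequency, then subtract the fitted tone.
A = np.column_stack([np.sin(2 * np.pi * f_est * t),
                     np.cos(2 * np.pi * f_est * t)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
cleaned = signal - A @ coef
```

    Because the fabricated tone here sits exactly on an FFT bin, the fit removes it essentially completely; real test data with mixed-phase or off-bin tones degrades this, as the study found.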

  17. Simple Emergent Power Spectra from Complex Inflationary Physics

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2016-09-01

    We construct ensembles of random scalar potentials for Nf-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf=O (few ), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf≫1 , the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.

  18. Simple Emergent Power Spectra from Complex Inflationary Physics.

    PubMed

    Dias, Mafalda; Frazer, Jonathan; Marsh, M C David

    2016-09-30

    We construct ensembles of random scalar potentials for N_{f}-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For N_{f}=O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For N_{f}≫1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large N_{f} universality of random matrix theory.

  19. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we use a delay in the generation of the time series. When these new series are mapped as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
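
    A minimal sketch of the idea, with our own illustrative parameter values and thresholding rule (the paper's exact construction, which combines two lag series with positive and negative bifurcation parameters, is more elaborate):

```python
def lag_logistic_bits(n_bits, r=3.99, x0=0.376, lag=7):
    """Sketch of a lag-time-series PRBG: iterate the logistic map, delay the
    output by `lag` steps to hide the underlying map, and threshold to bits.
    (The parameter values and the 0.5 threshold are illustrative choices.)"""
    x = x0
    series = []
    for _ in range(n_bits + lag):
        x = r * x * (1.0 - x)
        series.append(x)
    delayed = series[lag:]                    # the lagged time series
    return [1 if v >= 0.5 else 0 for v in delayed[:n_bits]]

bits = lag_logistic_bits(10000)
```

    A real deployment would, as the abstract notes, still need to pass the NIST statistical test suite before use in a stream cipher.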

  20. Resonant Drag Instability of Grains Streaming in Fluids

    NASA Astrophysics Data System (ADS)

    Squire, J.; Hopkins, P. F.

    2018-03-01

    We show that grains streaming through a fluid are generically unstable if their velocity, projected along some direction, matches the phase velocity of a fluid wave (linear oscillation). This can occur whenever grains stream faster than any fluid wave. The wave itself can be quite general—sound waves, magnetosonic waves, epicyclic oscillations, and Brunt–Väisälä oscillations each generate instabilities, for example. We derive a simple expression for the growth rates of these “resonant drag instabilities” (RDI). This expression (i) illustrates why such instabilities are so virulent and generic and (ii) allows for simple analytic computation of RDI growth rates and properties for different fluids. As examples, we introduce several new instabilities, which could see application across a variety of physical systems from atmospheres to protoplanetary disks, the interstellar medium, and galactic outflows. The matrix-based resonance formalism we introduce can also be applied more generally in other (nonfluid) contexts, providing a simple means for calculating and understanding the stability properties of interacting systems.

  1. Cell and method for electrolysis of water and anode

    NASA Technical Reports Server (NTRS)

    Aylward, J. R. (Inventor)

    1981-01-01

    An electrolytic cell for converting water vapor to oxygen and hydrogen includes an anode comprising a foraminous conductive metal substrate with a 65-85 weight percent iridium oxide coating and 15-35 weight percent of a high-temperature resin binder. A matrix member contains an electrolyte to which the cathode is substantially inert. The foraminous metal member is most desirably expanded tantalum mesh, and the cell desirably includes reservoir elements of porous sintered metal in contact with the anode to receive and discharge electrolyte to the matrix member as required. Upon entry of a water-vapor-containing air stream into contact with the outer surface of the anode, and thence with the iridium oxide coating, the water vapor is electrolytically converted to hydrogen ions and oxygen; the hydrogen ions migrate through the matrix to the cathode, while the oxygen gas produced at the anode enriches the air stream passing by the anode.

  2. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation, P(sub b) approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with randomly generated generator matrices, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  3. Partial Removal of Nail Matrix in the Treatment of Ingrown Nails: Prospective Randomized Control Study Between Curettage and Electrocauterization.

    PubMed

    Kim, Maru; Song, In-Guk; Kim, Hyung Jin

    2015-06-01

    The aim of this study was to compare the results of electrocauterization and curettage, both of which can be done with basic instruments. Patients with ingrown nail were randomized to 2 groups: in the first group, the nail matrix was removed by curettage, and in the second group, the nail matrix was removed by electrocautery. A total of 61 patients were enrolled; 32 patients were operated on by curettage, and 29 patients by electrocautery. Wound infections, as an early complication, were found in 15.6% (5/32) of the curettage group and 10.3% (3/29) of the electrocautery group (P = .710). Nonrecurrence was observed in 93.8% (30/32) and 86.2% (25/29) of the curettage and electrocautery groups, respectively (lower limit of 1-sided 90% confidence interval = -2.3% > -15% [noninferiority margin]). For removing the nail matrix, curettage is as effective as electrocauterization. Further study is required to determine the differences between the procedures. © The Author(s) 2014.

  4. Regional flood-frequency relations for streams with many years of no flow

    USGS Publications Warehouse

    Hjalmarson, Hjalmar W.; Thomas, Blakemore E.; ,

    1990-01-01

    In the southwestern United States, flood-frequency relations for streams that drain small arid basins are difficult to estimate, largely because of the extreme temporal and spatial variability of floods and the many years of no flow. A method is proposed that is based on the station-year method. The new method produces regional flood-frequency relations using all available annual peak-discharge data. The prediction errors for the relations are directly assessed using randomly selected subsamples of the annual peak discharges.

  5. No Bit Left Behind: The Limits of Heap Data Compression

    DTIC Science & Technology

    2008-06-01

    Lempel-Ziv compression is non-lossy, in other words, the original data can be fully recovered by decompression. Unlike the data representations for most...of the other models, Lempel-Ziv compressed data does not permit random access, let alone in-place update. To compute this model as accurately as...of the collection, we print the size of the full stream, i.e., all live data in the heap. We then apply Lempel-Ziv compression to the stream

  6. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with the logistic map. The random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
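
    The keying idea — drive the first row of a circulant matrix with a logistic map so that a single scalar seed regenerates the whole measurement matrix — can be sketched as follows. The function names, burn-in length, and normalization are our assumptions, not the paper's specification:

```python
import numpy as np

def logistic_sequence(n, x0, r=3.99, burn=200):
    """Logistic-map sequence seeded by the key x0 (transient discarded)."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def keyed_measurement_matrix(m, n, x0):
    """Key-controlled circulant measurement matrix (sketch): the logistic
    sequence forms the first row, the remaining rows are cyclic shifts, and
    only m < n rows are kept so the matrix also compresses."""
    row = 2.0 * logistic_sequence(n, x0) - 1.0      # map (0, 1) -> (-1, 1)
    C = np.empty((n, n))
    for i in range(n):
        C[i] = np.roll(row, i)
    return C[:m] / np.sqrt(m)

Phi = keyed_measurement_matrix(32, 64, x0=0.3141)
```

    Because the logistic map is chaotic, a tiny change in the key x0 yields an entirely different matrix, while the same key reproduces it exactly — which is what lets the key stay a single scalar instead of the full matrix.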

  7. Enhancement of cooperation in the spatial prisoner's dilemma with a coherence-resonance effect through annealed randomness at a cooperator-defector boundary; comparison of two variant models

    NASA Astrophysics Data System (ADS)

    Tanimoto, Jun

    2016-11-01

    Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner after reaching an interim equilibrium, so as to break a stalemate while seeking a higher payoff, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with added random noise instead of the original payoff matrix. Numerical simulation revealed that mechanisms based on the annealing of randomness, due to either the action error or the payoff noise, can significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.

  8. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
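
    A minimal sketch of the described procedure, with assumed details (number of permutations, p-value convention): the criterion statistic is the largest eigenvalue of the correlation matrix, and the null distribution is built by randomly re-ordering k-1 of the k variables.

```python
import numpy as np

def randomization_test(X, n_perm=500, seed=0):
    """Randomization test of association among k variables: the criterion is
    the largest eigenvalue of the correlation matrix; k-1 of the k columns
    are independently re-ordered to build the null distribution."""
    rng = np.random.default_rng(seed)
    largest_eig = lambda M: np.linalg.eigvalsh(np.corrcoef(M, rowvar=False))[-1]
    observed = largest_eig(X)
    exceed = 0
    Xp = X.copy()
    for _ in range(n_perm):
        for j in range(1, X.shape[1]):        # re-order k-1 of the variables
            Xp[:, j] = rng.permutation(X[:, j])
        if largest_eig(Xp) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# Strongly associated synthetic "test scores" should yield a small p-value.
rng = np.random.default_rng(42)
base = rng.standard_normal(200)
X = np.column_stack([base,
                     base + 0.3 * rng.standard_normal(200),
                     base + 0.3 * rng.standard_normal(200)])
obs, p = randomization_test(X)
```

    For associated variables the observed largest eigenvalue sits far above the permutation distribution, so the p-value bottoms out near 1/(n_perm + 1); for independent columns it is roughly uniform.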

  9. Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments

    DTIC Science & Technology

    2013-12-11

    positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to...of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random...sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for

  10. Market Correlation Structure Changes Around the Great Crash: A Random Matrix Theory Analysis of the Chinese Stock Market

    NASA Astrophysics Data System (ADS)

    Han, Rui-Qi; Xie, Wen-Jie; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing

    The correlation structure of a stock market contains important financial content, which may change remarkably due to the occurrence of a financial crisis. We perform a comparative analysis of the Chinese stock market around the occurrence of the 2008 crisis based on random matrix analysis of high-frequency stock returns of 1228 Chinese stocks. Both the raw correlation matrix and the partial correlation matrix with respect to the market index, in two time periods of one year each, are investigated. We find that the Chinese stocks have stronger average correlation and partial correlation in 2008 than in 2007 and that the average partial correlation is significantly weaker than the average correlation in each period. Accordingly, the largest eigenvalue of the correlation matrix is remarkably greater than that of the partial correlation matrix in each period. Moreover, each largest eigenvalue and its eigenvector reflect an evident market effect, while the other deviating eigenvalues do not. We find no evidence that the deviating eigenvalues contain industrial sector information. Surprisingly, the eigenvectors of the second largest eigenvalue in 2007 and of the third largest eigenvalue in 2008 are able to distinguish the stocks from the two exchanges. We also find that the component magnitudes of some of the largest eigenvectors are proportional to the stocks' capitalizations.

  11. Random matrix approach to cross correlations in financial data

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis" - a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-,λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound displays systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
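
    The RMT bounds [λ-, λ+] used above are the Marchenko-Pastur limits for the eigenvalues of a correlation matrix built from N mutually uncorrelated series of length T: λ± = (1 ± 1/√Q)² with Q = T/N. A short sketch of the null-hypothesis check (illustrative sizes, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 100, 500                 # number of stocks, number of return samples
Q = T / N

# Marchenko-Pastur bounds for a random correlation matrix.
lam_minus = (1.0 - 1.0 / np.sqrt(Q)) ** 2
lam_plus = (1.0 + 1.0 / np.sqrt(Q)) ** 2

returns = rng.standard_normal((T, N))   # mutually uncorrelated "returns"
C = np.corrcoef(returns, rowvar=False)  # the null-hypothesis matrix
eigs = np.linalg.eigvalsh(C)
frac_inside = np.mean((eigs >= lam_minus) & (eigs <= lam_plus))
```

    For real return data, the eigenvalues escaping [λ-, λ+] (and their eigenvectors) are the candidates for genuine market and sector structure, as the abstract describes.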

  12. Random versus fixed-site sampling when monitoring relative abundance of fishes in headwater streams of the upper Colorado River basin

    USGS Publications Warehouse

    Quist, M.C.; Gerow, K.G.; Bower, M.R.; Hubert, W.A.

    2006-01-01

    Native fishes of the upper Colorado River basin (UCRB) have declined in distribution and abundance due to habitat degradation and interactions with nonnative fishes. Consequently, monitoring populations of both native and nonnative fishes is important for conservation of native species. We used data collected from Muddy Creek, Wyoming (2003-2004), to compare sample size estimates using a random and a fixed-site sampling design to monitor changes in catch per unit effort (CPUE) of native bluehead suckers Catostomus discobolus, flannelmouth suckers C. latipinnis, roundtail chub Gila robusta, and speckled dace Rhinichthys osculus, as well as nonnative creek chub Semotilus atromaculatus and white suckers C. commersonii. When one-pass backpack electrofishing was used, detection of 10% or 25% changes in CPUE (fish/100 m) at 60% statistical power required 50-1,000 randomly sampled reaches, depending on species, regardless of sampling design. However, use of a fixed-site sampling design with 25-50 reaches greatly enhanced the ability to detect changes in CPUE. The addition of seining did not appreciably reduce required effort. When detection of 25-50% changes in CPUE of native and nonnative fishes is acceptable, we recommend establishment of 25-50 fixed reaches sampled by one-pass electrofishing in Muddy Creek. Because Muddy Creek has habitat and fish assemblages characteristic of other headwater streams in the UCRB, our results are likely to apply to many other streams in the basin. Copyright by the American Fisheries Society 2006.

  13. Using regression methods to estimate stream phosphorus loads at the Illinois River, Arkansas

    USGS Publications Warehouse

    Haggard, B.E.; Soerens, T.S.; Green, W.R.; Richards, R.P.

    2003-01-01

    The development of total maximum daily loads (TMDLs) requires evaluating existing constituent loads in streams. Accurate estimates of constituent loads are needed to calibrate watershed and reservoir models for TMDL development. The best approach to estimating constituent loads is high-frequency sampling, particularly during storm events, with mass integration of the constituents passing a point in the stream. Most often, resources are limited, and discrete water-quality samples are collected at fixed intervals, sometimes supplemented with directed sampling during storm events. When resources are limited, mass integration is not an accurate means of determining constituent loads, and other estimation techniques such as regression models are used. The objective of this work was to determine the minimum number of water-quality samples needed to provide constituent concentration data adequate for estimating constituent loads in a large stream. Twenty sets of water-quality samples, with and without supplemental storm samples, were randomly selected at various fixed intervals from a database for the Illinois River, northwest Arkansas. The random sets were used to estimate total phosphorus (TP) loads using regression models, and the regression-based annual TP loads were compared to the integrated annual TP load estimated using all the data. At a minimum, monthly sampling plus supplemental storm samples (six samples per year) was needed to produce a root mean square error of less than 15%. Water-quality samples should be collected at least semi-monthly (every 15 days) in studies shorter than two years if seasonal time factors are to be used in the regression models. Annual TP loads estimated from independently collected discrete water-quality samples further demonstrated the utility of regression models for estimating annual TP loads in this stream system.
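
    The regression approach can be sketched with a synthetic record: fit a log-log rating relation between load and discharge from a sparse subsample, apply a retransformation bias correction, and sum the daily predictions to an annual load. All values, the 24-sample design, and the specific correction are our illustrative assumptions, not the study's model (which could also include seasonal time factors):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic daily record: discharge Q and total-phosphorus load following a
# power-law rating relation with lognormal scatter (illustrative values).
Q = np.exp(rng.normal(2.0, 0.8, 365))
load = 0.05 * Q ** 1.4 * np.exp(rng.normal(0.0, 0.2, 365))
annual_true = load.sum()

# Regression estimate from a sparse record of 24 sampled days per year.
idx = rng.choice(365, size=24, replace=False)
slope, intercept = np.polyfit(np.log(Q[idx]), np.log(load[idx]), 1)
resid = np.log(load[idx]) - (intercept + slope * np.log(Q[idx]))
bias = np.exp(resid.var() / 2.0)        # log-retransformation correction
annual_est = (np.exp(intercept + slope * np.log(Q)) * bias).sum()
rel_error = abs(annual_est - annual_true) / annual_true
```

    The point of the exercise mirrors the study's: with only a few dozen well-placed samples, the regression recovers the annual load to within a modest relative error of the mass-integrated "truth".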

  14. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.

  15. Using LiDAR to Estimate Surface Erosion Volumes within the Post-storm 2012 Bagley Fire

    NASA Astrophysics Data System (ADS)

    Mikulovsky, R. P.; De La Fuente, J. A.; Mondry, Z. J.

    2014-12-01

    The total post-storm 2012 Bagley fire sediment budget of the Squaw Creek watershed in the Shasta-Trinity National Forest was estimated using many methods. A portion of the budget was quantitatively estimated using LiDAR. Simple workflows were designed to estimate the eroded volume's of debris slides, fill failures, gullies, altered channels and streams. LiDAR was also used to estimate depositional volumes. Thorough manual mapping of large erosional features using the ArcGIS 10.1 Geographic Information System was required as these mapped features determined the eroded volume boundaries in 3D space. The 3D pre-erosional surface for each mapped feature was interpolated based on the boundary elevations. A surface difference calculation was run using the estimated pre-erosional surfaces and LiDAR surfaces to determine volume of sediment potentially delivered into the stream system. In addition, cross sections of altered channels and streams were taken using stratified random selection based on channel gradient and stream order respectively. The original pre-storm surfaces of channel features were estimated using the cross sections and erosion depth criteria. Open source software Inkscape was used to estimate cross sectional areas for randomly selected channel features and then averaged for each channel gradient and stream order classes. The average areas were then multiplied by the length of each class to estimate total eroded altered channel and stream volume. Finally, reservoir and in-channel depositional volumes were estimated by mapping channel forms and generating specific reservoir elevation zones associated with depositional events. The in-channel areas and zones within the reservoir were multiplied by estimated and field observed sediment thicknesses to attain a best guess sediment volume. In channel estimates included re-occupying stream channel cross sections established before the fire. 
Once volumes were calculated, other erosion processes of the Bagley sedimentation study, such as surface soil erosion, were combined to estimate the total fire and storm sediment budget for the Squaw Creek watershed. The LiDAR-based measurement workflows can be easily applied to other sediment budget studies using a single high-resolution LiDAR dataset.
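The surface-differencing step described above reduces to subtracting the post-storm DEM from the interpolated pre-erosion surface and summing the positive cells. A minimal sketch follows; the grids, cell size, and elevations are invented for illustration, not data from the study:

```python
import numpy as np

# Toy 1 m-resolution grids for one mapped erosional feature; elevations
# (in metres) are invented for illustration, not data from the study.
cell_area = 1.0  # m^2 per DEM cell
pre_surface = np.array([[10.0, 10.2],
                        [10.1, 10.3]])   # interpolated pre-erosion surface
post_surface = np.array([[9.0, 9.5],
                         [9.6, 10.3]])   # post-storm LiDAR surface

# Surface difference: positive cells lost elevation, i.e. eroded.
diff = pre_surface - post_surface
eroded_volume = float(np.sum(diff[diff > 0]) * cell_area)
print(round(eroded_volume, 2))  # 2.2 m^3 potentially delivered to the stream
```

The same difference grid with the sign reversed would give depositional volumes, mirroring the workflow in the abstract.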

  16. Forecasting fish biomasses, densities, productions, and bioaccumulation potentials of Mid-Atlantic wadeable streams.

    PubMed

    Barber, M Craig; Rashleigh, Brenda; Cyterski, Michael J

    2016-01-01

    Regional fishery conditions of Mid-Atlantic wadeable streams in the eastern United States are estimated using the Bioaccumulation and Aquatic System Simulator (BASS) bioaccumulation and fish community model and data collected by the US Environmental Protection Agency's Environmental Monitoring and Assessment Program (EMAP). Average annual biomasses and population densities and annual productions are estimated for 352 randomly selected streams. Realized bioaccumulation factors (BAF) and biomagnification factors (BMF), which are dependent on these forecasted biomasses, population densities, and productions, are also estimated by assuming constant water exposures to methylmercury and tetra-, penta-, hexa-, and hepta-chlorinated biphenyls. Using observed biomasses, observed densities, and estimated annual productions of total fish from 3 regions assumed to support healthy fisheries as benchmarks (eastern Tennessee and Catskill Mountain trout streams and Ozark Mountains smallmouth bass streams), 58% of the region's wadeable streams are estimated to be in marginal or poor condition (i.e., not healthy). Using simulated BAFs and EMAP Hg fish concentrations, we also estimate that approximately 24% of the game fish and subsistence fishing species that are found in streams having detectable Hg concentrations would exceed an acceptable human consumption criterion of 0.185 μg/g wet wt. Importantly, such streams have been estimated to represent 78.2% to 84.4% of the Mid-Atlantic's wadeable stream lengths. Our results demonstrate how a dynamic simulation model can support regional assessment and trends analysis for fisheries. © 2015 SETAC.

  17. Detection Performance of Horizontal Linear Hydrophone Arrays in Shallow Water.

    DTIC Science & Technology

    1980-12-15

    [Garbled OCR of the scanned report's notation list and equation fragments. Recoverable content: a symbol glossary (random phase; gain G; angle interval; covariance matrix; processor vector h; matched filter / generalized beamformer matrix H; unity matrix I) and gain expressions for the optimum linear processor (OLP) built from the quadratic forms h*Ph and h*Qh; at broadside the signal covariance matrix reduces to a dyadic, P = s s*.]

  18. The wasteland of random supergravities

    NASA Astrophysics Data System (ADS)

    Marsh, David; McAllister, Liam; Wrase, Timm

    2012-03-01

    We show that in a general 𝒩 = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(-cN^p), with c and p constants. For generic critical points we find p ≈ 1.5, while for approximately supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
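The random matrix model above can be sampled numerically to see the mixed-sign spectrum. The sketch below forms a Wigner matrix plus two Wishart terms and counts negative eigenvalues; the relative signs and weights of the terms are schematic stand-ins, not the precise Hessian structure derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # number of scalar fields (illustrative)

def wigner(n):
    """Symmetric Wigner matrix; semicircle spectrum on [-2, 2]."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / np.sqrt(2 * n)

def wishart(n):
    """Wishart matrix B B^T; Marchenko-Pastur spectrum."""
    B = rng.standard_normal((n, n)) / np.sqrt(n)
    return B @ B.T

# Schematic stand-in for the Hessian model: one Wigner plus two Wishart
# terms (signs/weights here are illustrative, not the paper's derivation).
H = wigner(N) + wishart(N) - wishart(N)

eigs = np.linalg.eigvalsh(H)
frac_negative = float(np.mean(eigs < 0))
print(0.0 < frac_negative < 1.0)  # typical configurations mix both signs
```

A histogram of `eigs` for large N would approximate the free convolution of the semicircle and Marchenko-Pastur laws discussed in the abstract.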

  19. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iteration for a large number of unknown model parameters, and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
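The "randomized matrix algebra" the abstract leans on is typified by the randomized range finder: sketch a large matrix with a random test matrix, orthonormalize, and project. A minimal sketch under invented dimensions (this is the generic technique, not the MADS/QLGA code):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, rank = 200, 150, 10
# A low-rank matrix standing in for a large sensitivity/covariance operator.
A = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))

# Randomized range finder: sketch A with a random test matrix, then
# orthonormalize; Q captures the range of A with high probability.
Omega = rng.standard_normal((n, rank + 5))   # slight oversampling
Q, _ = np.linalg.qr(A @ Omega)
A_approx = Q @ (Q.T @ A)                     # low-rank approximation of A

rel_err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
print(rel_err < 1e-8)  # exact recovery here because rank(A) = 10
```

In a geostatistical inversion the same sketching idea compresses the prior covariance so each quasi-linear iteration avoids dense matrix factorizations.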

  20. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
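The ZMNL idea can be sketched directly: push a Gaussian sequence through the target inverse CDF composed with the Gaussian CDF, so the output inherits its correlation from the Gaussian input while taking on the target marginal pdf. The exponential target below is an illustrative choice, not one from the paper:

```python
import numpy as np
from math import erf, sqrt, log

# ZMNL sketch: y = F^{-1}(Phi(x)) maps a Gaussian time history x to a
# sequence y with a target marginal pdf (here unit-mean exponential).
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)  # stationary Gaussian time history

def phi(v):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(v / sqrt(2.0)))

# For a unit-mean exponential target, F^{-1}(u) = -ln(1 - u).
y = np.array([-log(1.0 - phi(v)) for v in x])

print(abs(float(y.mean()) - 1.0) < 0.1)  # sample mean near the target mean
```

Because the mapping is applied sample by sample (zero memory), any spectral shaping must be done on `x` before the transform, which is exactly the complication the paper's cross-spectral method addresses.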

  1. DENSITY VARIATIONS IN THE NW STAR STREAM OF M31

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, R. G.; Richer, Harvey B.; McConnachie, Alan W., E-mail: carlberg@astro.utoronto.ca, E-mail: richer@astro.ubc.ca, E-mail: alan.mcconnachie@nrc-cnrc.gc.ca

    2011-04-20

    The Pan-Andromeda Archaeological Survey (PAndAS) CFHT Megaprime survey of the M31-M33 system has found a star stream which extends about 120 kpc NW from the center of M31. The great length of the stream, and the likelihood that it does not significantly intersect the disk of M31, means that it is unusually well suited for a measurement of stream gaps and clumps along its length as a test for the predicted thousands of dark matter sub-halos. The main result of this paper is that the density of the stream varies between zero and about three times the mean along its length on scales of 2-20 kpc. The probability that the variations are random fluctuations in the star density is less than 10^-5. As a control sample, we search for density variations at precisely the same location in stars with metallicity higher than the stream [Fe/H] = [0, -0.5] and find no variations above the expected shot noise. The lumpiness of the stream is not compatible with a low-mass star stream in a smooth galactic potential, nor is it readily compatible with the disturbance caused by the visible M31 satellite galaxies. The stream's density variations appear to be consistent with the effects of a large population of steep mass function dark matter sub-halos, such as found in LCDM simulations, acting on an approximately 10 Gyr old star stream. The effects of a single set of halo substructure realizations are shown for illustration, reserving a statistical comparison for another study.

  2. Characterizations of matrix and operator-valued Φ-entropies, and operator Efron-Stein inequalities.

    PubMed

    Cheng, Hao-Chung; Hsieh, Min-Hsiu

    2016-03-01

    We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19, 1-30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. We then propose an operator-valued generalization of matrix Φ-entropy functionals, and prove their subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to their convexity. As an application, we derive the operator Efron-Stein inequality.

  3. PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.

    1998-01-01

    PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This User's Guide details the step-by-step procedure to create the input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship, and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte-Carlo simulation methods are available for the uncertainty simulation. Various options available in the code to simulate probabilistic material properties and quantify sensitivity of the primitive random variables have been described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For detailed theoretical description of deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996 and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites", NASA TM 4766, June 1997.
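The Monte-Carlo side of such a code amounts to sampling the primitive random variables and propagating them through the micromechanics. A toy sketch using a rule-of-mixtures stand-in (the property names, distributions, and values are hypothetical, not PCEMCAN's actual constitutive model):

```python
import numpy as np

# Monte Carlo propagation of constituent-property uncertainty to a
# composite property; rule-of-mixtures stand-in with invented numbers.
rng = np.random.default_rng(6)
n = 50_000
E_f = rng.normal(380.0, 19.0, n)   # fiber modulus, GPa (hypothetical)
E_m = rng.normal(300.0, 30.0, n)   # matrix modulus, GPa (hypothetical)
V_f = rng.normal(0.4, 0.02, n)     # fiber volume fraction (hypothetical)

# Longitudinal composite modulus by the rule of mixtures.
E_c = V_f * E_f + (1.0 - V_f) * E_m

mean = float(E_c.mean())
print(abs(mean - 332.0) < 5.0)  # mean near 0.4*380 + 0.6*300 = 332 GPa
```

Sensitivity of `E_c` to each primitive variable could then be estimated from the same samples, which is the kind of output the code's probabilistic options provide.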

  4. Use of Matrix Sampling Procedures to Assess Achievement in Solving Open Addition and Subtraction Sentences.

    ERIC Educational Resources Information Center

    Montague, Margariete A.

    This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…

  5. Individual complex Dirac eigenvalue distributions from random matrix theory and comparison to quenched lattice QCD with a quark chemical potential.

    PubMed

    Akemann, G; Bloch, J; Shifrin, L; Wettig, T

    2008-01-25

    We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.
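For readers unfamiliar with non-Hermitian random matrix spectra, the complex Ginibre ensemble gives the simplest picture of eigenvalues spreading into the complex plane. This is illustrative only; it is not the chiral, chemical-potential-dependent ensemble the paper actually uses:

```python
import numpy as np

# Complex Ginibre ensemble: the simplest random matrix model with
# genuinely complex eigenvalues (NOT the paper's chiral ensemble).
rng = np.random.default_rng(2)
N = 400
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
eigs = np.linalg.eigvals(G)   # eigenvalues scatter over the complex plane

# Circular law: for large N the eigenvalues fill the unit disc.
inside = float(np.mean(np.abs(eigs) < 1.05))
print(inside > 0.95)  # nearly all eigenvalues lie within radius ~1
```

Individual-eigenvalue distributions of the kind derived in the paper refine this picture near the origin, where the comparison to lattice Dirac spectra is made.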

  6. Fidelity under isospectral perturbations: a random matrix study

    NASA Astrophysics Data System (ADS)

    Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.

    2013-07-01

    The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group is generated by Hermitian matrices, we can take the unitaries generated by Gaussian unitary ensemble matrices with a small parameter as small perturbations. Similarly, the orthogonal transformations generated by real antisymmetric matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
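The construction can be checked numerically in a few lines: conjugating H by a small GUE-generated unitary preserves the spectrum exactly, yet the fidelity amplitude between the two time evolutions decays. The matrix size, perturbation strength, and time are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50

def gue(n):
    """Random Hermitian matrix from the Gaussian unitary ensemble."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

def evolve(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

H = gue(N)                   # the unperturbed Hamiltonian
U = evolve(gue(N), 0.05)     # small unitary: U = exp(-i * eps * V), V from GUE
Hp = U @ H @ U.conj().T      # isospectral perturbation of H

# The spectra agree to machine precision (isospectrality)...
assert np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(Hp))

# ...yet the fidelity amplitude f(t) = |tr(exp(iH't) exp(-iHt))| / N decays.
f = [abs(np.trace(evolve(Hp, -t) @ evolve(H, t))) / N for t in (0.0, 2.0)]
print(f[0] > 0.999 and f[1] < f[0])
```

Replacing the GUE generator with a real antisymmetric one gives the orthogonal, time-reversal-invariant variant mentioned in the abstract.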

  7. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    [Garbled OCR of the scanned thesis; the equation fragments are unrecoverable. The legible text describes using the inverse transform method to obtain unit-mean exponential random variables, where V_j is the jth random number in the sequence of a stream of uniform random numbers, and using it again to obtain the conditions for an interim event to occur; the method is discussed in the simulation textbooks listed in the thesis's reference section.]
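The inverse transform method referenced in this abstract is standard: if U is uniform on (0, 1), then X = -ln(U) is a unit-mean exponential random variable. A two-line sketch (variable names are illustrative stand-ins for the thesis's notation):

```python
import numpy as np

# Inverse transform: uniform(0,1] stream -> unit-mean exponential variates.
rng = np.random.default_rng(4)
u = 1.0 - rng.random(100_000)   # uniform on (0, 1], avoids log(0)
x = -np.log(u)                  # unit-mean exponential random variables

print(abs(float(x.mean()) - 1.0) < 0.05)  # sample mean near 1
```

Using a dedicated random-number stream per event type, as the thesis does, is what makes variance reduction techniques such as common random numbers possible.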

  8. FPGA architecture and implementation of sparse matrix vector multiplication for the finite element method

    NASA Astrophysics Data System (ADS)

    Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.

    2008-04-01

    The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs), and interest in exploiting this technology has therefore grown in the scientific community. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is limited only by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, achieving a peak performance of 1.76 GFLOPS. For the 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved.
Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
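The SMVM computation the pipeline implements is easy to state in software. A minimal compressed sparse row (CSR) sketch of y = A x, processing one row's nonzeros at a time as a stream (the hardware streams the same arithmetic; the small matrix is invented):

```python
import numpy as np

# CSR storage of A = [[10, 0, 20], [0, 30, 0], [0, 0, 40]]
values  = np.array([10.0, 20.0, 30.0, 40.0])  # nonzeros, row by row
col_idx = np.array([0, 2, 1, 2])              # column of each nonzero
row_ptr = np.array([0, 2, 3, 4])              # start of each row in values

x = np.array([1.0, 2.0, 3.0])
y = np.zeros(3)
for i in range(3):                            # one pass per matrix row
    for k in range(row_ptr[i], row_ptr[i + 1]):
        y[i] += values[k] * x[col_idx[k]]

print(y.tolist())  # [70.0, 60.0, 120.0]
```

In the iterative solvers the abstract targets (e.g. conjugate gradient), this product is the dominant cost per iteration, which is why streaming it through dedicated hardware pays off.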

  9. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
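The incoherence the abstract optimizes is measured by the mutual coherence mu(Phi, Psi) = sqrt(n) · max |⟨phi_i, psi_j⟩|, which ranges from 1 (maximally incoherent) to sqrt(n). The sketch below uses the classic spike/DFT pair as a stand-in, since noiselet and wavelet bases take more machinery to construct:

```python
import numpy as np

# Mutual coherence of two orthonormal bases: mu = sqrt(n) * max |<phi_i, psi_j>|.
# CS recovery guarantees improve as mu approaches its minimum value of 1.
n = 8
Phi = np.eye(n)                             # "spike" measurement basis
Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary DFT basis

mu = float(np.sqrt(n) * np.abs(Phi.conj().T @ Psi).max())
print(round(mu, 6))  # 1.0: a maximally incoherent pair
```

Noiselets play the same role relative to wavelets that the DFT plays relative to spikes, which is the mathematical result the paper exploits.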

  10. Sparse matrix beamforming and image reconstruction for real-time 2D HIFU monitoring using Harmonic Motion Imaging for Focused Ultrasound (HMIFU) with in vitro validation

    PubMed Central

    Hou, Gary Y.; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E.

    2015-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed High-Intensity Focused Ultrasound (HIFU) treatment monitoring method. HMIFU utilizes an Amplitude-Modulated (fAM = 25 Hz) HIFU beam to induce a localized focal oscillatory motion, which is simultaneously estimated and imaged by a confocally aligned imaging transducer. The feasibility of HMIFU has been previously shown in silico, in vitro, and in vivo for 1-D or 2-D monitoring of HIFU treatment. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented on a fully integrated, clinically relevant HMIFU system composed of a 93-element HIFU transducer (fcenter = 4.5 MHz) and a coaxially aligned 64-element phased array (fcenter = 2.5 MHz) for displacement excitation and motion estimation, respectively. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphical user interface. The present work developed and implemented sparse-matrix beamforming on a fully integrated, clinically relevant system that can stream displacement images at up to 15 Hz using GPU-based processing, a 100-fold increase in the rate of streaming displacement images compared to conventional CPU-based beamforming and reconstruction processing. The achieved feedback rate is currently the fastest among acoustic-radiation-force-based HIFU imaging techniques, and the approach is the only one that does not require interrupting the HIFU treatment. 
Results in phantom experiments showed reproducible displacement imaging, and monitoring of twenty-two in vitro HIFU treatments using the new 2D system showed a consistent average focal displacement decrease of 46.7 ± 14.6% during lesion formation. Complementary focal temperature monitoring also indicated average rates of displacement increase and decrease with focal temperature of 0.84 ± 1.15%/°C and 2.03 ± 0.93%/°C, respectively. These results reinforce the capability of HMIFU to estimate and monitor stiffness-related changes in real time. Current ongoing studies include clinical translation of the presented system for monitoring of HIFU treatment for breast and pancreatic tumor applications. PMID:24960528
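The axial displacement estimator named in the abstract is a 1-D normalized cross-correlation between pre- and post-motion RF segments. A minimal integer-lag version (subsample interpolation and the GPU beamforming are omitted; the synthetic trace is invented):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length segments."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

def estimate_shift(ref, cur, max_lag):
    """Integer-sample displacement maximizing the NCC between windows."""
    best_lag, best_val = 0, -2.0
    n = len(ref)
    for lag in range(-max_lag, max_lag + 1):
        v = ncc(ref[max_lag:n - max_lag], cur[max_lag + lag:n - max_lag + lag])
        if v > best_val:
            best_val, best_lag = v, lag
    return best_lag

# Synthetic "RF" trace and a copy displaced by 3 samples.
t = np.linspace(0.0, 1.0, 400)
rf = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
shifted = np.roll(rf, 3)

print(estimate_shift(rf, shifted, 10))  # 3
```

In practice the correlation peak is interpolated to subsample precision and the search runs per beamformed line, which is what the sparse-matrix GPU pipeline accelerates.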

  11. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute-range estimation with a high-probability guarantee and dual node profiles for rapid model updates, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features a high detection rate, fast response, and insensitivity to most parameter settings. Algorithm implementations and datasets are available upon request.
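The scoring idea behind randomized space trees can be conveyed with a stripped-down toy: build trees with random splits, count training instances per leaf, and average the leaf counts as a density score. This sketch omits the paper's two streaming strategies (attribute-range estimation and dual node profiles), and all parameters are invented:

```python
import random

def build_tree(depth, lo, hi, rng):
    """Recursively split a box along random dimensions at random cut points."""
    if depth == 0:
        return {"count": 0}
    dim = rng.randrange(len(lo))
    cut = rng.uniform(lo[dim], hi[dim])
    left_hi, right_lo = hi[:], lo[:]
    left_hi[dim], right_lo[dim] = cut, cut
    return {"dim": dim, "cut": cut,
            "left": build_tree(depth - 1, lo, left_hi, rng),
            "right": build_tree(depth - 1, right_lo, hi, rng)}

def leaf(node, x):
    """Descend to the leaf whose box contains x."""
    while "count" not in node:
        node = node["left"] if x[node["dim"]] <= node["cut"] else node["right"]
    return node

def score(forest, x, n):
    """Average leaf-count density estimate; low scores flag anomalies."""
    return sum(leaf(t, x)["count"] for t in forest) / (len(forest) * n)

rng = random.Random(7)
forest = [build_tree(6, [0.0, 0.0], [1.0, 1.0], rng) for _ in range(20)]

# A "stream" of normal points clustered near (0.5, 0.5).
data = [[rng.gauss(0.5, 0.05), rng.gauss(0.5, 0.05)] for _ in range(500)]
for x in data:
    for t in forest:
        leaf(t, x)["count"] += 1

normal = score(forest, [0.5, 0.5], 500)
outlier = score(forest, [0.02, 0.98], 500)
print(normal > outlier)  # the dense region scores higher than the outlier
```

The full algorithm additionally normalizes by node volume and swaps dual count profiles so the model can be updated in one pass over the stream.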

  12. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute-range estimation with a high-probability guarantee and dual node profiles for rapid model updates, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features a high detection rate, fast response, and insensitivity to most parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  13. Discrete simulations of spatio-temporal dynamics of small water bodies under varied stream flow discharges

    NASA Astrophysics Data System (ADS)

    Daya Sagar, B. S.

    2005-01-01

    Spatio-temporal patterns of small water bodies (SWBs) under the influence of temporally varied stream flow discharge are simulated in discrete space by employing geomorphologically realistic expansion and contraction transformations. Cascades of expansion-contraction are systematically performed by synchronizing them with stream flow discharge simulated via the logistic map. Templates with definite characteristic information are defined from stream flow discharge pattern as the basis to model the spatio-temporal organization of randomly situated surface water bodies of various sizes and shapes. These spatio-temporal patterns under varied parameters (λs) controlling stream flow discharge patterns are characterized by estimating their fractal dimensions. At various λs, nonlinear control parameters, we show the union of boundaries of water bodies that traverse the water body and non-water body spaces as geomorphic attractors. The computed fractal dimensions of these attractors are 1.58, 1.53, 1.78, 1.76, 1.84, and 1.90, respectively, at λs of 1, 2, 3, 3.46, 3.57, and 3.99. These values are in line with general visual observations.
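The logistic map used above to synthesize stream flow discharge is one line of arithmetic; its behavior sweeps from a fixed point to chaos as the control parameter grows, matching the range of λ values in the abstract. A short sketch (initial condition and sequence lengths are illustrative):

```python
# Logistic map x_{n+1} = lam * x_n * (1 - x_n), the discharge generator
# described in the abstract; lam values echo the study's lambdas.
def logistic_series(lam, x0=0.3, n=200, burn=100):
    x = x0
    for _ in range(burn):            # discard the transient
        x = lam * x * (1.0 - x)
    series = []
    for _ in range(n):
        x = lam * x * (1.0 - x)
        series.append(x)
    return series

steady = logistic_series(2.0)        # settles on the fixed point 1 - 1/lam = 0.5
chaotic = logistic_series(3.99)      # aperiodic, nearly fills (0, 1)

print(abs(steady[-1] - 0.5) < 1e-6, max(chaotic) - min(chaotic) > 0.5)
```

Synchronizing expansion-contraction cascades with such a sequence, as the study does, turns each discharge value into a template controlling how far the simulated water bodies grow or shrink.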

  14. New Algorithm Identifies Tidal Streams Oriented Along our Line-of-Sight

    NASA Astrophysics Data System (ADS)

    Lin, Ziyi; Newberg, Heidi; Amy, Paul; Martin, Charles Harold; Rockcliffe, Keighley E.

    2018-01-01

    The known dwarf galaxy tidal streams in the Milky Way are primarily oriented perpendicular to our line-of-sight. That is because they are concentrated into an observable higher-surface-brightness feature at a particular distance, or because they tightly cluster in line-of-sight velocity in a particular direction. Streams that are oriented along our line-of-sight are spread over a large range of distances and velocities. However, these distances and velocities are correlated in predictable ways. We used a set of randomly oriented Milky Way orbits to develop a technique that bins stars in combinations of distance and velocity that are likely for tidal streams. We applied this technique to identify previously unknown tidal streams in a set of blue horizontal branch stars in the first quadrant from Data Release 10 of the Sloan Digital Sky Survey (SDSS). This project was supported by NSF grant AST 16-15688, a Rensselaer Presidential Fellowship, the NASA/NY Space Grant fellowship, and contributions made by The Marvin Clan, Babette Josephs, Manit Limlamai, and the 2015 Crowd Funding Campaign to Support Milky Way Research.

  15. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    NASA Astrophysics Data System (ADS)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements can themselves be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles, from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.

  16. Measurement of the Ecological Integrity of Cerrado Streams Using Biological Metrics and the Index of Habitat Integrity

    PubMed Central

    dos Reis, Deusiano Florêncio; Salazar, Ayala Eduardo; Machado, Mayana Mendes Dias; Couceiro, Sheyla Regina Marques; de Morais, Paula Benevides

    2017-01-01

    Generally, aquatic communities reflect the effects of anthropogenic changes such as deforestation or organic pollution. The Cerrado is among the Brazilian ecosystems most threatened by human activities. In order to evaluate the ecological integrity of the streams in a preserved watershed in the Northern Cerrado biome corresponding to a mosaic of ecosystems in transition to the Amazonia biome in Brazil, biological metrics related to diversity, structure, and sensitivity of aquatic macroinvertebrates were calculated. Sampling included collections along stretches of 200 m of nine streams and measurements of abiotic variables (temperature, electrical conductivity, pH, total dissolved solids, dissolved oxygen, and discharge) and the Index of Habitat Integrity (HII). The values of the abiotic variables and the HII indicated that most of the streams have good ecological integrity, due to high oxygen levels and low concentrations of dissolved solids and electric conductivity. Two streams showed altered HII scores mainly related to small dams for recreational and domestic use, use of Cerrado natural pasture for cattle raising, and spot deforestation in bathing areas. However, this finding is not reflected in the biological metrics that were used. Considering all nine streams, only two showed satisfactory ecological quality (measured by Biological Monitoring Working Party (BMWP), total richness, and EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness), only one of which had a low HII score. These results indicate that point measurements of abiotic parameters do not reveal the long-term impacts of anthropic activities in these streams, including the related fire management of pasture, which annually alters the vegetation matrix and may act as a disturbance for the macroinvertebrate communities. 
Due to this, biomonitoring of low order streams in Cerrado ecosystems of the Northern Central Brazil by different biotic metrics and also physical attributes of the riparian zone such as HII is recommended for the monitoring and control of anthropic impacts on aquatic communities. PMID:28085090

  17. Variations in fluvial deposition on an alluvial plain: an example from the Tongue River Member of the Fort Union Formation (Paleocene), southeastern Powder River Basin, Wyoming, U.S.A.

    USGS Publications Warehouse

    Johnson, E.A.; Pierce, F.W.

    1990-01-01

    The Tongue River Member of the Paleocene Fort Union Formation is an important coal-bearing sedimentary unit in the Powder River Basin of Wyoming and Montana. We studied the depositional environments of a portion of this member at three sites 20 km apart in the southeastern part of the basin. Six lithofacies are recognized that we assign to five depositional facies categorized as either channel or interchannel-wetlands environments. (1) Type A sandstone is cross-stratified and occurs as lenticular bodies with concave-upward basal surfaces; these bodies are assigned to the channel facies interpreted to be the product of low-sinuosity streams. (2) Type B sandstone occurs in parallel-bedded units containing mudrock partings and fossil plant debris; these units constitute the levee facies. (3) Type C sandstone typically lacks internal structure and occurs as tabular bodies separating finer-grained deposits; these bodies represent the crevasse-splay facies. (4) Gray mudrock is generally nonlaminated and contains ironstone concretions; these deposits constitute the floodplain facies. (5) Carbonaceous shale and coal are assigned to the swamp facies. We recognize two styles of stream deposition in our study area. Laterally continuous complexes of single and multistoried channel bodies occur at our middle study site and we interpret these to be the deposits of sandy braided stream systems. In the two adjacent study sites, single and multistoried channel bodies are isolated in a matrix of finer-grained interchannel sediment suggesting deposition by anastomosed streams. A depositional model for our study area contains northwest-trending braided stream systems. Avulsions of these systems created anastomosed streams that flowed into adjacent interchannel areas. We propose that during the late Paleocene a broad alluvial plain existed on the southeastern flank of the Powder River Basin. The braided streams that crossed this surface were tributaries to a northward-flowing, basin-axis trunk stream that existed to the west. © 1990.

  18. Measurement of the Ecological Integrity of Cerrado Streams Using Biological Metrics and the Index of Habitat Integrity.

    PubMed

    Reis, Deusiano Florêncio Dos; Salazar, Ayala Eduardo; Machado, Mayana Mendes Dias; Couceiro, Sheyla Regina Marques; Morais, Paula Benevides de

    2017-01-12

    Generally, aquatic communities reflect the effects of anthropogenic changes such as deforestation or organic pollution. The Cerrado is among the ecosystems most threatened by human activities in Brazil. In order to evaluate the ecological integrity of the streams in a preserved watershed in the Northern Cerrado biome, corresponding to a mosaic of ecosystems in transition to the Amazonia biome in Brazil, biological metrics related to the diversity, structure, and sensitivity of aquatic macroinvertebrates were calculated. Sampling included collections along stretches of 200 m of nine streams and measurements of abiotic variables (temperature, electrical conductivity, pH, total dissolved solids, dissolved oxygen, and discharge) and the Index of Habitat Integrity (HII). The values of the abiotic variables and the HII indicated that most of the streams have good ecological integrity, due to high oxygen levels and low concentrations of dissolved solids and low electrical conductivity. Two streams showed altered HII scores, mainly related to small dams for recreational and domestic use, use of Cerrado natural pasture for cattle raising, and spot deforestation in bathing areas. However, this finding is not reflected in the biological metrics that were used. Considering all nine streams, only two showed satisfactory ecological quality (measured by the Biological Monitoring Working Party (BMWP) index, total richness, and EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness), only one of which had a low HII score. These results indicate that point measurements of abiotic parameters do not reveal the long-term impacts of anthropic activities in these streams, including the fire management of pasture that annually alters the vegetation matrix and may act as a disturbance for the macroinvertebrate communities. For this reason, biomonitoring of low-order streams in Cerrado ecosystems of Northern Central Brazil using different biotic metrics, as well as physical attributes of the riparian zone such as the HII, is recommended for the monitoring and control of anthropic impacts on aquatic communities.

  19. A minimum drives automatic target definition procedure for multi-axis random control testing

    NASA Astrophysics Data System (ADS)

    Musella, Umberto; D'Elia, Giacomo; Carrella, Alex; Peeters, Bart; Mucchi, Emiliano; Marulo, Francesco; Guillaume, Patrick

    2018-07-01

    Multiple-Input Multiple-Output (MIMO) vibration control tests are able to closely replicate, via shaker excitation, the vibration environment that a structure needs to withstand during its operational life. This feature is fundamental to accurately verifying the experienced stress state, and ultimately the fatigue life, of the tested structure. In the case of MIMO random tests, the control target is a full reference Spectral Density Matrix in the frequency band of interest. The diagonal terms are the Power Spectral Densities (PSDs), representative of the operational acceleration levels, and the off-diagonal terms are the Cross Spectral Densities (CSDs). The specifications of random vibration tests are, however, often given in terms of PSDs only, coming from a legacy of single-axis testing. Information about the CSDs is often missing. An accurate definition of the CSD profiles can further enhance the MIMO random testing practice, as these terms influence both the responses and the shakers' voltages (the so-called drives). The challenges are linked to the algebraic constraint that the full reference matrix must be positive semi-definite over the entire bandwidth, with no flexibility in modifying the given PSDs. This paper proposes a newly developed method that automatically provides the full reference matrix without modifying the PSDs, which are considered as test specifications. The innovative feature is the capability of minimizing the drives required to match the reference PSDs and, at the same time, directly guaranteeing that the obtained full matrix is positive semi-definite. The drive minimization aims, on the one hand, to meet the fixed test specifications without stressing the delicate excitation system; on the other hand, it potentially allows the test levels to be increased further. The detailed analytic derivation and implementation steps of the proposed method are followed by real-life testing considering different scenarios.
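The positive semi-definiteness constraint described above can be illustrated at a single frequency line. For a two-exciter test, bounding the CSD magnitude by the geometric mean of the specified PSDs (scaled by a coherence in [0, 1]) keeps the reference matrix valid. The following is a minimal sketch with assumed toy values, not the paper's minimum-drives algorithm:

```python
import numpy as np

# Toy 2-exciter reference spectral density matrix at one frequency line.
# The PSDs come from the test specification; the CSD is a design choice.
psd1, psd2 = 0.8, 1.2               # assumed PSD values (units^2/Hz)
coherence, phase = 0.6, np.pi / 4   # assumed design parameters for the CSD

# A CSD magnitude of sqrt(coherence * psd1 * psd2), with coherence in
# [0, 1], keeps the Hermitian matrix positive semi-definite.
csd = np.sqrt(coherence * psd1 * psd2) * np.exp(1j * phase)

S = np.array([[psd1, csd],
              [np.conj(csd), psd2]])

# Hermitian and positive semi-definite: all eigenvalues >= 0.
eigvals = np.linalg.eigvalsh(S)
assert np.all(eigvals >= -1e-12)
```

Sweeping the coherence and phase at each frequency line, without touching the diagonal, is exactly the design freedom that a method like the one proposed can exploit to minimize the drives.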

  20. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX - RAdiation Dose study (RAD-MATRIX).

    PubMed

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-06-01

    Radiation absorbed by interventional cardiologists is an important but frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes comparing transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial, or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, thorax, and eye level. Consequently, the operator is equipped with three sets (transfemoral, right transradial, or left transradial access) of three different dosimeters (wrist, thorax, and eye dosimeter). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study should help clarify the radiation issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves the computation of a considerable number of derivatives, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
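The Monte Carlo error propagation described above can be sketched in a few lines: draw variates from the measurement distribution, push them through a nonlinear transform (here a hypothetical Cartesian-to-polar conversion, used only for illustration), and estimate the expectation and covariance of the result without computing any derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear transform: Cartesian (x, y) -> polar (r, theta).
def transform(v):
    x, y = v[..., 0], v[..., 1]
    return np.stack([np.hypot(x, y), np.arctan2(y, x)], axis=-1)

# Measurements: mean vector and covariance of the input random vector.
mean = np.array([3.0, 4.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Draw many variates and push them through the transform ...
samples = rng.multivariate_normal(mean, cov, size=100_000)
transformed = transform(samples)

# ... then estimate the expectation and covariance of the transformed
# vector directly, with no linearization and no derivatives.
expectation = transformed.mean(axis=0)
covariance = np.cov(transformed, rowvar=False)
```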

  2. Benzene destruction in aqueous waste—I. Bench-scale gamma irradiation experiments

    NASA Astrophysics Data System (ADS)

    Cooper, William J.; Dougal, Roger A.; Nickelsen, Michael G.; Waite, Thomas D.; Kurucz, Charles N.; Lin, Kaijin; Bibler, Jane P.

    1996-07-01

    Destruction of the benzene component of a simulated low-level mixed aqueous waste stream by high energy irradiation was explored. This work was motivated by the fact that mixed waste, containing both radionuclides and regulated (non-radioactive) chemicals, is more difficult and more expensive to dispose of than only radioactive waste. After the benzene is destroyed, the waste can then be listed only as radiological waste instead of mixed waste, simplifying its disposal. This study quantifies the removal of benzene, and the formation and destruction of reaction products in a relatively complex waste stream matrix consisting of NO 3-, SO 42-, PO 43-, Fe 2+ and detergent at a pH of 3. All of the experiments were conducted at a bench scale using a 60Co gamma source.

  3. Waste Treatment And Immobilization Plant U. S. Department Of Energy Office Of River Protection Submerged Bed Scrubber Condensate Disposition Project - Abstract # 13460

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanochko, Ronald M; Corcoran, Connie

    The Hanford Waste Treatment and Immobilization Plant (WTP) will generate an off-gas treatment system secondary liquid waste stream [submerged bed scrubber (SBS) condensate], which is currently planned for recycle back to the WTP Low Activity Waste (LAW) melter. This SBS condensate waste stream is high in Tc-99, which is not efficiently captured in the vitrified glass matrix. A pre-conceptual engineering study was prepared in fiscal year 2012 to evaluate alternate flow paths for melter off-gas secondary liquid waste generated by the WTP LAW facility. This study evaluated alternatives for direct off-site disposal of this SBS without pre-treatment, which mitigates potential issues associated with recycling.

  4. Aligned fibers direct collective cell migration to engineer closing and nonclosing wound gaps

    PubMed Central

    Sharma, Puja; Ng, Colin; Jana, Aniket; Padhi, Abinash; Szymanski, Paige; Lee, Jerry S. H.; Behkam, Bahareh; Nain, Amrinder S.

    2017-01-01

    Cell emergence onto damaged or organized fibrous extracellular matrix (ECM) is a crucial precursor to collective cell migration in wound closure and cancer metastasis, respectively. However, there is a fundamental gap in our quantitative understanding of the role of local ECM size and arrangement in cell emergence–based migration and local gap closure. Here, using ECM-mimicking nanofibers bridging cell monolayers, we describe a method to recapitulate and quantitatively describe these in vivo behaviors over multiple spatial (single cell to cell sheets) and temporal (minutes to weeks) scales. On fiber arrays with large interfiber spacing, cells emerge (invade) either singly by breaking cell–cell junctions, analogous to the release of a stretched rubber band (recoil), or in groups of a few cells (chains), whereas on closely spaced fibers, multiple chains emerge collectively. Advancing cells on fibers form cell streams, which support suspended cell sheets (SCS) of various sizes and curvatures. SCS converge to form local gaps that close based on both the gap size and shape. We document that cell stream spacings of 375 µm and larger hinder SCS advancement, thus providing the ability to engineer closing and nonclosing gaps. Altogether we highlight the importance of studying cell–fiber interactions and matrix structural remodeling in fundamental and translational cell biology. PMID:28747440

  5. A fish-based index of biotic integrity to assess intermittent headwater streams in Wisconsin, USA.

    PubMed

    Lyons, John

    2006-11-01

    I developed a fish-based index of biotic integrity (IBI) to assess environmental quality in intermittent headwater streams in Wisconsin, USA. Backpack electrofishing and habitat surveys were conducted four times on 102 small (watershed area 1.7-41.5 km²), cool or warmwater (maximum daily mean water temperature ≥ 22 °C), headwater streams in spring and late summer/fall 2000 and 2001. Despite seasonal and annual changes in stream flow and habitat volume, there were few significant temporal trends in fish attributes. Analysis of 36 least-impacted streams indicated that fish were too scarce to calculate an IBI at stations with watershed areas less than 4 km² or at stations with watershed areas from 4-10 km² if stream gradient exceeded 10 m/km (1% slope). For streams with sufficient fish, potential fish attributes (metrics) were not related to watershed size or gradient. Seven metrics distinguished among streams with low, agricultural, and urban human impacts: numbers of native, minnow (Cyprinidae), headwater-specialist, and intolerant (to environmental degradation) species; catches of all fish excluding species tolerant of environmental degradation and of brook stickleback (Culaea inconstans) per 100 m stream length; and percentage of total individuals with deformities, eroded fins, lesions, or tumors. These metrics were used in the final IBI, which ranged from 0 (worst) to 100 (best). The IBI accurately assessed the environmental quality of 16 randomly chosen streams not used in index development. Temporal variation in IBI scores in the absence of changes in environmental quality was not related to season, year, or type of human impact and was similar in magnitude to variation reported for other IBIs.

  6. Characterizations of matrix and operator-valued Φ-entropies, and operator Efron–Stein inequalities

    PubMed Central

    Cheng, Hao-Chung; Hsieh, Min-Hsiu

    2016-01-01

    We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19, 1–30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron–Stein inequality. PMID:27118909

  7. QCD Dirac operator at nonzero chemical potential: lattice data and matrix model.

    PubMed

    Akemann, Gernot; Wettig, Tilo

    2004-03-12

    Recently, a non-Hermitian chiral random matrix model was proposed to describe the eigenvalues of the QCD Dirac operator at nonzero chemical potential. This matrix model can be constructed from QCD by mapping it to an equivalent matrix model which has the same symmetries as QCD with chemical potential. Its microscopic spectral correlations are conjectured to be identical to those of the QCD Dirac operator. We investigate this conjecture by comparing large ensembles of Dirac eigenvalues in quenched SU(3) lattice QCD at a nonzero chemical potential to the analytical predictions of the matrix model. Excellent agreement is found in the two regimes of weak and strong non-Hermiticity, for several different lattice volumes.

  8. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on the random sequence generated from the generalized information domain and a permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image, while the random sequences are treated as keystreams. A new factor, called the drift factor, is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method approximate a one-time pad. Experimental results show that the random sequences pass the NIST statistical tests with a high ratio, and extensive analysis demonstrates that the new encryption scheme has superior security.
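A permutation-diffusion round of the kind described can be sketched as follows; here a seeded NumPy generator stands in for the paper's generalized-information random sequence, and the P-box is simply a random permutation of pixel addresses:

```python
import numpy as np

rng = np.random.default_rng(42)  # stand-in for the paper's sequence generator

def encrypt(img, perm, keystream):
    # Confusion: shuffle pixel positions with the P-box (a permutation).
    shuffled = img.ravel()[perm]
    # Diffusion: XOR the shuffled pixel values with the keystream.
    return (shuffled ^ keystream).reshape(img.shape)

def decrypt(cipher, perm, keystream):
    inv = np.argsort(perm)  # inverse of the P-box permutation
    return ((cipher.ravel() ^ keystream)[inv]).reshape(cipher.shape)

img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # toy plain image
perm = rng.permutation(img.size)                          # P-box
keystream = rng.integers(0, 256, size=img.size, dtype=np.uint8)

cipher = encrypt(img, perm, keystream)
restored = decrypt(cipher, perm, keystream)
assert np.array_equal(img, restored)
```

The paper's scheme additionally derives the permutation and keystream per image (via the initial value and drift factor), which is what pushes the construction toward a one-time pad.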

  9. Physical layer one-time-pad data encryption through synchronized semiconductor laser networks

    NASA Astrophysics Data System (ADS)

    Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris

    2016-02-01

    Semiconductor lasers (SLs) have proven to be key devices in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics under suitable operating conditions establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series into digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that seed true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber-optic network, random bit streams that are generated in real time and synchronized with the rest of the nodes allows the implementation of a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error-correction methods are used to reduce the errors in the TRBS and the final error rate at the data-decoding level. An appropriate choice of the sampling methodology and parameters, as well as of the physical properties of the chaotic seed signal through which the network locks into synchronization, allows error-free performance.
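The one-time-pad step itself is a plain XOR of data bits with never-reused key bits; in the paper the key bits come from the synchronized laser network, for which a seeded NumPy generator is only a stand-in here:

```python
import numpy as np

rng = np.random.default_rng(11)  # stand-in for the synchronized physical TRBS

# One-time pad: each data bit is XORed with one never-reused key bit that
# both transmitter and receiver obtain from the synchronized bit stream.
data = rng.integers(0, 2, size=64)
key = rng.integers(0, 2, size=64)   # shared true random bit sequence

cipher = data ^ key
recovered = cipher ^ key            # receiver holds the same synchronized key
assert np.array_equal(recovered, data)
```

Any residual bit mismatch between the two synchronized key streams corrupts the recovered data one bit per key error, which is why the paper applies forward-error correction to the TRBS before mixing.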

  10. Random acoustic metamaterial with a subwavelength dipolar resonance.

    PubMed

    Duranteau, Mickaël; Valier-Brasier, Tony; Conoir, Jean-Marc; Wunenburger, Régis

    2016-06-01

    The effective velocity and attenuation of longitudinal waves through random dispersions of rigid tungsten-carbide beads in an elastic matrix made of epoxy resin are determined experimentally for bead volume fractions of 2%-10%. The multiple scattering model proposed by Luppé, Conoir, and Norris [J. Acoust. Soc. Am. 131(2), 1113-1120 (2012)], which fully takes into account the elastic nature of the matrix and the associated mode conversions, accurately describes the measurements. Theoretical calculations show that the rigid particles display a local, dipolar resonance which shares several features with the Minnaert resonance of bubbly liquids and with the dipolar resonance of core-shell particles. Moreover, for the samples under study, the main cause of smoothing of the dipolar resonance of the scatterers and of the associated variations of the effective mass density of the dispersions is elastic relaxation, i.e., the finite time required for the shear stresses associated with the translational motion of the scatterers to propagate through the matrix. It is shown that its influence is governed solely by the value of the particle-to-matrix mass density contrast.

  11. 0νββ-decay nuclear matrix element for light and heavy neutrino mass mechanisms from deformed quasiparticle random-phase approximation calculations for 76Ge, 82Se, 130Te, 136Xe, and 150Nd with isospin restoration

    NASA Astrophysics Data System (ADS)

    Fang, Dong-Liang; Faessler, Amand; Šimkovic, Fedor

    2018-04-01

    In this paper, with restored isospin symmetry, we evaluated the neutrinoless double-β-decay nuclear matrix elements for 76Ge, 82Se, 130Te, 136Xe, and 150Nd for both the light and heavy neutrino mass mechanisms using the deformed quasiparticle random-phase approximation approach with realistic forces. We give detailed decompositions of the nuclear matrix elements over different intermediate states and nucleon pairs, and discuss how these decompositions are affected by the model-space truncations. Compared to the spherical calculations, our results show reductions from 30% to about 60% of the nuclear matrix elements for the calculated isotopes, mainly due to the presence of the BCS overlap factor between the initial and final ground states. The comparison between different nucleon-nucleon (NN) forces with corresponding short-range correlations shows that the choice of the NN force gives roughly 20% deviations for the light neutrino exchange mechanism and much larger deviations for the heavy neutrino exchange mechanism.

  12. Random Matrix Theory in molecular dynamics analysis.

    PubMed

    Palese, Luigi Leonardo

    2015-01-01

    It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low-index projections. Because this is reminiscent of the results obtained by performing PCA on multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of correlation matrices makes it easy to differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian, even in the presence of the cosine-shaped low-index projections on principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
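The random/non-random distinction drawn by Random Matrix Theory rests on comparing the eigenvalue spectrum of an empirical correlation matrix with the Marchenko-Pastur bulk expected for pure noise of the same finite length. A generic sketch with synthetic data (not the apoCox17 analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# N variables observed over T time steps of pure, uncorrelated noise.
N, T = 50, 500
q = N / T
data = rng.standard_normal((T, N))

corr = np.corrcoef(data, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

# Marchenko-Pastur bounds for the eigenvalue bulk of a random
# correlation matrix with aspect ratio q = N/T.
lam_min = (1 - np.sqrt(q)) ** 2
lam_max = (1 + np.sqrt(q)) ** 2

# For pure noise, (almost) all eigenvalues fall inside the bulk;
# eigenvalues well above lam_max signal genuine, non-random structure.
frac_inside = np.mean((eigvals > lam_min) & (eigvals < lam_max))
```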

  13. PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks

    PubMed Central

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-01-01

    Random number generators (RNGs) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks. PMID:26501283

  14. PUFKEY: a high-security and high-throughput hardware true random number generator for sensor networks.

    PubMed

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-10-16

    Random number generators (RNGs) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.
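The simplest of the NIST SP 800-22 randomness tests, the frequency (monobit) test, can be sketched as follows; it checks only the balance of ones and zeros, one of the many tests a PUFKEY-style bitstream must pass:

```python
import math
import numpy as np

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for bit balance."""
    n = len(bits)
    s = abs(2 * int(np.sum(bits)) - n)      # |#ones - #zeros|
    return math.erfc(s / math.sqrt(2 * n))

# A software PRNG stands in here for the hardware bitstream under test.
rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=100_000)

p = monobit_p_value(bits)
# NIST convention: the sequence passes this test if p >= 0.01.
passed = p >= 0.01
```

A heavily biased stream (e.g., all zeros) yields a p-value near 0 and fails, while a perfectly balanced stream yields p = 1.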

  15. Temporal evolution of financial-market correlations.

    PubMed

    Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.

  16. Temporal evolution of financial-market correlations

    NASA Astrophysics Data System (ADS)

    Fenn, Daniel J.; Porter, Mason A.; Williams, Stacy; McDonald, Mark; Johnson, Neil F.; Jones, Nick S.

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.

  17. Multi-scale assessment of human-induced changes to ...

    EPA Pesticide Factsheets

    Context: Land use change and forest degradation have myriad effects on tropical ecosystems. Yet their consequences for low-order streams remain very poorly understood, including in the world's largest freshwater basin, the Amazon. Objectives: Determine the degree to which physical and chemical characteristics of the instream habitat of low-order Amazonian streams change in response to past local- and catchment-level anthropogenic disturbances. Methods: To do so, we collected field instream habitat (i.e., physical habitat and water quality) and landscape data from 99 stream sites in two eastern Brazilian Amazon regions. We used random forest regression trees to assess the relative importance of different predictor variables in determining changes in instream habitat response variables. Adaptations of the USEPA's National Aquatic Resource Survey (NARS) designs, field methods, and approaches for assessing ecological condition have been applied in state and basin stream surveys throughout the U.S., and also in countries outside of the U.S. These applications not only provide valuable tests of the NARS approaches, but generate new understandings of natural and anthropogenic controls on biota and physical habitat in streams. Results from applications in Brazil, for example, not only aid interpretation of the condition of Brazilian streams, but also refine approaches for interpreting aquatic resource surveys in the U.S. and elsewhere. In this article, the authors des

  18. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states.

    PubMed

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
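The randomized factorization referred to is typically of the Halko-Martinsson-Tropp form: sketch the range of the matrix with a random Gaussian test matrix, then perform a small deterministic SVD in the sketched subspace. A generic sketch of the idea (not the authors' TEBD/DMRG integration):

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=None):
    """Randomized low-rank SVD (Halko-Martinsson-Tropp style sketch)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sketch the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)      # orthonormal basis of the sampled range
    # Project A onto the subspace and do a small, cheap SVD there.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# A matrix of exact rank 5 is recovered (up to round-off) from the sketch.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, rank=5, rng=0)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The cost advantage over a full deterministic SVD comes from never factorizing the large matrix directly, only its (rank + oversampling)-column sketch, which is what enables the reported speedups in the bond or local dimension.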

  19. Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2011-01-01

    The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.

  20. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    NASA Astrophysics Data System (ADS)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
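
    As a rough illustration of the technique, the randomized factorization behind such speedups can be sketched as a Halko-style randomized truncated SVD (a minimal NumPy sketch; all sizes and names are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(8)

def randomized_svd(A, k, oversample=10):
    """Randomized truncated SVD: sketch the range of A with a Gaussian test
    matrix, orthonormalize, and solve the small projected problem. This costs
    O(m*n*k) rather than the O(m*n*min(m, n)) of a full deterministic SVD."""
    m, n = A.shape
    omega = rng.normal(size=(n, k + oversample))   # Gaussian test matrix
    Q = np.linalg.qr(A @ omega)[0]                 # basis for (approx.) range(A)
    B = Q.T @ A                                    # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# A low-rank matrix, as arises when truncating tensor-network bonds.
m, n, r = 200, 150, 10
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))

U, s, Vt = randomized_svd(A, k=r)
err = np.linalg.norm(U @ np.diag(s) @ Vt - A) / np.linalg.norm(A)
print(err)
```

    For an exactly rank-r matrix the reconstruction is essentially exact; the paper's point is that the same randomized step, used in place of a deterministic truncated SVD inside TEBD/DMRG, retains accuracy while running much faster.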

  1. Statistical properties of the stock and credit market: RMT and network topology

    NASA Astrophysics Data System (ADS)

    Lim, Kyuseong; Kim, Min Jae; Kim, Sehyun; Kim, Soo Yong

    We analyzed the dependence structure of the credit and stock market using random matrix theory and network topology. The dynamics of both markets have been spotlighted throughout the subprime crisis. In this study, we compared these two markets in view of the market-wide effect from random matrix theory and eigenvalue analysis. We found that the largest eigenvalue of the credit market as a whole preceded that of the stock market at the beginning of the financial crisis, and that the largest eigenvalues of the two markets tended to synchronize after the crisis. The correlation between the companies of both markets became considerably stronger after the crisis as well.
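
    The market-wide eigenvalue analysis described above can be sketched numerically: the toy below compares the largest eigenvalue of an empirical correlation matrix carrying a common "market" factor against the Marchenko-Pastur upper edge expected for purely random correlations (synthetic data and factor strength, not the authors' data set):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic returns: T observations of N assets sharing one "market" factor,
# mimicking the market-wide mode that dominates the largest eigenvalue.
T, N = 500, 50
market = rng.normal(size=(T, 1))
returns = 0.5 * market + rng.normal(size=(T, N))

# Empirical correlation matrix and its eigenvalues (ascending order).
C = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(C)

# Marchenko-Pastur upper edge for purely random correlations, q = N/T.
q = N / T
lambda_max_mp = (1 + np.sqrt(q)) ** 2

print(eigvals[-1], lambda_max_mp)
```

    The market mode stands well above the random-matrix bulk; tracking this eigenvalue over rolling windows is the kind of analysis used to compare the credit and stock markets through the crisis.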

  2. Three-Dimensional Electromagnetic Scattering from Layered Media with Rough Interfaces for Subsurface Radar Remote Sensing

    NASA Astrophysics Data System (ADS)

    Duan, Xueyang

    The objective of this dissertation is to develop forward scattering models for active microwave remote sensing of natural features represented by layered media with rough interfaces. In particular, soil profiles are considered, for which a model of electromagnetic scattering from multilayer rough surfaces with or without buried random media is constructed. Starting from a single rough surface, radar scattering is modeled using the stabilized extended boundary condition method (SEBCM). This method solves the long-standing instability issue of the classical EBCM, and gives three-dimensional full wave solutions over large ranges of surface roughnesses with higher computational efficiency than pure numerical solutions, e.g., method of moments (MoM). Based on this single surface solution, multilayer rough surface scattering is modeled using the scattering matrix approach and the model is used for a comprehensive sensitivity analysis of the total ground scattering as a function of layer separation, subsurface statistics, and sublayer dielectric properties. The buried inhomogeneities such as rocks and vegetation roots are considered for the first time in the forward scattering model. Radar scattering from buried random media is modeled by the aggregate transition matrix using either the recursive transition matrix approach for spherical or short-length cylindrical scatterers, or the generalized iterative extended boundary condition method we developed for long cylinders or root-like cylindrical clusters. These approaches take the field interactions among scatterers into account with high computational efficiency. The aggregate transition matrix is transformed to a scattering matrix for the full solution to the layered-medium problem. This step is based on the near-to-far field transformation of the numerical plane wave expansion of the spherical harmonics and the multipole expansion of plane waves. 
This transformation consolidates volume scattering from the buried random medium with the scattering from the layered structure in general. Combined with scattering from multilayer rough surfaces, scattering contributions from subsurfaces and vegetation roots can then be simulated. Solutions of both the rough surface scattering and random media scattering are validated numerically, experimentally, or both. The experimental validations have been carried out using a laboratory-based transmit-receive system for scattering from random media and a new bistatic tower-mounted radar system for field-based surface scattering measurements.

  3. Grafton and local bone have comparable outcomes to iliac crest bone in instrumented single-level lumbar fusions.

    PubMed

    Kang, James; An, Howard; Hilibrand, Alan; Yoon, S Tim; Kavanagh, Eoin; Boden, Scott

    2012-05-20

    Prospective multicenter randomized clinical trial. The goal of our 2-year prospective study was to perform a randomized clinical trial comparing the outcomes of Grafton demineralized bone matrix (DBM) Matrix plus local bone with those of iliac crest bone graft (ICBG) in single-level instrumented posterior lumbar fusion. There has been extensive research and development aimed at identifying a suitable substitute for autologous ICBG, which is associated with known morbidities. DBMs are a class of commercially available grafting agents prepared from allograft bone. Many such products are commercially available for clinical use; however, their efficacy for spine fusion has been based mostly on anecdotal evidence rather than randomized controlled clinical trials. Forty-six patients were randomly assigned (2:1) to receive Grafton DBM Matrix with local bone (30 patients) or autologous ICBG (16 patients). The mean age was 64 (females [F] = 21, males [M] = 9) in the DBM group and 65 (F = 9, M = 5) in the ICBG group. An independent radiologist evaluated plain radiographs and computed tomographic scans at the 6-month, 1-year, and 2-year time points. Clinical outcomes were measured using the Oswestry Disability Index (ODI) and the Medical Outcomes Study 36-Item Short Form Health Survey. Forty-one patients (DBM = 28 and ICBG = 13) completed the 2-year follow-up. Final fusion rates were 86% (Grafton Matrix) versus 92% (ICBG) (P = 1.0, not significant). The Grafton group showed slightly better improvement in ODI score than the ICBG group at the final 2-year follow-up (Grafton [16.2] and ICBG [22.7]); however, the difference was not statistically significant (P = 0.2346 at 24 mo). Grafton showed consistently higher physical function scores at 24 months; however, the differences were not statistically significant (P = 0.0823). Similar improvements in the physical component summary scores were seen in both the Grafton and ICBG groups. 
Mean intraoperative blood loss was significantly greater in the ICBG group than in the Grafton group (P < 0.0031). At 2-year follow-up, subjects randomized to Grafton Matrix plus local bone achieved an 86% overall fusion rate and improvements in clinical outcomes comparable with those in the ICBG group.

  4. Studies on Relaxation Behavior of Corona Poled Aromatic Dipolar Molecules in a Polymer Matrix

    DTIC Science & Technology

    1990-08-03

    concentration up to 30 weight percent. ...As expected, optically responsive molecules are randomly oriented in the polymer matrix although a small amount... The retention of SH intensity of the small molecule such as MNA was found to be very poor in the PMMA matrix while the larger rodlike... Polym. Prepr. Am. Chem. Soc., Div. Polym. Chem. 24(2), 309 (1983). 16. H. Ringsdorf and H. W. Schmidt, Makromol. Chem. 185, 1327 (1984). 17. S. Musikant

  5. Comprehensive T-matrix Reference Database: A 2009-2011 Update

    NASA Technical Reports Server (NTRS)

    Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.

    2012-01-01

    The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.

  6. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen–Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen–Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.

  7. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen–Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen–Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
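
    The deterministic SVD-based estimate that the Bayesian procedure generalizes can be sketched in a few lines: a toy empirical KLE of Brownian-motion-like sample paths, with the data matrix centered and factored by SVD (sizes and the five-mode truncation are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample paths of a Brownian-motion-like process on a grid (n samples, d points),
# in the spirit of the paper's illustrative example.
n, d = 200, 100
increments = rng.normal(scale=np.sqrt(1.0 / d), size=(n, d))
paths = np.cumsum(increments, axis=1)

# Empirical KLE = PCA: center the data and take the SVD of the data matrix.
mean = paths.mean(axis=0)
X = paths - mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rows of Vt estimate the KL basis functions; s**2 / (n - 1) are the
# eigenvalues of the sample covariance matrix.
eigvals = s**2 / (n - 1)

# Truncated KLE: mean plus the projection onto the leading k modes.
k = 5
coeffs = X @ Vt[:k].T            # KL coefficients
recon = mean + coeffs @ Vt[:k]   # rank-k reconstruction

rel_err = np.linalg.norm(recon - paths) / np.linalg.norm(paths)
print(rel_err)
```

    The Bayesian procedure replaces the single basis `Vt[:k]` with a posterior (matrix Bingham) distribution over such orthonormal bases, so that the truncation error above comes with an uncertainty estimate.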

  8. Random pure states: Quantifying bipartite entanglement beyond the linear statistics.

    PubMed

    Vivo, Pierpaolo; Pato, Mauricio P; Oshanin, Gleb

    2016-05-01

    We analyze the properties of entangled random pure states of a quantum system partitioned into two smaller subsystems of dimensions N and M. Framing the problem in terms of random matrices with a fixed-trace constraint, we establish, for arbitrary N ≤ M, a general relation between the n-point densities and the cross moments of the eigenvalues of the reduced density matrix, i.e., the so-called Schmidt eigenvalues, and the analogous functionals of the eigenvalues of the Wishart-Laguerre ensemble of random matrix theory. This allows us to derive explicit expressions for two-level densities, and also an exact expression for the variance of the von Neumann entropy at finite N, M. We then focus on the moments E[K^a] of the Schmidt number K, the reciprocal of the purity. This is a random variable supported on [1, N], which quantifies the number of degrees of freedom effectively contributing to the entanglement. We derive a wealth of analytical results for E[K^a] for N = 2 and 3 and arbitrary M, and also for square N = M systems, by spotting for the latter a connection with the probability P(x_min^GUE ≥ √(2N) ξ) that the smallest eigenvalue x_min^GUE of an N × N matrix belonging to the Gaussian unitary ensemble is larger than √(2N) ξ. As a by-product, we present an exact asymptotic expansion for this probability at finite N as ξ → ∞. Our results are corroborated by numerical simulations wherever possible, with excellent agreement.
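
    A minimal numerical sketch of the quantities involved, assuming the standard construction of a random pure state from a complex Gaussian matrix (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random pure state of a bipartite N x M system: a complex Gaussian matrix G
# whose normalized Gram matrix is the reduced density matrix of the N-dim part.
N, M = 3, 8
G = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
rho = G @ G.conj().T
rho /= np.trace(rho).real          # unit-trace reduced density matrix

# Schmidt eigenvalues, purity, and the Schmidt number K = 1 / purity.
schmidt = np.linalg.eigvalsh(rho)
purity = np.sum(schmidt**2)
K = 1.0 / purity

print(K)
```

    K lies in [1, N] and measures how many degrees of freedom effectively contribute to the entanglement; averaging K**a over many such draws estimates the moments E[K^a] studied in the paper.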

  9. Inland diatoms from the McMurdo Dry Valleys and James Ross Island, Antarctica

    USGS Publications Warehouse

    Esposito, R.M.M.; Spaulding, S.A.; McKnight, Diane M.; Van De Vijver, B.; Kopalova, K.; Lubinski, D.; Hall, B.; Whittaker, T.

    2008-01-01

    Diatom taxa present in the inland streams and lakes of the McMurdo Dry Valleys and James Ross Island, Antarctica, are presented in this paper. A total of nine taxa are illustrated, with descriptions of four new species (Luticola austroatlantica sp. nov., Luticola dolia sp. nov., Luticola laeta sp. nov., Muelleria supra sp. nov.). In the perennially ice-covered lakes of the McMurdo Dry Valleys, diatoms are confined to benthic mats within the photic zone. In streams, diatoms are attached to benthic surfaces and within the microbial mat matrix. One species, L. austroatlantica, is found on James Ross Island, of the southern Atlantic archipelago, and the McMurdo Dry Valleys. The McMurdo Dry Valley populations are at the lower range of the size spectrum for the species. Streams flow for 6-10 weeks during the austral summer, when temperatures and solar radiation allow glacial ice to melt. The diatom flora of the region is characterized by species assemblages favored under harsh conditions, with naviculoid taxa as the dominant group and several major diatom groups conspicuously absent. © 2008 NRC.

  10. Curvilinear immersed-boundary method for simulating unsteady flows in shallow natural streams with arbitrarily complex obstacles

    NASA Astrophysics Data System (ADS)

    Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis

    2008-11-01

    Unsteady 3D simulation of flows in natural streams is a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple nature- and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large aspect ratio computational domains and grids such as those encountered in long and shallow open channels. We show that the matrix-free Newton-Krylov method for solving the momentum equations coupled with an algebraic multigrid method with incomplete LU preconditioner for solving the Poisson equation yields a robust and efficient procedure for obtaining time-accurate solutions in such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long and shallow meandering stream with multiple hydraulic structures.
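
    The matrix-free Newton-Krylov idea can be illustrated on a small model problem: only residual evaluations are supplied, and the Jacobian is never assembled. The sketch below uses SciPy's `newton_krylov` on a nonlinear Poisson equation (a generic stand-in, not the authors' flow solver):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Discretized -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0, as a small
# stand-in for the implicit momentum equations: the solver sees only this
# residual function, never an explicit Jacobian matrix.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    # Second-difference stencil, scaled by h**2 to keep the residual
    # well conditioned for the Krylov inner iterations.
    r = 2 * u
    r[:-1] -= u[1:]
    r[1:] -= u[:-1]
    return r + h * h * (u**3 - 1.0)

u = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
print(np.max(np.abs(residual(u))))
```

    Matrix-free Newton-Krylov is attractive on the large, high-aspect-ratio grids described above precisely because forming and storing the Jacobian would be prohibitive.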

  11. The concept of entropy in landscape evolution

    USGS Publications Warehouse

    Leopold, Luna Bergere; Langbein, Walter Basil

    1962-01-01

    The concept of entropy is expressed in terms of probability of various states. Entropy treats of the distribution of energy. The principle is introduced that the most probable condition exists when energy in a river system is as uniformly distributed as may be permitted by physical constraints. From these general considerations equations for the longitudinal profiles of rivers are derived that are mathematically comparable to those observed in the field. The most probable river profiles approach the condition in which the downstream rate of production of entropy per unit mass is constant. Hydraulic equations are insufficient to determine the velocity, depths, and slopes of rivers that are themselves authors of their own hydraulic geometries. A solution becomes possible by introducing the concept that the distribution of energy tends toward the most probable. This solution leads to a theoretical definition of the hydraulic geometry of river channels that agrees closely with field observations. The most probable state for certain physical systems can also be illustrated by random-walk models. Average longitudinal profiles and drainage networks were so derived and these have the properties implied by the theory. The drainage networks derived from random walks have some of the principal properties demonstrated by the Horton analysis; specifically, the logarithms of stream length and stream numbers are proportional to stream order.
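
    A toy version of such a random-walk profile model can be simulated directly. The drop rule below (elevation falls by one unit with probability proportional to the remaining elevation) is an assumed simplification for illustration, not Leopold and Langbein's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble of random-walk longitudinal profiles: at each unit step downstream,
# elevation drops by one unit with probability h / h0, so the expected profile
# decays roughly exponentially, as in the entropy-based derivation.
n_walks, n_steps, h0 = 2000, 200, 50

profiles = np.zeros((n_walks, n_steps + 1))
profiles[:, 0] = h0
for t in range(n_steps):
    h = profiles[:, t]
    drop = rng.random(n_walks) < h / h0    # drop more often when elevation is high
    profiles[:, t + 1] = np.maximum(h - drop, 0.0)

mean_profile = profiles.mean(axis=0)
print(mean_profile[0], mean_profile[-1])
```

    Averaging many walks yields the smooth concave-up longitudinal profile that the paper compares with field observations.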

  12. A Note on Parameters of Random Substitutions by γ-Diagonal Matrices

    NASA Astrophysics Data System (ADS)

    Kang, Ju-Sung

    Random substitution is a very useful and practical method for privacy-preserving schemes. In this paper we obtain the exact relationship between the estimation errors and the three parameters used in random substitutions, namely the privacy assurance metric γ, the total number n of data records, and the size N of the transition matrix. We also present simulations illustrating the theoretical result.
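
    One common parameterization consistent with the abstract is a transition matrix with γ on the diagonal and the remaining probability mass spread uniformly off-diagonal; the sketch below (an assumed form for illustration, possibly differing from the paper's exact construction) also shows how inverting the matrix recovers unbiased estimates of the original counts:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gamma-diagonal N x N transition matrix: keep a record's value with
# probability gamma, otherwise replace it uniformly by one of the other values.
N, gamma, n_records = 4, 0.7, 10_000
P = np.full((N, N), (1 - gamma) / (N - 1))
np.fill_diagonal(P, gamma)

# Randomly substitute n records drawn from categories 0..N-1.
original = rng.integers(0, N, size=n_records)
perturbed = np.array([rng.choice(N, p=P[x]) for x in original])

# Estimate the original category counts from the perturbed histogram by
# inverting the transition matrix (E[observed] = P.T @ true_counts).
observed = np.bincount(perturbed, minlength=N)
estimated = np.linalg.solve(P.T, observed)

print(np.bincount(original, minlength=N), np.round(estimated).astype(int))
```

    The estimation error of this inversion is exactly what the paper relates to γ, n, and N: smaller γ (stronger privacy) or larger N amplifies the noise in `estimated`.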

  13. Reconstitution of in vivo macrophage-tumor cell pairing and streaming motility on one-dimensional micro-patterned substrates

    PubMed Central

    Sharma, Ved P.; Beaty, Brian T.; Patsialou, Antonia; Liu, Huiping; Clarke, Michael; Cox, Dianne; Condeelis, John S.; Eddy, Robert J.

    2014-01-01

    In mammary tumors, intravital imaging techniques have uncovered an essential role for macrophages during tumor cell invasion and metastasis mediated by an epidermal growth factor (EGF)/colony stimulating factor-1 (CSF-1) paracrine loop. It was previously demonstrated that mammary tumors in mice derived from rat carcinoma cells (MTLn3) exhibited high velocity migration on extracellular matrix (ECM) fibers. These cells form paracrine loop-dependent linear assemblies of alternating host macrophages and tumor cells known as “streams.” Here, we confirm by intravital imaging that similar streams form in close association with ECM fibers in a highly metastatic patient-derived orthotopic mammary tumor (TN1). To understand the in vivo cell motility behaviors observed in streams, an in vitro model of fibrillar tumor ECM utilizing adhesive 1D micropatterned substrates was developed. MTLn3 cells on 1D fibronectin or type I collagen substrates migrated with higher velocity than on 2D substrates and displayed enhanced lamellipodial protrusion and increased motility upon local interaction and pairing with bone marrow-derived macrophages (BMMs). Inhibitors of EGF or CSF-1 signaling disrupted this interaction and reduced tumor cell velocity and protrusion, validating the requirement for an intact paracrine loop. Both TN1 and MTLn3 cells in the presence of BMMs were capable of co-assembling into linear arrays of alternating tumor cells and BMMs that resembled streams in vivo, suggesting the stream assembly is cell autonomous and can be reconstituted on 1D substrates. Our results validate the use of 1D micropatterned substrates as a simple and defined approach to study fibrillar ECM-dependent cell pairing, migration and relay chemotaxis as a complementary tool to intravital imaging. PMID:24634804

  14. Method and apparatus for fabricating a composite structure consisting of a filamentary material in a metal matrix

    DOEpatents

    Banker, J.G.; Anderson, R.C.

    1975-10-21

    A method and apparatus are provided for preparing a composite structure consisting of filamentary material within a metal matrix. The method is practiced by the steps of confining the metal for forming the matrix in a first chamber, heating the confined metal to a temperature adequate to effect melting thereof, introducing a stream of inert gas into the chamber for pressurizing the atmosphere in the chamber to a pressure greater than atmospheric pressure, confining the filamentary material in a second chamber, heating the confined filamentary material to a temperature less than the melting temperature of the metal, evacuating the second chamber to provide an atmosphere therein at a pressure, placing the second chamber in registry with the first chamber to provide for the forced flow of the molten metal into the second chamber to effect infiltration of the filamentary material with the molten metal, and thereafter cooling the metal infiltrated-filamentary material to form said composite structure.

  15. Storm water runoff concentration matrix for urban areas.

    PubMed

    Göbel, P; Dierkes, C; Coldewey, W G

    2007-04-01

    The infrastructure (roads, sidewalk, commercial and residential structures) added during the land development and urbanisation process is designed to collect precipitation and convey it out of the watershed, typically in existing surface water channels, such as streams and rivers. The quality of surface water, seepage water and ground water is influenced by pollutants that collect on impervious surfaces and that are carried by urban storm water runoff. Heavy metals, e.g. lead (Pb), zinc (Zn), copper (Cu), cadmium (Cd), polycyclic aromatic hydrocarbons (PAH), mineral oil hydrocarbons (MOH) and readily soluble salts in runoff, contribute to the degradation of water. An intensive literature search on the distribution and concentration of the surface-dependent runoff water has been compiled. Concentration variations of several pollutants derived from different surfaces have been averaged. More than 300 references providing about 1300 data for different pollutants culminate in a representative concentration matrix consisting of medians and extreme values. This matrix can be applied to long-term valuations and numerical modelling of storm water treatment facilities.
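
    The median/extreme-value concentration matrix can be pictured with a small synthetic aggregation; the surface types, pollutants, and values below are made up for illustration, not taken from the compiled literature data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy aggregation of literature runoff data into a concentration matrix of
# (min, median, max) per surface type and pollutant, mirroring the
# median/extreme-value matrix described above. All values are synthetic.
surfaces = ["roof", "road", "parking"]
pollutants = ["Pb", "Zn", "Cu"]

# Simulated literature reports: many concentration values per combination.
reports = {(s, p): rng.lognormal(mean=0.0, sigma=1.0, size=40)
           for s in surfaces for p in pollutants}

matrix = {key: (np.min(v), np.median(v), np.max(v))
          for key, v in reports.items()}

for (s, p), (lo, med, hi) in sorted(matrix.items()):
    print(f"{s:8s} {p}: min={lo:.2f} median={med:.2f} max={hi:.2f}")
```

    Using the median together with extreme values, rather than a single mean, is what lets such a matrix feed both long-term valuations and worst-case modelling of storm water treatment facilities.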

  16. Palladium and platinum-based nanoparticle functional sensor layers for selective H2 sensing

    DOEpatents

    Ohodnicki, Jr., Paul R.; Baltrus, John P.; Brown, Thomas D.

    2017-07-04

    The disclosure relates to a plasmon resonance-based method for H2 sensing in a gas stream utilizing a hydrogen sensing material. The hydrogen sensing material comprises Pd-based or Pt-based nanoparticles having an average nanoparticle diameter of less than about 100 nanometers dispersed in an inert matrix having a bandgap greater than or equal to 5 eV and an oxygen ion conductivity less than approximately 10^-7 S/cm at a temperature of 700 °C. Exemplary inert matrix materials include SiO2, Al2O3, and Si3N4, as well as modifications to the effective refractive indices through combinations and/or doping of such materials. The hydrogen sensing material utilized in the method of this disclosure may be prepared using means known in the art for the production of nanoparticles dispersed within a supporting matrix, including sol-gel based wet chemistry techniques, impregnation techniques, implantation techniques, sputtering techniques, and others.

  17. Nanocomposite thin films for high temperature optical gas sensing of hydrogen

    DOEpatents

    Ohodnicki, Jr., Paul R.; Brown, Thomas D.

    2013-04-02

    The disclosure relates to a plasmon resonance-based method for H2 sensing in a gas stream at temperatures greater than about 500 °C utilizing a hydrogen sensing material. The hydrogen sensing material comprises gold nanoparticles having an average nanoparticle diameter of less than about 100 nanometers dispersed in an inert matrix having a bandgap greater than or equal to 5 eV and an oxygen ion conductivity less than approximately 10^-7 S/cm at a temperature of 700 °C. Exemplary inert matrix materials include SiO2, Al2O3, and Si3N4, as well as modifications to the effective refractive indices through combinations and/or doping of such materials. At high temperatures, a blue shift of the plasmon resonance optical absorption peak indicates the presence of H2. The method disclosed offers significant advantages over the active and reducible matrix materials typically utilized, such as yttria-stabilized zirconia (YSZ) or TiO2.

  18. INTEGRATING PROBABILISTIC AND FIXED-SITE MONITORING FOR ROBUST WATER QUALITY ASSESSMENTS

    EPA Science Inventory

    Determining the extent of water-quality degradation, controlling nonpoint sources, and defining allowable amounts of contaminants are important water-quality issues defined in the Clean Water Act that require new monitoring data. Probabilistic, randomized stream water-quality mon...

  19. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. To date, papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the exact problem is computationally intractable, an alternating-minimization method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
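
    Mutual coherence, the quantity being minimized, is straightforward to compute; the sketch below also evaluates the Welch lower bound that an equiangular tight frame attains, which is why ETF-based designs can beat a random projection matrix (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def mutual_coherence(phi, psi):
    """Largest absolute inner product between distinct normalized columns of
    the effective dictionary D = phi @ psi (off-diagonal Gram entries)."""
    D = phi @ psi
    D = D / np.linalg.norm(D, axis=0)      # normalize columns
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Random Gaussian projection (m x n) and an orthonormal sparsifying basis.
m, n = 20, 64
phi = rng.normal(size=(m, n))
psi = np.linalg.qr(rng.normal(size=(n, n)))[0]

mu = mutual_coherence(phi, psi)
# Welch lower bound on the coherence of n unit-norm vectors in m dimensions;
# only an ETF attains it with equality.
welch = np.sqrt((n - m) / (m * (n - 1)))
print(mu, welch)
```

    The gap between `mu` and `welch` is the room the alternating-minimization design exploits: shaping `phi` so that the Gram matrix of the effective dictionary approaches that of an ETF.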

  20. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
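
    The stability of the semicircle law under dilution is easy to probe numerically; a minimal sketch, where the normalization by sqrt(p·N) is the standard choice (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Dilute Wigner matrix: symmetric N x N with +-1 entries, each kept with
# probability p and zeroed otherwise. After normalizing by sqrt(p*N), the
# eigenvalue density should approach the semicircle on [-2, 2].
N, p = 1000, 0.1
A = rng.choice([-1.0, 1.0], size=(N, N))
mask = rng.random((N, N)) < p
A = np.triu(A * mask, 1)
A = A + A.T                      # symmetric, zero diagonal
A /= np.sqrt(p * N)

eigs = np.linalg.eigvalsh(A)

# The semicircle of radius 2 has second moment exactly 1.
second_moment = np.mean(eigs**2)
print(eigs.min(), eigs.max(), second_moment)
```

    Despite 90% of the entries being removed, the spectrum stays close to the semicircle, consistent with the paper's claim that dilution eliminates weak correlations between entries rather than destroying the Wigner law.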

  1. Design and methods of the Midwest Stream Quality Assessment (MSQA), 2013

    USGS Publications Warehouse

    Garrett, Jessica D.; Frey, Jeffrey W.; Van Metre, Peter C.; Journey, Celeste A.; Nakagaki, Naomi; Button, Daniel T.; Nowell, Lisa H.

    2017-10-18

    During 2013, the U.S. Geological Survey (USGS) National Water-Quality Assessment Project (NAWQA), in collaboration with the USGS Columbia Environmental Research Center, the U.S. Environmental Protection Agency (EPA) National Rivers and Streams Assessment (NRSA), and the EPA Office of Pesticide Programs assessed stream quality across the Midwestern United States. This Midwest Stream Quality Assessment (MSQA) simultaneously characterized watershed and stream-reach water-quality stressors along with instream biological conditions, to better understand regional stressor-effects relations. The MSQA design focused on effects from the widespread agriculture in the region and urban development because of their importance as ecological stressors of particular concern to Midwest region resource managers. A combined random stratified selection and a targeted selection based on land-use data were used to identify and select sites representing gradients in agricultural intensity across the region. During a 14-week period from May through August 2013, 100 sites were selected and sampled 12 times for contaminants, nutrients, and sediment. This 14-week water-quality “index” period culminated with an ecological survey of habitat, periphyton, benthic macroinvertebrates, and fish at all sites. Sediment was collected during the ecological survey for analysis of sediment chemistry and toxicity testing. Of the 100 sites, 50 were selected for the MSQA random stratified group from 154 NRSA sites planned for the region, and the other 50 MSQA sites were selected as targeted sites to more evenly cover agricultural and urban stressor gradients in the study area. Of the 50 targeted sites, 12 were in urbanized watersheds and 21 represented “good” biological conditions or “least disturbed” conditions. 
The remaining 17 targeted sites were selected to improve coverage of the agricultural intensity gradient or because historical data collection could provide temporal context for the study. This report provides a detailed description of the MSQA study components, including surveys of ecological conditions, routine water sampling, deployment of passive polar organic compound integrative samplers, and stream sediment sampling at all sites. Component studies completed to provide finer-scale temporal data or more extensive analysis at selected sites included continuous water-quality monitoring, daily pesticide sampling, laboratory and in-stream water toxicity testing efforts, and deployment of passive suspended-sediment samplers.
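
    The combined stratified-plus-targeted selection can be sketched on synthetic site data; all attributes, stratum counts, and the simplified targeting rule below are illustrative, not the actual MSQA sampling frame:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy site-selection design: a random stratified draw over an agricultural
# intensity gradient, plus a targeted draw filling out the urban end.
n_candidates = 500
ag_fraction = rng.random(n_candidates)                  # agricultural land use
urban_fraction = rng.random(n_candidates) * (1 - ag_fraction)

# Stratified random component: split the agricultural gradient into 5 strata
# and draw 10 sites per stratum (50 total).
strata = np.digitize(ag_fraction, np.linspace(0, 1, 6)[1:-1])
stratified = np.concatenate([
    rng.choice(np.where(strata == s)[0], size=10, replace=False)
    for s in range(5)
])

# Targeted component (simplified): the 12 most urbanized remaining sites.
remaining = np.setdiff1d(np.arange(n_candidates), stratified)
targeted = remaining[np.argsort(urban_fraction[remaining])[-12:]]

selected = np.concatenate([stratified, targeted])
print(len(stratified), len(targeted), len(np.unique(selected)))
```

    The stratified draw guarantees coverage across the agricultural gradient, while the targeted draw fills in stressor ranges (here, urbanization) that random sampling alone would leave sparse.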

  2. The Matrix Analogies Test: A Validity Study with the K-ABC.

    ERIC Educational Resources Information Center

    Smith, Douglas K.

    The Matrix Analogies Test-Expanded Form (MAT-EF) and Kaufman Assessment Battery for Children (K-ABC) were administered in counterbalanced order to two randomly selected samples of students in grades 2 through 5. The MAT-EF was recently developed to measure non-verbal reasoning. The samples included 26 non-handicapped second graders in a rural…

  3. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a double-random phase-encoding technique-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
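
    The double-random phase-encoding step itself can be sketched in a few lines of NumPy: a generic DRPE round trip on a real-valued array standing in for the transformed text (not the authors' full text-hiding pipeline; masks and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Double-random phase encoding: multiply by a random phase mask in the spatial
# domain, transform, multiply by a second mask in the Fourier domain, and
# transform back. The two masks together are the encryption key.
shape = (32, 32)
data = rng.random(shape)                         # stand-in for the text array
mask1 = np.exp(2j * np.pi * rng.random(shape))   # input-plane phase mask
mask2 = np.exp(2j * np.pi * rng.random(shape))   # Fourier-plane phase mask

encoded = np.fft.ifft2(np.fft.fft2(data * mask1) * mask2)

# Decryption applies the conjugate masks in reverse order.
decoded = np.fft.ifft2(np.fft.fft2(encoded) * mask2.conj()) * mask1.conj()

print(np.max(np.abs(decoded - data)))  # round-trip recovery error
```

    Because both masks have unit modulus, the round trip is exact up to floating-point error; without the correct masks the encoded array is white-noise-like, which is what makes DRPE useful for hiding the bit stream of the secret text.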

  4. Improved cell for water-vapor electrolysis

    NASA Technical Reports Server (NTRS)

    Aylward, J. R.

    1981-01-01

    Continuous-flow electrolytic cells decompose water vapor in steam and room air into hydrogen and oxygen. Sintered iridium oxide catalytic anode coating yields dissociation rates hundredfold greater than those obtained using platinum black. Cell consists of two mirror-image cells, with dual cathode sandwiched between two anodes. Gas traverses serpentine channels within cell and is dissociated at anode. Oxygen mingles with gas stream, while hydrogen migrates through porous matrix and is liberated as gas at cathode.

  5. Thiacrown polymers for removal of mercury from waste streams

    DOEpatents

    Baumann, Theodore F.; Reynolds, John G.; Fox, Glenn A.

    2002-01-01

    Thiacrown polymers immobilized to a polystyrene-divinylbenzene matrix react with Hg.sup.2+ under a variety of conditions to efficiently and selectively remove Hg.sup.2+ ions from acidic aqueous solutions, even in the presence of a variety of other metal ions. The mercury can be recovered and the polymer regenerated. This mercury removal method has utility in the treatment of industrial wastewater, where a selective and cost-effective removal process is required.

  6. Thiacrown polymers for removal of mercury from waste streams

    DOEpatents

    Baumann, Theodore F.; Reynolds, John G.; Fox, Glenn A.

    2004-02-24

    Thiacrown polymers immobilized to a polystyrene-divinylbenzene matrix react with Hg.sup.2+ under a variety of conditions to efficiently and selectively remove Hg.sup.2+ ions from acidic aqueous solutions, even in the presence of a variety of other metal ions. The mercury can be recovered and the polymer regenerated. This mercury removal method has utility in the treatment of industrial wastewater, where a selective and cost-effective removal process is required.

  7. Detecting temporal trends in species assemblages with bootstrapping procedures and hierarchical models

    USGS Publications Warehouse

    Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.

    2010-01-01

    Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
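    The null model in the bootstrap step above can be sketched as multinomial resampling from pooled species proportions with the observed per-period totals (so sampling effort varies over time while the species abundance distribution stays stationary). The heterogeneity statistic below is a simplified stand-in, not the paper's exact test statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

def trend_stat(counts):
    # Heterogeneity statistic: mean absolute correlation of each species'
    # counts with time (a simplified stand-in for the paper's statistic).
    t = np.arange(counts.shape[1])
    vals = []
    for row in counts:
        vals.append(0.0 if row.std() == 0 else abs(np.corrcoef(row, t)[0, 1]))
    return float(np.mean(vals))

def bootstrap_p(counts, n_boot=999):
    # Null model: stationary species abundance distribution (pooled
    # proportions) sampled with the observed, temporally varying effort.
    props = counts.sum(axis=1) / counts.sum()
    totals = counts.sum(axis=0)
    obs = trend_stat(counts)
    null = [trend_stat(np.column_stack([rng.multinomial(n, props) for n in totals]))
            for _ in range(n_boot)]
    return (1 + sum(s >= obs for s in null)) / (n_boot + 1)

# Species (rows) x time-ordered sampling periods (columns)
counts = np.array([[2, 8, 32, 128],
                   [128, 32, 8, 2],
                   [40, 44, 38, 42]])
p_value = bootstrap_p(counts, n_boot=199)
```

    A small p-value indicates temporal trends more heterogeneous than expected under random sampling from a stationary assemblage.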

  8. CMV matrices in random matrix theory and integrable systems: a survey

    NASA Astrophysics Data System (ADS)

    Nenciu, Irina

    2006-07-01

    We present a survey of recent results concerning a remarkable class of unitary matrices, the CMV matrices. We are particularly interested in the role they play in the theory of random matrices and integrable systems. Throughout the paper we also emphasize the analogies and connections to Jacobi matrices.

  9. Self-balanced real-time photonic scheme for ultrafast random number generation

    NASA Astrophysics Data System (ADS)

    Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang

    2018-06-01

We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), balanced photo-detection is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefiting from ultrafast photo-detection, our method can efficiently eliminate the generation-rate bottleneck of the electronic ADCs required in nearly all available fast physical random number generators. A proof-of-principle experiment demonstrates that, using our approach, 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is limited by the bandwidth of the chaotic source. The method described has the potential to attain a real-time rate of 100 Gb/s.

  10. Impacts of acidification on macroinvertebrate communities in streams of the western Adirondack Mountains, New York, USA

    USGS Publications Warehouse

    Baldigo, Barry P.; Lawrence, G.B.; Bode, R.W.; Simonin, H.A.; Roy, K.M.; Smith, A.J.

    2009-01-01

Limited stream chemistry and macroinvertebrate data indicate that acidic deposition has adversely affected benthic macroinvertebrate assemblages in numerous headwater streams of the western Adirondack Mountains of New York. No studies, however, have quantified the effects that acidic deposition and acidification may have had on resident fish and macroinvertebrate communities in streams of the region. As part of the Western Adirondack Stream Survey, water chemistry from 200 streams was sampled five times and macroinvertebrate communities were surveyed once from a subset of 36 streams in the Oswegatchie and Black River Basins during 2003-2005 and evaluated to: (a) document the effects that chronic and episodic acidification have on macroinvertebrate communities across the region, (b) define the relations between acidification and the health of affected species assemblages, and (c) assess indicators and thresholds of biological effects. Inorganic Al in 66% of the 200 streams periodically reached concentrations toxic to acid-tolerant biota. A new acid biological assessment profile (acidBAP) index for macroinvertebrates, derived from percent mayfly richness and percent acid-tolerant taxa, was strongly correlated (R2 values ranged from 0.58 to 0.76) with concentrations of inorganic Al, pH, ANC, and base cation surplus (BCS). The BCS and acidBAP index helped remove confounding influences of natural organic acidity and redefine acidification-effect thresholds and biological-impact categories. AcidBAP scores indicated that macroinvertebrate communities were moderately or severely impacted by acidification in 44-56% of the 36 study streams; however, additional data from randomly selected streams are needed to accurately estimate the true percentage of streams in which macroinvertebrate communities are adversely affected in this or other regions. 
As biologically relevant measures of impacts caused by acidification, both BCS and acidBAP may be useful indicators of ecosystem effects and potential recovery at the local and regional scale.

  11. The cosmic microwave background radiation power spectrum as a random bit generator for symmetric- and asymmetric-key cryptography.

    PubMed

    Lee, Jeffrey S; Cleaver, Gerald B

    2017-10-01

In this note, the Cosmic Microwave Background (CMB) Radiation is shown to be capable of functioning as a Random Bit Generator, and constitutes an effectively infinite supply of truly random one-time pad values of arbitrary length. It is further argued that the CMB power spectrum potentially conforms to the FIPS 140-2 standard. Additionally, its applicability to the generation of an (n × n) random key matrix for a Vernam cipher is established.
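    A Vernam cipher with a random key matrix is a byte-wise XOR against a one-time key at least as long as the message. The sketch below uses Python's `secrets` module as a stand-in for CMB-derived random bits; the matrix size and message are illustrative:

```python
import secrets
import numpy as np

def vernam_xor(data: bytes, key: np.ndarray) -> np.ndarray:
    # XOR each byte of data against one byte of the flattened key matrix.
    # The same call both encrypts and decrypts; the key must never be reused.
    flat_key = key.flatten()[:len(data)]
    return np.frombuffer(data, dtype=np.uint8) ^ flat_key

n = 8
# (n x n) random key matrix; in the paper's proposal these bytes would be
# derived from the CMB power spectrum rather than an OS entropy source.
key = np.frombuffer(secrets.token_bytes(n * n), dtype=np.uint8).reshape(n, n)

msg = b"one-time pad demo"
cipher = vernam_xor(msg, key)
plain = vernam_xor(cipher.tobytes(), key).tobytes()
```

    Because XOR is its own inverse, decrypting is the same operation with the same key, which is why key reuse immediately leaks the XOR of two plaintexts.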

  12. Ubiquity and persistence of Escherichia coli in a midwestern coastal stream

    USGS Publications Warehouse

    Byappanahalli, Muruleedhara N.; Fowler, Melanie; Shively, Dawn; Whitman, Richard

    2003-01-01

    Dunes Creek, a small Lake Michigan coastal stream that drains sandy aquifers and wetlands of Indiana Dunes, has chronically elevated Escherichia coli levels along the bathing beach near its outfall. This study sought to understand the sources of E. coli in Dunes Creek's central branch. A systematic survey of random and fixed sampling points of water and sediment was conducted over 3 years. E. coli concentrations in Dunes Creek and beach water were significantly correlated. Weekly monitoring at 14 stations during 1999 and 2000 indicated chronic loading of E. coli throughout the stream. Significant correlations between E. coli numbers in stream water and stream sediment, submerged sediment and margin, and margin and 1 m from shore were found. Median E. coli counts were highest in stream sediments, followed by bank sediments, sediments along spring margins, stream water, and isolated pools; in forest soils, E. coli counts were more variable and relatively lower. Sediment moisture was significantly correlated with E. coli counts. Direct fecal input inadequately explains the widespread and consistent occurrence of E. coli in the Dunes Creek watershed; long-term survival or multiplication or both seem likely. The authors conclude that (i) E. coli is ubiquitous and persistent throughout the Dunes Creek basin, (ii) E. coli occurrence and distribution in riparian sediments help account for the continuous loading of the bacteria in Dunes Creek, and (iii) ditching of the stream, increased drainage, and subsequent loss of wetlands may account for the chronically high E. coli levels observed.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrester, Peter J., E-mail: p.forrester@ms.unimelb.edu.au; Thompson, Colin J.

    The Golden-Thompson inequality, Tr(e^(A+B)) ⩽ Tr(e^A e^B) for A, B Hermitian matrices, appeared in independent works by Golden and Thompson published in 1965. Both of these were motivated by considerations in statistical mechanics. In recent years the Golden-Thompson inequality has found applications to random matrix theory. In this article, we detail some historical aspects relating to Thompson's work, giving in particular a hitherto unpublished proof due to Dyson, and correspondence with Pólya. We show too how the 2 × 2 case relates to hyperbolic geometry, and how the original inequality holds true with the trace operation replaced by any unitarily invariant norm. In relation to the random matrix applications, we review its use in the derivation of concentration-type lemmas for sums of random matrices due to Ahlswede-Winter, and Oliveira, generalizing various classical results.
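    For reference, the inequality at the center of the article in standard notation (this restatement of how it enters matrix concentration is a textbook summary, not quoted from the article):

```latex
\operatorname{Tr} e^{A+B} \;\le\; \operatorname{Tr}\!\left(e^{A} e^{B}\right),
\qquad A,\, B \ \text{Hermitian}.
```

    In the Ahlswede-Winter argument, applying this bound repeatedly to $\mathbb{E}\,\operatorname{Tr}\exp\!\big(\theta \sum_i X_i\big)$ splits the trace of the exponential of a sum of independent random Hermitian matrices into a product of per-term matrix moment generating functions, from which Chernoff-style tail bounds on $\lambda_{\max}\!\big(\sum_i X_i\big)$ follow.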

  14. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
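    The induced-measure construction described above (partially tracing a random bipartite pure state) is straightforward to sample numerically via the Ginibre construction; the dimensions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_mixed_state(d, d_env):
    # Partial trace of a random bipartite pure state via the standard
    # Ginibre construction: rho = G G^dagger / Tr(G G^dagger),
    # with G a (d x d_env) matrix of iid complex Gaussians.
    g = rng.normal(size=(d, d_env)) + 1j * rng.normal(size=(d, d_env))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

d, d_env = 16, 16
delta = random_mixed_state(d, d_env) - random_mixed_state(d, d_env)
eigs = np.linalg.eigvalsh(delta)         # real spectrum of the Hermitian difference
trace_dist = 0.5 * np.abs(eigs).sum()    # trace distance between the two states
op_norm = np.abs(eigs).max()             # operator-norm distance
```

    The difference matrix is traceless by construction, so its eigenvalues sum to zero, and the distance measures discussed in the paper are simple functionals of this spectrum.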

  15. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.

  16. Money creation process in a random redistribution model

    NASA Astrophysics Data System (ADS)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
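    A minimal random-exchange-with-debt simulation, assuming unit transfers and a fixed per-agent debt limit (the parameters are illustrative, not the paper's model specification):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n_agents=200, steps=20000, debt_limit=5):
    # Random unit exchanges with debt: a randomly chosen payer gives one
    # unit to a randomly chosen receiver, going negative (borrowing)
    # down to -debt_limit when it has no money.
    m = np.zeros(n_agents, dtype=int)
    for _ in range(steps):
        i, j = rng.integers(n_agents, size=2)
        if i != j and m[i] > -debt_limit:
            m[i] -= 1
            m[j] += 1
    money = int(m[m > 0].sum())    # aggregate money held
    debt = int(-m[m < 0].sum())    # aggregate debt owed
    return money, debt

money, debt = simulate()
```

    Every transfer conserves the signed total, which starts at zero, so aggregate money always equals aggregate debt: money is "created" exactly when an agent with neither money nor debt goes into debt, consistent with the paper's conclusion.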

  17. Sources and preparation of data for assessing trends in concentrations of pesticides in streams of the United States, 1992–2010

    USGS Publications Warehouse

    Martin, Jeffrey D.; Eberle, Michael; Nakagaki, Naomi

    2011-01-01

    This report updates a previously published water-quality dataset of 44 commonly used pesticides and 8 pesticide degradates suitable for a national assessment of trends in pesticide concentrations in streams of the United States. Water-quality samples collected from January 1992 through September 2010 at stream-water sites of the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program and the National Stream Quality Accounting Network (NASQAN) were compiled, reviewed, selected, and prepared for trend analysis. The principal steps in data review for trend analysis were to (1) identify analytical schedule, (2) verify sample-level coding, (3) exclude inappropriate samples or results, (4) review pesticide detections per sample, (5) review high pesticide concentrations, and (6) review the spatial and temporal extent of NAWQA pesticide data and selection of analytical methods for trend analysis. The principal steps in data preparation for trend analysis were to (1) select stream-water sites for trend analysis, (2) round concentrations to a consistent level of precision for the concentration range, (3) identify routine reporting levels used to report nondetections unaffected by matrix interference, (4) reassign the concentration value for routine nondetections to the maximum value of the long-term method detection level (maxLT-MDL), (5) adjust concentrations to compensate for temporal changes in bias of recovery of the gas chromatography/mass spectrometry (GCMS) analytical method, and (6) identify samples considered inappropriate for trend analysis. Samples analyzed at the USGS National Water Quality Laboratory (NWQL) by the GCMS analytical method were the most extensive in time and space and, consequently, were selected for trend analysis. Stream-water sites with 3 or more water years of data with six or more samples per year were selected for pesticide trend analysis. 
The selection criteria described in the report produced a dataset of 21,988 pesticide samples at 212 stream-water sites. Only 21,144 pesticide samples, however, are considered appropriate for trend analysis.

  18. Random density matrices versus random evolution of open system

    NASA Astrophysics Data System (ADS)

    Pineda, Carlos; Seligman, Thomas H.

    2015-10-01

    We present and compare two families of ensembles of random density matrices. The first, static ensemble, is obtained by foliating an unbiased ensemble of density matrices. As criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble, is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system, and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion is started by recalling similar considerations for scattering theory. At the end, we comment on the scope of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.

  19. Free Vibration of Uncertain Unsymmetrically Laminated Beams

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Goyal, Vijay K.

    2001-01-01

    Monte Carlo Simulation and Stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations. The ply orientations may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set is generated containing possible random ply orientations. This data set is assumed to be symmetric. The stochastic-based finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo Simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.

  20. RT-MATRIX: Measuring Total Organic Carbon by Photocatalytic Oxidation of Volatile Organic Compounds

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Volatile organic compounds (VOCs) inevitably accumulate in enclosed habitats such as the International Space Station and the Crew Exploration Vehicle (CEV) as a result of human metabolism, material off-gassing, and leaking equipment. Some VOCs can negatively affect the quality of the crew's life, health, and performance; and consequently, the success of the mission. Air quality must be closely monitored to ensure a safe living and working environment. Currently, there is no reliable air quality monitoring system that meets NASA's stringent requirements for power, mass, volume, or performance. The ultimate objective of the project -- the development of a Real-Time, Miniaturized, Autonomous Total Risk Indicator System (RT-MATRIX) -- is to provide a portable, dual-function sensing system that simultaneously determines total organic carbon (TOC) and individual contaminants in air streams.

  1. Embedded random matrix ensembles from nuclear structure and their recent applications

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.; Chavda, N. D.

    Embedded random matrix ensembles generated by random interactions (of low body rank and usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from the nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles, with a mean field and a chaos-generating two-body interaction, generate, in three different stages, delocalization of wave functions in the Fock space of the mean-field basis states. The last stage corresponds to what one may call thermalization, and complex nuclei, as seen from many shell model calculations, lie in this region. After briefly describing these ensembles, we present their recent applications to nuclear structure: (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. We also briefly present applications beyond nuclear structure: (i) fidelity, decoherence, entanglement and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) semicircle to Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for Majorana fermions.

  2. Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices

    NASA Astrophysics Data System (ADS)

    Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.

    2017-08-01

    We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.

  3. Condition for invariant spectrum of an electromagnetic wave scattered from an anisotropic random medium.

    PubMed

    Li, Jia; Wu, Pinghui; Chang, Liping

    2015-08-24

    Within the accuracy of the first-order Born approximation, sufficient conditions are derived for the invariance of the spectrum of an electromagnetic wave generated by the scattering of an electromagnetic plane wave from an anisotropic random medium. We show that the following restrictions on the properties of the incident fields and the anisotropic medium must be simultaneously satisfied: 1) the elements of the dielectric susceptibility matrix of the medium must obey the scaling law; 2) the spectral components of the incident field are proportional to each other; 3) the second moments of the elements of the dielectric susceptibility matrix of the medium are inversely proportional to the frequency.

  4. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  5. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
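    Horn's procedure itself is straightforward to sketch: compare the observed correlation-matrix eigenvalues against a chosen quantile of eigenvalues computed from same-shape Gaussian noise. The sample sizes and the one-factor test data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def parallel_analysis(X, n_sim=200, q=95):
    # Retain components whose observed correlation-matrix eigenvalues exceed
    # the q-th percentile of eigenvalues from same-shape Gaussian noise.
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sim, p))
    for k in range(n_sim):
        R = rng.normal(size=(n, p))
        sims[k] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresh = np.percentile(sims, q, axis=0)
    return int(np.sum(obs > thresh)), obs, thresh

# One common component plus independent noise (illustrative sizes)
n, p = 300, 10
common = rng.normal(size=(n, 1))
X = common @ np.ones((1, p)) + rng.normal(size=(n, p))
n_comp, obs, thresh = parallel_analysis(X)
```

    The paper's point is that for the first component this comparison is equivalent to a Tracy-Widom test, while applying the same per-component thresholds to higher-order eigenvalues (as done here) is the practice the authors argue should be discouraged.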

  6. Thermal modelling of normal distributed nanoparticles through thickness in an inorganic material matrix

    NASA Astrophysics Data System (ADS)

    Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.

    2017-10-01

    Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion to reduce or even eliminate thermal bridges, which makes it possible to reach an industrially relevant process that balances the high material cost against the superior thermal insulation properties. Therefore, a methodology is required to measure and describe these nanoscale distributions within the inorganic matrix material. These distributions are either random or normally distributed through the thickness of the matrix material. We show that the influence of these distributions is meaningful and modifies the thermal conductivity of the building material. Hence, this strategy will generate a thermal model allowing prediction of the thermal behavior of the nanoscale particles and their distributions. This thermal model will be validated by the hot-wire technique. For the moment, a good correlation is found between the numerical results and experimental data for a randomly distributed form of nanoparticles in all directions.

  7. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue.

    PubMed

    Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F

    2011-05-20

    Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.

  8. Comprehensive T-Matrix Reference Database: A 2007-2009 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas

    2010-01-01

    The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.

  9. Least-squares analysis of the Mueller matrix.

    PubMed

    Reimer, Michael; Yevick, David

    2006-08-15

    In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
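    The least-squares estimate described can be sketched as an ordinary multivariate regression of the output Stokes vectors at one frequency on those at the other. The noise level, sample count, and use of an unconstrained 4 x 4 matrix below are assumptions for illustration, not the authors' exact formalism:

```python
import numpy as np

rng = np.random.default_rng(5)

def estimate_mueller(S1, S2):
    # Least-squares estimate of M in S2 = M @ S1, where the columns of
    # S1 and S2 are 4-component Stokes vectors measured at the two
    # optical frequencies for the same set of random input polarizations.
    # Solve S1.T @ M.T = S2.T in the least-squares sense.
    Mt, *_ = np.linalg.lstsq(S1.T, S2.T, rcond=None)
    return Mt.T

M_true = rng.normal(size=(4, 4))                      # hypothetical Mueller matrix
S1 = rng.normal(size=(4, 50))                         # 50 random input states
S2 = M_true @ S1 + 0.01 * rng.normal(size=(4, 50))    # noisy measurements
M_est = estimate_mueller(S1, S2)
```

    Averaging over many random input polarizations is what makes the simple regression robust to measurement noise, which matches the abstract's observation that a relatively simple formalism is competitive with more involved techniques.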

  10. Finding the Stable Structures of N1-xWx with an Ab Initio High-Throughput Approach

    DTIC Science & Technology

    2015-05-26

    W. These include borides, carbides, oxides, and other nitrides. We also invented many structures to mimic the random pattern of vacancies on both the...structures. These include nitrides, oxides, borides, and carbides, as well as supercells of standard structures with atoms removed to mimic the random patter...1930). [15] R. Kiessling and Y. H. Liu, Thermal stability of the chromium, iron, and tungsten borides in streaming ammonia and the existence of a new

  11. SITE CHARACTERIZATION USING BIRD SPECIES COMPOSITION IN EASTERN OREGON, USA

    EPA Science Inventory

    We conducted riparian bird surveys at 25 randomly selected stream reaches in the John Day River Basin of eastern Oregon as part of the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program (EMAP). At each reach along a kilometer-length transect, ...

  12. Value stream mapping of the Pap test processing procedure: a lean approach to improve quality and efficiency.

    PubMed

    Michael, Claire W; Naik, Kalyani; McVicker, Michael

    2013-05-01

    We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage; four were detected later during specimen processing, but one reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors went undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processing and minimized batch sizes, with staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.

  13. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  14. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  15. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    PubMed Central

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram (undersampling fraction δ = n/N, sparsity fraction ρ = k/n), convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
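    The recovery experiment behind such phase-diagram studies can be illustrated with a small sketch. Note that the paper uses convex optimization; the snippet below substitutes orthogonal matching pursuit, a simpler greedy solver, purely to keep the example short, and the dimensions (N = 256, n = 128, k = 5) are hypothetical values well inside the success region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions well inside the success region of the phase
# diagram: delta = n/N = 0.5, rho = k/n ~ 0.04
N, n, k = 256, 128, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)    # Gaussian sensing matrix

x0 = np.zeros(N)                                # k-sparse ground truth
support = rng.choice(N, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k)

y = A @ x0                                      # undersampled measurements

# Orthogonal matching pursuit: a greedy stand-in for the convex
# optimization used in the paper, chosen only for brevity.
residual, S = y.copy(), []
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ residual))))   # best-matching column
    coef = np.linalg.lstsq(A[:, S], y, rcond=None)[0]  # refit on support
    residual = y - A[:, S] @ coef

x_hat = np.zeros(N)
x_hat[S] = coef
print(np.linalg.norm(x_hat - x0))
```

    Sweeping (δ, ρ) over a grid and recording the success rate of such trials is how an empirical phase-transition curve is traced out.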

  16. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of the proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
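    A measurement matrix of the kind described, built from the Whittaker-Shannon interpolation formula, maps the unknown samples on the fine equivalent-time grid to the signal values at the random sample instants. The sketch below is a minimal illustration with hypothetical dimensions, not the authors' hardware implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Equivalent-time grid: N unknown samples at a 40 GHz equivalent rate
# (hypothetical dimensions, chosen only for illustration)
T = 1.0 / 40e9
N = 64
grid = np.arange(N)

# Random sample instants collected over several slow acquisition runs
t = np.sort(rng.uniform(0.0, N * T, 48))

# Measurement matrix from the Whittaker-Shannon interpolation formula:
#   x(t_i) = sum_n x[n] * sinc((t_i - n*T) / T)
Phi = np.sinc((t[:, None] - grid[None, :] * T) / T)    # shape (48, 64)

# A two-tone test signal on the equivalent grid, measured at the
# random instants through the matrix
x = np.cos(2 * np.pi * 3 * grid / N) + 0.5 * np.cos(2 * np.pi * 7 * grid / N)
y = Phi @ x
print(Phi.shape, y.shape)
```

    In the paper's scheme, one such block is built per acquisition run and the blocks are stacked into the overall equivalent measurement matrix; a sparse solver then recovers x from y.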

  17. Fatigue loading history reconstruction based on the rain-flow technique

    NASA Technical Reports Server (NTRS)

    Khosrovaneh, A. K.; Dowling, N. E.

    1989-01-01

    Methods are considered for reducing a non-random fatigue loading history to a concise description and then reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two of the methods are based on a two-dimensional rain-flow matrix, and the third on a three-dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history, so the reconstructed time history is expected to produce a fatigue life similar to that of the original. The procedures described allow lengthy loading histories to be stored in compact form.
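    The idea of storing a history as a matrix of level-pair counts and regenerating a compatible time history can be illustrated with a simplified sketch. Note the simplification: a true rain-flow matrix pairs peaks and valleys by the rain-flow rule, whereas the sketch below tallies successive reversals into a from-to (Markov-style) matrix; the reconstruction uses Hierholzer's algorithm so that every counted transition is reproduced exactly once.

```python
from collections import Counter

# Discretized sequence of reversals (peaks and valleys) at levels 0..7;
# a hypothetical short history for illustration
history = [0, 5, 2, 6, 1, 4, 3, 7, 0, 6, 2, 5, 1, 7, 0]

# From-to matrix: counts of half-cycles between pairs of levels
# (a simplification of a rain-flow matrix, as noted above)
matrix = Counter(zip(history, history[1:]))

def reconstruct(matrix, start):
    """Rebuild a history whose from-to matrix equals `matrix` exactly.

    The original history is an Eulerian path in the transition multigraph,
    so Hierholzer's algorithm can consume every counted transition once.
    """
    out_edges = {}
    for (a, b), c in matrix.items():
        out_edges.setdefault(a, []).extend([b] * c)
    stack, path = [start], []
    while stack:
        v = stack[-1]
        if out_edges.get(v):
            stack.append(out_edges[v].pop())   # follow an unused transition
        else:
            path.append(stack.pop())           # dead end: emit and backtrack
    return path[::-1]

rebuilt = reconstruct(matrix, history[0])
print(Counter(zip(rebuilt, rebuilt[1:])) == matrix)
```

    As in the paper's methods, the rebuilt history is generally not identical to the original, but its counting matrix is, which is the property that matters for the predicted fatigue life.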

  18. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
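    The overdispersion being corrected for can be demonstrated in a few lines: for data whose true covariance is the identity (every population eigenvalue equal to 1), the sample eigenvalues spread toward the Marchenko-Pastur bulk edges. The dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 50                       # individuals, traits (hypothetical)
gamma = p / n

# Data whose true covariance is the identity: every population
# eigenvalue equals 1 exactly
X = rng.standard_normal((n, p))
S = X.T @ X / n                      # sample covariance matrix
eig = np.sort(np.linalg.eigvalsh(S))[::-1]

# Sampling error overdisperses the spectrum: eigenvalues spread out
# toward the Marchenko-Pastur bulk edges rather than sitting at 1
edge_hi = (1 + np.sqrt(gamma)) ** 2
edge_lo = (1 - np.sqrt(gamma)) ** 2
print(eig[0], eig[-1], (edge_lo, edge_hi))
```

    The fluctuation of the largest sample eigenvalue about the upper edge is what the TW distribution describes, after the appropriate scaling and centering discussed in the paper.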

  19. Random matrix approach to group correlations in development country financial market

    NASA Astrophysics Data System (ADS)

    Qohar, Ulin Nuha Abdul; Lim, Kyuseong; Kim, Soo Yong; Liong, The Houw; Purqon, Acep

    2015-12-01

    Financial markets are a borderless economic activity; everyone in the world has the right to participate in stock transactions. The movement of stocks is of interest across many disciplines, with economists and mathematicians alike trying to explain and predict it. Econophysics is a discipline that studies economic behavior using methods from physics. Because stocks tend to move unpredictably, they can be treated probabilistically, much like particles. Random Matrix Theory is one method used to analyze such probabilistic systems; here it is applied to the correlation matrix of a collection of stocks in developing-country markets, with the characteristics of developed-country stock markets used as a benchmark for comparison. The results show that a market-wide effect is absent in the Philippine market and weak in the Indonesian market. In contrast, a developed market (the US) exhibits a strong market-wide effect.
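    The market-wide-effect test used in such studies compares the leading eigenvalue of the empirical correlation matrix with the Marchenko-Pastur bound for purely random returns. A minimal sketch with simulated (not real) returns and a hypothetical one-factor market mode:

```python
import numpy as np

rng = np.random.default_rng(4)
T_obs, N = 1000, 50          # trading days, stocks (simulated, hypothetical)
beta = 0.4                   # strength of the common market factor

# One-factor model: every stock loads on the same market mode
market = rng.standard_normal(T_obs)
returns = beta * market[:, None] + rng.standard_normal((T_obs, N))

R = np.corrcoef(returns, rowvar=False)          # correlation matrix
eig = np.sort(np.linalg.eigvalsh(R))[::-1]

# Marchenko-Pastur upper bound for purely random returns, q = N / T
q = N / T_obs
lambda_max = (1 + np.sqrt(q)) ** 2

# A leading eigenvalue far above lambda_max signals a market-wide effect
print(eig[0], lambda_max)
```

    On real data, a developed market would show a leading eigenvalue well above lambda_max, while the Philippine case described above would show a spectrum close to the random-matrix bulk.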

  20. Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.

    2014-01-01

    The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
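    The Maxwell-Garnett mixing rule mentioned in the conclusion has a closed form: for inclusions of permittivity eps_i at volume fraction f in a host of permittivity eps_m, eps_eff = eps_m (eps_i + 2 eps_m + 2f(eps_i - eps_m)) / (eps_i + 2 eps_m - f(eps_i - eps_m)). A sketch with hypothetical material constants (not the paper's test cases):

```python
import cmath

# Maxwell-Garnett mixing rule for small inclusions (permittivity eps_i,
# volume fraction f) embedded in a host medium (permittivity eps_m)
def maxwell_garnett(eps_m, eps_i, f):
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Hypothetical constants: ice-like inclusions (m = 1.31) in a slightly
# absorbing silicate-like host (m = 1.55 + 0.001j); eps = m**2
m_host = 1.55 + 0.001j
m_incl = 1.31 + 0.0j
f = 0.2

eps_eff = maxwell_garnett(m_host ** 2, m_incl ** 2, f)
m_eff = cmath.sqrt(eps_eff)        # effective refractive index
print(m_eff)
```

    The rule interpolates between the host (f = 0) and inclusion (f = 1) permittivities; the paper's point is that, for sufficiently small and numerous inclusions, the exact effective refractive index lands close to this prediction.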

  1. Coherent Patterns in Nuclei and in Financial Markets

    NASA Astrophysics Data System (ADS)

    Drożdż, S.; Kwapień, J.; Speth, J.

    2010-07-01

    In the area of traditional physics the atomic nucleus belongs among the most complex systems. It involves essentially all elements that characterize complexity, including the most distinctive one, whose essence is a permanent coexistence of coherent patterns and of randomness. From a more interdisciplinary perspective, it is the financial markets that represent an extreme complexity. Here, based on the matrix formalism, we set out some parallels between several characteristics of complexity in the above two systems. We refer, in particular, to the concept—historically originating from nuclear physics considerations—of random matrix theory and demonstrate its utility in quantifying characteristics of the coexistence of chaos and collectivity for the financial markets as well. In this latter case we show examples that illustrate the mapping of the matrix formulation onto concepts originating from graph theory. Finally, attention is drawn to some novel aspects of financial coherence, which leave room for speculation as to whether analogous effects can be detected in atomic nuclei or in other strongly interacting Fermi systems.

  2. Density Variations in the NW Star Stream of M31

    NASA Astrophysics Data System (ADS)

    Carlberg, R. G.; Richer, Harvey B.; McConnachie, Alan W.; Irwin, Mike; Ibata, Rodrigo A.; Dotter, Aaron L.; Chapman, Scott; Fardal, Mark; Ferguson, A. M. N.; Lewis, G. F.; Navarro, Julio F.; Puzia, Thomas H.; Valls-Gabaud, David

    2011-04-01

    The Pan-Andromeda Archaeological Survey (PAndAS) CFHT MegaPrime survey of the M31-M33 system has found a star stream which extends about 120 kpc NW from the center of M31. The great length of the stream, and the likelihood that it does not significantly intersect the disk of M31, mean that it is unusually well suited for a measurement of stream gaps and clumps along its length as a test for the predicted thousands of dark matter sub-halos. The main result of this paper is that the density of the stream varies between zero and about three times the mean along its length on scales of 2-20 kpc. The probability that the variations are random fluctuations in the star density is less than 10^-5. As a control sample, we search for density variations at precisely the same location in stars with metallicity higher than the stream [Fe/H] = [0, -0.5] and find no variations above the expected shot noise. The lumpiness of the stream is not compatible with a low mass star stream in a smooth galactic potential, nor is it readily compatible with the disturbance caused by the visible M31 satellite galaxies. The stream's density variations appear to be consistent with the effects of a large population of steep mass function dark matter sub-halos, such as found in LCDM simulations, acting on an approximately 10 Gyr old star stream. The effects of a single set of halo substructure realizations are shown for illustration, reserving a statistical comparison for another study. Based on observations obtained with MegaPrime / MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.

  3. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.

  4. Randomized placebo controlled blinded study to assess valsartan efficacy in preventing left ventricle remodeling in patients with dual chamber pacemaker--Rationale and design of the trial.

    PubMed

    Tomasik, Andrzej; Jacheć, Wojciech; Wojciechowska, Celina; Kawecki, Damian; Białkowska, Beata; Romuk, Ewa; Gabrysiak, Artur; Birkner, Ewa; Kalarus, Zbigniew; Nowalany-Kozielska, Ewa

    2015-05-01

    Dual chamber pacing is known to have a detrimental effect on cardiac performance, and the heart failure that eventually occurs is associated with increased mortality. Experimental studies of pacing in dogs have shown contractile dyssynchrony leading to diffuse alterations in the extracellular matrix. In parallel, studies of experimental ischemia/reperfusion injury have shown the efficacy of valsartan in inhibiting the activity of matrix metalloproteinase-9, increasing the activity of tissue inhibitor of matrix metalloproteinase-3, and preserving global contractility and left ventricle ejection fraction. We present the rationale and design of a randomized blinded trial to assess whether 12-month administration of valsartan will prevent left ventricle remodeling in patients with preserved left ventricle ejection fraction (LVEF ≥ 40%) and first implantation of a dual chamber pacemaker. A total of 100 eligible patients will be randomized into three parallel arms: placebo, valsartan 80 mg/daily, and valsartan 160 mg/daily, added to previously used drugs. The primary endpoint will be the assessment of valsartan's efficacy in preventing left ventricle remodeling during 12-month follow-up. We assess patients' functional capacity, blood plasma activity of matrix metalloproteinases and their tissue inhibitors, NT-proBNP, tumor necrosis factor alpha, and Troponin T. Left ventricle function and remodeling are assessed echocardiographically: M-mode, B-mode, and tissue Doppler imaging. If valsartan proves effective, it will be an attractive measure to improve long-term prognosis in an aging population with an increasing number of pacemaker recipients. ClinicalTrials.gov (NCT01805804). Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Xenogenous Collagen Matrix and/or Enamel Matrix Derivative for Treatment of Localized Gingival Recessions: A Randomized Clinical Trial. Part I: Clinical Outcomes.

    PubMed

    Sangiorgio, João Paulo Menck; Neves, Felipe Lucas da Silva; Rocha Dos Santos, Manuela; França-Grohmann, Isabela Lima; Casarin, Renato Corrêa Viana; Casati, Márcio Zaffalon; Santamaria, Mauro Pedrine; Sallum, Enilson Antonio

    2017-12-01

    Considering xenogeneic collagen matrix (CM) and enamel matrix derivative (EMD) characteristics, it is suggested that their combination could promote superior clinical outcomes in root coverage procedures. Thus, the aim of this parallel, double-masked, dual-center, randomized clinical trial is to evaluate clinical outcomes after treatment of localized gingival recession (GR) by a coronally advanced flap (CAF) combined with CM and/or EMD. Sixty-eight patients presenting one Miller Class I or II GR were randomly assigned to receive either CAF (n = 17), CAF + CM (n = 17), CAF + EMD (n = 17), or CAF + CM + EMD (n = 17). Recession height, probing depth, clinical attachment level, and keratinized tissue width and thickness were measured at baseline and at 90 days and 6 months after surgery. The obtained root coverage was 68.04% ± 24.11% for CAF; 87.20% ± 15.01% for CAF + CM; 88.77% ± 20.66% for CAF + EMD; and 91.59% ± 11.08% for CAF + CM + EMD after 6 months. Groups that received biomaterials showed greater coverage (P <0.05). Complete root coverage (CRC) for CAF + EMD was 70.59%, significantly superior to CAF alone (23.53%), CAF + CM (52.94%), and CAF + CM + EMD (51.47%) (P <0.05). Keratinized tissue thickness gain was significant only in CM-treated groups (P <0.05). The three approaches are superior to CAF alone for root coverage. EMD provides the highest levels of CRC; however, the addition of CM increases gingival thickness. The combination approach does not seem justified.

  6. 3D polarisation speckle as a demonstration of tensor version of the van Cittert-Zernike theorem for stochastic electromagnetic beams

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Zhao, Juan; Hanson, Steen G.; Takeda, Mitsuo; Wang, Wei

    2016-10-01

    Laser speckle has been studied extensively, both its basic properties and its applications. In the majority of research on speckle phenomena, the random optical field has been treated as a scalar field, and interest has concentrated on the statistical properties and applications of its intensity distribution. Recently, the statistical properties of random electric vector fields, referred to as polarization speckle, have attracted new interest because of their importance in areas with practical applications such as biomedical optics and optical metrology. Statistical phenomena of random electric vector fields are closely related to the theories of speckle, polarization, and coherence. In this paper, we investigate the correlation tensor for stochastic electromagnetic fields modulated by a depolarizer consisting of a rough-surfaced retardation plate. Under the assumption that the microstructure of the scattering surface of the depolarizer is so fine as to be unresolvable in our observation region, we derive a relationship between the polarization matrix/coherency matrix of the modulated electric fields behind the rough-surfaced retardation plate and the coherence matrix under the free-space geometry. This relation is entirely analogous to the van Cittert-Zernike theorem of classical coherence theory. Within the paraxial approximation, as represented by the ABCD-matrix formalism, the three-dimensional structure of the generated polarization speckle is investigated on the basis of the correlation tensor, revealing a typical carrot structure with a much longer axial dimension than transverse extent.

  7. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
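    The estimator described can be sketched directly: because the two channel estimates carry independent noise, the sample cross-correlation of the first estimate with the de-rotated second estimate converges to the channel covariance without a noise-variance bias. The sketch below uses a hypothetical exponential-correlation covariance model and treats the phase-shifts as known at the receiver; it illustrates the principle, not the paper's full staggered-pilot system.

```python
import numpy as np

rng = np.random.default_rng(5)
M, K = 8, 4000        # BS antennas, coherence blocks (hypothetical sizes)

# Hypothetical true covariance: exponential correlation across antennas
r = 0.7
C = r ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

# Channel realizations h ~ CN(0, C), one column per coherence block
L = np.linalg.cholesky(C)
cn = lambda shape: (rng.standard_normal(shape)
                    + 1j * rng.standard_normal(shape)) / np.sqrt(2)
h = L @ cn((M, K))

# Channel estimates from the two pilots: independent noise on each,
# and a known random phase-shift applied to the second pilot
sigma = 0.5
theta = rng.uniform(0.0, 2.0 * np.pi, K)
h1 = h + sigma * cn((M, K))
h2 = h * np.exp(1j * theta) + sigma * cn((M, K))

# Sample cross-correlation after de-rotating the second estimate:
# the independent noises decorrelate, so no noise-variance bias remains
h2_derot = h2 * np.exp(-1j * theta)
C_hat = h1 @ h2_derot.conj().T / K

print(np.max(np.abs(C_hat - C)))   # shrinks as K grows
```

    A naive estimate built from a single pilot, h1 @ h1.conj().T / K, would instead carry a sigma**2 bias on its diagonal; the cross-correlation removes it.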

  8. Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Partridge, Harry

    1987-01-01

    Large, randomly sparse matrix-vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the I/O can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
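    The gather operation at the heart of the sparse matrix-vector product can be sketched in plain code (a scalar Python illustration of the access pattern, not CRAY or CYBER vector code): the nonzeros are stored contiguously and the vector operand is gathered through a column-index array.

```python
import random

def csr_matvec(vals, cols, row_ptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) form."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        s = 0.0
        # gather: indirect loads of x through the column-index array;
        # this is the step vector hardware accelerates with gather units
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += vals[k] * x[cols[k]]
        y[i] = s
    return y

# Build a small random sparse matrix (~5% nonzeros, as in the study's
# density range) together with a dense reference copy for checking
random.seed(0)
n_dim = 200
vals, cols, row_ptr = [], [], [0]
dense = [[0.0] * n_dim for _ in range(n_dim)]
for i in range(n_dim):
    for j in range(n_dim):
        if random.random() < 0.05:
            a = random.uniform(-1.0, 1.0)
            vals.append(a)
            cols.append(j)
            dense[i][j] = a
    row_ptr.append(len(vals))

x = [random.uniform(-1.0, 1.0) for _ in range(n_dim)]
y = csr_matvec(vals, cols, row_ptr, x)
print(len(y))
```

    On machines with hardware scatter/gather, the inner loop vectorizes over the nonzeros of a row, which is the coding flexibility the paper exploits.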

  9. Political and Economic Geomorphology: The Effect of Market Forces on Stream Restoration Designs

    NASA Astrophysics Data System (ADS)

    Singh, J.; Doyle, M. W.; Lave, R.; Robertson, M.

    2013-12-01

    Stream restoration in the U.S. is increasingly driven by compensatory mitigation; impacts to streams associated with typical land development activities must be offset via restoration of streams elsewhere. This policy application creates conditions in which restored stream 'credits' are traded under market-like conditions, comparable to wetland mitigation, carbon offsets, or endangered species habitat banking. The effect of this relatively new mechanism to finance stream restoration on design and construction is unknown. This research explores whether the introduction of a credit-based mitigation apparatus results in streams designed to maximize credit yields (i.e., 'credit-chasing') rather than focusing on restoring natural systems or functions. In other words, are market-based restored streams different from those designed for non-market purposes? We quantified geomorphic characteristics (e.g., hydraulic geometry, sinuosity, profile, bed sediment, LWD) of three types of streams: (1) a random sample of non-restored reaches, (2) streams restored for compensatory mitigation, and (3) streams restored under alternative funding sources (e.g., government grant programs, non-profit activities). We also compared the locations of the types of stream reaches to determine whether there is a spatiality to restored streams. Physical data were complemented with a series of semi-structured interviews with key personnel in the stream restoration industry to solicit information on the influence of policy interpretation and market-driven factors on the design process. Preliminary analysis suggests that restoration is driving a directional shift in stream morphology in North Carolina. As a simple example, in the Piedmont, non-restored and restored channels had mean sinuosity of 1.17 and 1.23, respectively (p < 0.10). In the mountain region, non-restored and restored channels had mean sinuosity of 1.07 and 1.21, respectively (p < 0.01). In addition, restored streams were disproportionately located in very small catchments, and designs seemed to be only marginally related to the location of the stream. Provisional findings also indicate that the differences between mitigation and non-mitigation designs were less than expected. Interview data support this observation; design engineers and entrepreneurial credit providers (i.e., mitigation bankers) apparently viewed the design process as a somewhat standard, non-malleable practice. Sustaining long-term relationships with regulators, who must approve the sale of restored stream credits, was seen as critically important rather than the marginal gains to be made by manipulating particular stream designs to glean more credits. Overall, preliminary results demonstrate that regulatory frameworks, economic incentives and social relationships played a key role in driving stream restoration design in North Carolina, often homogenizing design practices and limiting 'credit chasing.'

  10. The role of natural vegetative disturbance in determining stream reach characteristics in central Idaho and western Montana

    USGS Publications Warehouse

    Roper, B.B.; Jarvis, B.; Kershner, J.L.

    2007-01-01

    We evaluated the relationship between natural vegetative disturbance and changes in stream habitat and macroinvertebrate metrics within 33 randomly selected, minimally managed watersheds in central Idaho and western Montana. Changes in stream reach conditions were related to vegetative disturbance for the time periods 1985-1993 and 1993-2000 at the following three spatial scales: within the stream buffer and less than 1 km from the evaluated reach; within the watershed and within 1 km of the stream reach; and within the watershed as a whole. Data for stream reaches were based on field surveys, and vegetative disturbance was generated for the watershed above the sampled reach using remotely sensed data and geographical information systems. Large-scale (>100 ha) vegetative disturbance was common within the study area. Even though natural vegetative disturbance rates were high, we found that few of the measured attributes were related to the magnitude of vegetative disturbance. The three physical habitat attributes that changed significantly were sinuosity, median particle size, and percentage of undercut bank; each was related to the disturbance in the earlier (1985-1993) time frame. There was a significant relationship between changes in two macroinvertebrate metrics, abundance and percent collectors/filterers, and the magnitude of disturbance during the more recent time period (1993-2000). We did not find a consistent relationship between the location of the disturbance within the watershed and changes in stream conditions. Our findings suggest that natural vegetative disturbance within the northern Rocky Mountains is complex but likely does not result in substantial short-term changes in the characteristics of most stream reaches. © 2007 by the Northwest Scientific Association. All rights reserved.

  11. Novel Use of Google Glass for Procedural Wireless Vital Sign Monitoring.

    PubMed

    Liebert, Cara A; Zayed, Mohamed A; Aalami, Oliver; Tran, Jennifer; Lau, James N

    2016-08-01

    Purpose This study investigates the feasibility and potential utility of head-mounted displays for real-time wireless vital sign monitoring during surgical procedures. Methods In this randomized controlled pilot study, surgery residents (n = 14) performed simulated bedside procedures with traditional vital sign monitors and were randomized to addition of vital sign streaming to Google Glass. Time to recognition of preprogrammed vital sign deterioration and frequency of traditional monitor use was recorded. User feedback was collected by electronic survey. Results The experimental group spent 90% less time looking away from the procedural field to view traditional monitors during bronchoscopy (P = .003), and recognized critical desaturation 8.8 seconds earlier; the experimental group spent 71% (P = .01) less time looking away from the procedural field during thoracostomy, and recognized hypotension 10.5 seconds earlier. Trends toward earlier recognition of deterioration did not reach statistical significance. The majority of participants agreed that Google Glass increases situational awareness (64%), is helpful in monitoring vitals (86%), is easy to use (93%), and has potential to improve patient safety (85%). Conclusion In this early feasibility study, use of streaming to Google Glass significantly decreased time looking away from procedural fields and resulted in a nonsignificant trend toward earlier recognition of vital sign deterioration. Vital sign streaming with Google Glass or similar platforms is feasible and may enhance procedural situational awareness. © The Author(s) 2016.

  12. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  13. Carbon nanotubes within polymer matrix can synergistically enhance mechanical energy dissipation

    NASA Astrophysics Data System (ADS)

    Ashraf, Taimoor; Ranaiefar, Meelad; Khatri, Sumit; Kavosi, Jamshid; Gardea, Frank; Glaz, Bryan; Naraghi, Mohammad

    2018-03-01

    Safe operation and health of structures rely on their ability to effectively dissipate undesired vibrations, which could otherwise significantly reduce the lifetime of a structure through fatigue loads or large deformations. To address this issue, nanoscale fillers, such as carbon nanotubes (CNTs), have been utilized to dissipate mechanical energy in polymer-based nanocomposites through filler-matrix interfacial friction, benefitting from their large interface area with the matrix. In this manuscript, for the first time, we experimentally investigate the effect of CNT alignment with respect to each other and their orientation with respect to the loading direction on vibrational damping in nanocomposites. The matrix was polystyrene (PS). A new technique was developed to fabricate PS-CNT nanocomposites which allows for controlling the angle of CNTs with respect to the far-field loading direction (misalignment angle). Samples were subjected to dynamic mechanical analysis, and the damping of the samples was measured as the ratio of the loss to storage moduli versus CNT misalignment angle. Our results defied the notion that randomly oriented CNT nanocomposites can be approximated as a combination of matrix-CNT representative volume elements with randomly aligned CNTs. Instead, our results point to major contributions to vibrational damping from the stress concentration induced by each CNT in the matrix in proximity to other CNTs. The stress fields around CNTs in PS-CNT nanocomposites were studied via finite element analysis. Our findings provide significant new insights not only on vibrational damping in nanocomposites, but also on their failure modes and toughness, in relation to interface phenomena.

  14. Fast Clock Recovery for Digital Communications

    NASA Technical Reports Server (NTRS)

    Tell, R. G.

    1985-01-01

    Circuit extracts clock signal from random non-return-to-zero data stream, locking onto clock within one bit period at 1-gigabit-per-second data rate. Circuit used for synchronization in optical-fiber communications. Derives speed from very short response time of gallium arsenide metal/semiconductor field-effect transistors (MESFET's).

  15. Learning Circulant Sensing Kernels

    DTIC Science & Technology

    2014-03-01

    Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance. Among structured sensing matrices, Tropp et al. [28] describe a random filter for acquiring a signal x̄; Haupt et al. [12] describe a channel estimation problem to identify a

  16. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests.
We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the localization-based random extraction method REAL performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
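
    The random-extraction step of REAL can be sketched against the H-value criterion mentioned above: sample submatrices from the localization-reordered matrix and keep those whose mean squared residue (the Cheng-Church H-score) falls below a threshold. This is an illustrative reconstruction from the abstract, not the authors' code; the function names and the choice to sample contiguous blocks (sensible only because localization has already grouped correlated entries into local neighborhoods) are assumptions.

```python
import numpy as np

def h_score(sub):
    """Cheng-Church mean squared residue of a submatrix (lower = more coherent)."""
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    residue = sub - row_mean - col_mean + sub.mean()
    return float((residue ** 2).mean())

def random_extract(data, n_rows, n_cols, n_trials, h_max, rng):
    """Randomly sample contiguous submatrices from a (localization-ordered)
    matrix and keep the top-left corners of those scoring below h_max."""
    kept = []
    R, C = data.shape
    for _ in range(n_trials):
        r0 = rng.integers(0, R - n_rows + 1)
        c0 = rng.integers(0, C - n_cols + 1)
        sub = data[r0:r0 + n_rows, c0:c0 + n_cols]
        if h_score(sub) <= h_max:
            kept.append((r0, c0))
    return kept
```

    A low threshold keeps only coherent blocks, while a high threshold accepts everything; the threshold plays the role of the similarity cutoff described in the abstract.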

  17. Mercury stabilization in chemically bonded phosphate ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagh, Arun S.; Jeong, Seung-Young; Singh, Dileep

    1997-07-01

    We have investigated mercury stabilization in chemically bonded phosphate ceramic (CBPC) using four surrogate waste streams that represent U.S. Department of Energy (DOE) ash, soil, and two secondary waste streams resulting from the destruction of DOE's high-organic wastes by the DETOX{sup SM} Wet Oxidation Process. Hg content in the waste streams was 0.1 to 0.5 wt.% (added as soluble salts). Sulfidation of Hg and its concurrent stabilization in the CBPC matrix yielded highly nonleachable waste forms. The Toxicity Characteristic Leaching Procedure showed that leaching levels were well below the U.S. Environmental Protection Agency's regulatory limits. The American Nuclear Society's ANS 16.1 immersion test also gave very high leaching indices, indicating excellent retention of the contaminants. In particular, leaching levels of Hg in the ash waste form were below the measurement detection limit in neutral and alkaline water, negligibly low but measurable in the first 72 h of leaching in acid water, and below the detection limit after that. These studies indicate that the waste forms are stable in a wide range of chemical environments during storage. 9 refs., 5 tabs.

  18. The biogeodynamics of microbial landscapes

    NASA Astrophysics Data System (ADS)

    Battin, T. J.; Hödl, I.; Bertuzzo, E.; Mari, L.; Suweis, S. S.; Rinaldo, A.

    2011-12-01

    Spatial configuration is fundamental in defining the structural and functional properties of biological systems. Biofilms, surface-attached and matrix-enclosed microorganisms, are a striking example of spatial organisation. Coupled biotic and abiotic processes shape the spatial organisation across scales of the landscapes formed by these benthic biofilms in streams and rivers. Experimenting with such biofilms in streams, we found that, depending on the streambed topography and the related hydrodynamic microenvironment, biofilm landscapes form increasingly diverging spatial patterns as they grow. Strikingly, however, cluster size distributions tend to converge even in contrasting hydrodynamic microenvironments. To reproduce the observed cluster size distributions we used a continuous, size-structured population model. The model accounts for the formation, growth, erosion and merging of biofilm clusters. Our results suggest not only that hydrodynamic forcing induces the diverging patterning of the microbial landscape, but also that microorganisms have developed strategies to equally exploit spatial resources independently of the physical structure of the microenvironment where they live.

  19. Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sim, Alex; Lee, Dongeun; Wu, K. John

    2016-03-04

    Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue, which is to reduce the size of large streaming data and still obtain accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data rapidly grows so that polynomial time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) they are not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on the recent technology of relational dynamic Bayesian online locally exchangeable measures, that reduces the storage of data records at a large scale and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.
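
    The abstract contrasts static "one-in-1000" sampling with dynamic sampling, but does not spell out the IDEALEM algorithm itself. The baseline idea of keeping a bounded, uniformly representative sample of a stream of unknown length can be illustrated with classic reservoir sampling (Algorithm R); this is a generic sketch for context, not the software's method.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k records from a stream of
    unknown length using O(k) memory (Vitter's Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)        # uniform index in [0, i]
            if j < k:
                reservoir[j] = item         # replace with probability k/(i+1)
    return reservoir
```

    Unlike fixed one-in-N decimation, the sample size stays constant regardless of the stream rate, and every record seen so far has equal probability of being retained.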

  20. Relationship between structural features and water chemistry in boreal headwater streams--evaluation based on results from two water management survey tools suggested for Swedish forestry.

    PubMed

    Lestander, Ragna; Löfgren, Stefan; Henrikson, Lennart; Ågren, Anneli M

    2015-04-01

    Forestry may cause adverse impacts on water quality, and the forestry planning process is a key factor for the outcome of forest operation effects on stream water. To optimise environmental considerations and to identify actions needed to improve or maintain stream biodiversity, two silvicultural water management tools, BIS+ (biodiversity, impact, sensitivity and added values) and Blue targeting, have been developed. In this study, we evaluate the links between survey variables, based on BIS+ and Blue targeting data, and water chemistry in 173 randomly selected headwater streams in the hemiboreal zone. While BIS+ and Blue targeting cannot replace the more sophisticated monitoring methods necessary for classifying water quality in streams according to the EU Water Framework Directive (WFD, 2000/60/EC), our results lend support to the idea that the BIS+ protocol can be used to prioritise the protection of riparian forests. The relationship between BIS+ and water quality indicators (concentrations of nutrients and organic matter), together with data from fish studies, suggests that this field protocol can be used to give reaches with higher biodiversity and conservation values better protection. The tools indicate an ability to mitigate forestry impacts on water quality if operations in the identified areas are adjusted accordingly.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modeste Nguimdo, Romain, E-mail: Romain.Nguimdo@vub.ac.be; Tchitnga, Robert; Woafo, Paul

    We numerically investigate the possibility of using a coupling to increase the complexity in the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that complex behaviors generated in such coupled systems, together with the post-processing, are suitable for generating bit-streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) on such chaotic signals, each point being simultaneously converted into 16 bits (or 8 bits), we find that the binary sequence constructed by including the 10 (or 2) least significant bits passes statistical tests of randomness, meaning that bit-streams with random properties can be achieved with an overall bit rate up to 10 × 100 Mb/s = 1 Gb/s (or 2 × 100 Mb/s = 200 Mb/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap as compared to optical and electro-optical systems.
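
    The post-processing described, sampling the chaotic waveform every 10 ns, quantizing each sample, and keeping only the least significant bits, can be sketched as follows. The uniform full-scale quantizer and the function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def extract_bits(samples, adc_bits=16, keep_lsbs=10):
    """Quantize each sample to adc_bits with a uniform full-scale
    quantizer (an assumed choice) and keep the keep_lsbs least
    significant bits of each code, concatenated into one bit-stream."""
    samples = np.asarray(samples, dtype=float)
    lo, hi = samples.min(), samples.max()
    codes = np.round((samples - lo) / (hi - lo) * (2 ** adc_bits - 1)).astype(np.int64)
    bits = []
    for c in codes:
        for b in range(keep_lsbs - 1, -1, -1):   # MSB-first within the kept LSBs
            bits.append(int((c >> b) & 1))
    return bits
```

    At a 10 ns sampling interval this yields 10 bits per sample at 100 Msamples/s, i.e. the 1 Gb/s aggregate rate quoted in the abstract (or 200 Mb/s when only 2 LSBs are kept).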

  2. User-Friendly Tools for Random Matrices: An Introduction

    DTIC Science & Technology

    2012-12-03

    [Slide fragments from: Joel A. Tropp, "User-Friendly Tools for Random Matrices", NIPS, 3 December 2012.] Randomized range finder: form the matrix product Y = AΩ, then construct an orthonormal basis Q for the range of Y [Ref: Halko-Martinsson-Tropp, SIAM Rev. 2011]. Cited works include Tropp 2011, Oliveira 2010, and Mackey et al. 2012; see also "Matrix concentration inequalities..." with L. Mackey et al. (submitted 2012) and "User-Friendly Tools for Random Matrices: An Introduction" (2012).
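
    The steps "form the matrix product Y = AΩ" and "construct an orthonormal basis Q for the range of Y" visible in the fragment are the randomized range finder of Halko, Martinsson, and Tropp (SIAM Rev. 2011). A minimal NumPy sketch, with the Gaussian test matrix and a small oversampling parameter as conventional (assumed) choices:

```python
import numpy as np

def randomized_range_finder(A, k, oversample=10, rng=None):
    """Halko-Martinsson-Tropp randomized range finder:
    1. draw a Gaussian test matrix Omega,
    2. form the sample matrix Y = A @ Omega,
    3. orthonormalize Y to get a basis Q for the (approximate) range of A."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega                                     # sketch of the range
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis
    return Q
```

    For a matrix of numerical rank k, A is captured almost surely: the projection Q @ Q.T @ A recovers A up to the tail singular values.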

  3. Molecular selection in a unified evolutionary sequence

    NASA Technical Reports Server (NTRS)

    Fox, S. W.

    1986-01-01

    With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.

  4. Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms.

    PubMed

    Franchini, Fabio; Kravtsov, Vladimir E

    2009-10-16

    We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs with possible experimental applications.

  5. Vertices cannot be hidden from quantum spatial search for almost all random graphs

    NASA Astrophysics Data System (ADS)

    Glos, Adam; Krawiec, Aleksandra; Kukulski, Ryszard; Puchała, Zbigniew

    2018-04-01

    In this paper, we show that all nodes can be found optimally for almost all random Erdős-Rényi G(n,p) graphs using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions. The first requires p = ω(log^8(n)/n), while the second requires p ≥ (1+ε)log(n)/n, where ε > 0. The proof was made by analyzing the convergence of eigenvectors corresponding to outlying eigenvalues in the ‖·‖_∞ norm. At the same time, for p < (1−ε)log(n)/n, the property does not hold for any matrix, due to connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.

  6. Convergence of moment expansions for expectation values with embedded random matrix ensembles and quantum chaos

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2003-07-01

    Smoothed forms for expectation values ⟨K⟩_E of positive definite operators K follow from the K-density moments either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩_E, spin-cutoff factors ⟨J_z²⟩_E, and strength sums ⟨O†O⟩_E, where O is a one-body transition operator.

  7. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
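
    A pseudo-random cover sequence generator is conventionally a maximal-length linear feedback shift register (LFSR) whose output is XORed onto the data, breaking up long runs of identical bits and raising transition density; applying the same XOR again descrambles. A minimal sketch, where the 7-bit register, tap choice, and seed are illustrative assumptions rather than the shuttle design:

```python
def lfsr_stream(seed, length):
    """7-bit Fibonacci LFSR cover-sequence generator.

    Output recurrence: b[t+7] = b[t] XOR b[t+1], a primitive polynomial,
    so the sequence has maximal period 2**7 - 1 = 127 for any nonzero seed."""
    state = seed & 0x7F
    assert state, "an all-zero state locks the LFSR"
    out = []
    for _ in range(length):
        out.append(state & 1)               # output the low bit
        fb = (state ^ (state >> 1)) & 1     # feedback from bits 0 and 1
        state = (state >> 1) | (fb << 6)    # shift right, insert feedback
    return out

def scramble(data_bits, seed=0x5A):
    """XOR the data with the cover sequence; applying it twice descrambles."""
    cover = lfsr_stream(seed, len(data_bits))
    return [d ^ c for d, c in zip(data_bits, cover)]
```

    Because an m-sequence of degree 7 never contains more than 6 consecutive zeros, even an all-zero payload emerges from the scrambler with frequent transitions.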

  8. Cluster structure in the correlation coefficient matrix can be characterized by abnormal eigenvalues

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao

    2018-02-01

    In a large number of previous studies, researchers found that some of the eigenvalues of the financial correlation matrix were greater than the values predicted by random matrix theory (RMT). Here, we call these eigenvalues abnormal eigenvalues. In order to reveal the hidden meaning of these abnormal eigenvalues, we study a toy model with cluster structure and find that these eigenvalues are related to the cluster structure of the correlation coefficient matrix. In this paper, model-based experiments show that in most cases, the number of abnormal eigenvalues of the correlation matrix is equal to the number of clusters. In addition, empirical studies show that the sum of the abnormal eigenvalues is related to the clarity of the cluster structure and is negatively correlated with the correlation dimension.
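
    The RMT prediction referred to is typically the Marchenko-Pastur upper edge: for the correlation matrix of N uncorrelated series over T observations, bulk eigenvalues lie below λ₊ = (1 + √(N/T))², so eigenvalues above λ₊ are flagged as abnormal. A sketch of the counting procedure (illustrative, not the paper's code):

```python
import numpy as np

def abnormal_eigenvalues(returns):
    """Return the correlation-matrix eigenvalues that exceed the
    Marchenko-Pastur upper edge (1 + sqrt(N/T))**2, plus the edge itself.
    returns: array of shape (T, N), T observations of N series."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)     # N x N correlation matrix
    eigvals = np.linalg.eigvalsh(corr)
    lam_plus = (1 + np.sqrt(N / T)) ** 2          # bulk edge for iid data
    return eigvals[eigvals > lam_plus], lam_plus
```

    In a quick synthetic experiment with two clusters driven by independent common factors, exactly two eigenvalues exceed the edge, matching the abstract's observation that the abnormal-eigenvalue count equals the cluster count.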

  9. Water exchange and pressure transfer between conduits and matrix and their influence on hydrodynamics of two karst aquifers with sinking streams

    NASA Astrophysics Data System (ADS)

    Bailly-Comte, Vincent; Martin, Jonathan B.; Jourde, Hervé; Screaton, Elizabeth J.; Pistre, Séverin; Langston, Abigail

    2010-05-01

    Karst aquifers are heterogeneous media where conduits usually drain water from lower permeability volumes (matrix and fractures). For more than a century, various approaches have used flood recession curves, which integrate all hydrodynamic processes in a karst aquifer, to infer physical properties of the movement and storage of groundwater. These investigations typically consider only flow to the conduits and thus have lacked quantitative observations of how pressure transfer and water exchange between matrix and conduit during flooding could influence recession curves. We present analyses of simultaneous discharge and water level time series of two distinctly different karst systems, one with low porosity and permeability matrix rocks in southern France, and one with high porosity and permeability matrix rocks in north-central Florida (USA). We apply simple mathematical models of flood recession using time series representations of recharge, storage, and discharge processes in the karst aquifer. We show that karst spring hydrographs can be interpreted according to pressure transfer between two distinct components of the aquifer, conduit and matrix porosity, which induce two distinct responses at the spring. Water exchange between conduits and matrix porosity successively controls the flow regime at the spring. This exchange is governed by hydraulic head differences between conduits and matrix, head gradients within conduits, and the contrast of permeability between conduits and matrix. These observations have consequences for physical interpretations of recession curves and modeling of karst spring flows, particularly for the relative magnitudes of base flow and quick flow from karst springs.
Finally, these results suggest that similar analyses of recession curves can be applied to karst aquifers with distinct physical characteristics utilizing well and spring hydrograph data, but information must be known about the hydrodynamics and physical properties of the aquifer before the results can be correctly interpreted.

  10. Development of a fast screening and confirmatory method by liquid chromatography-quadrupole-time-of-flight mass spectrometry for glucuronide-conjugated methyltestosterone metabolite in tilapia.

    PubMed

    Amarasinghe, Kande; Chu, Pak-Sin; Evans, Eric; Reimschuessel, Renate; Hasbrouck, Nicholas; Jayasuriya, Hiranthi

    2012-05-23

    This paper describes the development of a fast method to screen and confirm methyltestosterone 17-O-glucuronide (MT-glu) in tilapia bile. The method consists of solid-phase extraction (SPE) followed by high-performance liquid chromatography-mass spectrometry. The system used was an Agilent 6530 Q-TOF with an Agilent Jet stream electrospray ionization interface. The glucuronide detected in the bile was characterized as MT-glu by comparison with a chemically synthesized standard. MT-glu was detected in bile for up to 7 days after dosing. Semiquantification was done with matrix-matched calibration curves, because MT-glu showed signal suppression due to matrix effects. This method provides a suitable tool to monitor the illegal use of methyltestosterone in tilapia culture.

  11. Interpolation algorithm for asynchronous ADC-data

    NASA Astrophysics Data System (ADS)

    Bramburger, Stefan; Zinke, Benny; Killat, Dirk

    2017-09-01

    This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections and a weighted overlap of step-by-step spectral-domain-transformed signals improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  12. Comprehensive Thematic T-Matrix Reference Database: A 2014-2015 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas

    2015-01-01

    The T-matrix method is one of the most versatile and efficient direct computer solvers of the macroscopic Maxwell equations and is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper is the seventh update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists a number of earlier publications overlooked previously.

  13. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  14. Matrix-specific distribution and diastereomeric profiles of hexabromocyclododecane (HBCD) in a multimedia environment: Air, soil, sludge, sediment, and fish.

    PubMed

    Jo, Hyeyeong; Son, Min-Hui; Seo, Sung-Hee; Chang, Yoon-Seok

    2017-07-01

    Hexabromocyclododecane (HBCD) contamination and its diastereomeric profile were investigated in a multi-media environment along a river at the local scale in air, soil, sludge, sediment, and fish samples. The spatial distribution of HBCD in each matrix showed a different result. The highest concentrations of HBCD in air and soil were detected near a general industrial complex; in the sediment and sludge samples, they were detected in the down-stream region (i.e., urban area). Each matrix showed the specific distribution patterns of HBCD diastereomers, suggesting continuous inputs of contaminants, different physicochemical properties, or isomerizations. The particle phases in air, sludge, and fish matrices were dominated by α-HBCD, owing to HBCD's various isomerization processes and different degradation rate in the environment, and metabolic capabilities of the fish; in contrast, the sediment and soil matrices were dominated by γ-HBCD because of the major composition of the technical mixtures and the strong adsorption onto solid particles. Based on these results, the prevalent and matrix-specific distribution of HBCD diastereomers suggested that more careful consideration should be given to the characteristics of the matrices and their effects on the potential influence of HBCD at the diastereomeric level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.; Lin, Shu

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based on both soft and hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols into the random data stream.
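
    The brute-force idea behind symbol synchronization under perfect slot sync can be sketched: try each of the Q possible symbol-boundary offsets and score how well the soft slot statistics concentrate into a single pulsed slot per symbol. The score used below (sum of per-window maxima) is a simplified stand-in for the exact ML metric derived in the paper:

```python
import numpy as np

def ppm_sync_offset(slots, Q):
    """Estimate the PPM symbol boundary from soft slot statistics by
    brute force over the Q candidate offsets: the true alignment puts
    exactly one pulsed slot in every Q-slot window, maximizing the sum
    of per-window maxima (a simplified synchronization score)."""
    slots = np.asarray(slots, dtype=float)
    nsym = (len(slots) - Q + 1) // Q          # whole symbols per candidate
    scores = []
    for off in range(Q):
        windows = slots[off:off + nsym * Q].reshape(nsym, Q)
        scores.append(windows.max(axis=1).sum())
    return int(np.argmax(scores))
```

    A misaligned boundary splits pulses across windows, leaving some windows with only noise, which lowers the score; this is the intuition behind why random data alone (without inserted special symbols) can still leave residual ambiguity at low SNR.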

  17. Comparing solute and particulate transport in streams using Notre Dame Linked Experimental Ecosystem Facility (ND-LEEF)

    NASA Astrophysics Data System (ADS)

    Shogren, A.; Tank, J. L.; Aubeneau, A. F.; Bolster, D.

    2016-12-01

    in streams and rivers. These processes co-vary across systems and are thus difficult to isolate. Therefore, to improve our understanding of drivers of fine-scale transport and retention of particles and solutes in streams, we experimentally compared transport and retention dynamics of two different particles (brewers yeast, 7 μm; corn pollen, 70 μm), a non-reactive solute (Rhodamine WT), and a biologically reactive solute, nitrate (NO3-). We conducted experiments in four semi-natural constructed streams at the Notre Dame Linked Experimental Ecosystem Facility (ND-LEEF) in South Bend, Indiana. Each of the four 50 m replicate streams was lined with a unique configuration of substrate: pea gravel (PG, D50 = 0.5 cm), cobble (COB, D50 = 5 cm), alternating 2 m sections of PG and COB substrates (ALT), or a random 50/50 mix (MIX). We allowed the experimental streams to naturally colonize with biofilm and periphyton throughout the summer sampling season. For particles, we estimated transport distance (Sp) and deposition velocity (vdep); for solutes, we estimated uptake lengths (Sw) and uptake velocity (vf) using a short-term pulse addition technique. Sp and vdep were variable for particles and were most strongly predicted by biofilm colonization on substrata in each stream. Biofilm accumulation also increased uptake of the reactive solute, though in contrast to particles, there were no significant differences in Sw or vf among streams, suggesting that substrate type was not the main driver of reactive solute retention. These results emphasize the dynamic relationship between the physical and biological drivers influencing particle and solute retention in streams. Differential uptake of particles and solutes highlights the non-stationarity of controlling variables along spatial or temporal continua. Even in highly controlled systems like those at ND-LEEF, physical vs. biological drivers are difficult to isolate.

  18. Comprehensive T-Matrix Reference Database: A 2012 - 2013 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas

    2013-01-01

    The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles embedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz-Mie theory.

  19. How Fast Can Networks Synchronize? A Random Matrix Theory Approach

    NASA Astrophysics Data System (ADS)

    Timme, Marc; Wolf, Fred; Geisel, Theo

    2004-03-01

    Pulse-coupled oscillators constitute a paradigmatic class of dynamical systems interacting on networks because they model a variety of biological systems including flashing fireflies and chirping crickets as well as pacemaker cells of the heart and neural networks. Synchronization is one of the simplest and most prevalent kinds of collective dynamics on such networks. Here we study collective synchronization [1] of pulse-coupled oscillators interacting on asymmetric random networks. Using random matrix theory, we analytically determine the speed of synchronization in such networks as a function of the dynamical and network parameters [2]. The speed of synchronization increases with increasing coupling strength. Surprisingly, however, it stays finite even for infinitely strong interactions. The results indicate that the speed of synchronization is limited by the connectivity of the network. We discuss the relevance of our findings to general equilibration processes on complex networks. [1] M. Timme, F. Wolf, T. Geisel, Phys. Rev. Lett. 89:258701 (2002). [2] M. Timme, F. Wolf, T. Geisel, cond-mat/0306512 (2003).
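
    The spectral link between connectivity and synchronization speed can be illustrated numerically. The sketch below is a stand-in, not the paper's pulse-coupled model: for a consensus-type linear update on an asymmetric random network, the Perron eigenvalue of the row-stochastic coupling matrix is 1, and the asymptotic relaxation rate toward the synchronized state is set by the second-largest eigenvalue modulus.

```python
import numpy as np

# Illustrative stand-in (not the paper's pulse-coupled model): a consensus-type
# linear update x <- W x on a sparse asymmetric random network.
rng = np.random.default_rng(5)
n, p = 200, 0.1
W = rng.random((n, n)) * (rng.random((n, n)) < p)   # asymmetric random coupling
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)                   # row-stochastic normalization

# Perron eigenvalue is 1; the second-largest modulus sets the relaxation rate.
moduli = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
sync_speed = -np.log(moduli[1])                     # asymptotic speed toward sync
```

    A denser network pushes the second-largest modulus toward zero, making `sync_speed` larger, in line with the abstract's conclusion that connectivity limits the speed of synchronization.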

  20. Generic dynamical features of quenched interacting quantum systems: Survival probability, density imbalance, and out-of-time-ordered correlator

    NASA Astrophysics Data System (ADS)

    Torres-Herrera, E. J.; García-García, Antonio M.; Santos, Lea F.

    2018-02-01

    We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite-time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.
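
    The survival probability for a full random matrix can be computed in a few lines. A minimal sketch, assuming a GOE normalization in which the semicircle spans roughly [-2, 2]: SP(t) = |⟨ψ(0)| e^{-iHt} |ψ(0)⟩|², obtained by expanding the initial basis state in the eigenbasis of H.

```python
import numpy as np

# Survival probability of an initial basis state under a GOE Hamiltonian
# (assumed normalization: semicircle support roughly [-2, 2]).
rng = np.random.default_rng(6)
n = 200
H = rng.standard_normal((n, n))
H = (H + H.T) / np.sqrt(2 * n)            # GOE matrix

w, V = np.linalg.eigh(H)
psi0 = np.zeros(n)
psi0[0] = 1.0
c2 = (V.T @ psi0) ** 2                    # squared overlaps with eigenstates

t = np.linspace(0.0, 10.0, 200)
sp = np.abs(c2 @ np.exp(-1j * np.outer(w, t))) ** 2   # SP(t), starts at 1 and decays
```

    The density imbalance and out-of-time-ordered correlator follow the same recipe with different weights on the eigenstate overlaps; averaging over realizations reveals the correlation hole below the infinite-time average.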

  1. Spiked Models of Large Dimensional Random Matrices Applied to Wireless Communications and Array Signal Processing

    DTIC Science & Technology

    2013-12-14

    …population covariance matrix with application to array signal processing; and 5) a sample covariance matrix for which a CLT is studied on linear… Applications (01 2012): 1150004. Walid Hachem, Malika Kharouf, Jamal Najim, Jack W. Silverstein, "A CLT for Information-Theoretic Statistics… for Multi-source Power Estimation" (04 2010). Malika Kharouf, Jamal Najim, Jack W. Silverstein, Walid Hachem, "A CLT for Information-Theoretic…"

  2. Discovering cell types in flow cytometry data with random matrix theory

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Nussenblatt, Robert; Losert, Wolfgang

    Flow cytometry is a widely used experimental technique in immunology research. During the experiments, peripheral blood mononuclear cells (PBMC) from a single patient, labeled with multiple fluorescent stains that bind to different proteins, are illuminated by a laser. The intensity of each stain on a single cell is recorded and reflects the amount of protein expressed by that cell. The data analysis focuses on identifying specific cell types related to a disease. Different cell types can be identified by the type and amount of protein they express. To date, this has most often been done manually by labeling a protein as expressed or not while ignoring the amount of expression. The cross-correlation matrix of stain intensities, which contains information on both the proteins expressed and their amounts, has been largely ignored by researchers because it suffers from measurement noise. Here we present an algorithm to identify cell types in flow cytometry data which uses random matrix theory (RMT) to reduce noise in a cross-correlation matrix. We demonstrate our method using a published flow cytometry data set. Compared with previous analysis techniques, we were able to rediscover relevant cell types in an automatic way. Department of Physics, University of Maryland, College Park, MD 20742.
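
    A common RMT denoising recipe can be sketched as follows. This is a minimal sketch assuming that eigenvalues of the correlation matrix below the Marchenko-Pastur edge λ+ = (1 + √(N/T))² are noise; the record's specific algorithm may differ. The noise bulk of the spectrum is flattened before the matrix is used downstream (e.g., for clustering cell types).

```python
import numpy as np

# RMT denoising sketch: flatten the Marchenko-Pastur noise bulk of a
# correlation matrix. T cells x N stains; pure-noise data for illustration.
rng = np.random.default_rng(0)
T, N = 2000, 50
X = rng.standard_normal((T, N))           # stand-in stain-intensity matrix
C = np.corrcoef(X, rowvar=False)          # N x N cross-correlation matrix

q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2          # Marchenko-Pastur upper edge
w, V = np.linalg.eigh(C)
signal = w > lam_plus                     # eigenvalues above the edge carry structure
w_clean = np.where(signal, w, w[~signal].mean())   # flatten the noise bulk
C_clean = (V * w_clean) @ V.T             # denoised correlation matrix
np.fill_diagonal(C_clean, 1.0)            # restore unit self-correlation
n_signal = int(signal.sum())
```

    For real cytometry data, the eigenvalues above `lam_plus` carry the protein co-expression structure, while the flattened bulk removes measurement noise from the matrix.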

  3. Probabilistic homogenization of random composite with ellipsoidal particle reinforcement by the iterative stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Sokołowski, Damian; Kamiński, Marcin

    2018-01-01

    This study proposes a framework for the determination of basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and with a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young's modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. The entire series of several deterministic solutions with varying Young's modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions finally used in stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with Carbon Black particles. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young's modulus affects the effective stiffness tensor components and their PDF (Probability Density Function).

  4. Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.

    2014-12-01

    Modern seismic networks are recording the ground motion continuously all around the world, with very broadband and high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods mainly drawn from random matrix theory in order to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can make distinctions between the signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers and with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
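
    The core idea can be sketched in a toy computation (an illustration, not the authors' detector): a coherent source concentrates the covariance-matrix energy in the leading eigenvalue, whereas incoherent ambient noise spreads it across the whole spectrum.

```python
import numpy as np

# Toy coherence detector: compare the fraction of array-covariance energy in
# the leading eigenvalue for incoherent noise vs. a coherent source.
rng = np.random.default_rng(1)
n_rec, n_samp = 21, 4096                  # receivers, time samples

def leading_eig_fraction(records):
    """Fraction of covariance energy carried by the largest eigenvalue."""
    cov = records @ records.T / records.shape[1]
    w = np.linalg.eigvalsh(cov)
    return w[-1] / w.sum()

noise = rng.standard_normal((n_rec, n_samp))         # incoherent ambient noise
s = np.sin(2 * np.pi * np.arange(n_samp) / 50.0)     # one deterministic source
event = noise + 5.0 * s                              # same waveform on all receivers

f_noise = leading_eig_fraction(noise)
f_event = leading_eig_fraction(event)
```

    Thresholding a statistic like `f_event` against its noise-only distribution (which random matrix theory characterizes) turns this into a detector for coherent seismic events.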

  5. Acellular dermal matrix graft with or without enamel matrix derivative for root coverage in smokers: a randomized clinical study.

    PubMed

    Alves, Luciana B; Costa, Priscila P; Scombatti de Souza, Sérgio Luís; de Moraes Grisi, Márcio F; Palioto, Daniela B; Taba, Mario; Novaes, Arthur B

    2012-04-01

    The aim of this randomized controlled clinical study was to compare the use of an acellular dermal matrix graft (ADMG) with or without enamel matrix derivative (EMD) in smokers, to evaluate which procedure would provide better root coverage. Nineteen smokers with bilateral Miller Class I or II gingival recessions ≥3 mm were selected. The test group was treated with a combination of ADMG and EMD, and the control group with ADMG alone. Probing depth, relative clinical attachment level, gingival recession height, gingival recession width, keratinized tissue width and keratinized tissue thickness were evaluated before the surgeries and after 6 months. The Wilcoxon test was used for the statistical analysis at a significance level of 5%. No significant differences were found between groups for any parameter at baseline. The mean gain in recession height between baseline and 6 months and the rate of complete root coverage favored the test group (p = 0.042 and p = 0.019, respectively). Smoking may negatively affect the results achieved through periodontal plastic procedures; however, the association of ADMG and EMD is beneficial in the root coverage of gingival recessions in smokers, 6 months after the surgery. © 2012 John Wiley & Sons A/S.

  6. Influence of Hydrological Flow Paths on Rates and Forms of Nitrogen Losses from Mediterranean Watersheds

    NASA Astrophysics Data System (ADS)

    Lohse, K. A.; Sanderman, J.; Amundson, R. G.

    2005-12-01

    Patterns of precipitation and runoff in California are changing and likely to influence the structure and functioning of watersheds. Studies have demonstrated that hydrologic flushing during seasonal transitions in Mediterranean ecosystems can exert a strong control on nitrogen (N) export, yet few studies have examined the influence of different hydrological flow paths on rates and forms of nitrogen (N) losses. Here we illuminate the influence of variations in precipitation and hydrological pathways on the rate and form of N export along a toposequence of a well-characterized Mediterranean catchment in northern California. As a part of a larger study examining particulate and dissolved carbon loss, we analyzed seasonal patterns of dissolved organic nitrogen (DON), nitrate and ammonium concentrations in rainfall, throughfall, matrix and preferential flow, and stream samples over the course of one water year. We also analyzed seasonal soil N dynamics along this toposequence. During the transition to the winter rain season, but prior to any soil water displacement to the stream, DON and nitrate moved through near-surface soils as preferential flow. Once hillslope soils became saturated, saturated subsurface flow flushed nitrate from the hollow resulting in high stream nitrate/DON concentrations. Between storms, stream nitrate/DON concentrations were lower and appeared to reflect deep subsurface water flow chemistry. During the transition to the wet season, rates of soil nitrate production were high in the hollow relative to the hillslope soils. In the spring, these rates systematically declined as soil moisture decreased. Results from our study suggest seasonal fluctuations in soil moisture control soil N cycling and seasonal changes in the hydrological connection between hillslope soils and streams control the seasonal production and export of hydrologic N.

  7. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x∈ℝⁿ} ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
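
    The preconditioning step can be sketched in a few lines. This is a simplified serial version with γ = 2 for the overdetermined case, with a dense solve standing in for the LSQR iterations that LSRN would run in parallel.

```python
import numpy as np

# LSRN-style randomized preconditioning sketch for m >> n (serial, gamma = 2,
# dense solve in place of LSQR).
rng = np.random.default_rng(0)
m, n = 500, 20
A = rng.standard_normal((m, n)) * np.logspace(0, 6, n)  # badly scaled columns
b = rng.standard_normal(m)

gamma = 2.0
s = int(np.ceil(gamma * n))
G = rng.standard_normal((s, m))           # random normal projection
_, sig, Vt = np.linalg.svd(G @ A, full_matrices=False)  # SVD of the small sketch
Nprec = Vt.T / sig                        # right preconditioner V @ diag(1/sigma)

AN = A @ Nprec                            # well-conditioned preconditioned system
y, *_ = np.linalg.lstsq(AN, b, rcond=None)
x = Nprec @ y                             # solution of min ||Ax - b||
cond_before = np.linalg.cond(A)
cond_after = np.linalg.cond(AN)
```

    The concentration result quoted in the abstract is what makes `cond_after` small and nearly deterministic, so the iteration count of LSQR or the Chebyshev method on `AN` is predictable regardless of how badly conditioned `A` is.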

  8. The open quantum Brownian motions

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2014-09-01

    Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers—with internal degrees of freedom which serve as random gyroscopes—interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks, and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two-level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation: H_z: orbital (walker) Hilbert space, ℂ^ℤ in the discrete case and L²(ℝ) in the continuum; H_c: internal spin (or gyroscope) Hilbert space; H_sys = H_z ⊗ H_c: system Hilbert space; H_p: probe (or quantum coin) Hilbert space, H_p = ℂ²; ρ^tot_t: density matrix for the total system (walker + internal spin + quantum coins); ρ̄_t: reduced density matrix on H_sys, ρ̄_t = ∫ dx dy ρ̄_t(x,y) ⊗ |x⟩_z⟨y|; ρ̂_t: system density matrix in a quantum trajectory, ρ̂_t = ∫ dx dy ρ̂_t(x,y) ⊗ |x⟩_z⟨y|; if diagonal and localized in position, ρ̂_t = ρ_t ⊗ |X_t⟩_z⟨X_t|; ρ_t: internal density matrix in a simple quantum trajectory; X_t: walker position in a simple quantum trajectory; B_t: normalized Brownian motion; ξ_t, ξ_t†: quantum noises.

  9. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    NASA Astrophysics Data System (ADS)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technology, adopted by Environmental Protection Agency, cannot estimate leak-sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. Particularly, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak-size by capturing complementary spatial information from still plume frames, and temporal information from plume motion between frames. We build large leak datasets for training and evaluating purposes by collecting about 20 videos (i.e. 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10 -60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provides significant accuracy advantage over single steam ConvNets. Spatial stream ConvNet can achieve an accuracy of 65.2%, by extracting important features, including texture, plume area, and pattern. Temporal stream, fed by the results of optical flow analysis, results in an accuracy of 58.3%. The integration of the two-stream ConvNets gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. 
The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair programs.
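
    The two-stream combination described above is, in its simplest form, a late fusion of per-class probabilities. A minimal sketch (the logits below are made-up placeholders over the eight leak-size levels; the paper's actual fusion scheme may differ):

```python
import numpy as np

# Late fusion of spatial- and temporal-stream predictions: average the two
# streams' softmax probabilities, then take the argmax class.
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits over the eight leak-size levels, one row per frame pair.
spatial_logits  = np.array([[2.0, 0.5, 0.1, -1.0, 0.0, 0.3, -0.5, 0.2]])
temporal_logits = np.array([[1.5, 1.8, 0.0, -0.5, 0.2, 0.1, -1.0, 0.0]])

fused = 0.5 * (softmax(spatial_logits) + softmax(temporal_logits))
pred_class = int(fused.argmax(axis=-1)[0])   # predicted leak-size level
```

    Averaging probabilities rather than logits keeps each stream's confidence calibrated independently, which is one common reason late fusion outperforms either stream alone.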

  10. Synchronizability of random rectangular graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization processing behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
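
    The synchronizability measure used here, the eigenratio λ_N/λ_2 of the graph Laplacian, is straightforward to compute. The sketch below uses an Erdős-Rényi random graph rather than an RRG, purely to show the quantity; smaller ratios indicate networks that are easier to synchronize.

```python
import numpy as np

# Laplacian eigenratio lambda_N / lambda_2 for a random graph (illustrative
# Erdos-Renyi stand-in, not an RRG).
rng = np.random.default_rng(2)
n, p = 100, 0.2
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(float)       # symmetric adjacency, no self-loops
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

w = np.sort(np.linalg.eigvalsh(L))        # w[0] = 0; w[1] = algebraic connectivity
eigenratio = w[-1] / w[1]                 # lambda_N / lambda_2
```

    For an RRG, elongating the rectangle shrinks λ_2 faster than λ_N, driving the eigenratio up, which is the paper's analytical result that elongated networks are harder to synchronize.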

  11. Predicting healthcare associated infections using patients' experiences

    NASA Astrophysics Data System (ADS)

    Pratt, Michael A.; Chu, Henry

    2016-05-01

    Healthcare associated infections (HAI) are a major threat to patient safety and are costly to health systems. Our goal is to predict the HAI performance of a hospital using the patients' experience responses as input. We use four classifiers, viz. random forest, naive Bayes, artificial feedforward neural networks, and the support vector machine, to perform the prediction of six types of HAI. The six types include bloodstream, urinary tract, surgical site, and intestinal infections. Experiments show that the random forest and the support vector machine perform well across the six types of HAI.
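
    A minimal sketch of the random-forest setup with scikit-learn, using synthetic stand-ins since the study's survey responses and HAI labels are not public; the feature construction and the link between experience scores and infection performance below are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: per-hospital patient-experience scores and a
# made-up binary HAI-performance label derived from two of the scores.
rng = np.random.default_rng(0)
n_hospitals = 400
X = rng.random((n_hospitals, 10))         # hypothetical survey response scores
y = (X[:, 0] + 0.5 * X[:, 1]
     + 0.1 * rng.standard_normal(n_hospitals) > 0.75).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)          # held-out classification accuracy
```

    The same fit/score pattern applies to the other three classifiers mentioned in the record, one model per HAI type.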

  12. Movement patterns of Tenebrio beetles demonstrate empirically that correlated-random-walks have similitude with a Lévy walk.

    PubMed

    Reynolds, Andy M; Leprêtre, Lisa; Bohan, David A

    2013-11-07

    Correlated random walks are the dominant conceptual framework for modelling and interpreting organism movement patterns. Recent years have witnessed a stream of high profile publications reporting that many organisms perform Lévy walks; movement patterns that seemingly stand apart from the correlated random walk paradigm because they are discrete and scale-free rather than continuous and scale-finite. Our new study of the movement patterns of Tenebrio molitor beetles in unchanging, featureless arenas provides the first empirical support for a remarkable and deep theoretical synthesis that unites correlated random walks and Lévy walks. It demonstrates that the two models are complementary rather than competing descriptions of movement pattern data and shows that correlated random walks are a part of the Lévy walk family. It follows from this that vast numbers of Lévy walkers could be hiding in plain sight.

  13. Random Matrix Approach to Quantum Adiabatic Evolution Algorithms

    NASA Technical Reports Server (NTRS)

    Boulatov, Alexei; Smelyanskiy, Vadier N.

    2004-01-01

    We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided-crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary Ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.

  14. Random matrix theory and cross-correlations in global financial indices and local stock market indices

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2013-02-01

    We analyzed cross-correlations between price fluctuations of global financial indices (20 daily stock indices around the world) and local indices (daily indices of 200 companies in the Korean stock market) by using random matrix theory (RMT). We compared eigenvalues and components of the largest and the second-largest eigenvectors of the cross-correlation matrix before, during, and after the global financial crisis of 2008. We find that the majority of the eigenvalues fall within the RMT bounds [λ−, λ+], where λ− and λ+ are the lower and upper bounds of the eigenvalues of random correlation matrices. The components of the eigenvectors for the largest positive eigenvalues indicate the identical financial market mode dominating the global and local indices. On the other hand, the components of the eigenvector corresponding to the second-largest eigenvalue alternate between positive and negative values. The components before the crisis change sign during the crisis, and those during the crisis change sign after the crisis. The largest inverse participation ratio (IPR), corresponding to the smallest eigenvector, is higher after the crisis than during any other period in the global and local indices. During the global financial crisis, the correlations among the global indices and among the local stock indices were perturbed significantly. However, the correlations between indices quickly recovered their pre-crisis trends.
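
    For N assets observed over T days of i.i.d. returns, the RMT bounds are λ± = (1 ± √(N/T))². A quick numerical check of the bounds and of the inverse participation ratio on pure-noise data (where nearly all eigenvalues should sit in the bulk and eigenvectors should be delocalized):

```python
import numpy as np

# Marchenko-Pastur bounds and IPR for a pure-noise correlation matrix.
rng = np.random.default_rng(4)
T, N = 1000, 200                          # trading days, stocks
R = rng.standard_normal((T, N))           # i.i.d. returns: pure-noise benchmark
C = np.corrcoef(R, rowvar=False)
w, V = np.linalg.eigh(C)

q = N / T
lam_minus = (1 - np.sqrt(q)) ** 2         # lower RMT bound
lam_plus = (1 + np.sqrt(q)) ** 2          # upper RMT bound
frac_in_bulk = np.mean((w >= lam_minus) & (w <= lam_plus))

ipr = np.sum(V ** 4, axis=0)              # inverse participation ratio, ~3/N if delocalized
```

    On real market data the market mode produces one eigenvalue far above `lam_plus`, and localized eigenvectors (high IPR) flag small groups of strongly co-moving assets, which is how the record distinguishes signal from noise.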

  15. Controlling the invasive diatom Didymosphenia geminata: an ecotoxicity assessment of four potential biocides.

    PubMed

    Jellyman, P G; Clearwater, S J; Clayton, J S; Kilroy, C; Blair, N; Hickey, C W; Biggs, B J F

    2011-07-01

    In 2004, an invasive mat-forming freshwater diatom, Didymosphenia geminata (didymo), was found in New Zealand causing concern with regard to potential consequences for local freshwater ecosystems. A four-stage research program was initiated to identify methods to control D. geminata. This article reports the results of Stage 2, in which four potential control compounds [Gemex™ (a chelated copper formulation), EDTA, Hydrothol®191, and Organic Interceptor™ (a pine oil formulation)] selected in Stage 1 were evaluated for their biocidal efficacy on D. geminata and effects on non-target organisms using both artificial stream and laboratory trials. Artificial stream trials evaluated the mortality rates of D. geminata and fishes to three concentrations of the four biocides, whereas laboratory toxicity trials tested the response of green alga and cladocera to a range of biocide concentrations and exposure times. In artificial stream trials, Gemex and Organic Interceptor were the most effective biocides against D. geminata for a number of measured indices; however, exposure of fishes to Organic Interceptor resulted in high mortality rates. Laboratory toxicity testing indicated that Gemex might negatively affect sensitive stream invertebrates, based on the cladoceran sensitivity at the proposed river control dose. A decision support matrix evaluated the four biocides based on nine criteria stipulated by river stakeholders (effectiveness, non-target species impacts, stalk removal, degradation profile, risks to health and safety, ease of application, neutralization potential, cost, and local regulatory requirements) and Gemex was identified as the product warranting further refinement prior to an in-river trial.

  16. Robust reliable sampled-data control for switched systems with application to flight control

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.

    2016-11-01

    This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim of this paper is to obtain a reliable robust sampled-data control design which involves random time delay with an appropriate gain control matrix for achieving robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying, obeying certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of reliable robust sampled-data control in terms of solutions to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.

  17. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.

  18. Phase diagram of matrix compressed sensing

    NASA Astrophysics Data System (ADS)

    Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka

    2016-12-01

    In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.

  19. Mesohabitat indicator species in a coastal stream of the Atlantic rainforest, Rio de Janeiro-Brazil.

    PubMed

    Rezende, Carla Ferreira; Moraes, Maíra; Manna, Luisa Resende; Leitão, Rafael Pereira; Caramaschi, Erica Pelegrinni; Mazzoni, Rosana

    2010-12-01

    The Mato Grosso is a typical Atlantic Forest stream located on the east coast of Brazil, approximately 70 km from Rio de Janeiro city. From its source at about 800 m a.s.l., the stream drains a 30 km² area of the northwestern part of the municipality of Saquarema, state of Rio de Janeiro, and flows into the Saquarema lagoon system. We hypothesized that fish species occupy distinct mesohabitats, with the prediction that their occurrences and densities differ among the mesohabitats of riffles, runs and pools. A 250 m-long stretch of the stream located in its uppermost part, where it becomes second-order, was selected for this study. Mesohabitat description and fish characterization were undertaken. Fish sampling was conducted by electroshocking, and after identification and counting the fish were returned to the stream. For mesohabitat characterization, a Discriminant Function Analysis (DA) was applied. The total number of individuals was estimated by the Zippin method, and the recorded densities were used in an Indicator Species Analysis (ISA), followed by a Monte Carlo test with 1,000 permutations. The DA significantly separated the three predetermined mesohabitats (pool, riffle and run) (WL = 0.13, F = 187.70, p = 0.001). We found five species of fish, belonging to four families and three orders. The fishes Rhamdia quelen, Phalloceros harpagos, Pimelodella lateristriga and Astyanax taeniatus are indicators of the pool environment in the Mato Grosso stream, whereas Characidium cf. vidali is an indicator of the riffle environment. The Monte Carlo test detected non-random mesohabitat use only for P. lateristriga and A. taeniatus in the pools and for Characidium cf. vidali in the riffles. We conclude that the Mato Grosso stream contains three well-defined mesohabitats, with indicator species present in two of them.
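
    The Zippin removal method estimates population size from declining catches over successive passes. In the common two-pass special case it reduces to the closed-form Seber two-catch estimator N̂ = c₁²/(c₁ − c₂), shown below as a sketch (the full Zippin procedure is a maximum-likelihood fit over all passes; the catches used here are hypothetical).

```python
# Two-pass special case of the removal method (Seber two-catch estimator,
# a closed-form simplification of the full Zippin maximum-likelihood fit).
def removal_estimate(c1: float, c2: float) -> float:
    """Estimate population size from catches on two successive passes."""
    if c2 >= c1:
        raise ValueError("method requires a declining catch (c2 < c1)")
    return c1 ** 2 / (c1 - c2)

# Hypothetical catches: 40 fish on the first pass, 10 on the second.
n_hat = removal_estimate(40, 10)          # estimated population size
p_hat = 1 - 10 / 40                       # implied per-pass capture probability
```

    The estimate assumes constant catchability across passes and a closed population during sampling, both reasonable over a short electrofishing session in a 250 m reach.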

  20. Physical controls and predictability of stream hyporheic flow evaluated with a multiscale model

    USGS Publications Warehouse

    Stonedahl, Susa H.; Harvey, Judson W.; Detty, Joel; Aubeneau, Antoine; Packman, Aaron I.

    2012-01-01

    Improved predictions of hyporheic exchange based on easily measured physical variables are needed to improve assessment of solute transport and reaction processes in watersheds. Here we compare physically based model predictions for an Indiana stream with stream tracer results interpreted using the Transient Storage Model (TSM). We parameterized the physically based, Multiscale Model (MSM) of stream-groundwater interactions with measured stream planform and discharge, stream velocity, streambed hydraulic conductivity and porosity, and topography of the streambed at distinct spatial scales (i.e., ripple, bar, and reach scales). We predicted hyporheic exchange fluxes and hyporheic residence times using the MSM. A Continuous Time Random Walk (CTRW) model was used to convert the MSM output into predictions of in-stream solute transport, which we compared with field observations and TSM parameters obtained by fitting solute transport data. MSM simulations indicated that surface-subsurface exchange through smaller topographic features such as ripples was much faster than exchange through larger topographic features such as bars. However, hyporheic exchange varies nonlinearly with groundwater discharge owing to interactions between flows induced at different topographic scales. MSM simulations showed that groundwater discharge significantly decreased both the volume of water entering the subsurface and the time it spent in the subsurface. The MSM also characterized longer timescales of exchange than were observed by the tracer-injection approach. The tracer data, and corresponding TSM fits, were limited by tracer measurement sensitivity and uncertainty in estimates of background tracer concentrations. Our results indicate that rates and patterns of hyporheic exchange are strongly influenced by a continuum of surface-subsurface hydrologic interactions over a wide range of spatial and temporal scales rather than discrete processes.

  1. Unbiased All-Optical Random-Number Generator

    NASA Astrophysics Data System (ADS)

    Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja

    2017-10-01

    The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.

  2. Not all that glitters is RMT in the forecasting of risk of portfolios in the Brazilian stock market

    NASA Astrophysics Data System (ADS)

    Sandoval, Leonidas; Bortoluzzo, Adriana Bruscato; Venezuela, Maria Kelly

    2014-09-01

    Using stocks of the Brazilian stock exchange (BM&F-Bovespa), we build portfolios of stocks based on Markowitz's theory and test the predicted and realized risks. This is done using the correlation matrices between stocks, and also using Random Matrix Theory in order to clean such correlation matrices from noise. We also calculate correlation matrices using a regression model in order to remove the effect of common market movements and their cleaned versions using Random Matrix Theory. This is done for years of both low and high volatility of the Brazilian stock market, from 2004 to 2012. The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so. The results may be used in the assessment of the true risks when one builds a portfolio of stocks during periods of crisis.
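
    The noise-cleaning step described above is commonly implemented by "clipping" eigenvalues that fall inside the Marchenko-Pastur bulk predicted by Random Matrix Theory. A minimal sketch of that standard recipe (illustrative only, not the authors' exact procedure; the clipping rule and function name are assumptions):

```python
import numpy as np

def clean_correlation_matrix(returns):
    """Replace eigenvalues of the empirical correlation matrix that fall
    below the Marchenko-Pastur upper edge with their average (the usual
    RMT "clipping" filter), then rescale to restore a unit diagonal."""
    T, N = returns.shape                    # T observations, N stocks
    C = np.corrcoef(returns, rowvar=False)
    q = N / T
    lam_max = (1 + np.sqrt(q)) ** 2         # upper edge of the MP bulk
    w, V = np.linalg.eigh(C)
    noise = w < lam_max
    if noise.any():
        w[noise] = w[noise].mean()          # flatten the noise band
    C_clean = (V * w) @ V.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)         # unit diagonal restored
```

    The cleaned matrix keeps the large "market" and sector eigenvalues intact while discarding the noisy bulk that inflates predicted portfolio risk.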

  3. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  4. A random matrix approach to language acquisition

    NASA Astrophysics Data System (ADS)

    Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos

    2009-12-01

    Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N~exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.

  5. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution, which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the squared singular values of the matrix product formed from inverse standard Gaussian matrices and standard Gaussian matrices is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. We show that, on its support, this density too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed and is shown to have some universal features.

  6. Continuous exposure to low amplitude extremely low frequency electrical fields characterizing the vascular streaming potential alters elastin accumulation in vascular smooth muscle cells.

    PubMed

    Bergethon, Peter R; Kindler, Dean D; Hallock, Kevin; Blease, Susan; Toselli, Paul

    2013-07-01

    In normal development and pathology, the vascular system depends on complex interactions between cellular elements, biochemical molecules, and physical forces. The electrokinetic vascular streaming potential (EVSP) is an endogenous extremely low frequency (ELF) electrical field resulting from blood flowing past the vessel wall. While generally unrecognized, it is a ubiquitous electrical biophysical force to which the vascular tree is exposed. Extracellular matrix elastin plays a central role in normal blood vessel function and in the development of atherosclerosis. It was hypothesized that ELF fields of low amplitude would alter elastin accumulation, supporting a link between the EVSP and the biology of vascular smooth muscle cells. Neonatal rat aortic smooth muscle cell cultures were exposed chronically to electrical fields characteristic of the EVSP. Extracellular protein accumulation, DNA content, and electron microscopic (EM) evaluation were performed after 2 weeks of exposure. Stimulated cultures showed no significant change in cellular proliferation as measured by the DNA concentration. The per-DNA normalized protein in the extracellular matrix was unchanged while extracellular elastin accumulation decreased 38% on average. EM analysis showed that the stimulated cells had a 2.85-fold increase in mitochondrial number. These results support the formulation that ELF fields are a potential factor in both normal vessel biology and in the pathogenesis of atherosclerotic diseases including heart disease, stroke, and peripheral vascular disease. Copyright © 2013 Wiley Periodicals, Inc.

  7. High-resolution droplet-based fractionation of nano-LC separations onto microarrays for MALDI-MS analysis.

    PubMed

    Küster, Simon K; Pabst, Martin; Jefimovs, Konstantins; Zenobi, Renato; Dittrich, Petra S

    2014-05-20

    We present a robust droplet-based device, which enables the fractionation of ultralow flow rate nanoflow liquid chromatography (nano-LC) eluate streams at high frequencies and high peak resolution. This is achieved by directly interfacing the separation column to a micro T-junction, where the eluate stream is compartmentalized into picoliter droplets. This immediate compartmentalization prevents peak dispersion during eluate transport and conserves the chromatographic performance. Subsequently, nanoliter eluate fractions are collected at a rate of one fraction per second on a high-density microarray to retain the separation with high temporal resolution. Chromatographic separations of up to 45 min runtime can thus be archived on a single microarray possessing 2700 sample spots. The performance of this device is demonstrated by fractionating the separation of a tryptic digest of a known protein mixture onto the microarray chip and subsequently analyzing the sample archive using matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Resulting peak widths are found to be significantly reduced compared to standard continuous flow spotting technologies as well as in comparison to a conventional nano-LC-electrospray ionization-mass spectrometry interface. Moreover, we demonstrate the advantage of our high-definition nanofractionation device by applying two different MALDI matrices to all collected fractions in an alternating fashion. Since the information that is obtained from a MALDI-MS measurement depends on the choice of MALDI matrix, we can extract complementary information from neighboring spots containing almost identical composition but different matrices.

  8. Spectrum of the Wilson Dirac operator at finite lattice spacings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akemann, G.; Damgaard, P. H.; Splittorff, K.

    2011-04-15

    We consider the effect of discretization errors on the microscopic spectrum of the Wilson Dirac operator using both chiral perturbation theory and chiral random matrix theory. A graded chiral Lagrangian is used to evaluate the microscopic spectral density of the Hermitian Wilson Dirac operator as well as the distribution of the chirality over the real eigenvalues of the Wilson Dirac operator. It is shown that a chiral random matrix theory for the Wilson Dirac operator reproduces the leading zero-momentum terms of Wilson chiral perturbation theory. All results are obtained for a fixed index of the Wilson Dirac operator. The low-energy constants of Wilson chiral perturbation theory are shown to be constrained by the Hermiticity properties of the Wilson Dirac operator.

  9. Anderson Localization in Quark-Gluon Plasma

    NASA Astrophysics Data System (ADS)

    Kovács, Tamás G.; Pittler, Ferenc

    2010-11-01

    At low temperature the low end of the QCD Dirac spectrum is well described by chiral random matrix theory. In contrast, at high temperature there is no similar statistical description of the spectrum. We show that at high temperature the lowest part of the spectrum consists of a band of statistically uncorrelated eigenvalues obeying essentially Poisson statistics and the corresponding eigenvectors are extremely localized. Going up in the spectrum the spectral density rapidly increases and the eigenvectors become more and more delocalized. At the same time the spectral statistics gradually crosses over to the bulk statistics expected from the corresponding random matrix ensemble. This phenomenon is reminiscent of Anderson localization in disordered conductors. Our findings are based on staggered Dirac spectra in quenched lattice simulations with the SU(2) gauge group.
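
    The crossover from Poisson to random-matrix statistics described above is often diagnosed with the ratio of consecutive level spacings, which requires no spectral unfolding. A hedged sketch of that diagnostic (using uniform random levels and a GOE matrix as stand-ins for the localized and delocalized regimes of the lattice Dirac spectra, not the paper's SU(2) data):

```python
import numpy as np

def mean_spacing_ratio(levels):
    """Mean ratio of consecutive level spacings,
    r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}).
    Uncorrelated (Poisson) spectra give <r> ~ 0.39, while GOE-like
    random-matrix spectra give <r> ~ 0.53."""
    e = np.sort(np.asarray(levels, dtype=float))
    s = np.diff(e)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

rng = np.random.default_rng(1)
# Localized-regime stand-in: independent, uncorrelated levels
r_poisson = mean_spacing_ratio(rng.uniform(0.0, 1.0, 4000))
# Delocalized-regime stand-in: eigenvalues of a real symmetric (GOE) matrix
H = rng.standard_normal((1000, 1000))
r_goe = mean_spacing_ratio(np.linalg.eigvalsh((H + H.T) / 2))
```

    Tracking this ratio along the spectrum is one way to locate the mobility edge separating the two statistical regimes.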

  10. Horizon in Random Matrix Theory, the Hawking Radiation, and Flow of Cold Atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franchini, Fabio; Kravtsov, Vladimir E.

    2009-10-16

    We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics, and unconventional RMEs, with possible experimental applications.

  11. Correlation analysis of the Korean stock market: Revisited to consider the influence of foreign exchange rate

    NASA Astrophysics Data System (ADS)

    Jo, Sang Kyun; Kim, Min Jae; Lim, Kyuseong; Kim, Soo Yong

    2018-02-01

    We investigated the effect of foreign exchange rate in a correlation analysis of the Korean stock market using both random matrix theory and minimum spanning tree. We collected data sets which were divided into two types of stock price, the original stock price in Korean Won and the price converted into US dollars at contemporary foreign exchange rates. Comparing the random matrix theory based on the two different prices, a few particular sectors exhibited substantial differences while other sectors changed little. The particular sectors were closely related to economic circumstances and the influence of foreign financial markets during that period. The method introduced in this paper offers a way to pinpoint the effect of exchange rate on an emerging stock market.
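
    Minimum spanning trees of the kind used above are conventionally built from the correlation-derived distance d_ij = sqrt(2(1 - rho_ij)) (Mantegna's construction). A small sketch using Prim's algorithm (illustrative; not the authors' pipeline, and the matrix below is made up):

```python
import numpy as np

def correlation_mst(corr):
    """Minimum spanning tree (Prim's algorithm) of the distance matrix
    d_ij = sqrt(2 * (1 - rho_ij)) derived from a correlation matrix,
    the usual construction for asset trees."""
    n = corr.shape[0]
    dist = np.sqrt(2.0 * (1.0 - corr))
    in_tree = [0]                      # start the tree at node 0
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < best[2]):
                    best = (i, j, dist[i, j])
        edges.append((best[0], best[1]))
        in_tree.append(best[1])
    return edges                       # n - 1 edges connecting all assets
```

    Strongly correlated stocks sit close together in the resulting tree, so sector structure (and its sensitivity to the currency of denomination) can be read off from the branches.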

  12. Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation

    PubMed Central

    Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan

    2013-01-01

    The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
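
    The push-flow algorithm itself is specific to the paper, but the family of randomized aggregation schemes it builds on can be illustrated with the classic push-sum gossip protocol for distributed averaging. The sketch below is that simpler stand-in (not the push-flow algorithm; round count and sharing fraction are arbitrary):

```python
import random

def push_sum_average(values, rounds=200, seed=0):
    """Estimate the global average at every node by randomized gossip
    (push-sum): each round, every node keeps half of its running sum and
    weight and pushes the other half to a uniformly random node.  The
    ratio s_i / w_i at each node converges to the true average."""
    rng = random.Random(seed)
    n = len(values)
    s = [float(v) for v in values]   # running sums
    w = [1.0] * n                    # running weights
    for _ in range(rounds):
        new_s = [0.0] * n
        new_w = [0.0] * n
        for i in range(n):
            target = rng.randrange(n)          # random gossip partner
            for j in (i, target):
                new_s[j] += 0.5 * s[i]
                new_w[j] += 0.5 * w[i]
        s, w = new_s, new_w
    return [si / wi for si, wi in zip(s, w)]
```

    Because the total sum and total weight are conserved every round, node failures that lose mass are the key difficulty such schemes must handle, which is the property the push-flow algorithm improves.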

  13. A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2013-01-01

    Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
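
    In a single dimension the proposed estimator reduces to the DerSimonian-Laird method of moments, which can be written down compactly. The following sketch implements that univariate special case (standard textbook formulas; the function name and data are illustrative, not the authors' multivariate code):

```python
def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird method-of-moments estimate of the
    between-study variance tau^2, plus the resulting random-effects
    pooled estimate."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)               # truncate at zero
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return tau2, pooled
```

    The multivariate generalisation replaces the scalar tau^2 by a between-study covariance matrix estimated from an analogous moment equation.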

  14. Finite-time scaling at the Anderson transition for vibrations in solids

    NASA Astrophysics Data System (ADS)

    Beltukov, Y. M.; Skipetrov, S. E.

    2017-11-01

    A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν =1.57 ±0.02 , in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
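
    The dynamical-matrix construction described above, M = A Aᵀ with A a sparse random matrix of independent Gaussian nonzero entries, guarantees a real, nonnegative spectrum of squared frequencies. A dense toy sketch of the construction (sizes and sparsity here are arbitrary and far smaller than the paper's networks):

```python
import numpy as np

def random_dynamical_matrix(n, nnz_per_row=3, seed=0):
    """Build M = A A^T where A has a few independent Gaussian-distributed
    nonzero entries per row.  By construction M is symmetric positive
    semidefinite, so all squared vibration frequencies are real and
    nonnegative, as required for a stable elastic network."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for i in range(n):
        cols = rng.choice(n, size=nnz_per_row, replace=False)
        A[i, cols] = rng.standard_normal(nnz_per_row)
    return A @ A.T
```

    Sparsity of A controls the connectivity of the mass-spring network; the eigenvectors of M then carry the localization properties probed by the finite-time scaling analysis.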

  15. Exploiting Surface Albedos Products to Bridge the Gap Between Remote Sensing Information and Climate Models

    NASA Astrophysics Data System (ADS)

    Pinty, Bernard; Andredakis, Ioannis; Clerici, Marco; Kaminski, Thomas; Taberner, Malcolm; Plummer, Stephen

    2011-01-01

    We present results from the application of an inversion method conducted using MODIS-derived broadband visible and near-infrared surface albedo products. This contribution is an extension of earlier efforts to optimally retrieve land surface fluxes and associated two-stream model parameters based on the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The discussion focuses on products (based on the mean and one-sigma values of the Probability Distribution Functions (PDFs)) obtained during the summer and winter, and highlights specific issues related to snowy conditions. This paper discusses the retrieved model parameters including the effective Leaf Area Index (LAI), the background brightness and the scattering efficiency of the vegetation elements. The spatial and seasonal changes exhibited by these parameters agree with common knowledge and underscore the richness of the high quality surface albedo data sets. At the same time, the opportunity to generate global maps of new products, such as the background albedo, underscores the advantages of using state-of-the-art algorithmic approaches capable of fully exploiting accurate satellite remote sensing datasets. The detailed analyses of the retrieval uncertainties highlight the central role and contribution of the LAI, the main process parameter to interpret radiation transfer observations over vegetated surfaces. The posterior covariance matrix of the uncertainties is further exploited to quantify the knowledge gain from the ingestion of MODIS surface albedo products. The estimation of the radiation fluxes that are absorbed, transmitted and scattered by the vegetation layer and its background is achieved on the basis of the retrieved PDFs of the model parameters. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties.
This matrix is propagated to the radiation fluxes via the model’s Jacobian matrix of first derivatives. A definite asset of the JRC-TIP lies in its capability to control and ultimately relax a number of assumptions that are often implicit in traditional approaches. These features greatly help understand the discrepancies between the different data sets of land surface properties and fluxes that are currently available. Through a series of selected examples, the inverse procedure implemented in the JRC-TIP is shown to be robust, reliable and compliant with large scale processing requirements. Furthermore, this package ensures the physical consistency between the set of observations, the two-stream model parameters and radiation fluxes. It also documents the retrieval of associated uncertainties. The knowledge gained from the availability of remote sensing surface albedo products can be expressed in quantitative terms using a simple metric. This metric helps identify the geographical locations and periods of the year where the remote sensing products fail in reducing the uncertainty on the process model parameters as can be specified from current knowledge.
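
    The uncertainty propagation described above follows the standard first-order rule Sigma_f = J Sigma_p Jᵀ, with J the model's Jacobian evaluated at the retrieved parameters. A generic illustration of that rule (not JRC-TIP code; the matrices below are made up):

```python
import numpy as np

def propagate_uncertainty(jacobian, param_cov):
    """First-order propagation of a posterior parameter covariance to
    derived quantities (e.g. radiation fluxes) via the model Jacobian:
    Sigma_f = J Sigma_p J^T."""
    J = np.asarray(jacobian, dtype=float)
    S = np.asarray(param_cov, dtype=float)
    return J @ S @ J.T
```

    Off-diagonal terms of the result capture how correlated parameter errors (e.g. between LAI and background brightness) translate into correlated flux errors.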

  16. 78 FR 17930 - National Rivers and Streams Assessment 2008-2009 Draft Report

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... Using a statistical survey design, sites were selected at random to represent the condition of the... describes the results of the nationwide probabilistic survey that was conducted in the summers of 2008 and... conterminous United States. The draft NRSA 2008-2009 report includes information on how the survey was...

  17. A randomized controlled trial of soap opera videos streamed to smartphones to reduce risk of sexually transmitted human immunodeficiency virus (HIV) in young urban African American women.

    PubMed

    Jones, Rachel; Hoover, Donald R; Lacroix, Lorraine J

    2013-01-01

    Love, Sex, and Choices (LSC) is a soap opera video series created to reduce HIV sex risk in women. LSC was compared to text messages in a randomized trial of 238 high-risk, mostly Black, young urban women: 117 received 12 weekly LSC videos and 121 received 12 weekly HIV prevention messages on smartphones. Changes in unprotected sex with high-risk partners were compared by mixed models. Unprotected sex with high-risk men significantly declined over the 6 months post-intervention in both arms, from 21-22 acts to 5-6 (p < 0.001). This reduction was 18% greater in the video arm than in the text arm, though the difference was not statistically significant. However, LSC was highly popular, and viewers wanted the series to continue. This is the first study to report streaming soap opera video episodes to smartphones to reduce HIV risk. LSC holds promise as an Internet intervention that could be scaled up and combined with HIV testing. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Valuing recreational fishing quality at rivers and streams

    NASA Astrophysics Data System (ADS)

    Melstrom, Richard T.; Lupi, Frank; Esselman, Peter C.; Stevenson, R. Jan

    2015-01-01

    This paper describes an economic model that links the demand for recreational stream fishing to fish biomass. Useful measures of fishing quality are often difficult to obtain. In the past, economists have linked the demand for fishing sites to species presence-absence indicators or average self-reported catch rates. The demand model presented here takes advantage of a unique data set of statewide biomass estimates for several popular game fish species in Michigan, including trout, bass and walleye. These data are combined with fishing trip information from a 2008-2010 survey of Michigan anglers in order to estimate a demand model. Fishing sites are defined by hydrologic unit boundaries and information on fish assemblages so that each site corresponds to the area of a small subwatershed, about 100-200 square miles in size. The random utility model choice set includes nearly all fishable streams in the state. The results indicate a significant relationship between the site choice behavior of anglers and the biomass of certain species. Anglers are more likely to visit streams in watersheds high in fish abundance, particularly for brook trout and walleye. The paper includes estimates of the economic value of several quality change and site loss scenarios.

  19. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue

    PubMed Central

    2011-01-01

    Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. 
This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963

  20. Quantifying economic fluctuations by adapting methods of statistical physics

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki

    2001-09-01

    The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system, so this framework is of potential value if applied to economic systems. This thesis compares the statistics of the cross-correlation matrix C, whose elements Cij are the correlation coefficients of the price fluctuations of stocks i and j, against the "null hypothesis" of a random matrix having the same symmetry properties. It is shown that comparison of the eigenvalue statistics of C with RMT results can be used to distinguish the random and non-random parts of C. The non-random part of C, which deviates from RMT results, provides information regarding genuine cross-correlations between stocks. The interpretations and potential practical utility of these deviations are also investigated. The second focus is the characterization of the dynamics of stock price fluctuations. The statistical properties of the price changes GΔt over a time interval Δt are quantified, and the statistical relation between GΔt and the trading activity, measured by the number of transactions NΔt in the interval Δt, is investigated. The statistical properties of the volatility, i.e., the time-dependent standard deviation of price fluctuations, are related to two microscopic quantities: NΔt and the variance W²Δt of the price changes over all transactions in the interval Δt. In addition, the statistical relationship between GΔt and the number of shares QΔt traded in Δt is investigated.

  1. Generation of physical random numbers by using homodyne detection

    NASA Astrophysics Data System (ADS)

    Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro

    2016-10-01

    Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. We analyzed these numbers statistically with the Crush test battery of the TestU01 suite, which consists of 31 tests in 144 variations. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
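
    Toeplitz hashing as used above multiplies the raw bit vector by a random binary Toeplitz matrix over GF(2); because the matrix is fixed by a single seed string of diagonal values, it is cheap to store and apply. An illustrative sketch (matrix sizes and seed handling are arbitrary assumptions, not the authors' implementation):

```python
import numpy as np

def toeplitz_hash(bits, out_len, seed=0):
    """Post-process raw random bits by multiplying with a random binary
    Toeplitz matrix over GF(2).  The matrix is constant along each
    diagonal, so out_len + n - 1 seed bits determine it completely."""
    bits = np.asarray(bits, dtype=np.uint8)
    n = bits.size
    rng = np.random.default_rng(seed)
    diags = rng.integers(0, 2, size=out_len + n - 1, dtype=np.uint8)
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        T[i] = diags[i:i + n][::-1]        # row i shifts the diagonal seed
    # matrix-vector product mod 2 (cast to int to avoid uint8 overflow)
    return (T.astype(int) @ bits.astype(int)) % 2
```

    The map is linear over GF(2), which is what makes its extraction properties analyzable in the leftover-hash framework.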

  2. Ecological Status of Wyoming Streams, 2000-2003

    USGS Publications Warehouse

    Peterson, David A.; Hargett, Eric G.; Wright, Peter R.; Zumberge, Jeremy R.

    2007-01-01

    The ecological status of perennial streams in Wyoming was determined and compared with the status of perennial streams throughout 12 States in the western United States, using data collected as part of the Western Pilot Environmental Monitoring and Assessment Program (EMAP-West). Results for Wyoming are compared and contrasted in the context of the entire EMAP-West study area (west-wide) and climatic regions (based on aggregated ecoregions) within Wyoming. In Wyoming, ecological status, estimated as the proportion of the perennial stream length in least disturbed, most disturbed, and intermediate disturbance condition, based on ecological indicators of vertebrate and invertebrate assemblages was similar, in many cases, to the status of those assemblages determined for EMAP-West. Ecological status based on chemical and physical habitat stressors also was similar in Wyoming to west-wide proportions in many cases. Riparian disturbance was one of the most common physical stressors west-wide and in Wyoming. The estimates of riparian disturbance indicated about 90 percent of the stream length in the xeric climatic region in Wyoming was rated most disturbed, compared to about 30 percent rated most disturbed in the mountain climatic region in Wyoming. Results from analyses using a macroinvertebrate multi-metric index (MMI) and macroinvertebrate ratio of observed to expected taxa (O/E) developed specifically for the west-wide EMAP study were compared to results using a macroinvertebrate MMI and O/E developed for Wyoming. Proportions of perennial stream length in various condition categories determined from macroinvertebrate MMIs often were similar in Wyoming to proportions observed west-wide. Differences were larger, but not extreme, between west-wide and Wyoming O/E models. 
An aquatic life use support decision matrix developed for interpreting the Wyoming MMI and O/E model data indicated that about one-half of the stream length statewide achieves the State's narrative aquatic life use criteria; the remainder of the stream length either exceeds the criteria, indicating partial or non-support of aquatic life uses, or is undetermined. These results provide initial estimates of aquatic life use support on a statewide basis, as required for 305(b) reporting, and, coupled with current and future State-level probability survey designs, a foundation for tracking conditions over time at multiple scales.

  3. Gender issues in a cataract surgical population in South India.

    PubMed

    Joseph, Sanil; Ravilla, Thulasiraj; Bassett, Ken

    2013-04-01

    To investigate patterns and characteristics of men and women who used different cataract surgery payment streams in a South Indian hospital, we randomly sampled patients with age-related cataract aged 40 years and over from three routine cataract surgical service streams: walk-in paying, walk-in subsidized, and free camp. Presenting visual acuity (VA) and cataract surgical details were obtained from routine hospital records. Demographic and socioeconomic factors were collected from patient interviews. Multiple logistic regression was used to investigate factors associated with use of the different streams, with walk-in paying as the reference group. There were 7076 eligible admissions (3742 women and 3334 men). Proportionately more women than men attended the walk-in subsidized (56%) or free camp sections (55%) compared to the walk-in paying stream (42%; odds ratio, OR, 1.40, 95% confidence interval, CI, 1.25-1.57, and OR 1.33, 95% CI 1.19-1.49, respectively). After adjustment for socioeconomic factors (illiteracy, not being in paid work), rural residence, and poor presenting VA, the OR for women compared to men was 1.02 (95% CI 0.87-1.18) for the walk-in subsidized stream and 0.94 (95% CI 0.80-1.11) for the free camp. Our results indicate that women are underrepresented in the paying section, reflecting their poorer socioeconomic and educational status.

  4. Comprehensive Thematic T-Matrix Reference Database: A 2015-2017 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas

    2017-01-01

    The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.

  5. Comprehensive thematic T-matrix reference database: A 2015-2017 update

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas

    2017-11-01

    The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.

  6. Topological Distances Between Brain Networks

    PubMed Central

    Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.

    2018-01-01

    Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences, and matrix norms are sensitive to outliers: a few extreme edge weights may severely affect the distance. Thus it is necessary to develop network distances that recognize topology. In this paper, we introduce the Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. The GH-distance is often used in persistent-homology-based brain network models. The superior performance of the KS-distance is contrasted against matrix norms and the GH-distance in random network simulations with known ground truths. The KS-distance is then applied to characterize a multimodal MRI and DTI study of maltreated children.
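    The outlier sensitivity contrasted above can be illustrated with a small sketch. The networks are hypothetical, and the KS-distance here is computed between edge-weight distributions, one simple distribution-based choice rather than the paper's exact persistent-homology construction:

    ```python
    import numpy as np

    def ks_distance(w1, w2):
        # KS distance between the empirical CDFs of two edge-weight samples.
        grid = np.sort(np.concatenate([w1, w2]))
        cdf1 = np.searchsorted(np.sort(w1), grid, side="right") / len(w1)
        cdf2 = np.searchsorted(np.sort(w2), grid, side="right") / len(w2)
        return np.abs(cdf1 - cdf2).max()

    rng = np.random.default_rng(4)
    n = 40
    W1 = rng.random((n, n)); W1 = (W1 + W1.T) / 2   # symmetric weight matrix
    W2 = W1.copy()
    W2[0, 1] = W2[1, 0] = 50.0                      # one extreme edge weight

    iu = np.triu_indices(n, k=1)
    frob = np.linalg.norm(W1 - W2)                  # matrix norm: dominated by the outlier
    ks = ks_distance(W1[iu], W2[iu])                # distribution-based: barely moves
    print(f"Frobenius distance: {frob:.1f}, KS distance: {ks:.4f}")
    ```

    A single corrupted edge swings the matrix-norm distance wildly while leaving the KS-distance nearly unchanged, which is the robustness property motivating the paper.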

  7. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
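    The second stage, matrix completion under a nuclear-norm penalty solved by proximal gradient steps, can be sketched roughly as follows. This is a soft-impute-style illustration on synthetic low-rank data, not the authors' exact accelerated algorithm or real NMR distance data:

    ```python
    import numpy as np

    def complete_matrix(M_obs, mask, tau=0.5, iters=200):
        # Soft-impute-style matrix completion: alternately fill the unobserved
        # entries with the current low-rank estimate, then shrink singular
        # values (the proximal step for the nuclear norm).
        X = np.where(mask, M_obs, 0.0)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = np.maximum(s - tau, 0.0)      # soft-threshold singular values
            Z = (U * s) @ Vt
            X = np.where(mask, M_obs, Z)      # keep observed entries fixed
        return np.where(mask, M_obs, Z)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3
    mask = rng.random(A.shape) < 0.6          # observe ~60% of the entries
    A_hat = complete_matrix(A, mask)
    err = np.linalg.norm(A_hat - A) / np.linalg.norm(A)
    print(f"relative recovery error: {err:.3f}")
    ```

    The same idea applies to an incomplete distance matrix once enough entries are known; the paper's first stage exists precisely to supply those extra entries via the triangle inequality.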

  8. Random lasing in dye-doped polymer dispersed liquid crystal film

    NASA Astrophysics Data System (ADS)

    Wu, Rina; Shi, Rui-xin; Wu, Xiaojiao; Wu, Jie; Dai, Qin

    2016-09-01

    A dye-doped polymer-dispersed liquid crystal film was designed and fabricated, and its random lasing action was studied. A mixture of laser dye, nematic liquid crystal, chiral dopant, and PVA was used to prepare the film by means of microcapsules. Scanning electron microscopy analysis showed that most liquid crystal droplets in the polymer matrix ranged from 30 μm to 40 μm in size. Under optical excitation by a frequency-doubled 532 nm Nd:YAG pump laser, a number of discrete, sharp random-lasing peaks were measured in the range of 575-590 nm. The line-width of the lasing peaks was 0.2 nm and the threshold of the random lasing was 9 mJ. Under heating, the random-lasing emission peaks disappeared. Measurement of the energy distribution of the emission light spot confirmed that the radiation mechanism was random lasing; this mechanism was then analyzed and discussed. Experimental results indicated that the size of the liquid crystal droplets is the decisive factor influencing the lasing mechanism. When the droplets in the polymer matrix are small, the surface anchoring role can be ignored, which is beneficial for multiple scattering. The transmission path of photons is then similar to that in a ring cavity, providing the feedback needed for random lasing output. Project supported by the National Natural Science Foundation of China (Grant No. 61378042), the Liaoning Province Colleges and Universities Outstanding Young Scholars Growth Plan, China (Grant No. LJQ2015093), and the Shenyang Ligong University Laser and Optical Information of Liaoning Province Key Laboratory Open Funds, China.

  9. Predicting redox-sensitive contaminant concentrations in groundwater using random forest classification

    NASA Astrophysics Data System (ADS)

    Tesoriero, Anthony J.; Gronberg, Jo Ann; Juckem, Paul F.; Miller, Matthew P.; Austin, Brian P.

    2017-08-01

    Machine learning techniques were applied to a large (n > 10,000) compliance monitoring database to predict the occurrence of several redox-active constituents in groundwater across a large watershed. Specifically, random forest classification was used to determine the probabilities of detecting elevated concentrations of nitrate, iron, and arsenic in the Fox, Wolf, Peshtigo, and surrounding watersheds in northeastern Wisconsin. Random forest classification is well suited to describe the nonlinear relationships observed among several explanatory variables and the predicted probabilities of elevated concentrations of nitrate, iron, and arsenic. Maps of the probability of elevated nitrate, iron, and arsenic can be used to assess groundwater vulnerability and the vulnerability of streams to contaminants derived from groundwater. Processes responsible for elevated concentrations are elucidated using partial dependence plots. For example, an increase in the probability of elevated iron and arsenic occurred when well depths coincided with the glacial/bedrock interface, suggesting a bedrock source for these constituents. Furthermore, groundwater in contact with Ordovician bedrock has a higher likelihood of elevated iron concentrations, which supports the hypothesis that groundwater liberates iron from a sulfide-bearing secondary cement horizon of Ordovician age. Application of machine learning techniques to existing compliance monitoring data offers an opportunity to broadly assess aquifer and stream vulnerability at regional and national scales and to better understand geochemical processes responsible for observed conditions.
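    A hedged sketch of the core idea, with synthetic stand-in data rather than the actual compliance-monitoring database (the relationship between well depth and the bedrock interface below is invented for illustration):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in: "elevated iron" is most likely when the well depth
    # coincides with the glacial/bedrock interface, a deliberately nonlinear
    # relationship of the kind random forests capture well.
    rng = np.random.default_rng(7)
    n = 2000
    well_depth = rng.uniform(5, 150, n)
    interface_depth = rng.uniform(5, 150, n)
    p = 1 / (1 + np.exp(np.abs(well_depth - interface_depth) / 10 - 2))
    y = rng.random(n) < p
    X = np.column_stack([well_depth, interface_depth])

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    # Predicted probability of an elevated concentration for a well right at
    # the interface versus one far from it:
    proba_near = rf.predict_proba([[80.0, 80.0]])[0, 1]
    proba_far = rf.predict_proba([[20.0, 140.0]])[0, 1]
    print(f"P(elevated | near interface) = {proba_near:.2f}")
    print(f"P(elevated | far from interface) = {proba_far:.2f}")
    ```

    Evaluating such probabilities over a grid of locations is what produces the vulnerability maps described above, and partial dependence plots summarize how each predictor shifts the predicted probability.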

  10. Predicting redox-sensitive contaminant concentrations in groundwater using random forest classification

    USGS Publications Warehouse

    Tesoriero, Anthony J.; Gronberg, Jo Ann M.; Juckem, Paul F.; Miller, Matthew P.; Austin, Brian P.

    2017-01-01

    Machine learning techniques were applied to a large (n > 10,000) compliance monitoring database to predict the occurrence of several redox-active constituents in groundwater across a large watershed. Specifically, random forest classification was used to determine the probabilities of detecting elevated concentrations of nitrate, iron, and arsenic in the Fox, Wolf, Peshtigo, and surrounding watersheds in northeastern Wisconsin. Random forest classification is well suited to describe the nonlinear relationships observed among several explanatory variables and the predicted probabilities of elevated concentrations of nitrate, iron, and arsenic. Maps of the probability of elevated nitrate, iron, and arsenic can be used to assess groundwater vulnerability and the vulnerability of streams to contaminants derived from groundwater. Processes responsible for elevated concentrations are elucidated using partial dependence plots. For example, an increase in the probability of elevated iron and arsenic occurred when well depths coincided with the glacial/bedrock interface, suggesting a bedrock source for these constituents. Furthermore, groundwater in contact with Ordovician bedrock has a higher likelihood of elevated iron concentrations, which supports the hypothesis that groundwater liberates iron from a sulfide-bearing secondary cement horizon of Ordovician age. Application of machine learning techniques to existing compliance monitoring data offers an opportunity to broadly assess aquifer and stream vulnerability at regional and national scales and to better understand geochemical processes responsible for observed conditions.

  11. Properties of networks with partially structured and partially random connectivity

    NASA Astrophysics Data System (ADS)

    Ahmadian, Yashar; Fumarola, Francesco; Miller, Kenneth D.

    2015-01-01

    Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N × N matrices of the form A = M + LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and the frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N → ∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions Ω of the complex plane where there are nonzero singular values of L⁻¹(z1 − M)R⁻¹ (for z ∈ Ω) that vanish as N → ∞. When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ-pseudospectrum of M.
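    The setup A = M + LJR can be explored numerically. A minimal sketch with a feedforward-chain M (a toy choice in the spirit of the neurobiological examples, not taken from the paper) shows how the random part spreads the spectrum of a non-normal mean matrix far beyond the circular-law radius of the noise alone:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 400
    # Non-normal deterministic mean: a feedforward chain (nilpotent, so all
    # of M's eigenvalues are exactly zero).
    M = np.diag(np.full(N - 1, 1.5), k=1)
    J = rng.standard_normal((N, N)) / np.sqrt(N)   # zero-mean iid, variance 1/N
    sigma = 0.5
    A = M + sigma * J                              # here L = sigma*I, R = I

    eigs = np.linalg.eigvals(A)
    # Although every eigenvalue of M is 0, the spectrum of A spreads over a
    # region whose size reflects both sigma and the non-normality of M;
    # sigma alone would give a circular-law disk of radius 0.5.
    print(f"largest |eigenvalue| of A: {np.abs(eigs).max():.2f}")
    ```

    The large discrepancy between the spectral radius of A and that of M is exactly the non-normal sensitivity that makes the eigenvalues alone insufficient to characterize the dynamics.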

  12. Random matrix theory filters in portfolio optimisation: A stability and risk assessment

    NASA Astrophysics Data System (ADS)

    Daly, J.; Crane, M.; Ruskin, H. J.

    2008-07-01

    Random matrix theory (RMT) filters, applied to covariance matrices of financial returns, have recently been shown to improve the optimisation of stock portfolios. This paper studies the effect of three RMT filters on the realised portfolio risk, and on the stability of the filtered covariance matrix, using bootstrap analysis and out-of-sample testing. We propose an extension to an existing RMT filter (based on Krzanowski stability), which is observed to reduce risk and increase stability when compared to the other RMT filters tested. We also study a scheme for filtering the covariance matrix directly, as opposed to the standard method of filtering the correlation matrix, where the latter is found to lower the realised risk, on average, by up to 6.7%. We consider both equally and exponentially weighted covariance matrices in our analysis, and observe that the overall best method out-of-sample was the exponentially weighted covariance with our Krzanowski-stability-based filter applied to the correlation matrix. We also find that the optimal out-of-sample decay factors, for both filtered and unfiltered forecasts, were higher than those suggested by Riskmetrics [J.P. Morgan, Reuters, Riskmetrics technical document, Technical Report, 1996. http://www.riskmetrics.com/techdoc.html], with those for the latter approaching a value of α = 1. In conclusion, RMT filtering reduced the realised risk on average, and in the majority of cases when tested out-of-sample, but increased the realised risk on a marked number of individual days, in some cases more than doubling it.
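    The standard RMT "clipping" filter on a correlation matrix, one common variant of the filters compared above (not necessarily the Krzanowski-stability extension proposed here), can be sketched as:

    ```python
    import numpy as np

    def rmt_filter_correlation(C, q):
        # Eigenvalue clipping: eigenvalues below the Marchenko-Pastur upper
        # edge are treated as noise and replaced by their average (preserving
        # the trace), while larger eigenvalues (genuine structure) are kept.
        lam_max = (1 + np.sqrt(q)) ** 2
        w, V = np.linalg.eigh(C)
        noise = w < lam_max
        w_f = w.copy()
        if noise.any():
            w_f[noise] = w[noise].mean()
        C_f = (V * w_f) @ V.T
        d = np.sqrt(np.diag(C_f))
        return C_f / np.outer(d, d)   # restore unit diagonal

    rng = np.random.default_rng(5)
    N, T = 40, 200
    market = rng.standard_normal(T)[:, None]          # common "market" factor
    returns = 0.4 * market + rng.standard_normal((T, N))
    C = np.corrcoef(returns, rowvar=False)
    C_f = rmt_filter_correlation(C, q=N / T)
    print("unit diagonal:", np.allclose(np.diag(C_f), 1.0))
    ```

    The filtered matrix C_f (or a covariance matrix rebuilt from it) then feeds the portfolio optimiser in place of the raw, noise-dominated estimate.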

  13. Spatially patterned matrix elasticity directs stem cell fate

    NASA Astrophysics Data System (ADS)

    Yang, Chun; DelRio, Frank W.; Ma, Hao; Killaars, Anouk R.; Basta, Lena P.; Kyburz, Kyle A.; Anseth, Kristi S.

    2016-08-01

    There is a growing appreciation for the functional role of matrix mechanics in regulating stem cell self-renewal and differentiation processes. However, it is largely unknown how subcellular, spatial mechanical variations in the local extracellular environment mediate intracellular signal transduction and direct cell fate. Here, the effect of spatial distribution, magnitude, and organization of subcellular matrix mechanical properties on human mesenchymal stem cell (hMSC) function was investigated. Exploiting a photodegradation reaction, a hydrogel cell culture substrate was fabricated with regions of spatially varied and distinct mechanical properties, which were subsequently mapped and quantified by atomic force microscopy (AFM). The variations in the underlying matrix mechanics were found to regulate cellular adhesion and transcriptional events. Highly spread, elongated morphologies and higher Yes-associated protein (YAP) activation were observed in hMSCs seeded on hydrogels with higher concentrations of stiff regions in a dose-dependent manner. However, when the spatial organization of the mechanically stiff regions was altered from a regular to a randomized pattern, lower levels of YAP activation with smaller and more rounded cell morphologies were induced in hMSCs. We infer from these results that irregular, disorganized variations in matrix mechanics, compared with regular patterns, appear to disrupt actin organization and lead to different cell fates; this was verified by observations of lower alkaline phosphatase (ALP) activity and higher expression of CD105, a stem cell marker, in hMSCs in random versus regular patterns of mechanical properties. Collectively, this material platform has allowed innovative experiments to elucidate a novel spatial mechanical dosing mechanism that correlates with both the magnitude and organization of spatial stiffness.

  14. Does hearing two dialects at different times help infants learn dialect-specific rules?

    PubMed Central

    Gonzales, Kalim; Gerken, LouAnn; Gómez, Rebecca L.

    2015-01-01

    Infants might be better at teasing apart dialects with different language rules when hearing the dialects at different times, since language learners do not always combine input heard at different times. However, no previous research has independently varied the temporal distribution of conflicting language input. Twelve-month-olds heard two artificial language streams representing different dialects—a “pure stream” whose sentences adhered to abstract grammar rules like aX bY, and a “mixed stream” wherein any a- or b-word could precede any X- or Y-word. Infants were then tested for generalization of the pure stream’s rules to novel sentences. Supporting our hypothesis, infants showed generalization when the two streams’ sentences alternated in minutes-long intervals without any perceptually salient change across streams (Experiment 2), but not when all sentences from these same streams were randomly interleaved (Experiment 3). Results are interpreted in light of temporal context effects in word learning. PMID:25880342

  15. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  16. Finite element analysis of periodic transonic flow problems

    NASA Technical Reports Server (NTRS)

    Fix, G. J.

    1978-01-01

    Flow about an oscillating thin airfoil in a transonic stream was considered. It was assumed that the flow field can be decomposed into a mean flow plus a periodic perturbation. On the surface of the airfoil the usual Neumann conditions are imposed. Two computer programs were written, both using linear basis functions over triangles for the finite element space. The first program uses a banded Gaussian elimination solver to solve the matrix problem, while the second uses an iterative technique, namely successive over-relaxation (SOR). The only results obtained are for an oscillating flat plate.

  17. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. 
A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
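    A much-simplified sketch of importance-based backward elimination for a random forest (synthetic data with two informative predictors out of twelve; the paper's procedure, data set, and predictor pool are far larger):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic data: only the first two of twelve predictors carry signal.
    rng = np.random.default_rng(0)
    n, p = 400, 12
    X = rng.standard_normal((n, p))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n)) > 0

    kept = list(range(p))
    history = []
    while len(kept) > 2:
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        acc = cross_val_score(rf, X[:, kept], y, cv=3).mean()
        rf.fit(X[:, kept], y)
        history.append((len(kept), acc))
        # Backward elimination: drop the least important remaining predictor.
        kept.pop(int(np.argmin(rf.feature_importances_)))
    print("surviving predictors:", kept)
    ```

    Note the caveat the paper raises: when the cross-validation folds are not kept external to the selection loop, accuracy estimates from such procedures are upwardly biased.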

  18. A synoptic approach for analyzing erosion as a guide to land-use planning

    USGS Publications Warehouse

    Brown, William M.; Hines, Walter G.; Rickert, David A.; Beach, Gary L.

    1979-01-01

    A synoptic approach has been devised to delineate the relationships that exist between physiographic factors, land-use activities, and resultant erosional problems. The approach involves the development of an erosional-depositional province map and a numerical impact matrix for rating the potential for erosional problems. The province map is prepared by collating data on the natural terrain factors that exert the dominant controls on erosion and deposition in each basin. In addition, existing erosional and depositional features are identified and mapped from color-infrared, high-altitude aerial imagery. The axes of the impact matrix are composed of weighting values for the terrain factors used in developing the map and of a second set of values for the prevalent land-use activities. The body of the matrix is composed of composite erosional-impact ratings resulting from the product of the factor sets. Together the province map and problem matrix serve as practical tools for estimating the erosional impact of human activities on different types of terrain. The approach has been applied to the Molalla River basin, Oregon, and has proven useful for the recognition of problem areas. The same approach is currently being used by the State of Oregon (in the 208 assessment of nonpoint-source pollution under Public Law 92-500) to evaluate the impact of land-management practices on stream quality.

  19. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some fantastic features of its own. It has a greater degree of randomness because of the randomization in terms of both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has an important feature that the magnitude and phase of its output are both random. As an important application of the RDLCT, it can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.

  20. Buses of Cuernavaca—an agent-based model for universal random matrix behavior minimizing mutual information

    NASA Astrophysics Data System (ADS)

    Warchoł, Piotr

    2018-06-01

    The public transportation system of Cuernavaca, Mexico, exhibits random matrix theory statistics. In particular, the fluctuation of the times between bus arrivals at a given bus stop follows the Wigner surmise for the Gaussian unitary ensemble. To model this, we propose an agent-based approach in which each bus driver tries to optimize his arrival time at the next stop with respect to the estimated arrival time of his predecessor. We choose a particular form of the associated utility function and recover the appropriate distribution in numerical experiments for a certain value of the model's only parameter. We then investigate whether this value of the parameter is otherwise distinguished within an information-theoretic approach and give numerical evidence that it is indeed associated with a minimum of the averaged pairwise mutual information.
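    The GUE spacing statistics referenced above can be checked with a short simulation (matrix sizes and trial counts are illustrative). The Wigner surmise for the GUE is p(s) = (32/π²) s² exp(−4s²/π), with mean 1 and strong level repulsion, so very small spacings should be rare:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, trials = 200, 50
    spacings = []
    for _ in range(trials):
        # Sample a GUE matrix: Hermitian with complex Gaussian entries.
        H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        H = (H + H.conj().T) / 2
        w = np.sort(np.linalg.eigvalsh(H))
        # Take spacings from the spectral bulk and unfold by the mean spacing.
        bulk = w[N // 4 : 3 * N // 4]
        d = np.diff(bulk)
        spacings.extend(d / d.mean())
    s = np.asarray(spacings)

    # Level repulsion: p(0) = 0, so few spacings fall near zero
    # (a Poisson process would put ~18% of spacings below 0.2).
    print(f"mean spacing: {s.mean():.2f}")
    print(f"fraction of spacings below 0.2: {np.mean(s < 0.2):.3f}")
    ```

    The same repulsion is what the Cuernavaca bus-arrival data show: drivers spacing themselves out against their predecessors behave like eigenvalues of a GUE matrix.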
