Sample records for binary matrix approach

  1. A laid-back trip through the Hennigian Forests

    PubMed Central

    2017-01-01

    Background This paper is a comment on the idea of matrix-free Cladistics. Demonstration of this idea's efficiency is a major goal of the study. Within the proposed framework, the ordinary (phenetic) matrix is necessary only as a "source" of Hennigian trees, not as a primary subject of the analysis. Switching from matrix-based thinking to the matrix-free Cladistic approach clearly reveals that optimizations of the character-state changes are related not to the real processes, but to the form of the data representation. Methods We focused our study on binary data. We wrote a simple Ruby-based script, FORESTER version 1.0, that helps represent a binary matrix as an array of rooted trees (as a "Hennigian forest"). The binary representations of the genomic (DNA) data have been made by script 1001. The Average Consensus method as well as the standard Maximum Parsimony (MP) approach have been used to analyze the data. Principal findings The binary matrix may be easily re-written as a set of rooted trees (maximal relationships). The latter might be analyzed by the Average Consensus method. Paradoxically, this method, if applied to the Hennigian forests, in principle can help to identify clades despite the absence of direct evidence from the primary data. Our approach may handle clock-like or non-clock-like matrices, as well as hypothetical, molecular or morphological data. Discussion Our proposal clearly differs from the numerous phenetic alignment-free techniques for the construction of phylogenetic trees. Dealing with relations, not with the actual "data", also distinguishes our approach from all optimization-based methods, if optimization is defined as a way to reconstruct the sequences of the character-state changes on a tree, whether by the standard alignment-based techniques or by the "direct" alignment-free procedure. We are not viewing our recent framework as an alternative to the three-taxon statement analysis (3TA), but there are two major differences between our recent proposal and the 3TA, as originally designed and implemented: (1) the 3TA deals with the three-taxon statements or minimal relationships; according to the logic of 3TA, the set of the minimal trees must be established as a binary matrix and used as an input for the parsimony program. In this paper, we operate directly with maximal relationships written just as trees, not as binary matrices, while also (2) using the Average Consensus method instead of the MP analysis. The solely 'reversal'-based groups can always be found by our method without the separate scoring of the putative reversals before analyses. PMID:28740753
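
    A rough illustration of the matrix-to-forest re-writing described above (not the authors' FORESTER script; the binary matrix and taxon names are hypothetical): each informative binary character is emitted as a rooted Newick tree in which the taxa scored 1 form a nested clade.

```python
# Illustrative sketch only: turning each column of a binary matrix into a rooted tree.
matrix = {            # hypothetical binary matrix: taxon -> character states
    "A": [1, 1, 0],
    "B": [1, 0, 0],
    "C": [1, 1, 1],
    "D": [0, 0, 1],
}

def hennigian_forest(matrix):
    taxa = list(matrix)
    n_chars = len(next(iter(matrix.values())))
    forest = []
    for j in range(n_chars):
        ingroup = [t for t in taxa if matrix[t][j] == 1]
        outgroup = [t for t in taxa if matrix[t][j] == 0]
        if len(ingroup) < 2 or not outgroup:
            continue  # uninformative character: no nested clade to express
        # The root holds the 0-state taxa as a polytomy; the 1-state taxa form a clade.
        forest.append("({},({}));".format(",".join(outgroup), ",".join(ingroup)))
    return forest

for newick in hennigian_forest(matrix):
    print(newick)
```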

  2. Parallel protein secondary structure prediction based on neural networks.

    PubMed

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers of protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. The hydrophobicity matrix, orthogonal matrix, BLOSUM62 and PSSM (position-specific scoring matrix) are evaluated separately as encoding schemes for DBNN. The experimental results contribute to the design of new encoding schemes. The new binary classifier for Helix versus not Helix (~H) for DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to other best prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthread and OpenMP are employed to parallelize DBNN in the hyperthreading-enabled Intel architecture. Speedup for 16 Pthreads is 4.9 and speedup for 16 OpenMP threads is 4 in the 4-processor shared-memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology for Intel architecture is efficient for parallel biological algorithms.
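
    The "orthogonal matrix" encoding mentioned above can be illustrated with a small sketch: each residue in a sliding window is mapped to a one-hot vector over the 20 amino acids, giving the input profile for a classifier. The window size and sequence below are arbitrary examples; this is not the DBNN implementation itself.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def orthogonal_encode(sequence, window=13):
    """Return one feature vector (window x 20, flattened) per central residue."""
    half = window // 2
    padded = "X" * half + sequence + "X" * half        # pad the termini
    features = []
    for center in range(half, half + len(sequence)):
        block = np.zeros((window, len(AMINO_ACIDS)))
        for k, aa in enumerate(padded[center - half:center + half + 1]):
            if aa in AA_INDEX:                          # padding 'X' stays all-zero
                block[k, AA_INDEX[aa]] = 1.0
        features.append(block.ravel())
    return np.vstack(features)

X = orthogonal_encode("MKTAYIAKQR")
print(X.shape)   # (10, 260) for a 13-residue window
```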

  3. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    For public security reasons, it is necessary to recognize people automatically through visual surveillance. Human-gait-based identification focuses on recognizing a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis and temporal-space analysis is proposed. Using background estimation and image subtraction, we can get a binary image sequence from the surveillance video. By comparing two adjacent images in the gait image sequence, we can get a difference binary image sequence. Every binary difference image indicates the body's movement pattern while a person is walking. We use the following steps to extract the temporal-space features from the difference binary image sequence: projecting one difference image onto the Y axis and the X axis gives two vectors; projecting every difference image in the sequence in this way gives two matrices. These two matrices characterize the style of one walking sequence. Then two-dimensional principal component analysis (2DPCA) is used to transform these two matrices into two vectors while preserving the maximum separability. Finally, the similarity of two human gait sequences is calculated from the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
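
    A simplified sketch of the pipeline described above, assuming synthetic silhouette data and using a plain SVD-based projection in place of the full 2DPCA formulation: difference images are projected onto the X and Y axes, stacked into matrices, reduced to feature vectors, and compared by Euclidean distance.

```python
import numpy as np

def gait_signature(silhouettes, n_components=5):
    """silhouettes: array of shape (frames, height, width) with binary values."""
    diffs = np.abs(np.diff(silhouettes.astype(np.int8), axis=0))   # difference images
    proj_y = diffs.sum(axis=2)    # project each difference image onto the Y axis
    proj_x = diffs.sum(axis=1)    # ... and onto the X axis
    feats = []
    for M in (proj_y, proj_x):
        M = M - M.mean(axis=0, keepdims=True)
        _, _, vt = np.linalg.svd(M, full_matrices=False)
        feats.append((M @ vt[:n_components].T).mean(axis=0))  # average projected frames
    return np.concatenate(feats)

rng = np.random.default_rng(0)
walk_a = (rng.random((30, 64, 48)) > 0.7).astype(np.uint8)   # synthetic binary silhouettes
walk_b = (rng.random((30, 64, 48)) > 0.7).astype(np.uint8)
dist = np.linalg.norm(gait_signature(walk_a) - gait_signature(walk_b))
print(f"gait distance: {dist:.3f}")
```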

  4. Understanding the drug release mechanism from a montmorillonite matrix and its binary mixture with a hydrophilic polymer using a compartmental modelling approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.

    2018-03-01

    Drug release from a montmorillonite (MMT) matrix is a complex mechanism controlled by the swelling of MMT and by drug-MMT interactions. The aim of this research was to identify a suitable model of the drug release mechanism from MMT and from its binary mixture with a hydrophilic polymer in a controlled-release formulation, based on a compartmental modelling approach. Theophylline was used as a model drug and incorporated into MMT, and into a binary mixture with hydroxypropyl methylcellulose (HPMC) as a hydrophilic polymer, by a kneading method. The dissolution test was performed and the modelling of drug release was assisted by the WinSAAM software. Two models were proposed based on the swelling capability and basal spacing of MMT compartments. Model evaluation was carried out with respect to goodness of fit and statistical parameters, and the models were validated by a cross-validation technique. Drug release from the MMT matrix was regulated by a burst release mechanism of unloaded drug, the swelling ability and basal spacing of the MMT compartments, and the equilibrium between the basal-spacing and swelling compartments. Furthermore, the addition of HPMC to the MMT system altered the presence of the swelling compartment and the equilibrium between the swelling and basal-spacing compartment systems. In addition, the hydrophilic polymer reduced the burst release mechanism of unloaded drug.

  5. Organic–inorganic binary mixture matrix for comprehensive laser-desorption ionization mass spectrometric analysis and imaging of medium-size molecules including phospholipids, glycerolipids, and oligosaccharides

    DOE PAGES

    Feenstra, Adam D.; Ames Lab., Ames, IA; O'Neill, Kelly C.; ...

    2016-10-13

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a widely adopted, versatile technique, especially in high-throughput analysis and imaging. However, matrix-dependent selectivity of analytes is often a severe limitation. In this work, a mixture of organic 2,5-dihydroxybenzoic acid and inorganic Fe3O4 nanoparticles is developed as a binary MALDI matrix to alleviate the well-known issue of triacylglycerol (TG) ion suppression by phosphatidylcholine (PC). In application to lipid standards and maize seed cross-sections, the binary matrix not only dramatically reduced the ion suppression of TG, but also efficiently desorbed and ionized a wide variety of lipids such as cationic PC, anionic phosphatidylethanolamine (PE) and phosphatidylinositol (PI), and neutral digalactosyldiacylglycerol (DGDG). The binary matrix was also very efficient for large polysaccharides, which were not detected by either of the individual matrices. As a result, the usefulness of the binary matrix is demonstrated in MS imaging of maize seed sections, successfully visualizing diverse medium-size molecules and acquiring high-quality MS/MS spectra for these compounds.

  6. Neural network based feed-forward high density associative memory

    NASA Technical Reports Server (NTRS)

    Daud, T.; Moopenn, A.; Lamb, J. L.; Ramesham, R.; Thakoor, A. P.

    1987-01-01

    A novel thin film approach to neural-network-based high-density associative memory is described. The information is stored locally in a memory matrix of passive, nonvolatile, binary connection elements with a potential to achieve a storage density of 10^9 bits/sq cm. Microswitches based on memory switching in thin film hydrogenated amorphous silicon, and alternatively in manganese oxide, have been used as programmable read-only memory elements. Low-energy switching has been ascertained in both these materials. Fabrication and testing of the memory matrix are described. High-speed associative recall approaching 10^7 bits/sec and high storage capacity in such a connection matrix memory system are also described.

  7. Phylogenetic Trees and Networks Reduce to Phylogenies on Binary States: Does It Furnish an Explanation to the Robustness of Phylogenetic Trees against Lateral Transfers.

    PubMed

    Thuillard, Marc; Fraix-Burnet, Didier

    2015-01-01

    This article presents an innovative approach to phylogenies based on the reduction of multistate characters to binary-state characters. We show that the reduction-to-binary-characters approach can be applied to both character- and distance-based phylogenies and provides a unifying framework to explain simply and intuitively the similarities and differences between distance- and character-based phylogenies. Building on these results, this article gives a possible explanation of why phylogenetic trees obtained from a distance matrix or a set of characters are often quite reasonable despite lateral transfers of genetic material between taxa. In the presence of lateral transfers, outer planar networks furnish a better description of evolution than phylogenetic trees. We present a polynomial-time reconstruction algorithm for perfect outer planar networks with a fixed number of states, characters, and lateral transfers.

  8. Electronic implementation of associative memory based on neural network models

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Lambe, John; Thakoor, A. P.

    1987-01-01

    An electronic embodiment of a neural network based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-programmable binary connection matrix.

  9. Predicting and understanding comprehensive drug-drug interactions via semi-nonnegative matrix factorization.

    PubMed

    Yu, Hui; Mao, Kui-Tao; Shi, Jian-Yu; Huang, Hua; Chen, Zhi; Dong, Kai; Yiu, Siu-Ming

    2018-04-11

    Drug-drug interactions (DDIs) always cause unexpected and even adverse drug reactions. It is important to identify DDIs before drugs are used in the market. However, preclinical identification of DDIs requires much money and time. Computational approaches have exhibited their abilities to predict potential DDIs on a large scale by utilizing pre-market drug properties (e.g. chemical structure). Nevertheless, none of them can predict two comprehensive types of DDIs, including enhancive and degressive DDIs, which increase and decrease the behaviors of the interacting drugs, respectively. There is a lack of systematic analysis of the structural relationship among known DDIs. Revealing such a relationship is very important, because it is able to help understand how DDIs occur. Both the prediction of comprehensive DDIs and the discovery of structural relationship among them provide important guidance when making a co-prescription. In this work, treating a set of comprehensive DDIs as a signed network, we design a novel model (DDINMF) for the prediction of enhancive and degressive DDIs based on semi-nonnegative matrix factorization. Encouragingly, DDINMF achieves the conventional DDI prediction (AUROC = 0.872 and AUPR = 0.605) and the comprehensive DDI prediction (AUROC = 0.796 and AUPR = 0.579). Compared with two state-of-the-art approaches, DDINMF shows its superiority. Finally, representing DDIs as a binary network and a signed network respectively, an analysis based on NMF reveals crucial knowledge hidden among DDIs. Our approach is able to predict not only conventional binary DDIs but also comprehensive DDIs. More importantly, it reveals several key points about the DDI network: (1) both binary and signed networks show fairly clear clusters, in which both drug degree and the difference between positive degree and negative degree show significant distribution; (2) the drugs having large degrees tend to have a larger difference between positive degree and negative degree; (3) though the binary DDI network contains no information about enhancive and degressive DDIs at all, it implies some of their relationship in the comprehensive DDI matrix; (4) the occurrence of signs indicating enhancive and degressive DDIs is not random because the comprehensive DDI network is equipped with a structural balance.
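
    For illustration only, a generic semi-nonnegative matrix factorization sketch with multiplicative updates in the style of Ding et al.; it is not the authors' DDINMF model, and the tiny signed "interaction" matrix below is hypothetical. X may hold signed entries (e.g. +1 enhancive, -1 degressive, 0 unknown), while the factor G is kept nonnegative.

```python
import numpy as np

def semi_nmf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorize X ~ F @ G.T with G >= 0; F is unconstrained (may be signed)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((m, rank)) + eps            # nonnegative factor
    pos = lambda A: (np.abs(A) + A) / 2        # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2        # elementwise negative part
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)    # closed-form update of the free factor
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF) + eps) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

# Hypothetical signed "interaction" matrix: rows/columns are made-up drugs.
X = np.array([[ 0,  1, -1,  1],
              [ 1,  0,  1, -1],
              [-1,  1,  0,  1],
              [ 1, -1,  1,  0]], dtype=float)
F, G = semi_nmf(X, rank=2)
print(np.round(F @ G.T, 2))   # reconstruction of the signed network
```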

  10. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-04-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
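
    A toy illustration of the Fisher-matrix forecasting idea described above, assuming a deliberately simple Gaussian likelihood rather than the COMPAS population model: the log-likelihood is differentiated numerically with respect to the model parameters, and expected fractional uncertainties are read off the inverse Fisher matrix (the Cramer-Rao bound).

```python
import numpy as np

def log_likelihood(theta, data):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return float(np.sum(-0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma)))

def fisher_matrix(loglike, theta, data, step=1e-4):
    """Fisher matrix as minus the Hessian of the log-likelihood (central differences)."""
    theta = np.asarray(theta, dtype=float)
    k = theta.size
    F = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            t = lambda si, sj: loglike(theta + si * step * np.eye(k)[i]
                                             + sj * step * np.eye(k)[j], data)
            F[i, j] = -(t(1, 1) - t(1, -1) - t(-1, 1) + t(-1, -1)) / (4 * step ** 2)
    return F

rng = np.random.default_rng(1)
observations = rng.normal(1.2, 0.3, size=1000)        # ~1000 mock "observations"
theta_hat = np.array([observations.mean(), np.log(observations.std())])
F = fisher_matrix(log_likelihood, theta_hat, observations)
errors = np.sqrt(np.diag(np.linalg.inv(F)))            # forecast 1-sigma uncertainties
print("fractional accuracy:", errors / np.abs(theta_hat))
```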

  11. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-07-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  12. A biclustering algorithm for extracting bit-patterns from binary datasets.

    PubMed

    Rodriguez-Baena, Domingo S; Perez-Pulido, Antonio J; Aguilar-Ruiz, Jesus S

    2011-10-01

    Binary datasets represent a compact and simple way to store data about the relationships between a group of objects and their possible properties. In the last few years, different biclustering algorithms have been specially developed to be applied to binary datasets. Several approaches based on matrix factorization, suffix trees or divide-and-conquer techniques have been proposed to extract useful biclusters from binary data, and these approaches provide information about the distribution of patterns and intrinsic correlations. A novel approach to extracting biclusters from binary datasets, BiBit, is introduced here. The results obtained from different experiments with synthetic data reveal the excellent performance and the robustness of BiBit to density and size of input data. Also, BiBit is applied to a central nervous system embryonic tumor gene expression dataset to test the quality of the results. A novel gene expression preprocessing methodology, based on expression level layers, and the selective search performed by BiBit, based on a very fast bit-pattern processing technique, provide very satisfactory results in quality and computational cost. The power of biclustering in finding genes involved simultaneously in different cancer processes is also shown. Finally, a comparison with Bimax, one of the most cited binary biclustering algorithms, shows that BiBit is faster while providing essentially the same results. The source and binary codes, the datasets used in the experiments and the results can be found at http://www.upo.es/eps/bigs/BiBit.html. Contact: dsrodbae@upo.es. Supplementary data are available at Bioinformatics online.
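
    A compact sketch of the bit-pattern idea behind BiBit (not the authors' optimized implementation; the binary matrix is a made-up example): the bitwise AND of every pair of rows yields a candidate column pattern, and all rows containing that pattern form a bicluster.

```python
import numpy as np
from itertools import combinations

def bit_pattern_biclusters(B, min_cols=2, min_rows=2):
    """B: binary matrix (rows = objects, columns = properties)."""
    B = np.asarray(B, dtype=bool)
    found = {}
    for i, j in combinations(range(B.shape[0]), 2):
        pattern = B[i] & B[j]                      # candidate column pattern
        if pattern.sum() < min_cols:
            continue
        rows = np.where((B & pattern).sum(axis=1) == pattern.sum())[0]
        if len(rows) >= min_rows:
            found[tuple(np.where(pattern)[0])] = tuple(rows)
    return found

B = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]])
for cols, rows in bit_pattern_biclusters(B).items():
    print("columns", cols, "rows", rows)
```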

  13. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially with respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
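
    For orientation, a minimal sketch of the classic fixed-fixed swap randomization that the "Tuning Peg" algorithm generalizes; the tunable row/column discrepancy of the authors' algorithm is not reproduced here, and the presence/absence matrix is a toy example.

```python
import numpy as np

def swap_randomize(M, n_swaps=10000, seed=0):
    """Checkerboard swaps preserve both row and column totals exactly."""
    M = np.array(M, dtype=int)
    rng = np.random.default_rng(seed)
    n_rows, n_cols = M.shape
    for _ in range(n_swaps):
        r1, r2 = rng.choice(n_rows, 2, replace=False)
        c1, c2 = rng.choice(n_cols, 2, replace=False)
        sub = M[np.ix_([r1, r2], [c1, c2])]
        # A 2x2 "checkerboard" submatrix can be flipped without changing marginals.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            M[np.ix_([r1, r2], [c1, c2])] = 1 - sub
    return M

presence = np.array([[1, 0, 1, 1],
                     [0, 1, 1, 0],
                     [1, 1, 0, 0]])
null = swap_randomize(presence)
print(null.sum(axis=1), presence.sum(axis=1))   # row totals are preserved
print(null.sum(axis=0), presence.sum(axis=0))   # column totals are preserved
```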

  14. Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer.

    PubMed

    Müller, Dirk K; Pampel, André; Möller, Harald E

    2013-05-01

    Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters from data acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimations by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
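
    A stripped-down illustration of the matrix-exponential idea for the binary spin-bath model, assuming a two-pool longitudinal exchange system with arbitrary example values for relaxation rates, pool sizes and exchange rate; saturation and imaging pulses are omitted, so this is not the authors' full algorithm.

```python
import numpy as np
from scipy.linalg import expm   # requires SciPy

R1a, R1b = 1.0, 1.5            # longitudinal relaxation rates (1/s), example values
M0a, M0b = 1.0, 0.15           # equilibrium magnetizations (relative pool sizes)
kba = 30.0                      # exchange rate b -> a (1/s), example value
kab = kba * M0b / M0a           # a -> b, fixed by detailed balance

# Augmented system d/dt [Ma, Mb, 1] = A @ [Ma, Mb, 1]; the last column carries
# the constant recovery terms so the inhomogeneous ODE becomes a pure expm problem.
A = np.array([[-(R1a + kab),  kba,          R1a * M0a],
              [ kab,         -(R1b + kba),  R1b * M0b],
              [ 0.0,          0.0,          0.0      ]])

state = np.array([0.0, 0.0, 1.0])   # both pools saturated at t = 0
for t in (0.1, 0.5, 2.0):
    Ma, Mb, _ = expm(A * t) @ state
    print(f"t = {t:4.1f} s   Ma = {Ma:.3f}   Mb = {Mb:.3f}")
```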

  15. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
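
    A small empirical sketch of the binary projection discussed above, using synthetic returns with a common "market" factor rather than the financial data analysed in the paper: the sign of each increment is kept, and the coherence of signs across series is compared with the magnitude of the aggregate return.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days, n_stocks = 2000, 50
market = rng.normal(0, 1, n_days)                       # common factor
returns = 0.6 * market[:, None] + rng.normal(0, 1, (n_days, n_stocks))

signs = np.sign(returns)                       # binary projection of the increments
coherence = np.abs(signs.mean(axis=1))         # |fraction up - fraction down| per day
aggregate = np.abs(returns.mean(axis=1))       # magnitude of the aggregate return

# Extreme aggregate moves should coincide with days when most series share a sign.
extreme = aggregate > np.quantile(aggregate, 0.9)
print("mean sign coherence on extreme days :", coherence[extreme].mean().round(3))
print("mean sign coherence on ordinary days:", coherence[~extreme].mean().round(3))
```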

  16. Subthreshold resonances and resonances in the R -matrix method for binary reactions and in the Trojan horse method

    NASA Astrophysics Data System (ADS)

    Mukhamedzhanov, A. M.; Shubhchintak; Bertulani, C. A.

    2017-08-01

    In this paper we discuss the R-matrix approach to treat the subthreshold resonances for the single-level and one-channel and for the single-level and two-channel cases. In particular, the expression relating the asymptotic normalization coefficient (ANC) with the observable reduced width, when the subthreshold bound state is the only channel or is coupled with an open channel, which is a resonance, is formulated. Since the ANC plays a very important role in nuclear astrophysics, these relations significantly enhance the power of the derived equations. We present the relationship between the resonance width and the ANC for the general case and consider two limiting cases: wide and narrow resonances. Different equations for the astrophysical S factors in the R-matrix approach are presented. After that we discuss the Trojan horse method (THM) formalism. The developed equations are obtained using the surface-integral formalism and the generalized R-matrix approach for the three-body resonant reactions. It is shown how the Trojan horse (TH) double-differential cross section can be expressed in terms of the on-the-energy-shell astrophysical S factor for the binary subreaction. Finally, we demonstrate how the THM can be used to calculate the astrophysical S factor for the neutron generator 13C(α,n)16O in low-mass AGB stars. At astrophysically relevant energies this astrophysical S factor is controlled by the threshold level 1/2+, Ex = 6356 keV. Here, we reanalyzed recent TH data, taking the three-body effects into account more accurately and considering both assumptions, namely that the threshold level is a subthreshold bound state or that it is a resonance state.

  17. Estimating neighborhood variability with a binary comparison matrix.

    USGS Publications Warehouse

    Murphy, D.L.

    1985-01-01

    A technique which utilizes a binary comparison matrix has been developed to implement a neighborhood function for a raster format data base. The technique assigns an index value to the center pixel of 3- by 3-pixel neighborhoods. The binary comparison matrix provides additional information not found in two other neighborhood variability statistics: the function is sensitive to both the number of classes within the neighborhood and the frequency of pixel occurrence in each of the classes. Application of the function to a spatial data base from the Kenai National Wildlife Refuge, Alaska, demonstrates 1) the numerical distribution of the index values, and 2) the spatial patterns exhibited by the numerical values.
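
    An illustrative sketch of a binary-comparison-matrix neighborhood index, assuming a synthetic class raster; the exact index used in the USGS study may be defined differently. Within each 3- by 3-pixel window, every pair of pixels is compared and the count of mismatching pairs is assigned to the center pixel, so the value grows with both the number of classes and their frequencies.

```python
import numpy as np
from itertools import combinations

def neighborhood_variability(raster):
    raster = np.asarray(raster)
    rows, cols = raster.shape
    index = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = raster[r - 1:r + 2, c - 1:c + 2].ravel()
            # Binary comparison matrix: 1 where a pair of pixels differs in class.
            index[r, c] = sum(a != b for a, b in combinations(window, 2))
    return index

raster = np.array([[1, 1, 2, 2],      # synthetic class raster
                   [1, 3, 2, 2],
                   [1, 1, 1, 2],
                   [3, 3, 1, 2]])
print(neighborhood_variability(raster))
```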

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
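
    A schematic sketch of weighted binary matrix sampling (WBMS) as described above, with a placeholder least-squares score standing in for the PLS calibration actually used with VISSA and synthetic data throughout: rows of a binary matrix select variable subsets (sub-models), the best sub-models update the per-variable sampling weights, and the variable space shrinks.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_vars = 60, 40
X = rng.normal(size=(n_samples, n_vars))
y = X[:, :5] @ np.array([2.0, -1.0, 1.5, 0.5, -2.0]) + rng.normal(scale=0.5, size=n_samples)

weights = np.full(n_vars, 0.5)                     # inclusion probability per variable
for iteration in range(10):
    B = rng.random((500, n_vars)) < weights        # weighted binary sampling matrix
    scores = []
    for row in B:
        if row.sum() == 0:
            scores.append(-np.inf)
            continue
        beta, *_ = np.linalg.lstsq(X[:, row], y, rcond=None)
        resid = y - X[:, row] @ beta
        scores.append(-np.mean(resid ** 2))        # higher is better
    best = B[np.argsort(scores)[-50:]]             # keep the best 10% of sub-models
    weights = best.mean(axis=0)                    # new inclusion frequencies
print("selected variables:", np.where(weights > 0.5)[0])
```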

  19. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    PubMed Central

    Király, András; Abonyi, János

    2014-01-01

    During the last decade, various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers. PMID:24616651
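
    A tiny sketch of the matrix-multiplication counting trick alluded to above (only the support-counting idea, not the full bit-table algorithm; the binary matrix is a made-up example): with rows as transactions/conditions and columns as items/genes, the support of an itemset is the number of rows whose dot product with the itemset's indicator vector equals the itemset size.

```python
import numpy as np

B = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 1]])

def support(B, itemset):
    indicator = np.zeros(B.shape[1], dtype=int)
    indicator[list(itemset)] = 1
    hits = B @ indicator                 # one matrix-vector product per itemset
    return int(np.sum(hits == len(itemset)))

print(support(B, {0, 1}))   # transactions containing both item 0 and item 1 -> 3
print(support(B, {1, 2}))   # -> 3
```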

  20. E-beam generated holographic masks for optical vector-matrix multiplication

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Case, S. K.

    1981-01-01

    An optical vector-matrix multiplication scheme that encodes the matrix elements as a holographic mask consisting of linear diffraction gratings is proposed. The binary, chrome-on-glass masks are fabricated by e-beam lithography. This approach results in a fairly simple optical system that promises both large numerical range and high accuracy. A partitioned computer-generated hologram mask was fabricated and tested. This hologram featured diagonally separated outputs, compact facets, and symmetry about the axis. The resultant diffraction pattern at the output plane is shown. Since the grating fringes are written at 45 deg relative to the facet boundaries, the many on-axis sidelobes from each output are seen to be diagonally separated from the adjacent output signals.

  1. Loaded delay lines for future RF pulse compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.M.; Wilson, P.B.; Kroll, N.M.

    1995-05-01

    The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of RF pulse compression system (pulse width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE0 modes, one with a high group velocity and one with a group velocity of the order of 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time domain pulse degradation and to Ohmic losses.

  2. Learning to rank image tags with limited training examples.

    PubMed

    Songhe Feng; Zheyun Feng; Rong Jin

    2015-04-01

    With an increasing number of images that are available in social media, image annotation has emerged as an important research topic due to its application in image matching and retrieval. Most studies cast image annotation into a multilabel classification problem. The main shortcoming of this approach is that it requires a large number of training images with clean and complete annotations in order to learn a reliable model for tag prediction. We address this limitation by developing a novel approach that combines the strength of tag ranking with the power of matrix recovery. Instead of having to make a binary decision for each tag, our approach ranks tags in the descending order of their relevance to the given image, significantly simplifying the problem. In addition, the proposed method aggregates the prediction models for different tags into a matrix, and casts tag ranking into a matrix recovery problem. It introduces the matrix trace norm to explicitly control the model complexity, so that a reliable prediction model can be learned for tag ranking even when the tag space is large and the number of training images is limited. Experiments on multiple well-known image data sets demonstrate the effectiveness of the proposed framework for tag ranking compared with the state-of-the-art approaches for image annotation and tag ranking.

  3. Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.

    PubMed

    Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava

    2016-09-01

    Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individual as well as a binary mixture) has been carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions viz., visible and UV-A. Anatase and rutile NPs produced an LC50 of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and additive effect under visible and UV-A irradiation, respectively. Among the two different modeling approaches used in the study, the Marking-Dawson model was found to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture based on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was observed at 0.25 total TU of the binary mixture under visible irradiation and 1 TU of anatase NPs for UV-A irradiation. Individual NPs showed higher uptake under UV-A than under visible irradiation. In contrast, the binary mixture showed a different uptake pattern depending on the type of irradiation. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

    An optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between a binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
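
    A minimal sketch of inner-product associative recall as described above, with hypothetical stored binary vectors: the input is compared with each reference vector via inner products, and the best-matching memory is fed back as the next input until the recall stabilizes.

```python
import numpy as np

memories = np.array([[1, 0, 1, 1, 0, 1],     # reference binary vectors (hypothetical)
                     [0, 1, 0, 1, 1, 0],
                     [1, 1, 0, 0, 1, 1]])

def recall(probe, memories, max_iter=10):
    x = np.asarray(probe)
    for _ in range(max_iter):
        scores = memories @ x                 # inner product with each stored vector
        winner = memories[np.argmax(scores)]  # strongest match
        if np.array_equal(winner, x):
            return x
        x = winner                            # iterate with the recalled pattern
    return x

noisy = np.array([1, 0, 1, 0, 0, 1])          # corrupted version of the first memory
print(recall(noisy, memories))                # -> [1 0 1 1 0 1]
```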

  5. Fluorescence Lectin Bar-Coding of Glycoconjugates in the Extracellular Matrix of Biofilm and Bioaggregate Forming Microorganisms.

    PubMed

    Neu, Thomas R; Kuhlicke, Ute

    2017-02-10

    Microbial biofilm systems are defined as interface-associated microorganisms embedded into a self-produced matrix. The extracellular matrix represents a continuous challenge in terms of characterization and analysis. The tools applied in more detailed studies comprise extraction/chemical analysis, molecular characterization, and visualisation using various techniques. Imaging by laser microscopy became a standard tool for biofilm analysis, and, in combination with fluorescently labelled lectins, the glycoconjugates of the matrix can be assessed. By employing this approach a wide range of pure culture biofilms from different habitats were examined using the commercially available lectins. From the results, a binary barcode pattern of lectin binding can be generated. Furthermore, the results can be fine-tuned and transferred into a heat map according to signal intensity. The lectin barcode approach is suggested as a useful tool for investigating the biofilm matrix characteristics and dynamics at various levels, e.g. bacterial cell surfaces, adhesive footprints, individual microcolonies, and the gross biofilm or bio-aggregate. Hence fluorescence lectin bar-coding (FLBC) serves as a basis for a subsequent tailor-made fluorescence lectin-binding analysis (FLBA) of a particular biofilm. So far, the lectin approach represents the only tool for in situ characterization of the glycoconjugate makeup in biofilm systems.  Furthermore, lectin staining lends itself to other fluorescence techniques in order to correlate it with cellular biofilm constituents in general and glycoconjugate producers in particular.

  6. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
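
    A schematic sketch of compressed-sensing encoding with a sparse binary measurement matrix: each column of Phi carries only a few ones, so the encoder needs few additions per sample. The column sparsity, sizes and signal below are arbitrary illustrative choices, not the QCAC or SRBM constructions evaluated in the paper.

```python
import numpy as np

def sparse_binary_measurement_matrix(m, n, ones_per_column=3, seed=0):
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n), dtype=np.int8)
    for col in range(n):
        rows = rng.choice(m, size=ones_per_column, replace=False)
        Phi[rows, col] = 1                   # only a few ones per column
    return Phi

n, m = 256, 64                               # compress 256 samples into 64 measurements
Phi = sparse_binary_measurement_matrix(m, n)
neural_frame = np.random.default_rng(1).normal(size=n)
y = Phi @ neural_frame                       # encoding reduces to sparse additions
print(Phi.sum(axis=0)[:8])                   # column weights
print(y.shape)
```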

  7. A method to generate small-scale, high-resolution sedimentary bedform architecture models representing realistic geologic facies

    DOE PAGES

    Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.

    2017-08-23

    Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a textural approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO2) migration and resulting saturation distribution.

  8. A method to generate small-scale, high-resolution sedimentary bedform architecture models representing realistic geologic facies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.

    Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a textural approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO2) migration and resulting saturation distribution.

  9. Preparation of Emulsifying Wax/GMO Nanoparticles and Evaluation as a Delivery System for Repurposing Simvastatin in Bone Regeneration.

    PubMed

    Eskinazi-Budge, Aaron; Manickavasagam, Dharani; Czech, Tori; Novak, Kimberly; Kunzler, James; Oyewumi, Moses O

    2018-05-30

    Simvastatin (Sim) is a widely known drug in the treatment of hyperlipidemia that has attracted considerable attention in bone regeneration owing to its potential osteoanabolic effect. However, repurposing of Sim in bone regeneration will require suitable delivery systems that can negate undesirable off-target/side effects. In this study, we have investigated a new lipid nanoparticle (NP) platform that was fabricated using a binary blend of emulsifying wax (Ewax) and glyceryl monooleate (GMO). Using the binary matrix materials, NPs loaded with Sim (0-500 µg/mL) were prepared and showed an average particle size of about 150 nm. NP size stability was dependent on the Sim concentration loaded in the NPs. The suitability of NPs prepared with the binary matrix materials in Sim delivery for potential application in bone regeneration was supported by biocompatibility in pre-osteoclastic and pre-osteoblastic cells. Additional data demonstrated that biofunctional Sim was released from the NPs, facilitating differentiation of osteoblasts (cells that form bones) while inhibiting differentiation of osteoclasts (cells that resorb bones). The overall work demonstrated the preparation of NPs from Ewax/GMO blends and characterization to ascertain potential suitability in Sim delivery for bone regeneration. Additional studies on osteoblast and osteoclast functions are warranted to fully evaluate the efficacy of simvastatin-loaded Ewax/GMO NPs using in-vitro and in-vivo approaches.

  10. A path-oriented matrix-based knowledge representation system

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan; Karamouzis, Stamos T.

    1993-01-01

    Experience has shown that designing a good representation is often the key to turning hard problems into simple ones. Most AI (Artificial Intelligence) search/representation techniques are oriented toward an infinite domain of objects and arbitrary relations among them. In reality much of what needs to be represented in AI can be expressed using a finite domain and unary or binary predicates. Well-known vector- and matrix-based representations can efficiently represent finite domains and unary/binary predicates, and allow effective extraction of path information by generalized transitive closure/path matrix computations. In order to avoid space limitations a set of abstract sparse matrix data types was developed along with a set of operations on them. This representation forms the basis of an intelligent information system for representing and manipulating relational data.
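
    A small sketch of the representation described above, using a dense boolean matrix and a hypothetical "feeds" relation for clarity (the paper's system uses sparse matrix data types for the same operations): a binary relation over a finite domain is stored as an adjacency matrix, and path information is extracted with a transitive-closure computation (Warshall's algorithm).

```python
import numpy as np

domain = ["engine", "fuel_pump", "fuel_line", "tank"]          # hypothetical objects
feeds = np.zeros((4, 4), dtype=bool)                            # binary relation "feeds"
feeds[3, 2] = feeds[2, 1] = feeds[1, 0] = True                  # tank -> line -> pump -> engine

def transitive_closure(R):
    closure = R.copy()
    for k in range(len(closure)):                               # Warshall's algorithm
        closure |= np.outer(closure[:, k], closure[k, :])       # paths through node k
    return closure

reachable = transitive_closure(feeds)                           # the path matrix
print(domain[3], "eventually feeds", domain[0], ":", bool(reachable[3, 0]))
```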

  11. A highly efficient approach to protein interactome mapping based on collaborative filtering framework.

    PubMed

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-09

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data into an interactome weight matrix, where the feature-vectors of involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, with which the mapping is performed. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.

  12. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    PubMed Central

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data into an interactome weight matrix, where the feature-vectors of involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, with which the mapping is performed. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly. PMID:25572661

  13. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    NASA Astrophysics Data System (ADS)

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data into an interactome weight matrix, where the feature-vectors of involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, with which the mapping is performed. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.

  14. Compressed multi-block local binary pattern for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirement of tracking. Compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector is extracted based on the multi-block local binary pattern feature and compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.

  15. Implementing transmission eigenchannels of disordered media by a binary-control digital micromirror device

    NASA Astrophysics Data System (ADS)

    Kim, Donggyu; Choi, Wonjun; Kim, Moonseok; Moon, Jungho; Seo, Keumyoung; Ju, Sanghyun; Choi, Wonshik

    2014-11-01

    We report a method for measuring the transmission matrix of a disordered medium using a binary-control of a digital micromirror device (DMD). With knowledge of the measured transmission matrix, we identified the transmission eigenchannels of the medium. We then used binary control of the DMD to shape the wavefront of incident waves and to experimentally couple light to individual eigenchannels. When the wave was coupled to the eigenchannel with the largest eigenvalue, in particular, we were able to achieve about two times more energy transmission than the mean transmittance of the medium. Our study provides an elaborated use of the DMD as a high-speed wavefront shaping device for controlling the multiple scattering of waves in highly scattering media.

  16. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it is advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, it first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimized. Besides, the off-line learning is extremely time- and memory-consuming, as it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. The DLLH directly reconstructs the manifold structure in the Hamming space, which learns optimal hash codes to maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over the state-of-the-art approaches.

  17. A note on implementation of decaying product correlation structures for quasi-least squares.

    PubMed

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning the compact binary codes to preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness for the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates the hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for the large-scale cross-modal retrieval task.

  19. A machine-learning approach for damage detection in aircraft structures using self-powered sensor data

    NASA Astrophysics Data System (ADS)

    Salehi, Hadi; Das, Saptarshi; Chakrabartty, Shantanu; Biswas, Subir; Burgueño, Rigoberto

    2017-04-01

    This study proposes a novel strategy for damage identification in aircraft structures. The strategy was evaluated based on the simulation of the binary data generated from self-powered wireless sensors employing a pulse switching architecture. The energy-aware pulse switching communication protocol uses single pulses instead of multi-bit packets for information delivery, resulting in discrete binary data. A system employing this energy-efficient technology requires dealing with time-delayed binary data due to the management of power budgets for sensing and communication. This paper presents an intelligent machine-learning framework based on a combination of low-rank matrix decomposition and pattern recognition (PR) methods. Further, data fusion is employed as part of the machine-learning framework to take into account the effect of data time delay on its interpretation. Simulated time-delayed binary data from self-powered sensors were used to determine damage indicator variables. The performance and accuracy of the damage detection strategy were examined and tested for the case of an aircraft horizontal stabilizer. Damage states were simulated on a finite element model by reducing stiffness in a region of the stabilizer's skin. The proposed strategy shows satisfactory performance in identifying the presence and location of the damage, even with noisy and incomplete data. It is concluded that PR is a promising machine-learning algorithm for damage detection with time-delayed binary data from novel self-powered wireless sensors.

  20. The role of hydrodynamic stress on the phenotypic characteristics of single and binary biofilms of Pseudomonas fluorescens.

    PubMed

    Simões, M; Pereira, M O; Vieira, M J

    2007-01-01

    This study investigates the phenotype of turbulent (Re = 5,200) and laminar (Re = 2,000) flow-generated Pseudomonas fluorescens biofilms. Three P. fluorescens strains, the type strain ATCC 13525 and two strains isolated from an industrial processing plant, D3-348 and D3-350, were used throughout this study. The isolated strains were used to form single and binary biofilms. The biofilm physiology (metabolic activity, cellular density, mass, extracellular polymeric substances, structural characteristics and outer membrane proteins [OMP] expression) was compared. The results indicate that, for every situation, turbulent flow-generated biofilms were more active (p < 0.05), had more mass per cm(2) (p < 0.05), a higher cellular density (p < 0.05), distinct morphology, similar matrix proteins (p > 0.1) and identical (isolated strains -single and binary biofilms) and higher (type strain) matrix polysaccharides contents (p < 0.05) than laminar flow-generated biofilms. Flow-generated biofilms formed by the type strain revealed a considerably higher cellular density and amount of matrix polysaccharides than single and binary biofilms formed by the isolated strains (p < 0.05). Similar OMP expression was detected for the several single strains and for the binary situation, not dependent on the hydrodynamic conditions. Binary biofilms revealed an equal coexistence of the isolated strains with apparent neutral interactions. In summary, the biofilms formed by the type strain represent, apparently, the worst situation in a context of control. The results obtained clearly illustrate the importance of considering strain variation and hydrodynamics in biofilm development, and complement previous studies which have focused on physical aspects of structural and density differences.

  1. Solute transport with multisegment, equilibrium-controlled, classical reactions: Problem solvability and feed forward method's applicability for complex segments of at most binary participants

    USGS Publications Warehouse

    Rubin, Jacob

    1992-01-01

    The feed forward (FF) method derives efficient operational equations for simulating transport of reacting solutes. It has been shown to be applicable in the presence of networks with any number of homogeneous and/or heterogeneous, classical reaction segments that consist of three, at most binary participants. Using a sequential (network type after network type) exploration approach and, independently, theoretical explanations, it is demonstrated for networks with classical reaction segments containing more than three, at most binary participants that if any one of such networks leads to a solvable transport problem then the FF method is applicable. Ways of helping to avoid networks that produce problem insolvability are developed and demonstrated. A previously suggested algebraic, matrix rank procedure has been adapted and augmented to serve as the main, easy-to-apply solvability test for already postulated networks. Four network conditions that often generate insolvability have been identified and studied. Their early detection during network formulation may help to avoid postulation of insolvable networks.

  2. Dispersion of speckle suppression efficiency for binary DOE structures: spectral domain and coherent matrix approaches.

    PubMed

    Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy

    2017-06-26

    We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of diffracted beams and a coherent matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have sufficient speckle suppression range to suppress low-order speckles in the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linear shift 2D DOE in a laser projector with a large numerical aperture has higher effective speckle suppression efficiency than the method using switching or step-wise shift DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.

  3. Statistical mechanics of binary mixture adsorption in metal-organic frameworks in the osmotic ensemble.

    PubMed

    Dunne, Lawrence J; Manos, George

    2018-03-13

    Although crucial for designing separation processes little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature and information about isotherms over a wide range of gas-phase composition and mechanical pressures and temperature is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs) treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption giving a good account of the shape of adsorption isotherms of CO2 and CH4 adsorption in MIL-53(Al). Binary mixture isotherms and co-adsorption-phase diagrams are also calculated and found to give a good description of the experimental trends in these properties and because of the wide model parameter range which reproduces this behaviour suggests that this is generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs with implications for binary mixture separation processes. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).

  4. Statistical mechanics of binary mixture adsorption in metal-organic frameworks in the osmotic ensemble

    NASA Astrophysics Data System (ADS)

    Dunne, Lawrence J.; Manos, George

    2018-03-01

    Although crucial for designing separation processes little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature and information about isotherms over a wide range of gas-phase composition and mechanical pressures and temperature is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs) treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption giving a good account of the shape of adsorption isotherms of CO2 and CH4 adsorption in MIL-53(Al). Binary mixture isotherms and co-adsorption-phase diagrams are also calculated and found to give a good description of the experimental trends in these properties and because of the wide model parameter range which reproduces this behaviour suggests that this is generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs with implications for binary mixture separation processes. This article is part of the theme issue `Modern theoretical chemistry'.

  5. Mesoscale Polymer Dissolution Probed by Raman Spectroscopy and Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Tsun-Mei; Xantheas, Sotiris S.; Vasdekis, Andreas E.

    2016-10-13

    The diffusion of various solvents into a polystyrene (PS) matrix was probed experimentally by monitoring the temporal profiles of the Raman spectra and theoretically from molecular dynamics (MD) simulations of the binary system. The simulation results assist in providing a fundamental, molecular level connection between the mixing/dissolution processes and the difference Δδ = δ(solvent) – δ(PS) in the values of the Hildebrand parameter (δ) between the two components of the binary systems: solvents having values of δ similar to that of PS (small Δδ) exhibit fast diffusion into the polymer matrix, whereas the diffusion slows down considerably when the δ's are different (large Δδ). To this end, the Hildebrand parameter was identified as a useful descriptor that governs the process of mixing in polymer-solvent binary systems. The experiments also provide insight into further refinements of the models specific to non-Fickian diffusion phenomena that need to be used in the simulations.

  6. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
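
    As a rough illustration of binary-coded multiplexing with a Hadamard S-matrix (the baseline the searched matrices are compared against), the sketch below builds the order-7 S-matrix from a Sylvester Hadamard matrix, simulates measurements with purely constant noise, and decodes them; the source intensities and noise level are made up, and the paper's mixed-noise model and genetic-algorithm search are not reproduced.

```python
import numpy as np

def sylvester_hadamard(k):
    """Sylvester construction of a 2^k x 2^k Hadamard matrix."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

# S-matrix of order 7: drop first row/column of H_8, map +1 -> 0, -1 -> 1.
H8 = sylvester_hadamard(3)
S = ((1 - H8[1:, 1:]) // 2).astype(float)

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, size=7)          # true source intensities
sigma = 0.1                                 # constant (detector) noise level

# Multiplexed measurements: several sources on per exposure, then decode.
y_mux = S @ x + sigma * rng.normal(size=7)
x_mux = np.linalg.solve(S, y_mux)

# Single-source measurements for comparison (identity coding matrix).
x_single = x + sigma * rng.normal(size=7)

print("multiplexed error  :", np.abs(x_mux - x).mean())
print("one-at-a-time error:", np.abs(x_single - x).mean())
```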

  7. Binary synaptic connections based on memory switching in a-Si:H for artificial neural networks

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Lamb, J. L.; Moopenn, A.; Khanna, S. K.

    1987-01-01

    A scheme for nonvolatile associative electronic memory storage with high information storage density is proposed which is based on neural network models and which uses a matrix of two-terminal passive interconnections (synapses). It is noted that the massive parallelism in the architecture would require the ON state of a synaptic connection to be unusually weak (highly resistive). Memory switching using a-Si:H along with ballast resistors patterned from amorphous Ge-metal alloys is investigated for a binary programmable read only memory matrix. The fabrication of a 1600 synapse test array of uniform connection strengths and a-Si:H switching elements is discussed.

  8. Integrated Circuit For Simulation Of Neural Network

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.

  9. THE MULTI-WAVELENGTH CHARACTERISTICS OF THE TeV BINARY LS I+61°303

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, L.; Chitnis, V. R.; Shukla, A.

    2016-06-01

    We study the characteristics of the TeV binary LS I+61°303 in radio, soft X-ray, hard X-ray, and gamma-ray (GeV and TeV) energies. The long-term variability characteristics are examined as a function of the phase of the binary period of 26.496 days as well as the phase of the superorbital period of 1626 days, dividing the observations into a matrix of 10 × 10 phases of these two periods. We find that the long-term variability can be described by a sine function of the superorbital period, with the phase and amplitude systematically varying with the binary period phase. We also find a definite wavelength-dependent change in this variability description. To understand the radiation mechanism, we define three states in the orbital/superorbital phase matrix and examine the wideband spectral energy distribution. The derived source parameters indicate that the emission geometry is dominated by a jet structure showing a systematic variation with the orbital/superorbital period. We suggest that LS I+61°303 is likely a microquasar with a steady jet.

  10. Texture operator for snow particle classification into snowflake and graupel

    NASA Astrophysics Data System (ADS)

    Nurzyńska, Karolina; Kubo, Mamoru; Muramoto, Ken-ichiro

    2012-11-01

    In order to improve the estimation of precipitation, the coefficients of the Z-R relation should be determined for each snow type. Therefore, it is necessary to identify the type of falling snow. Consequently, this research addresses the problem of automatically classifying snow particles into snowflake and graupel (as these types are the most common in the study region). With correctly classified precipitation events, it is believed that the related parameters can be estimated accurately. The automatic classification system presented here describes the images with texture operators. Some of them are well known from the literature: first-order features, co-occurrence matrix, grey-tone difference matrix, run length matrix, and local binary pattern; in addition, a novel approach to designing simple local statistic operators is introduced. In this work the following texture operators are defined: mean histogram, min-max histogram, and mean-variance histogram. Moreover, building a feature vector based on the intermediate structures created by many of the above algorithms is also suggested. For classification, the k-nearest neighbour classifier was applied. The results showed that it is possible to achieve correct classification accuracy above 80% with most of the techniques. The best result, 86.06%, was achieved for the operator built from the structure obtained at an intermediate stage of the co-occurrence matrix calculation. Next, it was noticed that describing an image with two texture operators does not improve the classification results considerably. In the best case the correct classification efficiency was 87.89% for a pair of texture operators created from the local binary pattern and the structure built at an intermediate stage of the grey-tone difference matrix calculation. This also suggests that the information gathered by each texture operator is redundant. Therefore, principal component analysis was applied in order to remove the unnecessary information and additionally reduce the length of the feature vectors. An improvement of the correct classification efficiency to up to 100% is possible for the methods: min-max histogram, the texture operator built from the structure obtained at an intermediate stage of the co-occurrence matrix calculation, the texture operator built from the structure obtained at an intermediate stage of the grey-tone difference matrix creation, and the texture operator based on a histogram, when the feature vector stores 99% of the initial information.
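
    Of the texture operators listed, the local binary pattern is the easiest to show compactly. The numpy sketch below computes the basic 8-neighbour LBP code image and its histogram for a tiny patch; border handling, rotation invariance, and the study's other operators are omitted.

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic local binary pattern: compare each interior pixel with its
    8 neighbours and pack the comparisons into an 8-bit code."""
    img = np.asarray(img, dtype=float)
    # Clockwise neighbour offsets starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

patch = np.array([[5, 5, 5, 5],
                  [5, 9, 1, 5],
                  [5, 1, 9, 5],
                  [5, 5, 5, 5]])
codes = lbp_8neighbour(patch)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # texture descriptor
print(codes)
```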

  11. A novel approach of an absolute coding pattern based on Hamiltonian graph

    NASA Astrophysics Data System (ADS)

    Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang

    2017-02-01

    In this paper, a novel approach to an optical-type absolute rotary encoder coding pattern is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern by using variations of the Hamiltonian graph principle. Twelve encoding bits are read from the single ring by a linear-array CCD to achieve a 1080-position cyclic encoding. Besides, a 2-by-2 matrix is used as a unit in the 2-track disk to achieve a 16-bit encoding pattern by using an area-array CCD sensor (as an example). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional gray or binary code pattern (giving a resolution of 2^n for n tracks), this new pattern has a higher resolution (n*2^n) with fewer coding tracks, so the new pattern can lead to a smaller encoder, which is essential in industrial production.
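
    The core requirement for a single-track absolute pattern is that every read-out window around the ring be unique. The sketch below checks that property for a classic De Bruijn sequence, used here purely as a stand-in; the paper's construction is based on Hamiltonian graphs and its own track layout.

```python
def cyclic_windows(code, n):
    """All length-n windows read around a circular code track."""
    m = len(code)
    return [tuple(code[(i + j) % m] for j in range(n)) for i in range(m)]

def is_absolute_code(code, n):
    """A single-track pattern gives an absolute position if every
    length-n read-out window occurs exactly once around the ring."""
    windows = cyclic_windows(code, n)
    return len(set(windows)) == len(windows)

# A classic De Bruijn sequence B(2, 4): 16 bits, every 4-bit window unique,
# so a 4-bit read head resolves 16 distinct shaft positions on one track.
code = [int(b) for b in "0000100110101111"]
print(is_absolute_code(code, 4))          # True
print(len(set(cyclic_windows(code, 4))))  # 16 distinct positions
```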

  12. Statistical mechanics of homogeneous partly pinned fluid systems.

    PubMed

    Krakoviack, Vincent

    2010-12-01

    The homogeneous partly pinned fluid systems are simple models of a fluid confined in a disordered porous matrix obtained by arresting randomly chosen particles in a one-component bulk fluid or one of the two components of a binary mixture. In this paper, their configurational properties are investigated. It is shown that a peculiar complementarity exists between the mobile and immobile phases, which originates from the fact that the solid is prepared in presence of and in equilibrium with the adsorbed fluid. Simple identities follow, which connect different types of configurational averages, either relative to the fluid-matrix system or to the bulk fluid from which it is prepared. Crucial simplifications result for the computation of important structural quantities, both in computer simulations and in theoretical approaches. Finally, possible applications of the model in the field of dynamics in confinement or in strongly asymmetric mixtures are suggested.

  13. LISA verification binaries with updated distances from Gaia Data Release 2

    NASA Astrophysics Data System (ADS)

    Kupfer, T.; Korol, V.; Shah, S.; Nelemans, G.; Marsh, T. R.; Ramsay, G.; Groot, P. J.; Steeghs, D. T. H.; Rossi, E. M.

    2018-06-01

    Ultracompact binaries with orbital periods less than a few hours will dominate the gravitational wave signal in the mHz regime. Until recently, 10 systems were expected to have a predicted gravitational wave signal strong enough to be detectable by the Laser Interferometer Space Antenna (LISA), the so-called 'verification binaries'. System parameters, including distances, are needed to provide an accurate prediction of the expected gravitational wave strength to be measured by LISA. Using parallaxes from Gaia Data Release 2 we calculate signal-to-noise ratios (SNR) for ≈50 verification binary candidates. We find that 11 binaries reach an SNR ≥ 20, two further binaries reach an SNR ≥ 5, and three more systems are expected to have an SNR ≈ 5 after four years of integration with LISA. For these 16 systems we present predictions of the gravitational wave amplitude (A) and, from the Fisher information matrix, parameter uncertainties on the amplitude (A) and inclination (ι).
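
    As a reminder of how such uncertainties are typically obtained, the sketch below inverts a small Fisher information matrix and reads one-sigma errors off the diagonal of the resulting covariance; the matrix entries are invented for illustration and are not the paper's values.

```python
import numpy as np

# Hypothetical 2x2 Fisher matrix for (amplitude A, inclination iota).
F = np.array([[4.0e4, 1.0e3],
              [1.0e3, 2.5e2]])

# The covariance of the parameter estimates is the inverse Fisher matrix;
# one-sigma uncertainties are the square roots of its diagonal.
cov = np.linalg.inv(F)
sigma_A, sigma_iota = np.sqrt(np.diag(cov))
print("sigma_A    =", sigma_A)
print("sigma_iota =", sigma_iota)
```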

  14. Laser-induced breakdown spectroscopy of light water reactor simulated used nuclear fuel: Main oxide phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Keri R.; Judge, Elizabeth J.; Barefield, James E.

    The analysis of light water reactor simulated used nuclear fuel using laser-induced breakdown spectroscopy (LIBS) is explored using a simplified version of the main oxide phase. The main oxide phase consists of the actinides, lanthanides, and zirconium. The purpose of this study is to develop a rapid, quantitative technique for measuring zirconium in a uranium dioxide matrix without the need to dissolve the material. A second set of materials including cerium oxide is also analyzed to determine precision and limit of detection (LOD) using LIBS in a complex matrix. Two types of samples are used in this study: binary and ternary oxide pellets. The ternary oxide (U,Zr,Ce)O2 pellets used in this study are a simplified version of the main oxide phase of used nuclear fuel. The binary oxides, (U,Ce)O2 and (U,Zr)O2, are also examined to determine spectral emission lines for Ce and Zr, potential spectral interferences with uranium, and baseline LOD values for Ce and Zr in a UO2 matrix. In the spectral range of 200 to 800 nm, 33 cerium lines and 25 zirconium lines were identified and shown to have linear correlation values (R2) > 0.97 for both the binary and ternary oxides. The cerium LOD in the (U,Ce)O2 matrix ranged from 0.34 to 1.08 wt% and 0.94 to 1.22 wt% in (U,Ce,Zr)O2 for the 33 Ce emission lines. The zirconium limit of detection in the (U,Zr)O2 matrix ranged from 0.84 to 1.15 wt% and 0.99 to 1.10 wt% in (U,Ce,Zr)O2 for the 25 Zr lines. Finally, the effect of multiple elements in the plasma and the impact on the LOD is discussed.

  15. Holographic implementation of a binary associative memory for improved recognition

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.

    1998-03-01

    Neural network associative memory has found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The question of imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in a real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible by using digital optical matrix-vector multiplication, where full use of the parallelism and connectivity of optics is made. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or the interconnection weight matrix required during the recall process, is configured by retrieving the necessary information from the holographic LTM.
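
    A purely software analogue of recall by matrix-vector multiplication in a binary associative memory is sketched below, using a Hopfield-style outer-product weight matrix; the optical implementation, binary-valued interconnections, and sparse coding described in the record are not modelled, and the pattern sizes are arbitrary.

```python
import numpy as np

def store(patterns):
    """Outer-product (Hebbian) interconnection matrix for bipolar patterns."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, probe, steps=5):
    """Recall by repeated matrix-vector multiplication and thresholding."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

rng = np.random.default_rng(2)
patterns = rng.choice([-1, 1], size=(3, 32))   # three stored bipolar patterns
W = store(patterns)

noisy = patterns[0].copy()
noisy[:5] *= -1                                 # corrupt 5 of 32 bits
print("recovered:", np.array_equal(recall(W, noisy), patterns[0]))
```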

  16. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
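
    For orientation, the sketch below computes the standard (Donoho-Jin style) Higher Criticism statistic from a set of p-values, which is the kind of statistic the paper extends; the data are simulated, and the paper's extended version for sparse binary regression is not implemented.

```python
import numpy as np

def higher_criticism(pvals):
    """Standard Higher Criticism statistic over sorted p-values:
    HC = max_i sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i))),
    with the maximum taken over the smaller half of the p-values."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: n // 2].max()

rng = np.random.default_rng(3)
null_p = rng.uniform(size=1000)                    # global null: uniform p-values
sparse_p = null_p.copy()
sparse_p[:10] = rng.uniform(0, 1e-4, size=10)      # a few strong signals

print("HC under the null    :", round(higher_criticism(null_p), 2))
print("HC with sparse signal:", round(higher_criticism(sparse_p), 2))
```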

  17. Automated artifact detection and removal for improved tensor estimation in motion-corrupted DTI data sets using the combination of local binary patterns and 2D partial least squares.

    PubMed

    Zhou, Zhenyu; Liu, Wei; Cui, Jiali; Wang, Xunheng; Arias, Diana; Wen, Ying; Bansal, Ravi; Hao, Xuejun; Wang, Zhishun; Peterson, Bradley S; Xu, Dongrong

    2011-02-01

    Signal variation in diffusion-weighted images (DWIs) is influenced both by thermal noise and by spatially and temporally varying artifacts, such as rigid-body motion and cardiac pulsation. Motion artifacts are particularly prevalent when scanning difficult patient populations, such as human infants. Although some motion during data acquisition can be corrected using image coregistration procedures, frequently individual DWIs are corrupted beyond repair by sudden, large amplitude motion either within or outside of the imaging plane. We propose a novel approach to identify and reject outlier images automatically using local binary patterns (LBP) and 2D partial least square (2D-PLS) to estimate diffusion tensors robustly. This method uses an enhanced LBP algorithm to extract local texture features from the image matrices of the DWI data. Because the images have been transformed to local texture matrices, we are able to extract discriminating information that identifies outliers in the data set by extending a traditional one-dimensional PLS algorithm to a two-dimensional operator. The class-membership matrix in this 2D-PLS algorithm is adapted to process samples that are image matrices, and the membership matrix thus represents varying degrees of importance of local information within the images. We also derive the analytic form of the generalized inverse of the class-membership matrix. We show that this method can effectively extract local features from brain images obtained from a large sample of human infants to identify images that are outliers in their textural features, permitting their exclusion from further processing when estimating tensors using the DWIs. This technique is shown to be superior in performance when compared with visual inspection and other common methods to address motion-related artifacts in DWI data. This technique is applicable to correcting motion artifacts in other magnetic resonance imaging (MRI) techniques (e.g., the bootstrapping estimation) that use univariate or multivariate regression methods to fit MRI data to a pre-specified model. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Learning Rotation-Invariant Local Binary Descriptor.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.

  19. Compression and compaction properties of plasticised high molecular weight hydroxypropylmethylcellulose (HPMC) as a hydrophilic matrix carrier.

    PubMed

    Hardy, I J; Cook, W G; Melia, C D

    2006-03-27

    The compression and compaction properties of plasticised high molecular weight USP2208 HPMC were investigated with the aim of improving tablet formation in HPMC matrices. Experiments were conducted on binary polymer-plasticiser mixtures containing 17 wt.% plasticiser, and on a model hydrophilic matrix formulation. A selection of common plasticisers, propylene glycol (PG), glycerol (GLY), dibutyl sebacate (DBS) and triacetin (TRI), was chosen to provide a range of plasticisation efficiencies. T(g) values of binary mixtures determined by Dynamic Mechanical Thermal Analysis (DMTA) were in rank order PG>GLY>DBS>TRI>unplasticised HPMC. Mean yield pressure, strain rate sensitivity (SRS) and plastic compaction energy were measured during the compression process, and matrix properties were monitored by tensile strength and axial expansion post-compression. Compression of HPMC:PG binary mixtures resulted in a marked reduction in mean yield pressure and a significant increase in SRS, suggesting a classical plasticisation of HPMC analogous to that produced by water. The effect of PG was also reflected in matrix properties. At compression pressures below 70 MPa, compacts had greater tensile strength than those from native polymer, and over the range 35 to 70 MPa, lower plastic compaction values showed that less energy was required to produce the compacts. Axial expansion was also reduced. Above 70 MPa tensile strength was limited to 3 MPa. These results suggest a useful improvement of HPMC compaction and matrix properties by PG plasticisation, with lowering of T(g) resulting in improved deformation and internal bonding. These effects were also detectable in the model formulation containing a minimal polymer content for an HPMC matrix. Other plasticisers were largely ineffective, matrix strength was poor and axial expansion high. The hydrophobic plasticisers (DBS, TRI) reduced yield pressure substantially, but were poor plasticisers and showed compaction mechanisms that could be attributed to phase separation. The effect of different plasticisers suggests that the deformation characteristics of this HPMC in the solid state are dominated by hydroxyl-mediated bonding, rather than by hydrophobic interactions between methoxyl-rich regions.

  20. High Information Capacity Quantum Imaging

    DTIC Science & Technology

    2014-09-19

    single-pixel camera [41, 75]. An object is imaged onto a Digital Micromirror Device (DMD), a 2D binary array of individually-addressable mirrors that... reflect light either to a single detector or a dump. Rows of the sensing matrix A consist of random, binary patterns placed sequentially on the DMD... The single-pixel camera concept naturally adapts to imaging correlations by adding a second detector. Consider placing separate DMDs in the near-field

  1. INSPECTION MEANS FOR INDUCTION MOTORS

    DOEpatents

    Williams, A.W.

    1959-03-10

    An apparatus is described for inspecting electric motors, and more especially an apparatus for detecting faulty end rings in squirrel cage induction motors while the motor is running. An electronic circuit for conversion of excess-3 binary coded serial decimal numbers to straight binary coded serial decimal numbers is also reported. The converter of the invention in its basic form generally comprises an input circuit accepting coded pulse words of a type having an algebraic sign digit followed serially by a plurality of decimal digits in order of decreasing significance. A switching matrix is coupled to said input circuit and is internally connected to produce serial straight binary coded pulse groups indicative of the excess-3 coded input. A stepping circuit is coupled to the switching matrix and to a synchronous counter having a plurality of x decimal digit and a plurality of y decimal digit indicator terminals. The stepping circuit steps the counter in synchronism with the serial binary pulse group output from the switching matrix to successively produce pulses at corresponding ones of the x and y decimal digit indicator terminals. The combinations of straight binary coded pulse groups and corresponding decimal digit indicator signals so produced comprise a basic output suitable for application to a variety of output apparatus.
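
    The digit-serial hardware aside, the underlying arithmetic of the second invention is just excess-3 decimal to straight binary conversion, illustrated by the sketch below; the sign digit and serial pulse handling are omitted.

```python
def excess3_digits_to_binary(excess3_digits):
    """Convert a decimal number given as excess-3 coded digits
    (most significant digit first) to a straight binary bit string."""
    value = 0
    for code in excess3_digits:
        digit = code - 3          # excess-3: each decimal digit is stored as digit + 3
        if not 0 <= digit <= 9:
            raise ValueError(f"invalid excess-3 code: {code:#06b}")
        value = value * 10 + digit
    return bin(value)

# Decimal 409 in excess-3: digits 4, 0, 9 are coded as 7, 3, 12.
print(excess3_digits_to_binary([0b0111, 0b0011, 0b1100]))  # 0b110011001 == 409
```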

  2. Carbon Dots and 9AA as a Binary Matrix for the Detection of Small Molecules by Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Chen, Yongli; Gao, Dan; Bai, Hangrui; Liu, Hongxia; Lin, Shuo; Jiang, Yuyang

    2016-07-01

    Application of matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) to the analysis of small molecules has some limitations, due to the inhomogeneous analyte/matrix co-crystallization and the interference of matrix-related peaks in the low m/z region. In this work, carbon dots (CDs) were for the first time applied as a binary matrix with 9-Aminoacridine (9AA) in MALDI MS for small molecule analysis. Using the 9AA/CDs-assisted desorption/ionization (D/I) process, a wide range of small molecules, including nucleosides, amino acids, oligosaccharides, peptides, and anticancer drugs, was detected with higher sensitivity in the positive ion mode. A detection limit down to 5 fmol was achieved for cytidine. The 9AA/CDs matrix also exhibited excellent reproducibility compared with the 9AA matrix. Moreover, exploration of the ionization mechanism of the matrix suggests that its performance can be attributed to four factors: (1) the strong UV absorption of 9AA/CDs due to their π-conjugated network; (2) the carboxyl groups on the CDs surface act as protonation sites for proton transfer in positive ion mode; (3) the thin-layer crystal of 9AA/CDs can reach a high surface temperature more easily and requires lower transfer energy for LDI MS; (4) CDs can serve as a matrix additive to suppress 9AA ionization. Furthermore, this matrix allowed for the analysis of glucose as well as nucleosides in human urine, and the level of cytidine was quantified with a linear range of 0.05-5 mM (R2 > 0.99). Therefore, the 9AA/CDs matrix was proven to be an effective MALDI matrix for the analysis of small molecules with improved sensitivity and reproducibility. This work provides an alternative solution for small molecule detection that can be further used in complex sample analysis.

  3. Biclustering sparse binary genomic data.

    PubMed

    van Uitert, Miranda; Meuleman, Wouter; Wessels, Lodewyk

    2008-12-01

    Genomic datasets often consist of large, binary, sparse data matrices. In such a dataset, one is often interested in finding contiguous blocks that (mostly) contain ones. This is a biclustering problem, and while many algorithms have been proposed to deal with gene expression data, only two algorithms have been proposed that specifically deal with binary matrices. None of the gene expression biclustering algorithms can handle the large number of zeros in sparse binary matrices. The two proposed binary algorithms failed to produce meaningful results. In this article, we present a new algorithm that is able to extract biclusters from sparse, binary datasets. A powerful feature is that biclusters with different numbers of rows and columns can be detected, varying from many rows to few columns and few rows to many columns. It allows the user to guide the search towards biclusters of specific dimensions. When applying our algorithm to an input matrix derived from TRANSFAC, we find transcription factors with distinctly dissimilar binding motifs, but a clear set of common targets that are significantly enriched for GO categories.

  4. Classical space-times from the S-matrix

    NASA Astrophysics Data System (ADS)

    Neill, Duff; Rothstein, Ira Z.

    2013-12-01

    We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin two particle. As an explicit example we derive the Schwarzschild space-time as a series in G_N. At no point of the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other static space-times such as Kerr follow in a similar manner. Furthermore, given that the procedure is action independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint our methodology can also be utilized to calculate quantities relevant for the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.

  5. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not real inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower order Jacket matrices and the basis matrix, the fast algorithms for realizing these transforms are obtained. Due to the simple inverse and fast algorithms of Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrices design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.

  6. Analytical model of radiation-induced precipitation at the surface of dilute binary alloy

    NASA Astrophysics Data System (ADS)

    Pechenkin, V. A.; Stepanov, I. A.; Konobeev, Yu. V.

    2002-12-01

    Growth of a precipitate layer at the foil surface of an undersaturated binary alloy under uniform irradiation is treated analytically. Analytical expressions for the layer growth rate, layer thickness limit and final component concentrations in the matrix are derived for coherent and incoherent precipitate-matrix interfaces. It is shown that the high temperature limit of radiation-induced precipitation is the same for both types of interfaces, whereas layer thickness limits are different. A parabolic law of the layer growth predicted for both types of interfaces is in agreement with experimental data on γ'-phase precipitation at the surface of Ni-Si dilute alloys under ion irradiation. The effect of sputtering on the precipitation rate and on the low-temperature limit of precipitation under ion irradiation is discussed.

  7. Laser-induced breakdown spectroscopy of light water reactor simulated used nuclear fuel: Main oxide phase

    DOE PAGES

    Campbell, Keri R.; Judge, Elizabeth J.; Barefield, James E.; ...

    2017-04-22

    The analysis of light water reactor simulated used nuclear fuel using laser-induced breakdown spectroscopy (LIBS) is explored using a simplified version of the main oxide phase. The main oxide phase consists of the actinides, lanthanides, and zirconium. The purpose of this study is to develop a rapid, quantitative technique for measuring zirconium in a uranium dioxide matrix without the need to dissolve the material. A second set of materials including cerium oxide is also analyzed to determine precision and limit of detection (LOD) using LIBS in a complex matrix. Two types of samples are used in this study: binary and ternary oxide pellets. The ternary oxide (U,Zr,Ce)O2 pellets used in this study are a simplified version of the main oxide phase of used nuclear fuel. The binary oxides, (U,Ce)O2 and (U,Zr)O2, are also examined to determine spectral emission lines for Ce and Zr, potential spectral interferences with uranium, and baseline LOD values for Ce and Zr in a UO2 matrix. In the spectral range of 200 to 800 nm, 33 cerium lines and 25 zirconium lines were identified and shown to have linear correlation values (R2) > 0.97 for both the binary and ternary oxides. The cerium LOD in the (U,Ce)O2 matrix ranged from 0.34 to 1.08 wt% and 0.94 to 1.22 wt% in (U,Ce,Zr)O2 for the 33 Ce emission lines. The zirconium limit of detection in the (U,Zr)O2 matrix ranged from 0.84 to 1.15 wt% and 0.99 to 1.10 wt% in (U,Ce,Zr)O2 for the 25 Zr lines. Finally, the effect of multiple elements in the plasma and the impact on the LOD is discussed.

  8. The matrix effect in secondary ion mass spectrometry

    NASA Astrophysics Data System (ADS)

    Seah, M. P.; Shard, A. G.

    2018-05-01

    Matrix effects in the secondary ion mass spectrometry (SIMS) of selected elemental systems have been analyzed to investigate the applicability of a mathematical description of the matrix effect, called here the charge transfer (CT) model. This model was originally derived for proton exchange and organic positive secondary ions, to characterise the enhancement or suppression of intensities in organic binary systems. In the systems considered in this paper protons are specifically excluded, which enables an assessment of whether the model applies for electrons as well. The present importance is in organic systems, but here we analyse simpler inorganic systems. Matrix effects in elemental systems cannot involve proton transfer if there are no protons present but may be caused by electron transfer, and so electron transfer may also be involved in the matrix effects for organic systems. There are general similarities in both the magnitudes of the ion intensities as well as the matrix effects for both positive and negative secondary ions in both systems, and so the CT model may be more widely applicable. Published SIMS analyses of binary elemental mixtures are analyzed. The data of Kim et al. for the Pt/Co system provide, with good precision, data for such a system. This gives evidence for the applicability of the CT model, where electron, rather than proton, transfer is the matrix enhancing and suppressing mechanism. The published data of Prudon et al. for the important Si/Ge system provide further evidence for the effects for both positive and negative secondary ions and allow rudimentary rules to be developed for the enhancing and suppressing species.

  9. Binary blend of glyceryl monooleate and glyceryl monostearate for magnetically induced thermo-responsive local drug delivery system.

    PubMed

    Mengesha, Abebe E; Wydra, Robert J; Hilt, J Zach; Bummer, Paul M

    2013-12-01

    To develop a novel monoglyceride-based thermo-sensitive drug delivery system, specifically for local intracavitary chemotherapy. Lipid matrices containing mixtures of glyceryl monooleate (GMO) and glyceryl monostearate (GMS) were evaluated for their potential application as magnetically induced thermo-responsive local drug delivery systems using a poorly water-soluble model drug, nifedipine (NF). Oleic acid-modified iron oxide (OA-Fe3O4) nanoparticles were embedded into the GMO-GMS matrix for remote activation of the drug release using an alternating magnetic field (AMF). The crystallization behavior of binary blends of GMO and GMS, as characterized by DSC, showed a temperature-dependent phase transition. The GMO-GMS (75:25 wt%) blend showed melting (T(m)) and crystallization (T(c)) points at 42°C and 37°C, respectively, indicating the potential of the matrix to act as an 'on-demand' drug release system. The matrix released only 35% of the loaded drug slowly in 10 days at 37°C, whereas 96% release was obtained at 42°C. A concentration of 0.5% OA-Fe3O4 heated the matrix to 42.3 and 45.5°C within 5 min and 10 min of AMF exposure, respectively. The in vitro NF release profiles from the monoglyceride matrix containing 0.5% OA-Fe3O4 nanoparticles after AMF activation confirmed the thermo-responsive nature of the matrix, which could provide pulsatile drug release 'on-demand'.

  10. Metal-doped semiconductor nanoparticles and methods of synthesis thereof

    NASA Technical Reports Server (NTRS)

    Ren, Zhifeng (Inventor); Wang, Wenzhong (Inventor); Chen, Gang (Inventor); Dresselhaus, Mildred (Inventor); Poudel, Bed (Inventor); Kumar, Shankar (Inventor)

    2009-01-01

    The present invention generally relates to binary or higher order semiconductor nanoparticles doped with a metallic element, and thermoelectric compositions incorporating such nanoparticles. In one aspect, the present invention provides a thermoelectric composition comprising a plurality of nanoparticles each of which includes an alloy matrix formed of a Group IV element and Group VI element and a metallic dopant distributed within the matrix.

  11. Metal-doped semiconductor nanoparticles and methods of synthesis thereof

    DOEpatents

    Ren, Zhifeng [Newton, MA; Chen, Gang [Carlisle, MA; Poudel, Bed [West Newton, MA; Kumar, Shankar [Newton, MA; Wang, Wenzhong [Beijing, CN; Dresselhaus, Mildred [Arlington, MA

    2009-09-08

    The present invention generally relates to binary or higher order semiconductor nanoparticles doped with a metallic element, and thermoelectric compositions incorporating such nanoparticles. In one aspect, the present invention provides a thermoelectric composition comprising a plurality of nanoparticles each of which includes an alloy matrix formed of a Group IV element and Group VI element and a metallic dopant distributed within the matrix.

  12. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
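
    The matrix multiplication being accelerated is, in essence, the chained product of the three-phase method, conventionally written i = V T D s. The sketch below shows that computation with random stand-in matrices; the dimensions are only illustrative, and the OpenCL parallelisation itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in matrices for the three-phase daylight calculation i = V @ T @ D @ s:
#   V: view matrix (sensor points x window patches)
#   T: transmission matrix of the fenestration system (BSDF)
#   D: daylight matrix (window patches x sky patches)
#   s: sky vectors (one column per time step)
V = rng.random((100, 145))
T = rng.random((145, 145))
D = rng.random((145, 2305))
s = rng.random((2305, 168))       # hourly sky vectors for one week

# Associativity matters for speed: collapsing the small factors first
# (V @ T @ D) avoids repeating that work for every time step.
VTD = V @ T @ D
illuminance = VTD @ s
print(illuminance.shape)          # (100, 168)
```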

  13. Preparation of proton conducting membranes containing bifunctional titania nanoparticles

    NASA Astrophysics Data System (ADS)

    Aslan, Ayşe; Bozkurt, Ayhan

    2013-07-01

    Throughout this work, the synthesis and characterization of novel proton conducting nanocomposite membranes including binary and ternary mixtures of sulfated nano-titania (TS), poly(vinyl alcohol) (PVA), and nitrilotri(methyl phosphonic acid) (NMPA) are discussed. The materials were produced by means of two different approaches: in the first, PVA and TS (10-15 nm) were admixed to form a binary system. In the second, ternary nanocomposite membranes including PVA/TS/NMPA were prepared at several compositions to obtain PVA-TS-(NMPA)x. The interaction of functional nanoparticles and NMPA in the host matrix was explored by FT-IR spectroscopy. The homogeneous distribution of bifunctional nanoparticles in the membrane was confirmed by SEM micrographs. The spectroscopic measurements and water/methanol uptake studies suggested a complexation between PVA and NMPA, which inhibited the leaching of the latter. The thermogravimetry analysis results verified that the presence of TS in the composite membranes suppressed the formation of phosphonic acid anhydrides up to 150 °C. The maximum proton conductivity has been measured for PVA-TS-(NMPA)3 as 0.003 S cm-1 at 150 °C.

  14. Diagnosis of Tempromandibular Disorders Using Local Binary Patterns.

    PubMed

    Haghnegahdar, A A; Kolahi, S; Khojastepour, L; Tajeripour, F

    2018-03-01

    Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histograms of oriented gradients on the recorded images as a diagnostic tool in TMD assessment. CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and 2 coronal cuts were prepared from each condyle; images were limited to the head of the mandibular condyle. In order to extract image features, we first use LBP and then the histogram of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) is applied to the feature vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers. We used the Receiver Operating Characteristic (ROC) to evaluate the hypothesis. The K nearest neighbor classifier achieves a very good accuracy (0.9242); moreover, it has desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers have lower accuracy, sensitivity and specificity. We propose a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN has been the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages.
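
    The overall pipeline shape (texture features, SVD-based dimensionality reduction, then a k-NN classifier) can be mocked up in a few lines of scikit-learn, as below; the feature extraction is replaced by random stand-in vectors, so the numbers mean nothing clinically.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Stand-in feature matrix: one LBP/HOG-style feature vector per joint image,
# with a binary label (0 = normal, 1 = TMD).
X = rng.random((264, 1000))
y = rng.integers(0, 2, size=264)

model = make_pipeline(TruncatedSVD(n_components=20, random_state=0),
                      KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(model, X, y, cv=5).mean())   # ~0.5 on random labels
```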

  15. Quantum Support Vector Machine for Big Data Classification

    NASA Astrophysics Data System (ADS)

    Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth

    2014-09-01

    Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.
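
    The kernel-matrix inversion at the heart of the algorithm corresponds, classically, to solving the least-squares SVM linear system. The sketch below is that classical analogue on toy data with a Gaussian kernel; the regularisation value and the data are arbitrary, and nothing quantum is simulated.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy binary-labelled data (two Gaussian blobs).
X = np.vstack([rng.normal(-1, 0.3, size=(20, 2)),
               rng.normal(+1, 0.3, size=(20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Least-squares SVM: solve the (n+1) x (n+1) linear system
#   [ 0   1^T         ] [ b     ]   [ 0 ]
#   [ 1   K + I/gamma ] [ alpha ] = [ y ]
K = rbf_kernel(X, X)
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / 10.0      # regularisation term I / gamma
rhs = np.concatenate([[0.0], y])
b, alpha = np.split(np.linalg.solve(A, rhs), [1])

# Classify the training points back.
pred = np.sign(K @ alpha + b)
print("training accuracy:", (pred == y).mean())
```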

  16. Programmable synaptic chip for electronic neural networks

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  17. Fluids in porous media. IV. Quench effect on chemical potential.

    PubMed

    Qiao, C Z; Zhao, S L; Liu, H L; Dong, W

    2017-06-21

    It appears to be common sense to measure the crowdedness of a fluid system by the densities of the species constituting it. In the present work, we show that this ceases to be valid for confined fluids under some conditions. A quite thorough investigation is made for a hard sphere (HS) fluid adsorbed in a hard sphere matrix (a quench-annealed system) and its corresponding equilibrium binary mixture. When fluid particles are larger than matrix particles, the quench-annealed system can appear much more crowded than its corresponding equilibrium binary mixture, i.e., having a much higher fluid chemical potential, even when the density of each species is strictly the same in the two systems. We believe that the insight gained from this study should be useful for the design of functionalized porous materials.

  18. Bootstrapping on Undirected Binary Networks Via Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Fushing, Hsieh; Chen, Chen; Liu, Shan-Yu; Koehl, Patrice

    2014-09-01

    We propose a new method inspired from statistical mechanics for extracting geometric information from undirected binary networks and generating random networks that conform to this geometry. In this method an undirected binary network is perceived as a thermodynamic system with a collection of permuted adjacency matrices as its states. The task of extracting information from the network is then reformulated as a discrete combinatorial optimization problem of searching for its ground state. To solve this problem, we apply multiple ensembles of temperature regulated Markov chains to establish an ultrametric geometry on the network. This geometry is equipped with a tree hierarchy that captures the multiscale community structure of the network. We translate this geometry into a Parisi adjacency matrix, which has a relative low energy level and is in the vicinity of the ground state. The Parisi adjacency matrix is then further optimized by making block permutations subject to the ultrametric geometry. The optimal matrix corresponds to the macrostate of the original network. An ensemble of random networks is then generated such that each of these networks conforms to this macrostate; the corresponding algorithm also provides an estimate of the size of this ensemble. By repeating this procedure at different scales of the ultrametric geometry of the network, it is possible to compute its evolution entropy, i.e. to estimate the evolution of its complexity as we move from a coarse to a fine description of its geometric structure. We demonstrate the performance of this method on simulated as well as real data networks.

  19. Spatial modeling of households' knowledge about arsenic pollution in Bangladesh.

    PubMed

    Sarker, M Mizanur Rahman

    2012-04-01

    Arsenic in drinking water is an important public health issue in Bangladesh, one that is affected by households' knowledge about arsenic threats from their drinking water. In this study, spatial statistical models were used to investigate the determinants and spatial dependence of households' knowledge about arsenic risk. The binary join (contiguity) matrix and inverse distance spatial weight matrix techniques are used to capture spatial dependence in the data. This analysis extends the spatial model by allowing spatial dependence to vary across divisions and regions. A positive spatial correlation was found in households' knowledge across neighboring districts at the district, divisional and regional levels, but the strength of this spatial correlation varies considerably with the spatial weight. Literacy rate, daily wage rate of agricultural labor, arsenic status, and the percentage of red-marked tube well usage in districts were found to contribute positively and significantly to households' knowledge. These findings have policy implications at both regional and national levels for mitigating the present arsenic crisis and ensuring arsenic-free water in Bangladesh. Copyright © 2012 Elsevier Ltd. All rights reserved.
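
    As an illustration of the binary contiguity weighting mentioned above, the following minimal sketch builds a binary contiguity spatial weight matrix from a neighbour list and row-standardizes it (the neighbour structure is hypothetical, not the actual Bangladeshi district data):

    ```python
    import numpy as np

    # Hypothetical district adjacency; W[i, j] = 1 if districts i and j share a border.
    neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    n = len(neighbours)
    W = np.zeros((n, n))
    for i, js in neighbours.items():
        W[i, js] = 1.0
    W_row = W / W.sum(axis=1, keepdims=True)  # row-standardized spatial weight matrix
    print(W_row)
    ```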

  20. Converting optical scanning holograms of real objects to binary Fourier holograms using an iterative direct binary search algorithm.

    PubMed

    Leportier, Thibault; Park, Min Chul; Kim, You Seok; Kim, Taegeun

    2015-02-09

    In this paper, we present a three-dimensional holographic imaging system. The proposed approach records a complex hologram of a real object using optical scanning holography, converts the complex form to binary data, and then reconstructs the recorded hologram using a spatial light modulator (SLM). The conversion from the recorded hologram to a binary hologram is achieved using a direct binary search algorithm. We present experimental results that verify the efficacy of our approach. To the best of our knowledge, this is the first time that a hologram of a real object has been reconstructed using a binary SLM.

  1. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches call for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has significant potential to be extended to more complex network structures or high-dimensional systems.

  2. Binary Hierarchical Porous Graphene/Pyrolytic Carbon Nanocomposite Matrix Loaded with Sulfur as a High-Performance Li-S Battery Cathode.

    PubMed

    Zhang, Hang; Gao, Qiuming; Qian, Weiwei; Xiao, Hong; Li, Zeyu; Ma, Li; Tian, Xuehui

    2018-06-06

    An N,O-codoped hierarchical porous nanocomposite consisting of binary reduced graphene oxide and pyrolytic carbon (rGO/PC) from chitosan is fabricated. The optimized rGO/PC possesses micropores with a size distribution concentrated around 1.1 nm and plenty of meso/macropores. The Brunauer-Emmett-Teller specific surface area is 480.8 m^2 g^-1, and it possesses an impressively large pore volume of 2.14 cm^3 g^-1. On the basis of the synergistic effects of the following main factors: (i) the confined space effect in the hierarchical porous binary carbonaceous matrix; (ii) the anchor effects of strong chemical bonds with codoped N and O atoms; and (iii) the good flexibility and conductivity of rGO, the rGO/PC/S holding 75 wt % S exhibits high performance as a Li-S battery cathode. A specific capacity of 1625 mA h g^-1 can be delivered at 0.1 C (1 C = 1675 mA g^-1), whereas 848 mA h g^-1 can be maintained after 300 cycles at 1 C. Even at a high rate of 5 C, 412 mA h g^-1 can be retained after 1000 cycles.

  3. Reduced Carrier Recombination in PbS - CuInS2 Quantum Dot Solar Cells

    PubMed Central

    Sun, Zhenhua; Sitbon, Gary; Pons, Thomas; Bakulin, Artem A.; Chen, Zhuoying

    2015-01-01

    Energy loss due to carrier recombination is among the major factors limiting the performance of TiO2/PbS colloidal quantum dot (QD) heterojunction solar cells. In this work, enhanced photocurrent is achieved by incorporating a second type of hole-transporting QD, Zn-doped CuInS2 (Zn-CIS) QDs, into the PbS QD matrix. Binary QD solar cells exhibit reduced charge recombination associated with the spatial charge separation between these two types of QDs. A ~30% increase in short-circuit current density and a ~20% increase in power conversion efficiency are observed in binary QD solar cells compared to cells built from PbS QDs only. In agreement with the charge transfer process identified through ultrafast pump/probe spectroscopy between these two QD components, transient photovoltage characteristics of single-component and binary QD solar cells reveal longer carrier recombination time constants associated with the incorporation of Zn-CIS QDs. This work presents a straightforward, solution-processed method based on the incorporation of a second type of QD into the PbS QD matrix to control the carrier dynamics in colloidal QD materials and enhance solar cell performance. PMID:26024021

  4. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.

  5. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample size

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. As a result, the MLE estimators become non-convergent, so they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to compare the chance of separation occurring in the binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are assessed using a simulation method under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSEs than the MLE's, especially for smaller sample sizes, whereas for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
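
    As an illustration of the separation phenomenon described above (toy data, not the study's simulation design), the sketch below shows a covariate that perfectly splits a binary response, which is exactly the situation in which the probit MLE fails to converge and a penalized approach such as Firth's is typically used instead:

    ```python
    import numpy as np

    # Toy data: y equals 1 exactly when x > 0, i.e. complete separation.
    x = np.array([-2.1, -1.3, -0.7, 0.4, 1.1, 2.5])
    y = np.array([0, 0, 0, 1, 1, 1])

    # Check whether some threshold on x perfectly splits the response; if so,
    # the probit MLE diverges (coefficient estimates grow without bound).
    separated = any((y[x <= t] == 0).all() and (y[x > t] == 1).all() for t in x)
    print(separated)  # True
    ```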

  6. Artificial intelligence systems based on texture descriptors for vaccine development.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2011-02-01

    The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine these descriptors with standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.

  7. Learning moment-based fast local binary descriptor

    NASA Astrophysics Data System (ADS)

    Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet

    2017-03-01

    Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch, instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness and robustness to affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and effectiveness of the proposed descriptor, as well as its low computational complexity when compared with state-of-the-art real-time local descriptors.

  8. Prospects for Observing Ultracompact Binaries with Space-Based Gravitational Wave Interferometers and Optical Telescopes

    NASA Technical Reports Server (NTRS)

    Littenberg, T. B.; Larson, S. L.; Nelemans, G.; Cornish, N. J.

    2012-01-01

    Space-based gravitational wave interferometers are sensitive to the galactic population of ultracompact binaries. An important subset of the ultracompact binary population are those stars that can be individually resolved by both gravitational wave interferometers and electromagnetic telescopes. The aim of this paper is to quantify the multimessenger potential of space-based interferometers with arm-lengths between 1 and 5 Gm. The Fisher information matrix is used to estimate the number of binaries from a model of the Milky Way which are localized on the sky by the gravitational wave detector to within 1 and 10 deg^2 and bright enough to be detected by a magnitude-limited survey. We find, depending on the choice of GW detector characteristics, limiting magnitude and observing strategy, that up to several hundred gravitational wave sources could be detected in electromagnetic follow-up observations.

  9. Bitshuffle: Filter for improving compression of typed binary data

    NASA Astrophysics Data System (ADS)

    Masui, Kiyoshi

    2017-12-01

    Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8 kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
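
    A minimal NumPy sketch of the bit-plane transposition idea (ignoring Bitshuffle's 8 kB blocking and its optimized C implementation; the array contents are arbitrary):

    ```python
    import numpy as np

    data = np.arange(16, dtype=np.uint8)                 # arbitrary typed data
    bits = np.unpackbits(data.reshape(-1, 1), axis=1)    # one row of 8 bits per element
    shuffled = np.packbits(bits.T.reshape(-1))           # bit plane 0 of all elements, then plane 1, ...

    # Inverse transform: undo the bit-plane transposition.
    restored_bits = np.unpackbits(shuffled).reshape(8, -1).T
    restored = np.packbits(restored_bits, axis=1).reshape(-1)
    assert np.array_equal(restored, data)
    print(shuffled)
    ```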

  10. Application of texture analysis method for mammogram density classification

    NASA Astrophysics Data System (ADS)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammograms. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed, namely Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
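
    As a hedged illustration of one of the texture features listed above, the sketch below computes GLCM contrast and homogeneity on a toy 8-bit patch (not the mini-MIAS images; the function names follow recent scikit-image releases):

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Toy 8-bit patch standing in for a mammogram region of interest.
    patch = (np.random.default_rng(5).random((64, 64)) * 255).astype(np.uint8)

    # Co-occurrence matrix for horizontal neighbours at distance 1, then two GLCM statistics.
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    print(graycoprops(glcm, 'contrast')[0, 0], graycoprops(glcm, 'homogeneity')[0, 0])
    ```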

  11. Diagnosis of Tempromandibular Disorders Using Local Binary Patterns

    PubMed Central

    Haghnegahdar, A.A.; Kolahi, S.; Khojastepour, L.; Tajeripour, F.

    2018-01-01

    Background: Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. Material and Methods: CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and two coronal cuts were prepared from each condyle; images were limited to the head of the mandibular condyle. To extract image features, we first use LBP and then the histogram of oriented gradients. To reduce dimensionality, the linear algebra technique of Singular Value Decomposition (SVD) is applied to the feature vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers. We used Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. Results: The K nearest neighbor classifier achieves very good accuracy (0.9242) and has desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers have lower accuracy, sensitivity and specificity. Conclusion: We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages. PMID:29732343

  12. A study of the diffusional behavior of a two-phase metal matrix composite exposed to a high temperature environment

    NASA Technical Reports Server (NTRS)

    Tenney, D. R.

    1974-01-01

    The progress of diffusion-controlled filament-matrix interaction in a metal matrix composite where the filaments and matrix comprise a two-phase binary alloy system was studied by mathematically modeling compositional changes resulting from prolonged elevated temperature exposure. The analysis treats a finite, diffusion-controlled, two-phase moving-interface problem by means of a variable-grid finite-difference technique. The Ni-W system was selected as an example system. Modeling was carried out for the 1000 to 1200 C temperature range for unidirectional composites containing from 6 to 40 volume percent tungsten filaments in a Ni matrix. The results are displayed to show both the change in filament diameter and matrix composition as a function of exposure time. Compositional profiles produced between first and second nearest neighbor filaments were calculated by superposition of finite-difference solutions of the diffusion equations.

  13. Morphological, rheological and mechanical characterization of polypropylene nanocomposite blends.

    PubMed

    Rosales, C; Contreras, V; Matos, M; Perera, R; Villarreal, N; García-López, D; Pastor, J M

    2008-04-01

    In the present work, the effectiveness of styrene/ethylene-butylene/styrene rubbers grafted with maleic anhydride (MA) and a metallocene polyethylene (mPE) as toughening materials in binary and ternary blends with polypropylene and its nanocomposite as continuous phases was evaluated in terms of transmission electron microscopy (TEM), scanning electron microscopy (SEM), oscillatory shear flow and dynamic mechanical thermal analysis (DMA). The flexural modulus and heat distortion temperature values were determined as well. A metallocene polyethylene and a polyamide-6 were used as dispersed phases in these binary and ternary blends produced via melt blending in a corotating twin-screw extruder. Results showed that the compatibilized blends prepared without clay are tougher than those prepared with the nanocomposite of PP as the matrix phase; no significant changes in shear viscosity, melt elasticity, flexural or storage moduli, or heat distortion temperature values were observed between them. However, the binary blend with a nanocomposite of PP as the matrix and a metallocene polyethylene dispersed phase exhibited better toughness and lower shear viscosity, flexural modulus, and heat distortion temperature values than that prepared with polyamide-6 as the dispersed phase. These results are related to the degree of clay dispersion in the PP and to the type of morphology developed in the different blends.

  14. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    PubMed

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  15. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  16. The detection of cheating in multiple choice examinations

    NASA Astrophysics Data System (ADS)

    Richmond, Peter; Roehner, Bertrand M.

    2015-10-01

    Cheating in examinations is acknowledged by an increasing number of organizations to be widespread. We examine two different approaches to assess their effectiveness at detecting anomalous results, suggestive of collusion, using data taken from a number of multiple-choice examinations organized by the UK Radio Communication Foundation. Analysis of student pair overlaps of correct answers is shown to give results consistent with more orthodox statistical correlations, for which confidence limits, as opposed to the less familiar "Bonferroni method", can be used. A simulation approach is also developed which confirms the interpretation of the empirical approach. In the simulation model, the variables X_i = (1 - U_i) Y_i + U_i Z form a system of symmetric dependent binary variables (0, 1; p) whose correlation matrix satisfies ρ_ij = r; the proof is given in the paper. Two remarks apply: (i) the expression "symmetric variables" reflects the fact that all X_i play the same role (the expression "exchangeable variables" is often used with the same meaning); (ii) the correlation matrix has only positive elements, as imposed by the symmetry condition, since ρ_12 < 0 and ρ_23 < 0 would imply ρ_13 > 0, violating the symmetry requirement. The paper also considers the question of the uniqueness of the set of X_i generated by this construction.
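
    A small simulation sketch of this construction follows; the mixing probability P(U_i = 1) = sqrt(r), which yields pairwise correlation r under independent Bernoulli(p) components, is our assumption for illustration and is not stated in the record above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, r, n, trials = 0.3, 0.4, 5, 200_000

    Y = rng.random((trials, n)) < p            # independent Bernoulli(p) components
    Z = rng.random((trials, 1)) < p            # one shared Bernoulli(p) per trial
    U = rng.random((trials, n)) < np.sqrt(r)   # mixing indicators, P(U_i = 1) = sqrt(r) (assumption)
    X = np.where(U, Z, Y).astype(float)        # X_i = (1 - U_i) Y_i + U_i Z

    print(np.corrcoef(X, rowvar=False).round(2))  # off-diagonal entries close to r = 0.4
    ```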

  17. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  18. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks

    PubMed Central

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-01-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608

  20. Development and Validation of Chemometric Spectrophotometric Methods for Simultaneous Determination of Simvastatin and Nicotinic Acid in Binary Combinations.

    PubMed

    Alahmad, Shoeb; Elfatatry, Hamed M; Mabrouk, Mokhtar M; Hammad, Sherin F; Mansour, Fotouh R

    2018-01-01

    The development and introduction of combined therapies represent a challenge for analysis due to severe overlapping of the components' UV spectra in the case of spectroscopy, or the requirement of a long, tedious and high-cost separation technique in the case of chromatography. Quality control laboratories have to develop and validate suitable analytical procedures in order to assay such multi-component preparations. New spectrophotometric methods for the simultaneous determination of simvastatin (SIM) and nicotinic acid (NIA) in binary combinations were developed. These methods are based on chemometric treatment of data; the applied chemometric techniques are multivariate methods including classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS). In these techniques, the concentration data matrix was prepared using synthetic mixtures containing SIM and NIA dissolved in ethanol. The absorbance data matrix corresponding to the concentration data matrix was obtained by measuring the absorbance at 12 wavelengths in the range 216-240 nm at 2 nm intervals in the zero-order spectra. The spectrophotometric procedures do not require any separation step. The accuracy, precision and linearity ranges of the methods were determined and validated by analyzing synthetic mixtures containing the studied drugs. Chemometric spectrophotometric methods have thus been developed for the simultaneous determination of simvastatin and nicotinic acid in their synthetic binary mixtures and in their mixtures with possible excipients present in the tablet dosage form. The validation was performed successfully, and the developed methods have been shown to be accurate, linear, precise, and simple. The developed methods can be used routinely for the determination of the dosage form. Copyright© Bentham Science Publishers.
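
    A hedged sketch of the calibration idea using PLS on synthetic two-component spectra; the pure spectra, concentration levels and noise below are placeholders for illustration, not the reported SIM/NIA data:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    wavelengths = 12
    pure_a = rng.random(wavelengths)            # hypothetical pure-component spectrum of drug A
    pure_b = rng.random(wavelengths)            # hypothetical pure-component spectrum of drug B

    # Synthetic calibration set: absorbances follow Beer's law plus small noise.
    conc = rng.uniform(0, 10, size=(30, 2))
    absorb = conc @ np.vstack([pure_a, pure_b]) + rng.normal(0, 0.01, (30, wavelengths))

    pls = PLSRegression(n_components=2).fit(absorb, conc)
    test_conc = np.array([[4.0, 6.0]])
    pred = pls.predict(test_conc @ np.vstack([pure_a, pure_b]))
    print(pred.round(2))                        # approximately [4. 6.]
    ```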

  1. Extreme close approaches in hierarchical triple systems with comparable masses

    NASA Astrophysics Data System (ADS)

    Haim, Niv; Katz, Boaz

    2018-06-01

    We study close approaches in hierarchical triple systems with comparable masses using full N-body simulations, motivated by a recent model for type Ia supernovae involving direct collisions of white dwarfs (WDs). For stable hierarchical systems where the inner binary components have equal masses, we show that the ability of the inner binary to achieve very close approaches, where the separation between the components of the inner binary reaches values which are orders of magnitude smaller than the semi-major axis, can be analytically predicted from initial conditions. The rate of close approaches is found to be roughly linear with the mass of the tertiary. The rate increases in systems with unequal inner binaries by a marginal factor of ≲ 2 for mass ratios 0.5 ≤ m1/m2 ≤ 1 relevant for the inner white-dwarf binaries. For an average tertiary mass of ˜0.3M⊙ which is representative of typical M-dwarfs, the chance for clean collisions is ˜1% setting challenging constraints on the collisional model for type Ia's.

  2. Application of the Double-Tangent Construction of Coexisting Phases to Any Type of Phase Equilibrium for Binary Systems Modeled with the Gamma-Phi Approach

    ERIC Educational Resources Information Center

    Jaubert, Jean-Noël; Privat, Romain

    2014-01-01

    The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…

  3. Trans*versing the DMZ: A Non-Binary Autoethnographic Exploration of Gender and Masculinity

    ERIC Educational Resources Information Center

    Stewart, Dafina-Lazarus

    2017-01-01

    Using an abductive, critical-poststructuralist autoethnographic approach, I consider the ways in which masculine of centre, non-binary/genderqueer trans* identities transverse the poles of socializing binary gender systems, structures, and norms which inform higher education. In this paper, I assert that non-binary genderqueer identities are…

  4. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes are the ones that satisfy these requirements, and they have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much larger code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, all of which require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to finding binary codes. Experiments show that the proposed method is able to find long, low-sidelobe binary phased codes (code length > 500) with reasonable computational cost.
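
    For instance, the sidelobe property that motivates this search can be checked directly for the length-13 Barker code; the sketch below computes its aperiodic autocorrelation and confirms a peak of 13 with sidelobe magnitudes of at most 1:

    ```python
    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    acf = np.correlate(barker13, barker13, mode='full')   # aperiodic autocorrelation
    peak = acf[len(barker13) - 1]                          # zero-lag value
    sidelobes = np.delete(acf, len(barker13) - 1)
    print(peak, np.abs(sidelobes).max())                   # 13 1
    ```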

  5. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471

  6. RPBS: Rotational Projected Binary Structure for point cloud representation

    NASA Astrophysics Data System (ADS)

    Fang, Bin; Zhou, Zhiwei; Ma, Tao; Hu, Fangyu; Quan, Siwen; Ma, Jie

    2018-03-01

    In this paper, we propose a novel three-dimensional local surface descriptor named RPBS for point cloud representation. First, the points cropped around the query point within a predefined radius are regarded as a local surface patch. Then, pose normalization is applied to the local surface to make the descriptor invariant to rotation. To obtain more information about the cropped surface, a multi-view representation is formed by successively rotating it along the coordinate axes. Further, orthogonal projections onto the three coordinate planes are adopted to construct two-dimensional distribution matrices, and each matrix is binarized: a grid cell is set to one if it is occupied and to zero otherwise. We calculate the binary maps from all the viewpoints and concatenate them together as the final descriptor. Comparative experiments evaluating our proposed descriptor are conducted on the standard Bologna dataset against several state-of-the-art 3D descriptors, and the results show that our descriptor achieves the best performance in feature matching experiments.
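
    A minimal sketch of the projection-and-binarization step described above (the grid size and data are assumptions, not the authors' exact parameters):

    ```python
    import numpy as np

    def binary_projection(points, bins=8):
        """points: (N, 3) pose-normalized patch; returns a bins x bins 0/1 occupancy map (XY plane)."""
        hist, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
        return (hist > 0).astype(np.uint8)

    patch = np.random.default_rng(2).normal(size=(500, 3))   # placeholder local surface patch
    print(binary_projection(patch))
    ```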

  7. Structural and dielectric behaviors of Bi4Ti3O12 - lyotropic liquid crystalline nanocolloids

    NASA Astrophysics Data System (ADS)

    Shukla, Ravi K.; Raina, K. K.

    2018-03-01

    We investigated the structural and dielectric dynamics of nanocolloids comprising lyotropic liquid crystals and bismuth titanate (Bi4Ti3O12) spherical nanoparticles (≈16-18 nm) at concentrations of 0.05 and 0.1 wt%. The lyotropic liquid crystalline mixture was prepared from a binary mixture of cetylpyridinium chloride and ethylene glycol mixed in a 5:95 wt% ratio. The binary lyotropic mixture exhibited a hexagonal lyotropic phase. Structural and textural characterizations of the nanocolloids infer that the nanoparticles were homogeneously dispersed in the liquid crystalline matrix and did not perturb the hexagonal ordering of the lyotropic phase. The dielectric constant and dielectric strength were found to increase with the Bi4Ti3O12 nanoparticle concentration in the lyotropic matrix. A significant increase of one order of magnitude was observed in the ac conductivity of the colloidal systems as compared to the non-doped lyotropic liquid crystal. Relaxation parameters of the non-doped lyotropic liquid crystal and colloidal systems were computed and correlated with other parameters.

  8. Multiprocessor sparse L/U decomposition with controlled fill-in

    NASA Technical Reports Server (NTRS)

    Alaghband, G.; Jordan, H. F.

    1985-01-01

    Generation of the maximal compatibles of pivot elements for a class of small sparse matrices is studied. The algorithm involves a binary tree search and has a complexity exponential in the order of the matrix. Different strategies for selection of a set of compatible pivots based on the Markowitz criterion are investigated. The competing issues of parallelism and fill-in generation are studied and results are provided. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. This technique generates a set of compatible pivots with the property of generating few fills. A new heuristic algorithm is then proposed that combines the idea of an ordered compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. Finally, an elimination set to reduce the matrix is selected. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices are presented and analyzed.
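
    As a small illustration of the Markowitz criterion mentioned above (a toy matrix, not the application matrices used in the study), the sketch below scores each nonzero candidate pivot a_ij by (r_i - 1)(c_j - 1), the usual upper bound on the fill it can create, and picks the cheapest one:

    ```python
    import numpy as np

    A = np.array([[4., 1., 1., 1.],
                  [1., 3., 0., 0.],
                  [1., 0., 5., 0.],
                  [1., 0., 0., 6.]])
    r = (A != 0).sum(axis=1)                    # nonzeros per row
    c = (A != 0).sum(axis=0)                    # nonzeros per column
    cost = np.where(A != 0, np.outer(r - 1, c - 1), np.inf)   # Markowitz count per candidate pivot
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    print(int(i), int(j), cost[i, j])           # picks a pivot off the dense first row/column
    ```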

  9. Classification of skin cancer images using local binary pattern and SVM classifier

    NASA Astrophysics Data System (ADS)

    Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra

    2016-11-01

    In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP computes local texture information from the skin cancer images, which is then used to compute statistical features capable of discriminating melanoma and non-melanoma skin tissues. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a good classification accuracy of 76.1% with a sensitivity of 75.6% and a specificity of 76.7%.
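
    A hedged sketch of this LBP-plus-SVM pipeline on placeholder data; parameters such as the LBP radius, histogram binning and SVM kernel are assumptions, not the paper's settings:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(image, points=8, radius=1):
        """Normalized histogram of uniform LBP codes for one grayscale image."""
        lbp = local_binary_pattern(image, points, radius, method='uniform')
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        return hist

    rng = np.random.default_rng(3)
    images = (rng.random((40, 64, 64)) * 255).astype(np.uint8)   # placeholder images, not dermoscopy data
    labels = rng.integers(0, 2, size=40)                          # placeholder benign/malignant labels

    features = np.array([lbp_histogram(im) for im in images])
    clf = SVC(kernel='rbf').fit(features, labels)
    print(clf.predict(features[:5]))
    ```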

  10. Computation of elementary modes: a unifying framework and the new binary approach

    PubMed Central

    Gagneur, Julien; Klamt, Steffen

    2004-01-01

    Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. Recent years have seen a proliferation of algorithms dedicated to it, calling for a unifying point of view and continued improvement of the current methods. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, software that is free for academic use. The binary approach decreases the memory demand by up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509

  11. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.

  12. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is 1, the negative log-likelihood function is convex. Therefore, the optimal solution can be obtained using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its products with a vector and with the Hessian matrix. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
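
    For the threshold T = 1 case, the MLE has a simple closed form when a single light intensity is estimated from many binary pixels; the sketch below illustrates this simplified setting only and omits the spatial filter-bank machinery of the full reconstruction algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    true_lambda, K = 1.7, 10_000                  # photons per pixel and number of binary pixels

    # Each binary pixel fires when it receives at least one photon (threshold T = 1).
    fires = rng.poisson(true_lambda, size=K) >= 1

    # P(fire) = 1 - exp(-lambda), so the minimizer of the convex negative
    # log-likelihood is available in closed form for this simplified setting.
    lam_hat = -np.log(1.0 - fires.mean())
    print(round(lam_hat, 3))                      # close to 1.7
    ```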

  13. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
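
    A brief sketch of weighting scheme (i), sample-size-weighted meta-analysis of Z-scores using the commonly used effective sample size for case-control studies; this is shown as an illustration of the idea rather than the authors' exact protocol:

    ```python
    import numpy as np

    def meta_z(z_scores, n_cases, n_controls):
        """Sample-size-weighted meta-analysis of per-study Z-scores."""
        z = np.asarray(z_scores, dtype=float)
        n_eff = 4.0 / (1.0 / np.asarray(n_cases) + 1.0 / np.asarray(n_controls))  # effective sample size
        w = np.sqrt(n_eff)
        return float((w * z).sum() / np.sqrt((w ** 2).sum()))

    print(meta_z([2.1, -0.5, 1.8], [900, 400, 1500], [1100, 4000, 1600]))
    ```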

  14. Rock Content Influence on Soil Hydraulic Properties

    NASA Astrophysics Data System (ADS)

    Parajuli, K.; Sadeghi, M.; Jones, S. B.

    2015-12-01

    Soil hydraulic properties, including the soil water retention curve (SWRC) and hydraulic conductivity function, are important characteristics of soil affecting a variety of soil properties and processes. The hydraulic properties are commonly measured for sieved soils (i.e., particles < 2 mm), but many natural soils include rock fragments of varying size that alter the bulk hydraulic properties. Relatively few studies have addressed this important problem using physically-based concepts. Motivated by this knowledge gap, we set out to describe the soil hydraulic properties of binary mixtures (i.e., rock fragment inclusions in a soil matrix) based on the individual properties of the rock and soil. As a first step of this study, special attention was devoted to the SWRC, where the impact of rock content on the SWRC was quantified using laboratory experiments for six different mixing ratios of soil matrix and rock. The SWRC for each mixture was obtained from water mass and water potential measurements. The resulting data for the studied mixtures yielded a family of SWRCs indicating how the SWRC of the mixture is related to that of the individual media, i.e., soil and rock. A consistent model was also developed to describe the hydraulic properties of the mixture as a function of the individual properties of the rock and soil matrix. Key words: Soil hydraulic properties, rock content, binary mixture, experimental data.

  15. Growth of vertically aligned nanowires in metal-oxide nanocomposites: kinetic Monte-Carlo modeling versus experiments.

    PubMed

    Hennes, M; Schuler, V; Weng, X; Buchwald, J; Demaille, D; Zheng, Y; Vidal, F

    2018-04-26

    We employ kinetic Monte-Carlo simulations to study the growth process of metal-oxide nanocomposites obtained via sequential pulsed laser deposition. Using Ni-SrTiO3 (Ni-STO) as a model system, we reduce the complexity of the computational problem by choosing a coarse-grained approach mapping Sr, Ti and O atoms onto a single effective STO pseudo-atom species. With this ansatz, we scrutinize the kinetics of the sequential synthesis process, governed by alternating deposition and relaxation steps, and analyze the self-organization propensity of Ni atoms into straight vertically aligned nanowires embedded in the surrounding STO matrix. We finally compare the predictions of our binary toy model with experiments and demonstrate that our computational approach captures fundamental aspects of self-assembled nanowire synthesis. Despite its simplicity, our modeling strategy successfully describes the impact of relevant parameters like the concentration or laser frequency on the final nanoarchitecture of metal-oxide thin films grown via pulsed laser deposition.

  16. TernaryNet: faster deep model inference without GPUs for medical 3D segmentation using sparse and binary convolutions.

    PubMed

    Heinrich, Mattias P; Blendowski, Max; Oktay, Ozan

    2018-05-30

    Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPUs). We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing) with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also has great promise for improving accuracies in large-scale medical data retrieval.

  17. Zirconia toughened SiC whisker reinforced alumina composites small business innovation research

    NASA Technical Reports Server (NTRS)

    Loutfy, R. O.; Stuffle, K. L.; Withers, J. C.; Lee, C. T.

    1987-01-01

    The objective of this phase 1 project was to develop a ceramic composite with superior fracture toughness and high strength, based on combining two toughness inducing materials: zirconia for transformation toughening and SiC whiskers for reinforcement, in a controlled microstructure alumina matrix. The controlled matrix microstructure is obtained by controlling the nucleation frequency of the alumina gel with seeds (submicron alpha-alumina). The results demonstrate the technical feasibility of producing superior binary composites (Al2O3-ZrO2) and tertiary composites (Al2O3-ZrO2-SiC). Thirty-two composites were prepared, consolidated, and fracture toughness tested. Statistical analysis of the results showed that: (1) the SiC type is the key statistically significant factor for increased toughness; (2) sol-gel processing with alpha-alumina seed had a statistically significant effect on increasing toughness of the binary and tertiary composites compared to the corresponding mixed powder processing; and (3) ZrO2 content within the range investigated had a minor effect. Binary composites with an average critical fracture toughness of 6.6 MPa m^1/2 were obtained. Tertiary composites with critical fracture toughness in the range of 9.3 to 10.1 MPa m^1/2 were obtained. Results indicate that these composites are superior to zirconia toughened alumina and SiC whisker reinforced alumina ceramic composites produced by conventional techniques with similar composition from published data.

  18. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods

    NASA Astrophysics Data System (ADS)

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-01

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.
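
    A hedged sketch of the unconstrained NMF step on synthetic mixture "spectra" (plain scikit-learn NMF on placeholder data, not the smoothness-constrained CNMF variant developed in the paper):

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(6)
    components = rng.random((2, 100))                  # two hypothetical pure spectra
    weights = rng.random((20, 2))                      # mixing fractions for 20 measurements
    V = weights @ components + 0.01 * rng.random((20, 100))

    model = NMF(n_components=2, init='nndsvda', max_iter=500)
    W = model.fit_transform(V)                         # estimated mixing weights
    H = model.components_                              # estimated component spectra
    print(W.shape, H.shape)                            # (20, 2) (2, 100)
    ```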

  19. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods.

    PubMed

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-15

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates both obtained using the fast MCEM algorithm through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly with the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  1. A quick responding quartz crystal microbalance sensor array based on molecular imprinted polyacrylic acids coating for selective identification of aldehydes in body odor.

    PubMed

    Jha, Sunil K; Hayashi, Kenshi

    2015-03-01

    In the present work, a novel quartz crystal microbalance (QCM) sensor array has been developed for prompt identification of primary aldehydes in human body odor. Molecularly imprinted polymers (MIP) are prepared using a polyacrylic acid (PAA) polymer matrix and three organic acids (propenoic acid, hexanoic acid and octanoic acid) as template molecules, and utilized as QCM surface coating layers. The performance of the MIP films is characterized by the dynamic and static responses of a 4-element QCM sensor array (three elements coated with MIP layers and one with pure PAA for reference) to the target aldehydes hexanal, heptanal, and nonanal in single, binary, and ternary mixtures at distinct concentrations. The target aldehydes were selected after characterization of body odor samples with solid-phase microextraction gas chromatography-mass spectrometry (SPME-GC-MS). The hexanoic acid and octanoic acid imprinted PAA exhibit faster response and better sensitivity, selectivity and reproducibility than the propenoic acid imprinted and non-imprinted PAA in the array. The response time and recovery time for the hexanoic acid imprinted PAA are 5 s and 12 s, respectively, for typical concentrations of binary and ternary aldehyde mixtures using the static response. The dynamic sensor array response matrix has been processed with principal component analysis (PCA) for visual identification and with a support vector machine (SVM) classifier for quantitative identification of the target odors. Aldehyde odors were identified successfully in principal component (PC) space. The SVM classifier achieves a maximum recognition rate of 79% for three classes of binary odors and 83% when single, binary, and ternary odor classes are included, under 3-fold cross-validation. Copyright © 2014 Elsevier B.V. All rights reserved.
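
    The classification stage described above (PCA for visualization, SVM with 3-fold cross-validation for recognition) can be sketched with scikit-learn; the synthetic sensor responses and all parameter choices below are assumptions, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for the 4-element QCM array: each sample is a vector of
# frequency-shift features, labelled by aldehyde odour class (0, 1, 2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 4)) for c in range(3)])
y = np.repeat([0, 1, 2], 30)

clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel='rbf', C=1.0))
scores = cross_val_score(clf, X, y, cv=3)      # 3-fold cross-validation
print("mean recognition rate:", scores.mean())
```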

  2. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log2 N and the upper bound of 0.433 log2 N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various values of k (the number of queries) and N (the database size), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, ensuring further improvements will likely be made to reach the theorized lower bound.
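
    The storage saving is easiest to see in a generic SDP whose constraint data are stored as sparse matrices. The toy problem below is not the OSP constraint set from the paper; it is a hypothetical illustration using CVXPY with SciPy sparse constants, and the dimensions, densities, and the way b is generated (from a known feasible point) are all assumptions.

```python
import numpy as np
import scipy.sparse as sp
import cvxpy as cp

# Generic SDP with sparse data:  minimize trace(C @ X)
#   subject to  trace(A_k @ X) = b_k  and  X positive semidefinite.
n, m = 12, 4
rng = np.random.default_rng(0)
B = sp.random(n, n, density=0.1, random_state=1)
C = (B @ B.T).tocsc()                         # sparse PSD objective -> bounded problem
A = [sp.random(n, n, density=0.1, random_state=k) for k in range(m)]
A = [(Ak + Ak.T).tocsc() for Ak in A]         # symmetric sparse constraint matrices

X0 = np.diag(rng.random(n))                   # a known PSD point, so b is feasible
b = [float((Ak.toarray() * X0).sum()) for Ak in A]   # b_k = trace(A_k @ X0)

X = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(cp.Constant(A[k]) @ X) == b[k] for k in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(cp.Constant(C) @ X)), constraints)
prob.solve()
print(prob.status, round(prob.value, 4))
```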

  3. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models.

    PubMed

    Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong

    2016-04-07

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  4. Bayesian inference for unidirectional misclassification of a binary response trait.

    PubMed

    Xia, Michelle; Gustafson, Paul

    2018-03-15

    When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Combinatorial techniques to efficiently investigate and optimize organic thin film processing and properties.

    PubMed

    Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner

    2013-04-08

    In this article we present several newly developed and improved combinatorial techniques to optimize processing conditions and material properties of organic thin films. The combinatorial approach allows investigation of multi-variable dependencies and is well suited to studying organic thin films intended for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore, we demonstrate how combinations of composition and processing gradients can be applied to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is then carried out in very small areas arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow precise trends to be identified for the optimization of multi-variable dependent processes, which is demonstrated here on the lithographic patterning process. We thereby verify conclusively the strong interaction, and thus the interdependency, of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. The reported combinatorial techniques can be transferred to other multi-variable dependent processes and used to investigate and optimize thin-film layers and devices for optical, electro-optical, and electronic applications.

  6. Heat Source Characterization In A TREAT Fuel Particle Using Coupled Neutronics Binary Collision Monte-Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram

    This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ˜ 20µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the Finite Element based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications on the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.

  7. Effective Moment Feature Vectors for Protein Domain Structures

    PubMed Central

    Shi, Jian-Yu; Yiu, Siu-Ming; Zhang, Yan-Ning; Chin, Francis Yuk-Lun

    2013-01-01

    Image processing techniques have been shown to be useful in studying protein domain structures. The idea is to represent the pairwise distances of any two residues of the structure in a 2D distance matrix (DM). Features and/or submatrices are extracted from this DM to represent a domain. Existing approaches, however, may involve a large number of features (100–400) or complicated mathematical operations. Finding fewer but more effective features is always desirable. In this paper, based on some key observations on DMs, we are able to decompose a DM image into four basic binary images, each representing the structural characteristics of a fundamental secondary structure element (SSE) or a motif in the domain. Using the concept of moments in image processing, we further derive 45 structural features based on the four binary images. Together with 4 features extracted from the basic images, we represent the structure of a domain using 49 features. We show that our feature vectors can represent domain structures effectively in terms of the following. (1) We show a higher accuracy for domain classification. (2) We show a clear and consistent distribution of domains using our proposed structural vector space. (3) We are able to cluster the domains according to our moment features and demonstrate a relationship between structural variation and functional diversity. PMID:24391828
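
    The moment computation at the heart of this pipeline is straightforward to illustrate. The sketch below computes a few translation-invariant central moments from a single binary image standing in for one of the SSE-specific sub-images; it is a hypothetical illustration, not the paper's 45-feature definition.

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    y, x = np.indices(img.shape)
    return float((x ** p * y ** q * img).sum())

def central_moment_features(binary_img):
    """A few translation-invariant central moments of one binary image."""
    m00 = raw_moment(binary_img, 0, 0)
    cx = raw_moment(binary_img, 1, 0) / m00
    cy = raw_moment(binary_img, 0, 1) / m00
    y, x = np.indices(binary_img.shape)
    mu20 = ((x - cx) ** 2 * binary_img).sum() / m00
    mu02 = ((y - cy) ** 2 * binary_img).sum() / m00
    mu11 = ((x - cx) * (y - cy) * binary_img).sum() / m00
    return np.array([mu20, mu02, mu11])

# Toy binary "distance-matrix" sub-image: a band around the main diagonal.
dm = (np.add.outer(np.arange(40), -np.arange(40)) ** 2 < 50).astype(float)
print(central_moment_features(dm))
```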

  8. Automatic histologically-closer classification of skin lesions.

    PubMed

    Rebouças Filho, Pedro Pedrosa; Peixoto, Solon Alves; Medeiros da Nóbrega, Raul Victor; Hemanth, D Jude; Medeiros, Aldisio Gonçalves; Sangaiah, Arun Kumar; de Albuquerque, Victor Hugo C

    2018-06-04

    According to the American Cancer Society, melanoma is one of the most common types of cancer in the world. In 2017, approximately 87,110 new cases of skin cancer were diagnosed in the United States alone. A dermatoscope is a tool that captures lesion images with high resolution and is one of the main clinical tools to diagnose, evaluate and monitor this disease. This paper presents a new approach to classify melanoma automatically using the structural co-occurrence matrix (SCM) of main frequencies extracted from dermoscopy images. The main advantage of this approach is that it turns the SCM into an adaptive feature extractor, improving its discriminative power while using only the image as a parameter. The images were collected from the International Skin Imaging Collaboration (ISIC) 2016, 2017 and Pedro Hispano Hospital (PH2) datasets. Specificity (Spe), sensitivity (Sen), positive predictive value, F Score, Harmonic Mean, accuracy (Acc) and area under the curve (AUC) were used to verify the efficiency of the SCM. The results show that the SCM in the frequency domain works automatically and obtains better results than local binary patterns, the gray-level co-occurrence matrix and Hu's invariant moments, as well as recent works on the same datasets. The results of the proposed approach were: Spe 95.23%, 92.15% and 99.4%, Sen 94.57%, 89.9% and 99.2%, Acc 94.5%, 89.93% and 99%, and AUC 92%, 90% and 99% on the ISIC 2016, ISIC 2017 and PH2 datasets, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Parameter estimation accuracies of Galactic binaries with eLISA

    NASA Astrophysics Data System (ADS)

    Błaut, Arkadiusz

    2018-09-01

    We study the parameter estimation accuracy of nearly monochromatic sources of gravitational waves with future eLISA-like detectors. eLISA will be capable of observing millions of such signals generated by orbiting pairs of compact binaries consisting of white dwarfs, neutron stars or black holes, and of resolving and estimating the parameters of several thousand of them, providing crucial information regarding their orbital dynamics, formation rates and evolutionary paths. Using the Fisher matrix analysis we compare accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team established to assess the scientific capabilities and the technological issues of the eLISA-like missions.
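
    The Fisher-matrix step for a nearly monochromatic signal can be sketched in a few lines: build the partial derivatives of the waveform with respect to the parameters, form F_ij = sum_k d_i h(t_k) d_j h(t_k) / sigma^2, and invert. The toy amplitude, frequency, noise level, and parameter rescaling below are illustrative assumptions, not the detector model used in the paper.

```python
import numpy as np

# One-year toy data stream for a nearly monochromatic Galactic binary,
# h(t) = A * sin(2*pi*f*t + phi); the per-sample noise sigma is an assumption.
T = 3.15e7                              # observation time [s]
t = np.linspace(0.0, T, 20000)
A, f, phi, sigma = 1e-21, 2e-3, 0.3, 1e-20
arg = 2 * np.pi * f * t + phi

# Analytic partial derivatives w.r.t. the rescaled parameters (ln A, f*T, phi);
# rescaling keeps the Fisher matrix numerically well conditioned.
d_lnA = A * np.sin(arg)
d_fT = A * 2 * np.pi * (t / T) * np.cos(arg)
d_phi = A * np.cos(arg)
D = np.vstack([d_lnA, d_fT, d_phi])

F = D @ D.T / sigma ** 2                # Fisher matrix
cov = np.linalg.inv(F)                  # Cramer-Rao covariance approximation
print("1-sigma errors for (ln A, f*T, phi):", np.sqrt(np.diag(cov)))
```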

  10. Java implementation of Class Association Rule algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamura, Makio

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and the phenotype profile is represented by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.

  11. Adding localization information in a fingerprint binary feature vector representation

    NASA Astrophysics Data System (ADS)

    Bringer, Julien; Despiegel, Vincent; Favre, Mélanie

    2011-06-01

    At BTAS'10, a new framework was described for transforming a fingerprint minutiae template into a binary feature vector of fixed length. A fingerprint is characterized by its similarity to a fixed-size set of representative local minutiae vicinities. This representative-based approach leads to a fixed-length binary representation and, because it is local, it can cope with local distortions that may occur between two acquisitions. We extend this construction to incorporate additional information in the binary vector, in particular on the localization of the vicinities. We explore the use of position and orientation information. The performance improvement is promising for use in fast identification algorithms or in privacy protection algorithms.

  12. Self-Consistent Sources for Integrable Equations Via Deformations of Binary Darboux Transformations

    NASA Astrophysics Data System (ADS)

    Chvartatskyi, Oleksandr; Dimakis, Aristophanes; Müller-Hoissen, Folkert

    2016-08-01

    We reveal the origin and structure of self-consistent source extensions of integrable equations from the perspective of binary Darboux transformations. They arise via a deformation of the potential that is central in this method. As examples, we obtain in particular matrix versions of self-consistent source extensions of the KdV, Boussinesq, sine-Gordon, nonlinear Schrödinger, KP, Davey-Stewartson, two-dimensional Toda lattice and discrete KP equation. We also recover a (2+1)-dimensional version of the Yajima-Oikawa system from a deformation of the pKP hierarchy. By construction, these systems are accompanied by a hetero binary Darboux transformation, which generates solutions of such a system from a solution of the source-free system and additionally solutions of an associated linear system and its adjoint. The essence of all this is encoded in universal equations in the framework of bidifferential calculus.

  13. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.

  14. Scaled Particle Theory for Multicomponent Hard Sphere Fluids Confined in Random Porous Media.

    PubMed

    Chen, W; Zhao, S L; Holovko, M; Chen, X S; Dong, W

    2016-06-23

    The formulation of scaled particle theory (SPT) is presented for a quite general model of fluids confined in random porous media, i.e., a multicomponent hard sphere (HS) fluid in a multicomponent hard sphere or a multicomponent overlapping hard sphere (OHS) matrix. The analytical expressions for pressure, Helmholtz free energy, and chemical potential are derived. The thermodynamic consistency of the proposed theory is established. Moreover, we show that there is an isomorphism between the SPT for a multicomponent system and that for a one-component system. Results from grand canonical ensemble Monte Carlo simulations are also presented for a binary HS mixture in a one-component HS or a one-component OHS matrix. The accuracy of various variants derived from the basic SPT formulation is appraised against the simulation results. Scaled particle theory, initially formulated for a bulk HS fluid, has not only provided an analytical tool for calculating thermodynamic properties of HS fluids but also helped to gain very useful insight for elaborating other theoretical approaches such as the fundamental measure theory (FMT). We expect that the general SPT for multicomponent systems developed in this work can contribute to the study of confined fluids in a similar way.

  15. Advanced three-dimensional electron microscopy techniques in the quest for better structural and functional materials

    PubMed Central

    Schryvers, D; Cao, S; Tirry, W; Idrissi, H; Van Aert, S

    2013-01-01

    After a short review of electron tomography techniques for materials science, this overview will cover some recent results on different shape memory and nanostructured metallic systems obtained by various three-dimensional (3D) electron imaging techniques. In binary Ni–Ti, the 3D morphology and distribution of Ni4Ti3 precipitates are investigated by using FIB/SEM slice-and-view yielding 3D data stacks. Different quantification techniques will be presented including the principal ellipsoid for a given precipitate, shape classification following a Zingg scheme, particle distribution function, distance transform and water penetration. The latter is a novel approach to quantifying the expected matrix transformation in between the precipitates. The different samples investigated include a single crystal annealed with and without compression yielding layered and autocatalytic precipitation, respectively, and a polycrystal revealing different densities and sizes of the precipitates resulting in a multistage transformation process. Electron tomography was used to understand the interaction between focused ion beam-induced Frank loops and long dislocation structures in nanobeams of Al exhibiting special mechanical behaviour measured by on-chip deposition. Atomic resolution electron tomography is demonstrated on Ag nanoparticles in an Al matrix. PMID:27877554

  16. A Data Driven Network Approach to Rank Countries Production Diversity and Food Specialization

    PubMed Central

    Tu, Chengyi; Carr, Joel

    2016-01-01

    The easy access to large data sets has allowed for leveraging methodology in network physics and complexity science to disentangle patterns and processes directly from the data, leading to key insights into the behavior of systems. Here we use country-specific food production data to study binary and weighted topological properties of the bipartite country-food production matrix. This country-food production matrix can be: 1) transformed into overlap matrices which embed information regarding shared production of products among countries and/or shared countries for individual products, 2) used to identify subsets of countries which produce similar commodities, or subsets of commodities shared by a given country, allowing for visualization of correlations in large networks, and 3) used to rank country fitness (the ability to produce a diverse array of products weighted on the type of food commodities) and food specialization (quantified by the number of countries producing a specific food product weighted on their fitness). Our results show that, on average, countries with high fitness produce both low and high specialization food commodities, whereas nations with low fitness tend to produce a small basket of diverse food products, typically comprised of low-specialization food commodities. PMID:27832118
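
    The fitness and specialization rankings mentioned above are usually obtained with a nonlinear iterative map over the binary country-product matrix. A minimal sketch of such an iteration is given below on a random toy matrix; the matrix, the normalization choice and the iteration count are assumptions, not the FAO production data or the exact scheme of the paper.

```python
import numpy as np

# Toy binary country-product matrix M: rows = countries, columns = food products.
rng = np.random.default_rng(0)
M = (rng.random((30, 50)) < 0.3).astype(float)

F = np.ones(M.shape[0])                        # country fitness
Q = np.ones(M.shape[1])                        # product specialization / complexity
for _ in range(100):
    F_new = M @ Q                              # fit countries produce many complex products
    Q_new = 1.0 / (M.T @ (1.0 / F) + 1e-12)    # products made by unfit countries are "easy"
    F = F_new / F_new.mean()                   # normalise each step to avoid divergence
    Q = Q_new / Q_new.mean()

print("top-5 fittest countries:", np.argsort(F)[::-1][:5])
```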

  17. A Data Driven Network Approach to Rank Countries Production Diversity and Food Specialization.

    PubMed

    Tu, Chengyi; Carr, Joel; Suweis, Samir

    2016-01-01

    The easy access to large data sets has allowed for leveraging methodology in network physics and complexity science to disentangle patterns and processes directly from the data, leading to key insights into the behavior of systems. Here we use country-specific food production data to study binary and weighted topological properties of the bipartite country-food production matrix. This country-food production matrix can be: 1) transformed into overlap matrices which embed information regarding shared production of products among countries and/or shared countries for individual products, 2) used to identify subsets of countries which produce similar commodities, or subsets of commodities shared by a given country, allowing for visualization of correlations in large networks, and 3) used to rank country fitness (the ability to produce a diverse array of products weighted on the type of food commodities) and food specialization (quantified by the number of countries producing a specific food product weighted on their fitness). Our results show that, on average, countries with high fitness produce both low and high specialization food commodities, whereas nations with low fitness tend to produce a small basket of diverse food products, typically comprised of low-specialization food commodities.

  18. Emergence of small-world structure in networks of spiking neurons through STDP plasticity.

    PubMed

    Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas

    2011-01-01

    In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
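
    A sketch of the measurement step described above, using NetworkX: threshold a weight matrix into a binary connection matrix, then compare clustering and path length against a density-matched random graph to obtain a small-world index. The toy weight matrix, the threshold rule, and the undirected simplification are assumptions for illustration.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 200
W = rng.random((n, n)) * (rng.random((n, n)) < 0.05)   # toy excitatory weight matrix
np.fill_diagonal(W, 0.0)

# Threshold the weights to obtain a binary directed connection matrix, then
# simplify to an undirected graph for the topological measures.
A = (W > 0.5 * W[W > 0].mean()).astype(int)
G = nx.from_numpy_array(A, create_using=nx.DiGraph)
Gu = G.to_undirected()

C = nx.average_clustering(Gu)
L = nx.average_shortest_path_length(Gu)                # assumes the graph is connected
R = nx.gnm_random_graph(n, Gu.number_of_edges(), seed=0)
C_rand = nx.average_clustering(R)
L_rand = nx.average_shortest_path_length(R)
print("small-world index:", (C / C_rand) / (L / L_rand))
```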

  19. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

    Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulations show good agreement with the experimental results. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.

  20. Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.

    PubMed

    Youji Feng; Lixin Fan; Yihong Wu

    2016-01-01

    The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and the descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough in indexing binary features and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses non-binary pixel intensity differences available in descriptor extraction. By using the proposed indexing approach, matching binary features is no longer slower than matching SIFT features but in fact slightly faster. Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude as compared with state-of-the-art methods, while comparable registration rate and localization accuracy are still maintained.

  1. Contrasting performance of donor-acceptor copolymer pairs in ternary blend solar cells and two-acceptor copolymers in binary blend solar cells.

    PubMed

    Khlyabich, Petr P; Rudenko, Andrey E; Burkhart, Beate; Thompson, Barry C

    2015-02-04

    Here two contrasting approaches to polymer-fullerene solar cells are compared. In the first approach, two distinct semi-random donor-acceptor copolymers are blended with phenyl-C61-butyric acid methyl ester (PC61BM) to form ternary blend solar cells. The two poly(3-hexylthiophene)-based polymers contain either the acceptor thienopyrroledione (TPD) or diketopyrrolopyrrole (DPP). In the second approach, semi-random donor-acceptor copolymers containing both TPD and DPP acceptors in the same polymer backbone, termed two-acceptor polymers, are blended with PC61BM to give binary blend solar cells. The two approaches result in bulk heterojunction solar cells that have the same molecular active-layer components but differ in the manner in which these molecular components are mixed, either by physical mixing (ternary blend) or chemical "mixing" in the two-acceptor (binary blend) case. Optical properties and photon-to-electron conversion efficiencies of the binary and ternary blends were found to have similar features and were described as a linear combination of the individual components. At the same time, significant differences were observed in the open-circuit voltage (Voc) behaviors of binary and ternary blend solar cells. While in the case of the two-acceptor polymers the Voc was found to be in the range of 0.495-0.552 V, ternary blend solar cells showed behavior inherent to organic alloy formation, displaying an intermediate, composition-dependent and tunable Voc in the range from 0.582 to 0.684 V, significantly exceeding the values achieved in the two-acceptor containing binary blend solar cells. Despite the differences between the physical and chemical mixing approaches, both pathways provided solar cells with similar power conversion efficiencies, highlighting the advantages of both pathways toward highly efficient organic solar cells.

  2. Behavior of Sn atoms in GeSn thin films during thermal annealing: Ex-situ and in-situ observations

    NASA Astrophysics Data System (ADS)

    Takase, Ryohei; Ishimaru, Manabu; Uchida, Noriyuki; Maeda, Tatsuro; Sato, Kazuhisa; Lieten, Ruben R.; Locquet, Jean-Pierre

    2016-12-01

    Thermally induced crystallization processes for amorphous GeSn thin films with Sn concentrations beyond the solubility limit of the bulk crystal Ge-Sn binary system have been examined by X-ray photoelectron spectroscopy, grazing incidence X-ray diffraction, and (scanning) transmission electron microscopy. We paid special attention to the behavior of Sn before and after recrystallization. In the as-deposited specimens, Sn atoms were homogeneously distributed in an amorphous matrix. Prior to crystallization, an amorphous-to-amorphous phase transformation associated with the rearrangement of Sn atoms was observed during heat treatment; this transformation is reversible with respect to temperature. Remarkable recrystallization occurred at temperatures above 400 °C, and Sn atoms were ejected from the crystallized GeSn matrix. The segregation of Sn became more pronounced with increasing annealing temperature, and the ejected Sn existed as a liquid phase. It was found that the molten Sn remains as a supercooled liquid below the eutectic temperature of the Ge-Sn binary system during the cooling process, and finally, β-Sn precipitates were formed at ambient temperature.

  3. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender determination is an important step for human-computer interaction processes and identification. The human face image is one of the important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from face, eye and lip regions using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features have been extracted from automatically obtained face, eye and lip regions. All of the extracted features have been combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor methods) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. In the experimental studies, the highest success rate, 98%, was achieved using the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
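
    The feature-extraction step described above is easy to sketch with scikit-image: compute a local binary pattern histogram and a few gray-level co-occurrence statistics per region and concatenate them. The toy patch, LBP parameters, GLCM properties, and bin counts below are illustrative assumptions (and older scikit-image releases spell the functions greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbp_glcm_features(gray_region):
    """Concatenate an LBP histogram with a few GLCM statistics for one region
    (e.g. a cropped face, eye, or lip patch)."""
    lbp = local_binary_pattern(gray_region, P=8, R=1, method='uniform')
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray_region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, p).mean() for p in
             ('contrast', 'homogeneity', 'energy', 'correlation')]
    return np.concatenate([hist, stats])

# Toy 8-bit "face patch"; a real pipeline would first crop face/eye/lip regions,
# then feed the concatenated vectors to an SVM or other classifier.
patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(lbp_glcm_features(patch).shape)
```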

  4. Gyroid Structures at Highly Asymmetric Volume Fractions by Blending of ABC Triblock Terpolymer and AB Diblock Copolymer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Seonghyeon; Kwak, Jongheon; Choi, Chungryong

    Here, we investigated, via small angle X-ray scattering and transmission electron microscopy, the morphologies of a binary blend of polyisoprene-b-polystyrene-b-poly(2-vinylpyridine) (ISP) triblock terpolymer and polyisoprene-b-polystyrene (IS) diblock copolymer. An asymmetric ISP with volume fractions (f) of 0.12, 0.75, and 0.13 for the PI, PS, and P2VP blocks, respectively, showed a new morphology: coexistence of spheres and cylinders with tetragonal packing. Asymmetric IS with f_I = 0.11 and f_S = 0.89 showed conventional body-centered cubic spherical microdomains. Very interestingly, a binary blend of ISP and IS with overall volume fractions of f_I = 0.12, f_S = 0.79, and f_P = 0.09 exhibited a core-shell double gyroid (CSG: Q230 space group), where PI forms a thin core and PS forms a thick shell, while P2VP becomes a thin matrix. It is very unusual to form a highly asymmetric CSG with the matrix having a very small volume fraction (0.09).

  5. Gyroid Structures at Highly Asymmetric Volume Fractions by Blending of ABC Triblock Terpolymer and AB Diblock Copolymer

    DOE PAGES

    Ahn, Seonghyeon; Kwak, Jongheon; Choi, Chungryong; ...

    2017-11-08

    Here, we investigated, via small angle X-ray scattering and transmission electron microscopy, the morphologies of a binary blend of polyisoprene-b-polystyrene-b-poly(2-vinylpyridine) (ISP) triblock terpolymer and polyisoprene-b-polystyrene (IS) diblock copolymer. An asymmetric ISP with volume fractions (f) of 0.12, 0.75, and 0.13 for the PI, PS, and P2VP blocks, respectively, showed a new morphology: coexistence of spheres and cylinders with tetragonal packing. Asymmetric IS with f_I = 0.11 and f_S = 0.89 showed conventional body-centered cubic spherical microdomains. Very interestingly, a binary blend of ISP and IS with overall volume fractions of f_I = 0.12, f_S = 0.79, and f_P = 0.09 exhibited a core-shell double gyroid (CSG: Q230 space group), where PI forms a thin core and PS forms a thick shell, while P2VP becomes a thin matrix. It is very unusual to form a highly asymmetric CSG with the matrix having a very small volume fraction (0.09).

  6. Deep-UV emission at 219 nm from ultrathin MBE GaN/AlN quantum heterostructures

    NASA Astrophysics Data System (ADS)

    Islam, S. M.; Protasenko, Vladimir; Lee, Kevin; Rouvimov, Sergei; Verma, Jai; Xing, Huili Grace; Jena, Debdeep

    2017-08-01

    Deep ultraviolet (UV) optical emission below 250 nm (˜5 eV) in semiconductors is traditionally obtained from high aluminum containing AlGaN alloy quantum wells. It is shown here that high-quality epitaxial ultrathin binary GaN quantum disks embedded in an AlN matrix can produce efficient optical emission in the 219-235 nm (˜5.7-5.3 eV) spectral range, far above the bulk bandgap (3.4 eV) of GaN. The quantum confinement energy in these heterostructures is larger than the bandgaps of traditional semiconductors, made possible by the large band offsets. These molecular beam epitaxy-grown extreme quantum-confinement GaN/AlN heterostructures exhibit an internal quantum efficiency of 40% at wavelengths as short as 219 nm. These observations together with the ability to engineer the interband optical matrix elements to control the direction of photon emission in such binary quantum disk active regions offer unique advantages over alloy AlGaN quantum well counterparts for the realization of deep-UV light-emitting diodes and lasers.
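
    The claim that confinement alone can push emission from 3.4 eV to well above 5 eV can be checked with a back-of-envelope particle-in-a-box estimate. The sketch below uses an infinite-well approximation with textbook-style effective masses and a roughly 1 nm disk thickness; these numbers are assumptions, not the paper's values, and finite barriers and excitonic effects are ignored.

```python
import numpy as np

hbar = 1.054571817e-34       # J*s
m0 = 9.1093837015e-31        # electron rest mass [kg]
eV = 1.602176634e-19         # J per eV

E_gap = 3.4                  # bulk GaN band gap [eV]
L = 1.0e-9                   # assumed disk thickness, roughly a few monolayers [m]
m_e, m_h = 0.20 * m0, 1.4 * m0   # assumed electron / heavy-hole effective masses

def confinement(m, L):
    """Ground-state energy of an infinite square well: (hbar*pi)^2 / (2*m*L^2)."""
    return (hbar * np.pi) ** 2 / (2 * m * L ** 2) / eV

E_emit = E_gap + confinement(m_e, L) + confinement(m_h, L)
print(f"estimated emission: {E_emit:.2f} eV (~{1239.84 / E_emit:.0f} nm)")
```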

  7. A new bidirectional generalization of (2+1)-dimensional matrix k-constrained Kadomtsev-Petviashvili hierarchy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvartatskyi, O. I., E-mail: alex.chvartatskyy@gmail.com; Sydorenko, Yu. M., E-mail: y-sydorenko@franko.lviv.ua

    We introduce a new bidirectional generalization of the (2+1)-dimensional k-constrained Kadomtsev-Petviashvili (KP) hierarchy ((2+1)-BDk-cKPH). This new hierarchy generalizes the (2+1)-dimensional k-cKP hierarchy and the (t_A, τ_B) and (γ_A, σ_B) matrix hierarchies. (2+1)-BDk-cKPH contains a new matrix (1+1)-k-constrained KP hierarchy. Some members of (2+1)-BDk-cKPH are also listed. In particular, it contains matrix generalizations of Davey-Stewartson (DS) systems, the (2+1)-dimensional modified Korteweg-de Vries equation and the Nizhnik equation. (2+1)-BDk-cKPH also includes new matrix (2+1)-dimensional generalizations of the Yajima-Oikawa and Melnikov systems. A binary Darboux transformation dressing method is also proposed for the construction of exact solutions of equations from (2+1)-BDk-cKPH. As an example, the exact form of multi-soliton solutions for a vector generalization of the DS system is given.

  8. Dynamic heterogeneities and non-Gaussian behavior in two-dimensional randomly confined colloidal fluids

    NASA Astrophysics Data System (ADS)

    Schnyder, Simon K.; Skinner, Thomas O. E.; Thorneywork, Alice L.; Aarts, Dirk G. A. L.; Horbach, Jürgen; Dullens, Roel P. A.

    2017-03-01

    A binary mixture of superparamagnetic colloidal particles is confined between glass plates such that the large particles become fixed and provide a two-dimensional disordered matrix for the still mobile small particles, which form a fluid. By varying fluid and matrix area fractions and tuning the interactions between the superparamagnetic particles via an external magnetic field, different regions of the state diagram are explored. The mobile particles exhibit delocalized dynamics at small matrix area fractions and localized motion at high matrix area fractions, and the localization transition is rounded by the soft interactions [T. O. E. Skinner et al., Phys. Rev. Lett. 111, 128301 (2013), 10.1103/PhysRevLett.111.128301]. Expanding on previous work, we find the dynamics of the tracers to be strongly heterogeneous and show that molecular dynamics simulations of an ideal gas confined in a fixed matrix exhibit similar behavior. The simulations show how these soft interactions make the dynamics more heterogeneous compared to the disordered Lorentz gas and lead to strong non-Gaussian fluctuations.

  9. A Bayesian Approach for Nonlinear Structural Equation Models with Dichotomous Variables Using Logit and Probit Links

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Cai, Jing-Heng

    2010-01-01

    Analysis of ordered binary and unordered binary data has received considerable attention in social and psychological research. This article introduces a Bayesian approach, which has several nice features in practical applications, for analyzing nonlinear structural equation models with dichotomous data. We demonstrate how to use the software…

  10. Thermodynamics and kinetics of binary nucleation in ideal-gas mixtures.

    PubMed

    Alekseechkin, Nikolay V

    2015-08-07

    The nonisothermal single-component theory of droplet nucleation [N. V. Alekseechkin, Physica A 412, 186 (2014)] is extended to the binary case; the droplet volume V, composition x, and temperature T are the variables of the theory. An approach based on macroscopic kinetics (in contrast to the standard microscopic model of nucleation operating with the probabilities of monomer attachment and detachment) is developed for the droplet evolution and results in the derived droplet motion equations in the space (V, x, T): equations for V̇ ≡ dV/dt, ẋ, and Ṫ. The work W(V, x, T) of the droplet formation is obtained in the vicinity of the saddle point as a quadratic form with a diagonal matrix. Also, the problem of generalizing the single-component Kelvin equation for the equilibrium vapor pressure to the binary case is solved; it is presented here as a problem of integrability of a Pfaffian equation. The equation for Ṫ is shown to be the first law of thermodynamics for the droplet, which is a consequence of Onsager's reciprocal relations and the linked-fluxes concept. As an example of an ideal solution for demonstrative numerical calculations, the o-xylene-m-xylene system is employed. Both nonisothermal and enrichment effects are shown to exist; the mean steady-state overheat of droplets and their mean steady-state enrichment are calculated with the help of the 3D distribution function. Some qualitative peculiarities of the nucleation thermodynamics and kinetics in the water-sulfuric acid system are considered in the model of a regular solution. It is shown that there is a small kinetic parameter in the theory due to the small amount of the acid in the vapor and, as a consequence, the nucleation process is isothermal.

  11. Fabrication of bioinspired nanostructured materials via colloidal self-assembly

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Han

    Through millions of years of evolution, nature has created unique structures and materials that exhibit remarkable mechanical, optical, and physical properties. For instance, nacre (mother of pearl), bone and tooth show an excellent combination of strong minerals and elastic proteins acting as reinforced materials. Structured butterfly wings and moth eyes can selectively reflect or absorb light without dyes. The lotus leaf and the cicada wing are superhydrophobic, preventing water accumulation. The principles behind these biological capabilities, attributed to highly sophisticated structures with complex hierarchical designs, have been extensively studied. Recently, a large variety of novel materials have been enabled by nature-inspired designs and nanotechnologies. These advanced materials will have a large impact on practical applications. We have utilized bottom-up approaches to fabricate nacre-like nanocomposites with "brick and mortar" structures. First, we used self-assembly processes, including convective self-assembly, dip-coating, and electrophoretic deposition, to form well-oriented layered structures of synthesized gibbsite (aluminum hydroxide) nanoplatelets. A low-viscosity monomer was permeated into the layered nanoplatelets, followed by photo-curing. The gibbsite-polymer composite displays 2 times higher tensile strength and 3 times higher modulus when compared with the pure polymer. Further improvement occurred when surface-modified gibbsite platelets were cross-linked with the polymer matrix: we observed ~4 times higher strength and nearly 1 order of magnitude higher modulus than the pure polymer. To further improve the mechanical strength and toughness of inorganic-organic nanocomposites, we exploited ultrastrong graphene oxide (GO), a single-atom-thick hexagonal carbon sheet with pendant oxidation groups. The GO nanocomposite is made by co-filtrating a GO/polyvinyl alcohol suspension on a 0.2 µm pore-sized membrane. It shows ~2 times higher strength and ~15 times higher ultimate strain than nacre and pure GO paper (also synthesized by filtration). Specifically, it exhibits ~30 times higher fracture energy than filtrated graphene paper and nacre, and is ~100 times tougher than filtrated GO paper. Besides reinforced nanocomposites, we further explored the self-assembly of spherical colloids and the templating nanofabrication of moth-eye-inspired broadband antireflection coatings. Binary crystalline structures can be easily accomplished by spin-coating double-layer nonclose-packed colloidal crystals as templates, followed by colloidal templating. The polymer matrix between the self-assembled colloidal crystals has been used as a sacrificial template to define the resulting periodic binary nanostructures, including intercalated arrays of silica spheres and polymer posts, gold nanohole arrays with binary sizes, and dimple-nipple antireflection coatings. The binary-structured antireflection coatings exhibit better antireflective properties than unitary coatings. Natural optical structures and nanocomposites teach us a great deal about how to create high-performance artificial materials. The bottom-up technologies developed in this thesis are scalable and compatible with standard industrial processes, promising for manufacturing high-performance materials for the benefit of human beings.

  12. Automated Robust Image Segmentation: Level Set Method Using Nonnegative Matrix Factorization with Application to Brain MRI.

    PubMed

    Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M

    2016-07-01

    We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and the probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect as small distinct regions as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need of automated segmentation of clinical MRI.

  13. Structure-Function Network Mapping and Its Assessment via Persistent Homology

    PubMed Central

    2017-01-01

    Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514x2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
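
    A toy version of the structure-to-function mapping step is sketched below, using a communicability-style matrix exponential of the normalized structural matrix as one common matrix-function choice. The random structural matrix, the spreading-rate constant, the placeholder "empirical" functional matrix, and the correlation-based quality score are all assumptions for illustration, not the regularized mapping or persistent-homology measure used in the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 90                                             # e.g. a low-resolution parcellation
S = (rng.random((n, n)) < 0.1).astype(float)
S = np.triu(S, 1)
S = S + S.T                                        # symmetric binary structural matrix

deg = S.sum(axis=1)
S_norm = S / np.sqrt(np.outer(deg, deg) + 1e-12)   # degree-normalised adjacency
F_pred = expm(0.5 * S_norm)                        # predicted functional connectivity
                                                   # (0.5 is an assumed spreading rate)

# Quality of the mapping: correlate predicted and "empirical" FC off-diagonals.
F_emp = F_pred + 0.05 * rng.normal(size=(n, n))    # placeholder empirical FC
iu = np.triu_indices(n, 1)
print("prediction r:", np.corrcoef(F_pred[iu], F_emp[iu])[0, 1])
```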

  14. Binary Number System Training for Graduate Foreign Students at New York Institute of Technology.

    ERIC Educational Resources Information Center

    Sudsataya, Nuntawun

    This thesis describes the design, development, implementation, and evaluation of a training module to teach graduate foreign students the representation of the binary system and the method of decimal-binary conversion. The designer selected programmed instruction as the method of instruction and used the "lean" approach to…

  15. Recall of patterns using binary and gray-scale autoassociative morphological memories

    NASA Astrophysics Data System (ADS)

    Sussner, Peter

    2005-08-01

    Morphological associative memories (MAM's) belong to a class of artificial neural networks that perform the operations erosion or dilation of mathematical morphology at each node. Therefore we speak of morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice induced matrix operations in the mathematical theory of minimax algebra. Neural models of associative memories are usually concerned with the storage and the retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMM's) such as optimal absolute storage capacity and one-step convergence have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMM's by analyzing their fixed points and basins of attraction. We have shown in particular that the fixed points of binary AMM's correspond to the lattice polynomials in the original patterns. This paper extends these results in the following ways. In the first place, we provide an exact characterization of the fixed points of gray-scale AMM's in terms of combinations of the original patterns. Secondly, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a certain input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
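
    The gray-scale autoassociative memory described above can be written down in a few lines of NumPy. In the usual minimax (erosive) formulation, the memory is W[i, j] = min over stored patterns of (x_i - x_j) and recall is the max-plus product y_i = max_j (W[i, j] + x_j); stored patterns are then fixed points. The small integer patterns below are made up for illustration.

```python
import numpy as np

def build_amm(patterns):
    """Erosive autoassociative morphological memory: W[i, j] = min_xi (x_i - x_j)."""
    X = np.asarray(patterns, dtype=float)            # shape (num_patterns, n)
    diffs = X[:, :, None] - X[:, None, :]            # per-pattern outer differences
    return diffs.min(axis=0)

def recall(W, x):
    """Max-plus product: y_i = max_j (W[i, j] + x_j)."""
    return (W + np.asarray(x, dtype=float)[None, :]).max(axis=1)

patterns = [[3, 7, 1, 5], [2, 2, 8, 4], [6, 0, 3, 3]]
W = build_amm(patterns)
for p in patterns:
    assert np.allclose(recall(W, p), p)              # stored patterns are fixed points
print(recall(W, [3, 7, 0, 5]))                       # recall from an eroded input
```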

  16. Headspace quantification of pure and aqueous solutions of binary mixtures of key volatile organic compounds in Swiss cheeses using selected ion flow tube mass spectrometry.

    PubMed

    Castada, Hardy Z; Wick, Cheryl; Harper, W James; Barringer, Sheryl

    2015-01-15

    Twelve volatile organic compounds (VOCs) have recently been identified as key compounds in Swiss cheese with split defects. It is important to know how these VOCs interact in binary mixtures and if their behavior changes with concentration in binary mixtures. Selected ion flow tube mass spectrometry (SIFT-MS) was used for the headspace analysis of VOCs commonly found in Swiss cheeses. Headspace (H/S) sampling and quantification checks using SIFT-MS and further linear regression analyses were carried out on twelve selected aqueous solutions of VOCs. Five binary mixtures of standard solutions of VOCs were also prepared and the H/S profile of each mixture was analyzed. A very good fit of linearity for the twelve VOCs (95% confidence level) confirms direct proportionality between the H/S and the aqueous concentration of the standard solutions. Henry's Law coefficients were calculated with a high degree of confidence. SIFT-MS analysis of five binary mixtures showed that the more polar compounds reduced the H/S concentration of the less polar compounds, while the addition of a less polar compound increased the H/S concentration of the more polar compound. In the binary experiment, it was shown that the behavior of a compound in the headspace can be significantly affected by the presence of another compound. Thus, the matrix effect plays a significant role in the behavior of molecules in a mixed solution. Copyright © 2014 John Wiley & Sons, Ltd.
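
    As an illustration only (the abstract reports no raw data), estimating a dimensionless air/water partition coefficient of the Henry's law type from the linear headspace-versus-solution relationship described above could look like the following sketch; the concentration values are invented.

```python
# Illustrative sketch: fit the headspace-vs-aqueous linearity described above.
# All numbers are hypothetical placeholders, not data from the study.
import numpy as np

aqueous   = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # mg/L in solution (hypothetical)
headspace = np.array([0.11, 0.22, 0.41, 0.83, 1.64])  # mg/L in headspace (hypothetical)

slope, intercept = np.polyfit(aqueous, headspace, 1)   # slope ~ partition coefficient
r = np.corrcoef(aqueous, headspace)[0, 1]
print(f"dimensionless partition coefficient ~ {slope:.3f}, r^2 = {r**2:.4f}")
```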

  17. Systematic approach for simultaneously correcting the band-gap and p - d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  18. Systematic approach for simultaneously correcting the band-gap and p -d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-01

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  19. Grape skin phenolics as inhibitors of mammalian α-glucosidase and α-amylase--effect of food matrix and processing on efficacy.

    PubMed

    Lavelli, V; Sri Harsha, P S C; Ferranti, P; Scarafoni, A; Iametti, S

    2016-03-01

    Type-2 diabetes is continuously increasing worldwide. Hence, there is a need to develop functional foods that efficiently alleviate damage due to hyperglycaemia complications while meeting the criteria for a sustainable food processing technology. Inhibition of mammalian α-amylase and α-glucosidase was studied for white grape skin samples recovered from wineries and found to be higher than that of the drug acarbose. In white grape skins, quercetin and kaempferol derivatives, analysed by UPLC-DAD-MS, and the oligomeric series of catechin/epicatechin units and their gallic acid ester derivatives up to nonamers, analysed by MALDI-TOF-MS, were identified. White grape skin was then used for enrichment of a tomato puree (3%) and a flat bread (10%). White grape skin phenolics were found in the extract obtained from the enriched foods, except for the higher mass proanthocyanidin oligomers, mainly due to their binding to the matrix and to a lesser extent to heat degradation. Proanthocyanidin solubility was lower in bread, most probably due to formation of binary proanthocyanidin/protein complexes, than in tomato puree where possible formation of ternary proanthocyanidin/protein/pectin complexes can enhance solubility. Enzyme inhibition by the enriched foods was significantly higher than for unfortified foods. Hence, this in vitro approach provided a platform to study potential dietary agents to alleviate hyperglycaemia damage and suggested that grape skin phenolics could be effective even if the higher mass proanthocyanidins are bound to the food matrix.

  20. Examining Measurement Invariance and Differential Item Functioning with Discrete Latent Construct Indicators: A Note on a Multiple Testing Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja

    2018-01-01

    A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, utilizes similarly the false discovery rate approach to multiple testing, and…

  1. Simultaneous spectrophotometric determination of glimepiride and pioglitazone in binary mixture and combined dosage form using chemometric-assisted techniques

    NASA Astrophysics Data System (ADS)

    El-Zaher, Asmaa A.; Elkady, Ehab F.; Elwy, Hanan M.; Saleh, Mahmoud Abo El Makarim

    2017-07-01

    In the present work, pioglitazone and glimepiride, two widely used antidiabetics, were simultaneously determined by a chemometric-assisted UV-spectrophotometric method which was applied to a binary synthetic mixture and a pharmaceutical preparation containing both drugs. Three chemometric techniques - concentration residual augmented classical least-squares (CRACLS), principal component regression (PCR), and partial least-squares (PLS) - were implemented using synthetic mixtures containing the two drugs in acetonitrile. The absorbance data matrix corresponding to the concentration data matrix was obtained by measuring absorbances between 215 and 235 nm at intervals of Δλ = 0.4 nm in their zero-order spectra. Calibration or regression was then obtained by using the absorbance data matrix and concentration data matrix for the prediction of the unknown concentrations of pioglitazone and glimepiride in their mixtures. The described techniques have been validated by analyzing synthetic mixtures containing the two drugs, showing good mean recovery values lying between 98 and 100%. In addition, accuracy and precision of the three methods have been assured by recovery values lying between 98 and 102% and R.S.D.% < 0.6 for intra-day precision and < 1.2 for inter-day precision. The proposed chemometric techniques were successfully applied to a pharmaceutical preparation containing a combination of pioglitazone and glimepiride in the ratio of 30:4, showing good recovery values. Finally, statistical analysis was carried out to further verify the proposed methods, both by an intrinsic comparison among the three chemometric techniques and by comparing the values obtained with the present methods with those obtained by implementing reference pharmacopeial methods for each of pioglitazone and glimepiride.
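
    A minimal sketch of two of the regression-based chemometric techniques named above (PLS and PCR), using scikit-learn rather than the authors' software; X stands for the absorbance data matrix (mixtures × wavelengths), Y for the concentration data matrix (mixtures × 2 drugs), and the synthetic numbers are placeholders.

```python
# Sketch only: PLS and PCR calibration of a two-component mixture from spectra.
# The spectra below are simulated with a simple Beer-Lambert mixing model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_mixtures, n_wavelengths = 25, 51                      # e.g. 215-235 nm at 0.4 nm steps
Y = rng.uniform(1, 10, size=(n_mixtures, 2))            # concentrations of the two drugs
S = rng.random((2, n_wavelengths))                      # pure-component spectra
X = Y @ S + rng.normal(0, 0.01, (n_mixtures, n_wavelengths))  # absorbance data matrix

pls = PLSRegression(n_components=2).fit(X, Y)
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, Y)

x_new = Y[:1] @ S                                       # spectrum of an "unknown" mixture
print("PLS prediction:", pls.predict(x_new))
print("PCR prediction:", pcr.predict(x_new))
print("true values:   ", Y[0])
```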

  2. Conjunctive management of multi-reservoir network system and groundwater system

    NASA Astrophysics Data System (ADS)

    Mani, A.; Tsai, F. T. C.

    2015-12-01

    This study develops a successive mixed-integer linear fractional programming (successive MILFP) method to conjunctively manage water resources provided by a multi-reservoir network system and a groundwater system. The conjunctive management objectives are to maximize groundwater withdrawals and maximize reservoir storages while satisfying water demands and raising the groundwater level to a target level. The decision variables in the management problem are reservoir releases and spills, network flows and groundwater pumping rates. Using the fractional programming approach, the objective function is defined as a ratio of total groundwater withdrawals to total reservoir storage deficits from the maximum storages. Maximizing this ratio function tends to maximize groundwater use and minimize surface water use. This study introduces a conditional constraint on groundwater head in order to protect aquifers from overpumping: if the current groundwater level is less than a target level, the groundwater head at the next time period has to be raised; otherwise, it is allowed to decrease up to a certain extent. This conditional constraint is formulated as a set of mixed binary nonlinear constraints and results in a mixed-integer nonlinear fractional programming (MINLFP) problem. To solve the MINLFP problem, we first use the response matrix approach to linearize groundwater head with respect to pumping rate and reduce the problem to an MILFP problem. Using the Charnes-Cooper transformation, the MILFP is transformed into an equivalent mixed-integer linear program (MILP). The solution of the MILP is successively updated by updating the response matrix in every iteration. The study uses IBM CPLEX to solve the MILP problem. The methodology is applied to water resources management in northern Louisiana. This conjunctive management approach aims to recover the declining groundwater level of the stressed Sparta aquifer by using surface water from a network of four reservoirs as an alternative source of supply.
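
    The Charnes-Cooper step mentioned above can be illustrated on a generic toy problem: a linear fractional objective (c·x + α)/(d·x + β) with constraints Ax ≤ b, x ≥ 0 becomes an equivalent linear program under the substitution y = t·x, t = 1/(d·x + β). The sketch below shows only that transformation, not the reservoir/groundwater formulation itself (which also involves binary variables and a response matrix updated between iterations).

```python
# Sketch of the Charnes-Cooper transformation on an invented linear fractional program.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 1.0]); alpha = 0.0    # numerator:   withdrawal-like term
d = np.array([1.0, 2.0]); beta = 1.0     # denominator: storage-deficit-like term
A = np.array([[1.0, 1.0], [2.0, 0.5]]); b = np.array([10.0, 8.0])

# Variables z = [y1, y2, t]; maximize c.y + alpha*t  <=>  minimize -(c.y + alpha*t)
obj = np.concatenate([-c, [-alpha]])
A_ub = np.hstack([A, -b.reshape(-1, 1)])              # A y - b t <= 0
b_ub = np.zeros(len(b))
A_eq = np.array([np.concatenate([d, [beta]])])         # d.y + beta*t = 1
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x = y / t                                              # recover the original variables
print("optimal x:", x, "objective:", (c @ x + alpha) / (d @ x + beta))
```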

  3. Combined Endoscopic/Sonographic-Based Risk Matrix Model for Predicting One-Year Risk of Surgery: A Prospective Observational Study of a Tertiary Center Severe/Refractory Crohn's Disease Cohort.

    PubMed

    Rispo, Antonio; Imperatore, Nicola; Testa, Anna; Bucci, Luigi; Luglio, Gaetano; De Palma, Giovanni Domenico; Rea, Matilde; Nardone, Olga Maria; Caporaso, Nicola; Castiglione, Fabiana

    2018-03-08

    In the management of Crohn's Disease (CD) patients, having a simple score combining clinical, endoscopic and imaging features to predict the risk of surgery could help to tailor treatment more effectively. Aims: to prospectively evaluate the one-year risk factors for surgery in refractory/severe CD and to generate a risk matrix for predicting the probability of surgery at one year. CD patients needing a disease re-assessment at our tertiary IBD centre underwent clinical, laboratory, endoscopy and bowel sonography (BS) examinations within one week. The optimal cut-off values in predicting surgery were identified using ROC curves for the Simple Endoscopic Score for CD (SES-CD), bowel wall thickness (BWT) at BS, and small bowel CD extension at BS. Binary logistic regression and Cox's regression were then carried out. Finally, the probabilities of surgery were calculated for selected baseline levels of covariates and results were arranged in a prediction matrix. Of 100 CD patients, 30 underwent surgery within one year. SES-CD ≥ 9 (OR 15.3; p<0.001), BWT ≥ 7 mm (OR 15.8; p<0.001), small bowel CD extension at BS ≥ 33 cm (OR 8.23; p<0.001) and stricturing/penetrating behavior (OR 4.3; p<0.001) were the only independent factors predictive of surgery at one year based on binary logistic and Cox's regressions. Our matrix model combined these risk factors and the probability of surgery ranged from 0.48% to 87.5% (sixteen combinations). Our risk matrix combining clinical, endoscopic and ultrasonographic findings can accurately predict the one-year risk of surgery in patients with severe/refractory CD requiring a disease re-evaluation. This tool could be of value in clinical practice, serving as the basis for a tailored management of CD patients.

  4. Some Applications Of Semigroups And Computer Algebra In Discrete Structures

    NASA Astrophysics Data System (ADS)

    Bijev, G.

    2009-11-01

    An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and matrices corresponding to them are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the defined maps and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal to, or close to, the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples of computations with binary relations using Maple are given.
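
    A toy sketch of the kind of stochastic experiment described above: for a Boolean matrix equation Ax = b, random Boolean vectors x are sampled and the one whose product Ax is closest to b in Hamming distance is kept. Plain random sampling is used here only for brevity; it is not the authors' procedure.

```python
# Sketch: Boolean matrix-vector products and Hamming distance to the right-hand side.
import numpy as np

rng = np.random.default_rng(42)
m, n = 6, 8
A = rng.integers(0, 2, size=(m, n), dtype=bool)
b = rng.integers(0, 2, size=m, dtype=bool)

def bool_matvec(A, x):
    # Boolean matrix-vector product: OR over AND (join over meet).
    return np.any(A & x, axis=1)

best_x, best_dist = None, m + 1
for _ in range(2000):
    x = rng.integers(0, 2, size=n, dtype=bool)
    dist = int(np.count_nonzero(bool_matvec(A, x) ^ b))   # Hamming distance to b
    if dist < best_dist:
        best_x, best_dist = x, dist

print("best Hamming distance found:", best_dist, "for x =", best_x.astype(int))
```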

  5. A general microchip surface modification approach using a spin-coated polymer resist film doped with hydroxypropyl cellulose.

    PubMed

    Sun, Xiuhua; Yang, Weichun; Geng, Yanli; Woolley, Adam T

    2009-04-07

    We have developed a simple and effective method for surface modification of polymer microchips by entrapping hydroxypropyl cellulose (HPC) in a spin-coated thin film on the surface. Poly(methyl methacrylate-8.5-methacrylic acid), a widely available commercial resist formulation, was utilized as a matrix for dissolving HPC and providing adherence to native polymer surfaces. Various amounts of HPC (0.1-2.0%) dissolved in the copolymer and spun on polymer surfaces were evaluated. The modified surfaces were characterized by contact angle measurement, X-ray photoelectron spectroscopy and atomic force microscopy. The developed method was applied on both poly(methyl methacrylate) and cyclic olefin copolymer microchips. A fluorescently labeled myoglobin digest, binary protein mixture, and human serum sample were all separated in these surface-modified polymer microdevices. Our work exhibits an easy and reliable way to achieve favorable biomolecular separation performance in polymer microchips.

  6. A general microchip surface modification approach using a spin-coated polymer resist film doped with hydroxypropyl cellulose

    PubMed Central

    Sun, Xiuhua; Yang, Weichun; Geng, Yanli; Woolley, Adam T.

    2009-01-01

    We have developed a simple and effective method for surface modification of polymer microchips by entrapping hydroxypropyl cellulose (HPC) in a spin-coated thin film on the surface. Poly(methyl methacrylate-8.5-methacrylic acid), a widely available commercial resist formulation, was utilized as a matrix for dissolving HPC and providing adherence to native polymer surfaces. Various amounts of HPC (0.1–2.0%) dissolved in the copolymer and spun on polymer surfaces were evaluated. The modified surfaces were characterized by contact angle measurement, X-ray photoelectron spectroscopy and atomic force microscopy. The developed method was applied on both poly(methyl methacrylate) and cyclic olefin copolymer microchips. A fluorescently labeled myoglobin digest, binary protein mixture, and human serum sample were all separated in these surface-modified polymer microdevices. Our work exhibits an easy and reliable way to achieve favorable biomolecular separation performance in polymer microchips. PMID:19294306

  7. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator

    PubMed Central

    Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give good approximations in a reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. The genetic algorithm depends on selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach has been linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported for some traditional path representation methods, like partially mapped and order crossovers, along with the new cycle crossover operator on some benchmark TSPLIB instances, and improvements were found. PMID:29209364

  8. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.

    PubMed

    Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give good approximations in a reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. The genetic algorithm depends on selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach has been linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported for some traditional path representation methods, like partially mapped and order crossovers, along with the new cycle crossover operator on some benchmark TSPLIB instances, and improvements were found.
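
    For readers unfamiliar with the representation involved, the sketch below implements the classical cycle crossover (CX) on path-represented tours; the modified operator proposed in the article itself is not specified here, so this only illustrates the baseline it builds on.

```python
# Minimal sketch of classical cycle crossover (CX) for path-represented tours.
def cycle_crossover(p1, p2):
    """Return two offspring from parent tours p1, p2 (permutations of cities)."""
    n = len(p1)
    cycle_id = [-1] * n
    pos_in_p1 = {city: i for i, city in enumerate(p1)}
    cycle = 0
    for start in range(n):
        if cycle_id[start] != -1:
            continue
        i = start
        while cycle_id[i] == -1:          # follow the cycle through both parents
            cycle_id[i] = cycle
            i = pos_in_p1[p2[i]]
        cycle += 1
    # Alternate cycles between the parents to build two valid offspring tours.
    child1 = [p1[i] if cycle_id[i] % 2 == 0 else p2[i] for i in range(n)]
    child2 = [p2[i] if cycle_id[i] % 2 == 0 else p1[i] for i in range(n)]
    return child1, child2

if __name__ == "__main__":
    a = [1, 2, 3, 4, 5, 6, 7, 8]
    b = [8, 5, 2, 1, 3, 6, 4, 7]
    print(cycle_crossover(a, b))
```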

  9. Multiphase, multicomponent phase behavior prediction

    NASA Astrophysics Data System (ADS)

    Dadmohammadi, Younas

    Accurate prediction of the phase behavior of fluid mixtures in the chemical industry is essential for designing and operating a multitude of processes. Reliable generalized predictions of phase equilibrium properties, such as pressure, temperature, and phase compositions, offer an attractive alternative to costly and time consuming experimental measurements. The main purpose of this work was to assess the efficacy of recently generalized activity coefficient models based on binary experimental data to (a) predict binary and ternary vapor-liquid equilibrium systems, and (b) characterize liquid-liquid equilibrium systems. These studies were completed using a diverse binary VLE database consisting of 916 binary and 86 ternary systems involving 140 compounds belonging to 31 chemical classes. Specifically, the following tasks were undertaken: First, a comprehensive assessment of the two common approaches (gamma-phi (γ-ϕ) and phi-phi (ϕ-ϕ)) used for determining the phase behavior of vapor-liquid equilibrium systems is presented. Both the representation and predictive capabilities of these two approaches were examined, as delineated from internal and external consistency tests of 916 binary systems. For this purpose, the universal quasi-chemical (UNIQUAC) model and the Peng-Robinson (PR) equation of state (EOS) were used in this assessment. Second, the efficacy of the recently developed generalized UNIQUAC and nonrandom two-liquid (NRTL) models for predicting multicomponent VLE systems was investigated. Third, the abilities of the recently modified NRTL models (mNRTL2 and mNRTL1) to characterize liquid-liquid equilibria (LLE) phase conditions and attributes, including phase stability, miscibility, and consolute point coordinates, were assessed. The results of this work indicate that the ϕ-ϕ approach represents the binary VLE systems considered within three times the error of the γ-ϕ approach. A similar trend was observed for the generalized model predictions using quantitative structure-property relationship (QSPR) parameter generalizations. For ternary systems, where all three constituent binary systems were available, the NRTL-QSPR, UNIQUAC-QSPR, and UNIFAC-6 models produce comparable accuracy. For systems where at least one constituent binary is missing, the UNIFAC-6 model produces larger errors than the QSPR generalized models. In general, the LLE characterization results indicate the accuracy of the modified models in reproducing the findings of the original NRTL model.

  10. Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data

    NASA Astrophysics Data System (ADS)

    Palumbo, Francesco; D'Enza, Alfonso Iodice

    Attention to binary data coding has increased considerably over the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.

  11. Noise exposure-response relationships established from repeated binary observations: Modeling approaches and applications.

    PubMed

    Schäffer, Beat; Pieren, Reto; Mendolia, Franco; Basner, Mathias; Brink, Mark

    2017-05-01

    Noise exposure-response relationships are used to estimate the effects of noise on individuals or a population. Such relationships may be derived from independent or repeated binary observations, and modeled by different statistical methods. Depending on the method by which they were established, their application in population risk assessment or estimation of individual responses may yield different results, i.e., predict "weaker" or "stronger" effects. In the present body of literature on noise effect studies, however, the underlying statistical methodology used to establish exposure-response relationships has not always received sufficient attention. This paper gives an overview of two statistical approaches (subject-specific and population-averaged logistic regression analysis) to establish noise exposure-response relationships from repeated binary observations, and their appropriate applications. The considerations are illustrated with data from three noise effect studies, also estimating the magnitude of differences in results when applying exposure-response relationships derived from the two statistical approaches. Depending on the underlying data set and the probability range of the binary variable it covers, the two approaches yield similar to very different results. The adequate choice of a specific statistical approach and its application in subsequent studies, both depending on the research question, are therefore crucial.

  12. GWM-a ground-water management process for the U.S. Geological Survey modular ground-water model (MODFLOW-2000)

    USGS Publications Warehouse

    Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.

    2005-01-01

    GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients. For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
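
    A highly simplified sketch of the response-matrix idea described above (not the GWM code itself): each candidate well is perturbed once in a stand-in "flow model", the drawdown response at each constraint location is recorded, and the resulting linear program is solved. The two-well model and all numbers below are invented.

```python
# Sketch: build response coefficients by perturbation, then solve a small LP.
import numpy as np
from scipy.optimize import linprog

def simulate_drawdown(pump_rates):
    """Stand-in for a ground-water flow simulation: drawdown at 3 control points."""
    influence = np.array([[0.010, 0.004],
                          [0.006, 0.008],
                          [0.002, 0.012]])      # invented aquifer response
    return influence @ pump_rates

wells = 2
dq = 100.0                                       # perturbation of each flow-rate variable
base = simulate_drawdown(np.zeros(wells))

# Response coefficients: drawdown change per unit pumping at each constraint site.
R = np.column_stack([(simulate_drawdown(dq * np.eye(wells)[j]) - base) / dq
                     for j in range(wells)])

max_drawdown = np.array([2.0, 2.5, 3.0])         # hydraulic-head based constraints
# Maximize total withdrawal  <=>  minimize -(q1 + q2), subject to R q <= limits.
res = linprog(c=-np.ones(wells), A_ub=R, b_ub=max_drawdown, bounds=[(0, 400)] * wells)
print("optimal pumping rates:", res.x, "total:", res.x.sum())
```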

  13. Missing Data in Alcohol Clinical Trials with Binary Outcomes

    PubMed Central

    Hallgren, Kevin A.; Witkiewitz, Katie; Kranzler, Henry R.; Falk, Daniel E.; Litten, Raye Z.; O’Malley, Stephanie S.; Anton, Raymond F.

    2017-01-01

    Background Missing data are common in alcohol clinical trials for both continuous and binary endpoints. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). The present study compares approaches to modeling binary outcomes with missing data in the COMBINE study. Method We included participants in the COMBINE Study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N=1146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using four analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst-case scenario of missing equals any drinking or heavy drinking (WCS), and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. Results WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data, and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Conclusions Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. PMID:27254113

  14. Missing Data in Alcohol Clinical Trials with Binary Outcomes.

    PubMed

    Hallgren, Kevin A; Witkiewitz, Katie; Kranzler, Henry R; Falk, Daniel E; Litten, Raye Z; O'Malley, Stephanie S; Anton, Raymond F

    2016-07-01

    Missing data are common in alcohol clinical trials for both continuous and binary end points. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). This study compares approaches to modeling binary outcomes with missing data in the COMBINE study. We included participants in the COMBINE study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N = 1,146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using 4 analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst case scenario (WCS) of missing equals any drinking or heavy drinking, and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. Copyright © 2016 by the Research Society on Alcoholism.
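
    A toy sketch of three of the simple missing-data rules compared above (complete case analysis, last observation carried forward, and the worst-case scenario) applied to a weekly binary heavy-drinking indicator; multiple imputation is omitted because it requires a full imputation model, and the data frame here is invented.

```python
# Sketch: CCA, LOCF, and WCS on an invented binary "any heavy drinking" data frame.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "week1": [0, 1, 0, 0, 1],
    "week2": [0, 1, np.nan, 0, np.nan],
    "week3": [np.nan, 1, np.nan, 0, np.nan],
}, index=[f"subj{i}" for i in range(1, 6)])

cca = df.dropna()            # complete case analysis: drop any subject with a gap
locf = df.ffill(axis=1)      # last observation carried forward along each row
wcs = df.fillna(1)           # worst case: missing counted as heavy drinking

for name, d in [("CCA", cca), ("LOCF", locf), ("WCS", wcs)]:
    print(name, "week3 heavy-drinking rate:", round(d["week3"].mean(), 2))
```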

  15. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531

  16. Two-way learning with one-way supervision for gene expression data.

    PubMed

    Wong, Monica H T; Mutch, David M; McNicholas, Paul D

    2017-03-04

    A family of parsimonious Gaussian mixture models for the biclustering of gene expression data is introduced. Biclustering is accommodated by adopting a mixture of factor analyzers model with a binary, row-stochastic factor loadings matrix. This particular form of factor loadings matrix results in a block-diagonal covariance matrix, which is a useful property in gene expression analyses, specifically in biomarker discovery scenarios where blood can potentially act as a surrogate tissue for other less accessible tissues. Prior knowledge of the factor loadings matrix is useful in this application and is reflected in the one-way supervised nature of the algorithm. Additionally, the factor loadings matrix can be assumed to be constant across all components because of the relationship desired between the various types of tissue samples. Parameter estimates are obtained through a variant of the expectation-maximization algorithm and the best-fitting model is selected using the Bayesian information criterion. The family of models is demonstrated using simulated data and two real microarray data sets. The first real data set is from a rat study that investigated the influence of diabetes on gene expression in different tissues. The second real data set is from a human transcriptomics study that focused on blood and immune tissues. The microarray data sets illustrate the biclustering family's performance in biomarker discovery involving peripheral blood as surrogate biopsy material. The simulation studies indicate that the algorithm identifies the correct biclusters, most optimally when the number of observation clusters is known. Moreover, the biclustering algorithm identified biclusters comprised of biologically meaningful data related to insulin resistance and immune function in the rat and human real data sets, respectively. Initial results using real data show that this biclustering technique provides a novel approach for biomarker discovery by enabling blood to be used as a surrogate for hard-to-obtain tissues.

  17. Artificial Intelligence in Astronomy

    NASA Astrophysics Data System (ADS)

    Devinney, E. J.; Prša, A.; Guinan, E. F.; Degeorge, M.

    2010-12-01

    From the perspective (and bias) of Eclipsing Binary researchers, we give a brief overview of the development of Artificial Intelligence (AI) applications, describe major application areas of AI in astronomy, and illustrate the power of an AI approach with an application developed under the EBAI (Eclipsing Binaries via Artificial Intelligence) project, which employs Artificial Neural Network technology to estimate light curve solution parameters of eclipsing binary systems.

  18. The fidelity of Kepler eclipsing binary parameters inferred by the neural network

    NASA Astrophysics Data System (ADS)

    Holanda, N.; da Silva, J. R. P.

    2018-04-01

    This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems from light curves using neural network models. We selected a random sample of 78 systems from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured by applying the traditional method of light curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cos ω and e sin ω, but orbital inclination is clearly underestimated in the neural network tests.

  19. The fidelity of Kepler eclipsing binary parameters inferred by the neural network

    NASA Astrophysics Data System (ADS)

    Holanda, N.; da Silva, J. R. P.

    2018-07-01

    This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves using neural network models. We selected a random sample with 78 systems, from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light-curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cosω and e sinω, but orbital inclination is clearly underestimated in neural network tests.

  20. Weight loss, ion release and initial mechanical properties of a binary calcium phosphate glass fibre/PCL composite.

    PubMed

    Ahmed, I; Parsons, A J; Palmer, G; Knowles, J C; Walker, G S; Rudd, C D

    2008-09-01

    Composites comprising a biodegradable polymeric matrix and a bioactive filler show considerable promise in the field of regenerative medicine, and could potentially serve as degradable bone fracture fixation devices, depending on the properties obtained. Therefore, glass fibres from a binary calcium phosphate (50P2O5 + 50CaO) glass were used to reinforce polycaprolactone at two different volume fractions (V_f). As-drawn, non-treated and heat-treated fibres were assessed. Weight loss, ion release and the initial mechanical properties of the fibres and composites produced have been investigated. Single fibre tensile testing revealed a fibre strength of 474 MPa and a tensile modulus of 44 GPa. Weibull analysis suggested a scale value of 524. The composites yielded flexural strength and modulus of up to 30 MPa and 2.5 GPa, respectively. These values are comparable with human trabecular bone. An 8% mass loss was seen for the lower V_f composite, whereas for the two higher V_f composites an approximate 20% mass loss was observed over the course of the 5-week study. A plateau in the degradation profile at 350 h indicated that fibre dissolution was complete at this interval. This assertion was further supported via ion release studies. The leaching of fibres from the composite created a porous structure, including continuous channels within the polymer matrix. This offers further scope for tailoring scaffold development, as cells from the surrounding tissue may be induced to migrate into the resulting porous matrix.

  1. Influence of management of variables, sampling zones and land units on LR analysis for landslide spatial prevision

    NASA Astrophysics Data System (ADS)

    Greco, R.; Sorriso-Valvo, M.

    2013-09-01

    Several authors, following different methodological approaches, have employed Logistic Regression (LR), a multivariate statistical analysis used to assess the spatial probability of landslides, even though its fundamental principles have remained unaltered. This study aims at assessing the influence of some of these methodological approaches on the performance of LR, through a series of sensitivity analyses developed over a test area of about 300 km2 in Calabria (southern Italy). In particular, four types of sampling (1 - the whole study area; 2 - transects running parallel to the general slope direction of the study area with a total surface of about 1/3 of the whole study area; 3 - buffers surrounding the phenomena with a 1/1 ratio between the stable and the unstable area; 4 - buffers surrounding the phenomena with a 1/2 ratio between the stable and the unstable area), two variable coding modes (1 - grouped variables; 2 - binary variables), and two types of elementary land units (1 - cell units; 2 - slope units) have been tested. The obtained results must be considered statistically relevant in all cases (Aroc values > 70%), thus confirming the soundness of the LR analysis, which maintains high predictive capacity notwithstanding the features of the input data. As for the area under investigation, the best performing methodological choices are the following: (i) transects produced the best results (0 < P(y) ≤ 93.4%; Aroc = 79.5%); (ii) as for coding modalities, binary variables (0 < P(y) ≤ 98.3%; Aroc = 80.7%) provide better performance than grouped (ordinated) variables; (iii) as for the choice of elementary land units, slope units (0 < P(y) ≤ 100%; Aroc = 84.2%) obtained better results than the cell matrix.

  2. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows it to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
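
    The sketch below only conveys the general idea of quantizing a real-valued descriptor into a compact bit-vector and comparing bit-vectors by Hamming distance; the paper's actual binary quantization and inverted-file layout are not reproduced, and thresholding each dimension at the descriptor median is an arbitrary choice made for illustration.

```python
# Sketch: binarize descriptors and rank database entries by Hamming distance.
import numpy as np

def to_bits(descriptor):
    """Binarize a descriptor: 1 where the value exceeds the descriptor median."""
    return (descriptor > np.median(descriptor)).astype(np.uint8)

rng = np.random.default_rng(3)
query = rng.random(128)                          # a SIFT descriptor has 128 dimensions
database = rng.random((1000, 128))
database[42] = query + rng.normal(0, 0.01, 128)  # plant a near-duplicate at index 42

q_bits = to_bits(query)
db_bits = np.array([to_bits(d) for d in database])
distances = np.count_nonzero(db_bits != q_bits, axis=1)   # Hamming distances
print("best match index:", int(np.argmin(distances)), "distance:", int(distances.min()))
```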

  3. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  4. A feedforward artificial neural network based on quantum effect vector-matrix multipliers.

    PubMed

    Levy, H J; McGill, T C

    1993-01-01

    The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT.

  5. Largely enhanced dielectric properties of carbon nanotubes/polyvinylidene fluoride binary nanocomposites by loading a few boron nitride nanosheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Minhao; Zhao, Hang; He, Delong

    2016-08-15

    The ternary nanocomposites of boron nitride nanosheets (BNNSs)/carbon nanotubes (CNTs)/polyvinylidene fluoride (PVDF) are fabricated via a combination of solution casting and extrusion-injection processes. The effects of BNNSs on the electrical conductivity, dielectric behavior, and microstructure changes of CNTs/PVDF binary nanocomposites are systematically investigated. A low percolation value (f_c) for the CNTs/PVDF binary system is obtained due to the integration of solution and melt blending procedures. Two kinds of CNTs/PVDF binary systems with various CNTs contents (f_CNTs) as the matrix are discussed. The results reveal that, compared with CNTs/PVDF binary systems at the same f_CNTs, the ternary BNNSs/CNTs/PVDF nanocomposites exhibit largely enhanced dielectric properties due to the improvement of the CNTs dispersion state and the conductive network. The dielectric constant of the CNTs/PVDF binary nanocomposite with 6 vol. % CNTs (f_CNTs < f_c) shows a 79.59% enhancement from 49 to 88 after the incorporation of 3 vol. % BNNSs. For the other CNTs/PVDF system with 8 vol. % CNTs (f_CNTs > f_c), it displays a 43.32% improvement from 1325 to 1899 after the addition of 3 vol. % BNNSs. The presence of BNNSs facilitates the formation of a denser conductive network. Meanwhile, the ternary BNNSs/CNTs/PVDF systems exhibit a low dielectric loss. Adjustable dielectric properties could be obtained by employing the ternary systems due to the microstructure changes of the nanocomposites.

  6. Image Retrieval using Integrated Features of Binary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Agarwal, Megha; Maheshwari, R. P.

    2011-12-01

    In this paper a new approach for image retrieval is proposed based on the binary wavelet transform. This new approach facilitates the feature calculation by integrating histogram and correlogram features extracted from binary wavelet subbands. Experiments are performed to evaluate and compare the performance of the proposed method with the published literature. It is verified that the average precision and average recall of the proposed method (69.19%, 41.78%) are significantly improved compared to the optimal quantized wavelet correlogram (OQWC) [6] (64.3%, 38.00%) and the Gabor wavelet correlogram (GWC) [10] (64.1%, 40.6%). All the experiments are performed on the Corel 1000 natural image database [20].

  7. Microstructure and mechanical properties of a single crystal NiAl alloy with Zr or Hf rich G-phase precipitates

    NASA Technical Reports Server (NTRS)

    Locci, I. E.; Noebe, R. D.; Bowman, R. R.; Miner, R. V.; Nathal, M. V.; Darolia, R.

    1991-01-01

    The possibility of producing NiAl reinforced with the G-phase (Ni16X6Si7), where X is Zr or Hf, has been investigated. The microstructures of these NiAl alloys have been characterized in the as-cast and annealed conditions. The G-phases are present as fine cuboidal precipitates (10 to 40 nm) and have lattice parameters almost four times that of NiAl. They are coherent with the matrix and fairly resistant to coarsening during annealing heat treatments. Segregation and a nonuniform precipitate distribution observed in as-cast materials were eliminated by homogenization at temperatures near 1600 K. Slow cooling from these temperatures resulted in large plate-shaped precipitates, denuded zones, and a loss of coherency in some of the large particles. Faster cooling produced a homogeneous fine distribution of cuboidal G-phase particles in the matrix. Preliminary mechanical properties for the Zr-doped alloy are presented and compared to binary single crystal NiAl. The presence of these precipitates appears to have an important strengthening effect at temperatures not less than 1000 K compared to binary NiAl single crystals.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yagi, Kent; Tanaka, Takahiro; Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502

    We calculate how strongly one can put constraints on alternative theories of gravity such as Brans-Dicke and massive graviton theories with LISA. We consider inspiral gravitational waves from a compact binary composed of a neutron star and an intermediate mass black hole in Brans-Dicke (BD) theory and that composed of a super massive black hole in massive graviton theories. We use the restricted second post-Newtonian waveforms including the effects of spins. We also take both precession and eccentricity of the orbit into account. For simplicity, we set the fiducial value for the spin of one of the binary constituents to zero so that we can apply the approximation called simple precession. We perform Monte Carlo simulations of 10^4 binaries, estimating the determination accuracy of binary parameters including the BD parameter ω_BD and the Compton wavelength of the graviton λ_g for each binary using the Fisher matrix method. We find that including both the spin-spin coupling σ and the eccentricity e into the binary parameters reduces the determination accuracy by an order of magnitude for the Brans-Dicke case, while it has less influence on massive graviton theories. On the other hand, including precession enhances the constraint on ω_BD by only 20%, but it increases the constraint on λ_g by several factors. Using a (1.4+1000) M_⊙ neutron star/black hole binary of SNR = √200, one can put a constraint ω_BD > 6944, while using a (10^7+10^6) M_⊙ black hole/black hole binary at 3 Gpc, one can put λ_g > 3.10×10^21 cm, on average. The latter is 4 orders of magnitude stronger than the one obtained from the solar system experiment. These results are consistent with previous results within uncontrolled errors and indicate that the effects of precession and eccentricity must be treated carefully in the parameter estimation analysis.

  9. Resistive switching memory devices composed of binary transition metal oxides using sol-gel chemistry.

    PubMed

    Lee, Chanwoo; Kim, Inpyo; Choi, Wonsup; Shin, Hyunjung; Cho, Jinhan

    2009-04-21

    We describe a novel and versatile approach for preparing resistive switching memory devices based on binary transition metal oxides (TMOs). Titanium isopropoxide (TIPP) was spin-coated onto platinum (Pt)-coated silicon substrates using a sol-gel process. The sol-gel-derived layer was converted into a TiO2 film by thermal annealing. A top electrode (Ag electrode) was then coated onto the TiO2 films to complete device fabrication. When an external bias was applied to the devices, a switching phenomenon independent of the voltage polarity (i.e., unipolar switching) was observed at low operating voltages (about 0.6 V for RESET and 1.4 V for SET). In addition, it was confirmed that the electrical properties (i.e., retention time, cycling test and switching speed) of the sol-gel-derived devices were comparable to those of vacuum-deposited devices. This approach can be extended to a variety of binary TMOs such as niobium oxides. The reported approach offers new opportunities for preparing binary TMO-based resistive switching memory devices using facile solution processing.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schäfer, Gerhard

    The current knowledge in the post-Newtonian (PN) dynamics and motion of non-spinning and spinning compact binaries will be presented based on the Arnowitt-Deser-Misner Hamiltonian approach to general relativity. The presentation will cover the binary dynamics with non-spinning components up to the 4PN order and for spinning binaries up to the next-to-next-to-leading order in the spin-orbit and spin-spin couplings. Radiation reaction will be treated for both non-spinning and spinning binaries. Explicit analytic expressions for the motion will be given, innermost stable circular orbits will be discussed.

  11. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N) P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  12. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    PubMed Central

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima. PMID:28634487

  13. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.

    PubMed

    Huang, Xingwang; Zeng, Xuewen; Han, Rui

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima.
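
    As context for the two entries above, the sketch below shows the binary-encoding mechanics shared by BBA/BPSO-style algorithms on a 0-1 knapsack: real-valued velocities are pulled toward the best solution found so far and squashed through a sigmoid transfer function to decide each bit. Bat-specific details (loudness, pulse rate, the proposed dynamic inertia weight and neighborhood search) are left out, so this is not the IBBA of the papers.

```python
# Sketch: sigmoid-transfer binary optimizer on a small 0-1 knapsack instance.
import numpy as np

rng = np.random.default_rng(7)
values  = np.array([10, 5, 15, 7, 6, 18, 3])
weights = np.array([ 2, 3,  5, 7, 1,  4, 1])
capacity = 12

def fitness(bits):
    w = weights @ bits
    return values @ bits if w <= capacity else 0    # infeasible solutions score 0

pop, dim, iters = 20, len(values), 100
x = rng.integers(0, 2, size=(pop, dim))
v = np.zeros((pop, dim))
best = x[np.argmax([fitness(s) for s in x])].copy()

for _ in range(iters):
    v += rng.random((pop, dim)) * (best - x)         # pull velocities toward the best bat
    prob = 1.0 / (1.0 + np.exp(-v))                  # sigmoid transfer function
    x = (rng.random((pop, dim)) < prob).astype(int)  # re-sample each bit
    fit = np.array([fitness(s) for s in x])
    if fit.max() > fitness(best):
        best = x[np.argmax(fit)].copy()

print("best packing:", best, "value:", fitness(best), "weight:", int(weights @ best))
```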

  14. Fast measurement of proton exchange membrane fuel cell impedance based on pseudo-random binary sequence perturbation signals and continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Debenjak, Andrej; Boškoski, Pavle; Musizza, Bojan; Petrovčič, Janko; Juričić, Đani

    2014-05-01

    This paper proposes an approach to the estimation of PEM fuel cell impedance by utilizing pseudo-random binary sequence as a perturbation signal and continuous wavelet transform with Morlet mother wavelet. With the approach, the impedance characteristic in the frequency band from 0.1 Hz to 500 Hz is identified in 60 seconds, approximately five times faster compared to the conventional single-sine approach. The proposed approach was experimentally evaluated on a single PEM fuel cell of a larger fuel cell stack. The quality of the results remains at the same level compared to the single-sine approach.
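
    As a rough illustration of the two ingredients named in the abstract, the sketch below builds a maximum-length pseudo-random binary sequence with a 9-bit LFSR and projects both the excitation and a toy first-order response onto a complex Morlet wavelet at one probe frequency; the ratio of the two coefficients plays the role of an impedance estimate. All signal parameters are assumptions for illustration, not the paper's experimental settings.

```python
import numpy as np

def prbs9(length=2 ** 9 - 1):
    """Maximum-length pseudo-random binary sequence (PRBS9) in {-1, +1}."""
    state = [1] * 9
    out = []
    for _ in range(length):
        new_bit = state[8] ^ state[4]          # feedback taps at positions 9 and 5
        out.append(2 * state[-1] - 1)
        state = [new_bit] + state[:-1]
    return np.array(out, dtype=float)

def morlet_coeff(signal, fs, f0, w=6.0):
    """Correlate a signal with a complex Morlet wavelet centred at f0 (Hz)."""
    t = np.arange(-3.0 / f0, 3.0 / f0, 1.0 / fs)
    envelope = np.exp(-0.5 * (2 * np.pi * f0 * t / w) ** 2)
    wavelet = envelope * np.exp(2j * np.pi * f0 * t)
    return np.convolve(signal, np.conj(wavelet)[::-1], mode="same")

fs = 1000.0                                    # assumed sampling rate, Hz
excitation = np.repeat(prbs9(), 4)             # hold each PRBS bit for 4 samples
# Toy first-order low-pass "cell" response to the excitation
kernel = np.exp(-100.0 * np.arange(0, 0.05, 1.0 / fs))
response = np.convolve(excitation, kernel, mode="full")[:len(excitation)] / fs

f_probe = 20.0                                 # Hz
num = morlet_coeff(response, fs, f_probe)
den = morlet_coeff(excitation, fs, f_probe)
mid = len(excitation) // 2
print("apparent transfer magnitude at 20 Hz:", abs(num[mid] / den[mid]))
```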

  15. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    PubMed

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach with at least 16.6% improvement in the median absolute deviation in the cost of extra 2 s on average for all experiments.

  16. Combining multiple decisions: applications to bioinformatics

    NASA Astrophysics Data System (ADS)

    Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
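
    The ECOC framework mentioned above is easy to state concretely: each column of a coding matrix defines one binary dichotomy, and a test point is assigned the class whose codeword best matches the vector of binary predictions. The sketch below, assuming scikit-learn and the iris data purely for illustration (the 3x4 code matrix is made up, and the article's weighting and channel-coding refinements are omitted), shows plain Hamming-distance decoding.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Illustrative ECOC coding matrix: one row (codeword) per class,
# one column per binary dichotomy.
code = np.array([[ 1,  1,  1, -1],     # codeword for class 0
                 [ 1, -1, -1,  1],     # class 1
                 [-1,  1, -1, -1]])    # class 2

classifiers = []
for col in range(code.shape[1]):
    relabel = code[y, col]                      # +/-1 targets for this dichotomy
    clf = LogisticRegression(max_iter=1000).fit(X, relabel)
    classifiers.append(clf)

pred_bits = np.column_stack([clf.predict(X) for clf in classifiers])
hamming = (pred_bits[:, None, :] != code[None, :, :]).sum(axis=2)
y_hat = np.argmin(hamming, axis=1)              # class with the closest codeword
print("training accuracy:", np.mean(y_hat == y))
```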

  17. Using an Iterative Fourier Series Approach in Determining Orbital Elements of Detached Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2007-12-01

    We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.

  18. Mapping quantitative trait loci for binary trait in the F2:3 design.

    PubMed

    Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang

    2008-12-01

    In the analysis of inheritance of quantitative traits with low heritability, an F(2:3) design that genotypes plants in F(2) and phenotypes plants in F(2:3) progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F(2:3) design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F(2:3) design. The fundamental idea was: the F(2) plants were genotyped, all phenotypic values of each F(2:3) progeny were measured for binary trait, and these binary trait values and the marker genotype informations were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte-Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates for the effects and the locations of BTL with high statistical power, even under of low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F(2:3) design is more efficient than classical F(2) design, even though only a single progeny is collected from each F(2:3) family. With the maximum likelihood approaches under the penetrance and the liability models developed in this study, we can map binary traits as we can do for quantitative trait in the F(2:3) design.

  19. Solubility enhancement of miconazole nitrate: binary and ternary mixture approach.

    PubMed

    Rai, Vineet Kumar; Dwivedi, Harinath; Yadav, Narayan Prasad; Chanotiya, Chandan Singh; Saraf, Shubhini A

    2014-08-01

    Enhancement of aqueous solubility of very slightly soluble Miconazole Nitrate (MN) is required to widen its application from topical formulation to oral/mucoadhesive formulations. Aim of the present investigation was to enhance the aqueous solubility of MN using binary and ternary mixture approach. Binary mixtures such as solvent deposition, inclusion complexation and solid dispersion were adopted to enhance solubility using different polymers like lactose, beta-cyclodextrin (β-CD) and polyethylene-glycol 6000 (PEG 6000), respectively. Batches of binary mixtures with highest solubility enhancement potentials were further mixed to form ternary mixture by a simple kneading method. Drug polymer interaction and mixture morphology was studied using the Fourier transform infrared spectroscopy and the scanning electron microscopy, respectively along with their saturation solubility studies and drug release. An excellent solubility enhancement, i.e. up to 72 folds and 316 folds of MN was seen by binary and ternary mixture, respectively. Up to 99.5% drug was released in 2 h from the mixtures of MN and polymers. RESULTS revealed that solubility enhancement by binary mixtures is achieved due to surface modification and by increasing wettability of MN. Tremendous increase in solubility of MN by ternary mixture could possibly be due to blending of water soluble polymers, i.e. lactose and PEG 6000 with β-CD which was found to enhance the solubilizing nature of β-CD. Owing to the excellent solubility enhancement potential of ternary mixtures in enhancing MN solubility from 110.4 μg/ml to 57640.0 μg/ml, ternary mixture approach could prove to be promising in the development of oral/mucoadhesive formulations.

  20. Statistical inference approach to structural reconstruction of complex networks from binary time series

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.

  1. Statistical inference approach to structural reconstruction of complex networks from binary time series.

    PubMed

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
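
    A small illustration of the separation idea emphasized in the abstract: when per-candidate-link estimates concentrate around two distinct values (nonexistent versus actual links), a two-component mixture fitted by EM splits them without a hand-tuned threshold. The sketch below uses synthetic estimates and a Gaussian mixture purely for illustration; it is not the authors' full reconstruction procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-link estimates: a large cluster near zero (nonexistent links)
# and a smaller cluster at a higher value (actual links).
est = np.concatenate([rng.normal(0.05, 0.02, 300),
                      rng.normal(0.60, 0.05, 100)])

mu = np.array([est.min(), est.max()])
sigma = np.array([est.std(), est.std()])
pi = np.array([0.5, 0.5])

for _ in range(100):                                   # EM iterations
    # E-step: responsibilities of each component for each estimate
    pdf = np.exp(-0.5 * ((est[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means and spreads
    nk = resp.sum(axis=0)
    pi = nk / len(est)
    mu = (resp * est[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (est[:, None] - mu) ** 2).sum(axis=0) / nk)

links = resp[:, np.argmax(mu)] > 0.5                   # assign to the high-mean component
print("inferred link fraction:", links.mean(), "component means:", mu)
```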

  2. Accuracy of Estimating Highly Eccentric Binary Black Hole Parameters with Gravitational-wave Detections

    NASA Astrophysics Data System (ADS)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-03-01

    Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries’ orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M_tot (10 M_tot).

  3. Face Alignment via Regressing Local Binary Features.

    PubMed

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves the state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in the previous research, which is the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment friendly detector can further greatly boost the accuracy of our alignment method, reducing the error up to 16% relatively. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.

  4. A subgradient approach for constrained binary optimization via quantum adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Karimi, Sahar; Ronagh, Pooya

    2017-08-01

    An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor to succeed more often in solving these problems compared to the usual penalty-term approaches.
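
    The interplay between the dual variables and the inner binary minimization can be sketched in a few lines. Below, the unconstrained inner problem is solved by brute force (standing in for the quantum annealer used in the paper), and the multiplier of a single equality constraint is updated by subgradient ascent with a diminishing step. The problem data are random and purely illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(9)

# Toy constrained binary quadratic program: min x^T Q x subject to A x = b.
n = 6
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2          # random symmetric objective
A = np.array([[1, 1, 1, 1, 1, 1]]); b = np.array([3])   # constraint: select exactly 3 items

def inner_min(lmbda):
    """Brute-force argmin over binary x of the Lagrangian x^T Q x + lambda^T (A x - b)."""
    best_x, best_val = None, np.inf
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        val = x @ Q @ x + lmbda @ (A @ x - b)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

lmbda = np.zeros(1)
for t in range(1, 101):
    x, dual_val = inner_min(lmbda)
    subgrad = A @ x - b                                  # subgradient of the concave dual
    lmbda = lmbda + (1.0 / t) * subgrad                  # diminishing step size
print("final x:", x, "constraint residual:", A @ x - b, "dual value:", round(dual_val, 3))
```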

  5. Pseudorandom binary injection of levitons for electron quantum optics

    NASA Astrophysics Data System (ADS)

    Glattli, D. C.; Roulleau, P.

    2018-03-01

    The recent realization of single-electron sources lets us envision performing electron quantum optics experiments, where electrons can be viewed as flying qubits propagating in a ballistic conductor. To date, all electron sources operate in a periodic electron injection mode, leading to energy spectrum singularities in various physical observables which sometimes hide the bare nature of physical effects. To go beyond this, we propose a spread-spectrum approach where electron flying qubits are injected in a nonperiodic manner following a pseudorandom binary bit pattern. Extending the Floquet scattering theory approach from periodic to spread-spectrum drive, the shot noise of pseudorandom binary sequences of single-electron injection can be calculated for leviton and nonleviton sources. Our new approach allows us to disentangle the physics of the manipulated excitations from that of the injection protocol. In particular, the spread-spectrum approach is shown to provide better knowledge of electronic Hong-Ou-Mandel correlations and to clarify the nature of the pulse train coherence and the role of the dynamical orthogonality catastrophe for noninteger charge injection.

  6. Automatic system for radar echoes filtering based on textural features and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Hedir, Mehdia; Haddad, Boualem

    2017-10-01

    Among the very popular Artificial Intelligence (AI) techniques, Artificial Neural Network (ANN) and Support Vector Machine (SVM) have been retained to process Ground Echoes (GE) on meteorological radar images taken from Setif (Algeria) and Bordeaux (France), sites with different climates and topologies. To achieve this task, AI techniques were associated with textural approaches. We used the Gray Level Co-occurrence Matrix (GLCM) and the Completed Local Binary Pattern (CLBP); both methods are widely used in image analysis. The obtained results show the efficiency of texture in preserving precipitation forecasts on both sites, with an accuracy of 98% on Bordeaux and 95% on Setif regardless of the AI technique used. 98% of GE are suppressed with SVM, a rate that outperforms ANN. The CLBP approach associated with SVM eliminates 98% of GE and preserves precipitation forecasts on the Bordeaux site better than on Setif's, while it exhibits lower accuracy with ANN. The SVM classifier is well adapted to the proposed application since the average filtering rate is 95-98% with texture and 92-93% with CLBP. These approaches also allow removing Anomalous Propagations (APs), with a better accuracy of 97.15% with texture and SVM. In fact, textural features associated with AI techniques are an efficient tool for incoherent radars to suppress spurious echoes.
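
    To make the texture-classification pipeline concrete, here is a minimal sketch, assuming a recent scikit-image and scikit-learn, in which GLCM statistics are extracted from small patches and fed to an SVM. The "smooth versus noisy" synthetic patches merely stand in for precipitation and ground-echo regions; the CLBP variant and the radar-specific preprocessing are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def glcm_features(patch):
    """Contrast, homogeneity, energy and correlation of an 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

def synthetic_patch(noisy):
    """Smooth gradient patch, optionally corrupted with strong speckle-like noise."""
    base = np.tile(np.linspace(0, 200, 32, dtype=np.uint8), (32, 1))
    if noisy:
        base = np.clip(base + rng.integers(-60, 60, base.shape), 0, 255).astype(np.uint8)
    return base

X = np.array([glcm_features(synthetic_patch(noisy=i % 2 == 1)) for i in range(200)])
y = np.arange(200) % 2
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```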

  7. Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach

    DOE PAGES

    Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim; ...

    2017-07-20

    The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.

  8. Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim

    The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.

  9. Scalable non-negative matrix tri-factorization.

    PubMed

    Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž

    2017-01-01

    Matrix factorization is a well-established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100-times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
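
    For orientation, the serial core of non-negative matrix tri-factorization can be written as a few lines of multiplicative updates; a rough sketch follows (random data, no convergence checks). The block-wise partitioning and multi-GPU execution that the paper contributes are not shown here.

```python
import numpy as np

rng = np.random.default_rng(4)

def nmtf(X, k1, k2, iters=500, eps=1e-9):
    """Plain multiplicative updates for X ~ F S G^T with non-negative factors."""
    n, m = X.shape
    F = rng.random((n, k1))
    S = rng.random((k1, k2))
    G = rng.random((m, k2))
    for _ in range(iters):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G

X = rng.random((100, 80))                 # toy non-negative data matrix
F, S, G = nmtf(X, k1=5, k2=4)
print("relative error:", np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X))
```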

  10. Precision of proportion estimation with binary compressed Raman spectrum.

    PubMed

    Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric

    2018-01-01

    The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramer-Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer-Rao bound). Evolutions of the Cramer-Rao bound are analyzed when the measuring times are optimized or when the considered proportion for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this considered proportion are also analyzed for the binary filter synthesis.

  11. Ab-initio study of liquid systems: Concentration dependence of electrical resistivity of binary liquid alloy Rb1-xCsx

    NASA Astrophysics Data System (ADS)

    Thakur, Anil; Sharma, Nalini; Chandel, Surjeet; Ahluwalia, P. K.

    2013-02-01

    The electrical resistivity (ρL) of Rb1-XCsX binary alloys has been calculated using Troullier-Martins ab-initio pseudopotentials. The present results for the electrical resistivity (ρL) of Rb1-XCsX binary alloys have been found to be in good agreement with the experimental results. These results suggest that the ab-initio approach for calculating electrical resistivity is quite successful in explaining the electronic transport properties of binary liquid alloys. Hence ab-initio pseudopotentials can be used instead of model pseudopotentials, which suffer from transferability problems.

  12. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  13. Gravitational interactions of stars with supermassive black hole binaries. I. Tidal disruption events

    NASA Astrophysics Data System (ADS)

    Darbha, Siva; Coughlin, Eric R.; Kasen, Daniel; Quataert, Eliot

    2018-04-01

    Stars approaching supermassive black holes (SMBHs) in the centers of galaxies can be torn apart by strong tidal forces. We study the physics of tidal disruption by a circular, binary SMBH as a function of the binary mass ratio q = M2/M1 and separation a, exploring a large set of points in the parameter range q ∈ [0.01, 1] and a/r_t1 ∈ [10, 1000]. We simulate encounters in which field stars approach the binary from the loss cone on parabolic, low angular momentum orbits. We present the rate of disruption and the orbital properties of the disrupted stars, and examine the fallback dynamics of the post-disruption debris in the "frozen-in" approximation. We conclude by calculating the time-dependent disruption rate over the lifetime of the binary. Throughout, we use a primary mass M1 = 10^6 M⊙ as our central example. We find that the tidal disruption rate is a factor of ˜2-7 times larger than the rate for an isolated BH, and is independent of q for q ≳ 0.2. In the "frozen-in" model, disruptions from close, nearly equal mass binaries can produce intense tidal fallbacks: for binaries with q ≳ 0.2 and a/r_t1 ˜ 100, roughly ˜18-40% of disruptions will have short rise times (t_rise ˜ 1-10 d) and highly super-Eddington peak return rates (Ṁ_peak/Ṁ_Edd ˜ 2 × 10^2-3 × 10^3).

  14. Gravitational interactions of stars with supermassive black hole binaries - I. Tidal disruption events

    NASA Astrophysics Data System (ADS)

    Darbha, Siva; Coughlin, Eric R.; Kasen, Daniel; Quataert, Eliot

    2018-07-01

    Stars approaching supermassive black holes (SMBHs) in the centres of galaxies can be torn apart by strong tidal forces. We study the physics of tidal disruption by a circular, binary SMBH as a function of the binary mass ratio q = M2/M1 and separation a, exploring a large set of points in the parameter range q ∈ [0.01, 1] and a/r_t1 ∈ [10, 1000]. We simulate encounters in which field stars approach the binary from the loss cone on parabolic, low angular momentum orbits. We present the rate of disruption and the orbital properties of the disrupted stars, and examine the fallback dynamics of the post-disruption debris in the `frozen-in' approximation. We conclude by calculating the time-dependent disruption rate over the lifetime of the binary. Throughout, we use a primary mass M1 = 10^6 M⊙ as our central example. We find that the tidal disruption rate is a factor of ˜2-7 times larger than the rate for an isolated BH, and is independent of q for q ≳ 0.2. In the `frozen-in' model, disruptions from close, nearly equal mass binaries can produce intense tidal fallbacks: for binaries with q ≳ 0.2 and a/r_t1 ˜ 100, roughly ˜18-40 per cent of disruptions will have short rise times (t_rise ˜ 1-10 d) and highly super-Eddington peak return rates (Ṁ_peak/Ṁ_Edd ˜ 2 × 10^2-3 × 10^3).

  15. An O(log^2 N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1989-01-01

    An O(log^2 N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
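
    The serial building block behind such solvers is easy to show: the three-term recurrence for the leading principal minors of T - xI yields a Sturm count (how many eigenvalues lie below x), and bisection on that count isolates each eigenvalue. The sketch below uses the standard ratio form of the recurrence for numerical stability; the binary-tree parallel construction and the O(log^2 N) complexity of the paper are not reproduced.

```python
import numpy as np

def sturm_count(a, b, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with diagonal a
    and off-diagonal b that are smaller than x."""
    count, q = 0, 1.0
    for k in range(len(a)):
        q = a[k] - x - ((b[k - 1] ** 2) / q if k > 0 else 0.0)
        if q == 0.0:
            q = -1e-300                      # treat an exact zero as negative
        if q < 0:
            count += 1
    return count

def eigenvalue(a, b, i, tol=1e-12):
    """i-th smallest eigenvalue (0-based) by bisection on the Sturm count."""
    r = np.max(np.abs(a)) + 2 * np.max(np.abs(b)) if len(b) else np.abs(a[0])
    lo, hi = -r, r                           # bound on the spectrum (Gershgorin)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(a, b, mid) <= i:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = np.array([2.0, 1.0, 3.0, 0.5])           # toy diagonal
b = np.array([1.0, 0.5, 0.2])                # toy off-diagonal
vals = [eigenvalue(a, b, i) for i in range(len(a))]
print(np.allclose(vals, np.linalg.eigvalsh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))))
```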

  16. Optical computing using optical flip-flops in Fourier processors: use in matrix multiplication and discrete linear transforms.

    PubMed

    Ando, S; Sekine, S; Mita, M; Katsuo, S

    1989-12-15

    An architecture and algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed based on residue arithmetic. The proposed system is capable of processing all elements of matrices in parallel by utilizing the information-retrieving ability of optical Fourier processors. The employment of OFFs enables bidirectional data flow, leading to a simpler architecture, and the burden of residue-to-decimal (or residue-to-binary) conversion on operation time can be largely reduced by processing all elements in parallel. The calculated characteristics of operation time suggest a promising use of the system in real-time 2-D linear transforms.
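
    The residue-arithmetic idea itself can be illustrated independently of the optics: an integer matrix product is computed channel by channel modulo several coprime bases, and the result is recovered with the Chinese remainder theorem. The sketch below uses made-up moduli and matrices and, of course, does not model the optical flip-flop hardware.

```python
import numpy as np
from math import prod

moduli = (5, 7, 9, 11, 13)                      # pairwise coprime bases (illustrative)
M = prod(moduli)

def crt(residues):
    """Reconstruct an integer in [0, M) from its residues via the Chinese remainder theorem."""
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += int(r) * Mi * pow(Mi, -1, m)       # pow(..., -1, m) is the modular inverse
    return x % M

A = np.array([[3, 1], [2, 4]])
B = np.array([[5, 2], [1, 3]])

channel_results = [(A % m) @ (B % m) % m for m in moduli]   # independent residue channels
C = np.vectorize(lambda *rs: crt(rs))(*channel_results)
print(C)
print(np.array_equal(C, A @ B))                  # True as long as entries stay below M
```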

  17. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

    Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily of count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
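
    A plain Gaussian-copula (NORTA-style) generator conveys the flavor of the task. Note that, without the correlation adjustment such papers develop, the realized correlations among the transformed margins only approximate the matrix imposed on the latent normal draw, and a plain Poisson margin is used here instead of the generalized Poisson.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Target correlation for the latent multivariate normal draw (illustrative).
target_corr = np.array([[1.0, 0.4, 0.3],
                        [0.4, 1.0, 0.2],
                        [0.3, 0.2, 1.0]])
n = 10_000
z = rng.multivariate_normal(np.zeros(3), target_corr, size=n)
u = stats.norm.cdf(z)                                   # uniform margins

binary = (u[:, 0] < 0.3).astype(int)                    # Bernoulli(0.3)
count = stats.poisson.ppf(u[:, 1], mu=4).astype(int)    # Poisson(4)
cont = stats.norm.ppf(u[:, 2], loc=10, scale=2)         # Normal(10, 2)

sample = np.column_stack([binary, count, cont])
print(np.round(np.corrcoef(sample, rowvar=False), 2))   # realized (approximate) correlations
```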

  18. An experimental study of energy dependence of saturation thickness of multiply scattered gamma rays in binary alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Gurvinderjit; Singh, Bhajan, E-mail: bhajan2k1@yahoo.co.in; Sandhu, B. S.

    2015-08-28

    The present measurements are carried out to investigate the multiple scattering of 662 keV gamma photons emerging from targets of binary alloys (brass and soldering material). The scattered photons are detected by a 51 mm × 51 mm NaI(Tl) scintillation detector, whose response unscrambling, converting the observed pulse-height distribution to a true photon energy spectrum, is obtained with the help of a 10 × 10 inverse response matrix. The number of multiply scattered events, having the same energy as in the singly scattered distribution, first increases with target thickness and then saturates. The application of the response function of the scintillation detector does not result in any change of the measured saturation thickness. Monte Carlo calculations support the present experimental results.

  19. Matrix isolation infrared and Raman spectra of binary and mixed group II B fluorides

    NASA Astrophysics Data System (ADS)

    Givan, A.; Loewenschuss, A.

    1980-03-01

    Infrared and Raman spectra of all MF2 and MFX molecules (M=Zn, Cd, Hg; X=Cl, Br) and the infrared spectrum of the fluoroiodide HgFI isolated in solid krypton at 20 K are reported. The MFX species were formed in a vapor mixture of the appropriate MF2 and MX2 dihalides, vaporized, at different temperatures, from separate compartments of a double-oven crucible. The spectra are the first experimental evidence for the existence of the molecular fluorohalides. All three fundamentals of the MF2 molecules and the two stretching mode frequencies of the MFX molecules are assigned. Harmonic force constants are evaluated and isotope effects are used to discuss their geometry. Thermodynamic functions are tabulated for the binary difluorides.

  20. Study of the validity of a job-exposure matrix for psychosocial work factors: results from the national French SUMER survey.

    PubMed

    Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres

    2008-10-01

    To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model using national representative data of the French working population. National sample of 24,486 men and women who filled in the Job Content Questionnaire (JCQ) by Karasek measuring the scores of psychological demands, decision latitude, and social support (individual scores) in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity that were coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude and support for each job title: JEM scores (mean of the individual score) and JEM binary exposures (JEM score dichotomized at the median). The analysis of the variance of the individual scores of demands, latitude, and support explained by occupations and economic activities, of the correlation and agreement between individual measures and JEM measures, and of the sensitivity and specificity of JEM exposures, as well as the study of the associations with self-reported health showed a low validity of JEM measures for psychological demands and social support, and a relatively higher validity for decision latitude compared with individual measures. Job-exposure matrix measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEM for psychosocial work factors.

  1. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.
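
    As a baseline for what converting a complex hologram into binary format can mean, the sketch below computes a Fourier hologram of a toy object (with a random diffuser phase), binarizes it by thresholding the real part (the simplest noniterative option), and checks the reconstruction with an inverse FFT. The depth-layer composition and the modified direct-binary-search algorithm proposed in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0                      # toy object: a bright square

# Fourier hologram of the object with a random diffuser phase
field = target * np.exp(2j * np.pi * rng.random(target.shape))
hologram = np.fft.fft2(field)

binary_hologram = (hologram.real > 0).astype(float)   # threshold binarization

recon = np.abs(np.fft.ifft2(binary_hologram))
recon /= recon.max()
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(f"correlation between reconstruction and target: {corr:.2f}")
```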

  2. Solidification and microstructures of binary ice-I/hydrate eutectic aggregates

    USGS Publications Warehouse

    McCarthy, C.; Cooper, R.F.; Kirby, S.H.; Rieck, K.D.; Stern, L.A.

    2007-01-01

    The microstructures of two-phase binary aggregates of ice-I + salt-hydrate, prepared by eutectic solidification, have been characterized by cryogenic scanning electron microscopy (CSEM). The specific binary systems studied were H2O-Na2SO4, H2O-MgSO4, H2O-NaCl, and H2O-H2SO4; these were selected based on their potential application to the study of tectonics on the Jovian moon Europa. Homogeneous liquid solutions of eutectic compositions were undercooled modestly (??T - 1-5 ??C); similarly cooled crystalline seeds of the same composition were added to circumvent the thermodynamic barrier to nucleation and to control eutectic growth under (approximately) isothermal conditions. CSEM revealed classic eutectic solidification microstructures with the hydrate phase forming continuous lamellae, discontinuous lamellae, or forming the matrix around rods of ice-I, depending on the volume fractions of the phases and their entropy of dissolving and forming a homogeneous aqueous solution. We quantify aspects of the solidification behavior and microstructures for each system and, with these data articulate anticipated effects of the microstructure on the mechanical responses of the materials.

  3. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
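
    The reduction can be made concrete in a few lines: every ordinal example is expanded into K-1 binary questions "is the rank above k?", one binary classifier is trained on the pooled extended examples, and a rank is recovered by counting positive answers. The sketch below uses synthetic data, scikit-learn's logistic regression as the binary learner, and uniform weights, so it omits the cost-sensitive weighting that the framework actually prescribes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

K = 5                                                          # number of ordinal ranks
X = rng.normal(size=(400, 3))
# Synthetic ordinal labels in {1..K} from a noisy linear score
y = np.clip(np.digitize(X @ np.array([1.0, -0.5, 0.3]) + 0.3 * rng.normal(size=400),
                        [-1.0, -0.3, 0.3, 1.0]) + 1, 1, K)

def extend(X, y=None):
    """Expand each example into K-1 extended examples, one per threshold k."""
    rows, labels = [], []
    for i, x in enumerate(X):
        for k in range(1, K):
            onehot = np.eye(K - 1)[k - 1]
            rows.append(np.concatenate([x, onehot]))
            if y is not None:
                labels.append(int(y[i] > k))
    return np.array(rows), (np.array(labels) if y is not None else None)

Xe, ye = extend(X, y)
clf = LogisticRegression(max_iter=1000).fit(Xe, ye)            # single binary classifier

def predict_rank(X):
    Xe, _ = extend(X)
    votes = clf.predict(Xe).reshape(len(X), K - 1)
    return 1 + votes.sum(axis=1)                               # count positive answers

print("mean absolute rank error:", np.mean(np.abs(predict_rank(X) - y)))
```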

  4. Biomimetic porous high-density polyethylene/polyethylene-grafted-maleic anhydride scaffold with improved in vitro cytocompatibility.

    PubMed

    Sharma, Swati; Bhaskar, Nitu; Bose, Surjasarathi; Basu, Bikaramjit

    2018-05-01

    A major challenge for tissue engineering is to design and to develop a porous biocompatible scaffold, which can mimic the properties of natural tissue. As a first step towards this endeavour, we here demonstrate a distinct methodology in biomimetically synthesized porous high-density polyethylene scaffolds. Co-extrusion approach was adopted, whereby high-density polyethylene was melt mixed with polyethylene oxide to form an immiscible binary blend. Selective dissolution of polyethylene oxide from the biphasic system revealed droplet-matrix-type morphology. An attempt to stabilize such morphology against thermal and shear effects was made by the addition of polyethylene-grafted-maleic anhydride as a compatibilizer. A maximum ultimate tensile strength of 7 MPa and elastic modulus of 370 MPa were displayed by the high-density polyethylene/polyethylene oxide binary blend with 5% maleated polyethylene during uniaxial tensile loading. The cell culture experiments with murine myoblast C2C12 cell line indicated that compared to neat high-density polyethylene and high-density polyethylene/polyethylene oxide, the high-density polyethylene/polyethylene oxide with 5% polyethylene-grafted-maleic anhydride scaffold significantly increased muscle cell attachment and proliferation with distinct elongated threadlike appearance and highly stained nuclei, in vitro. This has been partly attributed to the change in surface wettability property with a reduced contact angle (∼72°) for 5% PE-g-MA blends. These findings suggest that the high-density polyethylene/polyethylene oxide with 5% polyethylene-grafted-maleic anhydride can be treated as a cell growth substrate in bioengineering applications.

  5. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  6. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
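
    Since the abstract frames the task as a generalization of matrix embedding, the classic (7,4) Hamming example is a useful reference point: three message bits are embedded into seven cover bits by flipping at most one of them, and the receiver extracts the message as a syndrome. The sketch below shows only this baseline; the convolutional-code/Viterbi quantizer that the paper actually proposes is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j (row 0 = least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def embed(cover, message):
    """Return a stego block whose syndrome H x equals the 3-bit message."""
    syndrome = (H @ cover + message) % 2              # what still needs to change
    col = int("".join(map(str, syndrome[::-1])), 2)   # index of the matching column
    stego = cover.copy()
    if col:                                           # col == 0 means no change needed
        stego[col - 1] ^= 1
    return stego

cover = rng.integers(0, 2, 7)
message = rng.integers(0, 2, 3)
stego = embed(cover, message)
print("extracted:", (H @ stego) % 2, "intended:", message,
      "changes:", int(np.sum(cover != stego)))
```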

  7. Artistic image analysis using graph-based learning approaches.

    PubMed

    Carneiro, Gustavo

    2013-08-01

    We introduce a new methodology for the problem of artistic image analysis, which among other tasks, involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing the similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation that is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to a more efficient inference and training procedures. This experiment is run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.

  8. Corrigendum to "Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer" [J. Magn. Reson. 230 (2013) 88-97

    NASA Astrophysics Data System (ADS)

    Müller, Dirk K.; Pampel, André; Möller, Harald E.

    2015-12-01

    In the print version of this article initially published, reference to a funding source was missing. The following information should be added to the Acknowledgements section: This work was funded (in part) by the Helmholtz Alliance ICEMED-Imaging and Curing Environmental Metabolic Diseases, through the Initiative and Networking Fund of the Helmholtz Association.

  9. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    PubMed

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on-all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.

  10. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol

    PubMed Central

    Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on—all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157
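
    The general flavor of fixed-start multi-pattern matching can be conveyed with a toy example: patterns are bucketed by length in hash sets, and a URL is tested against each prefix length that occurs. This illustrates the hashing ingredient only; the binary-table stage and other engineering details of the MH algorithm are not reproduced, and the pattern set is made up.

```python
from collections import defaultdict

patterns = ["/api/v1/", "/static/", "/login", "/admin/console"]  # illustrative pattern set

buckets = defaultdict(set)
for p in patterns:
    buckets[len(p)].add(p)            # Python strings hash natively
lengths = sorted(buckets)

def match_prefix(url):
    """Return the matched pattern (longest first) or None."""
    for n in reversed(lengths):
        if len(url) >= n and url[:n] in buckets[n]:
            return url[:n]
    return None

print(match_prefix("/api/v1/users?id=7"))   # -> "/api/v1/"
print(match_prefix("/images/logo.png"))     # -> None
```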

  11. Study on relationship between pollen exine ornamentation pattern and germplasm evolution in flowering crabapple

    PubMed Central

    Zhang, Wang-Xiang; Zhao, Ming-Ming; Fan, Jun-Jun; Zhou, Ting; Chen, Yong-Xia; Cao, Fu-Liang

    2017-01-01

    Pollen ornamentation patterns are important in the study of plant genetic evolution and systematic taxonomy. However, they are normally difficult to quantify. Based on observations of pollen exine ornamentation characteristics of 128 flowering crabapple germplasms (44 natural species and 84 varieties), three qualitative variables with binary properties (Xi: regularity of pollen exine ornamentation; Yi: scope of ornamentation arrangement regularity; Zi: ornamentation arrangement patterns) were extracted to establish a binary three-dimensional data matrix (Xi Yi Zi), and the matrix data were converted to decimal data through weight assignment, which facilitated the unification of qualitative analysis and quantitative analysis. The result indicates that from species population to variety population and from parent population to variety population, the exine ornamentation in all three dimensions presents the evolutionary trend of regular → irregular, wholly regular → partially regular, and single pattern → multiple patterns. Regarding the evolutionary degree, the regularity of ornamentation was significantly lower in both the variety population and progeny population, with a degree of decrease 0.82–1.27 times that of the regularity range of R-type ornamentation. In addition, the evolutionary degree significantly increased along Xi → Yi → Zi. The result also has certain reference value for defining the taxonomic status of Malus species. PMID:28059122

  12. Dynamics of asymmetric binary glass formers. I. A dielectric and nuclear magnetic resonance spectroscopy study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahlau, R.; Bock, D.; Schmidtke, B.

    2014-01-28

    Dielectric spectroscopy as well as ²H and ³¹P nuclear magnetic resonance spectroscopy (NMR) are applied to probe the component dynamics of the binary glass former tripropyl phosphate (TPP)/polystyrene (PS/PS-d3) in the full concentration (c_TPP) range. In addition, depolarized light scattering and differential scanning calorimetry experiments are performed. Two glass transition temperatures are found: T_g1(c_TPP) reflects PS dynamics and shows a monotonic plasticizer effect, while the lower T_g2(c_TPP) exhibits a maximum and is attributed to (faster) TPP dynamics, occurring in a slowly moving or immobilized PS matrix. Dielectric spectroscopy probing solely TPP identifies two different time scales, which are attributed to two sub-ensembles. One of them, again, shows fast TPP dynamics (α_2-process), the other (α_1-process) displays time constants identical with those of the slow PS matrix. Upon heating the α_1-fraction of TPP decreases until above some temperature T_c only a single α_2-population exists. Inversely, below T_c a fraction of the TPP molecules is trapped by the PS matrix. At low c_TPP the α_2-relaxation does not follow frequency-temperature superposition (FTS), instead it is governed by a temperature independent distribution of activation energies leading to correlation times which follow Arrhenius laws, i.e., the α_2-relaxation resembles a secondary process. Yet, ³¹P NMR demonstrates that it involves isotropic reorientations of TPP molecules within a slowly moving or rigid matrix of PS. At high c_TPP the super-Arrhenius temperature dependence of τ_2(T), as well as FTS, are recovered, known as typical of the glass transition in neat systems.

  13. Modeling for Matrix Multicracking Evolution of Cross-ply Ceramic-Matrix Composites Using Energy Balance Approach

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

    The matrix multicracking evolution of cross-ply ceramic-matrix composites (CMCs) has been investigated using energy balance approach. The multicracking of cross-ply CMCs was classified into five modes, i.e., (1) mode 1: transverse multicracking; (2) mode 2: transverse multicracking and matrix multicracking with perfect fiber/matrix interface bonding; (3) mode 3: transverse multicracking and matrix multicracking with fiber/matrix interface debonding; (4) mode 4: matrix multicracking with perfect fiber/matrix interface bonding; and (5) mode 5: matrix multicracking with fiber/matrix interface debonding. The stress distributions of four cracking modes, i.e., mode 1, mode 2, mode 3 and mode 5, are analysed using shear-lag model. The matrix multicracking evolution of mode 1, mode 2, mode 3 and mode 5, has been determined using energy balance approach. The effects of ply thickness and fiber volume fraction on matrix multicracking evolution of cross-ply CMCs have been investigated.

  14. Lifetime of binary asteroids versus gravitational encounters and collisions

    NASA Technical Reports Server (NTRS)

    Chauvineau, Bertrand; Farinella, Paolo; Mignard, F.

    1992-01-01

    We investigate the effect on the dynamics of a binary asteroid in the case of a near encounter with a third body. The dynamics of the binary is modeled as a two-body problem perturbed by an approaching body in the following ways: near encounters and collisions with a component of the system. In each case, the typical value of the two-body energy variation is estimated, and a random walk for the cumulative effect is assumed. Results are applied to some binary asteroid candidates. The main conclusion is that the collisional disruption is the dominant effect, giving lifetimes comparable to or larger than the age of the solar system.

  15. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.

  16. NEUROBEHAVIORAL EVALUATIONS OF BINARY AND TERTIARY MIXTURES OF CHEMICALS: LESSONS LEARNED.

    EPA Science Inventory

    The classical approach to the statistical analysis of binary chemical mixtures is to construct full dose-response curves for one compound in the presence of a range of doses of the second compound (isobolographic analyses). For interaction studies using more than two chemicals, ...

  17. Wave transmission approach based on modal analysis for embedded mechanical systems

    NASA Astrophysics Data System (ADS)

    Cretu, Nicolae; Nita, Gelu; Ioan Pop, Mihail

    2013-09-01

    An experimental method for determining the phase velocity in small solid samples is proposed. The method is based on measuring the resonant frequencies of a binary or ternary solid elastic system comprising the small sample of interest and a gauge material of manageable size. The wave transmission matrix of the combined system is derived and the theoretical values of its eigenvalues are used to determine the expected eigenfrequencies that, equated with the measured values, allow for the numerical estimation of the phase velocities in both materials. The known phase velocity of the gauge material is then used to assess the accuracy of the method. Using computer simulation and the experimental values for phase velocities, the theoretical values for the eigenfrequencies of the eigenmodes of the embedded elastic system are obtained, to validate the method. We conclude that the proposed experimental method may be reliably used to determine the elastic properties of small solid samples whose geometries do not allow a direct measurement of their resonant frequencies.

  18. Kernel analysis of partial least squares (PLS) regression models.

    PubMed

    Shinzawa, Hideyuki; Ritthiruangdej, Pitiporn; Ozaki, Yukihiro

    2011-05-01

    An analytical technique based on kernel matrix representation is demonstrated to provide further chemically meaningful insight into partial least squares (PLS) regression models. The kernel matrix condenses essential information about scores derived from PLS or principal component analysis (PCA). Thus, it becomes possible to establish the proper interpretation of the scores. A PLS model for the total nitrogen (TN) content in multiple Thai fish sauces is built with a set of near-infrared (NIR) transmittance spectra of the fish sauce samples. The kernel analysis of the scores effectively reveals that the variation of the spectral feature induced by the change in protein content is substantially associated with the total water content and the protein hydration. Kernel analysis is also carried out on a set of time-dependent infrared (IR) spectra representing transient evaporation of ethanol from a binary mixture solution of ethanol and oleic acid. A PLS model to predict the elapsed time is built with the IR spectra and the kernel matrix is derived from the scores. The detailed analysis of the kernel matrix provides penetrating insight into the interaction between the ethanol and the oleic acid.
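
    To make the kernel-matrix idea concrete, here is a hedged sketch using scikit-learn: PLS scores are extracted and their inner-product (Gram) matrix is formed. Treating the kernel as the Gram matrix of the score vectors is an assumption for illustration; the paper's exact construction may differ, and the data below are random placeholders.

```python
# Hedged sketch: building a Gram ("kernel") matrix from PLS scores with
# scikit-learn. The random data stand in for spectra and reference values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))   # e.g. 30 NIR spectra sampled at 200 wavelengths
y = rng.normal(size=(30, 1))     # e.g. total nitrogen content of each sample

pls = PLSRegression(n_components=3)
pls.fit(X, y)
T = pls.transform(X)             # sample scores (30 x 3)

K = T @ T.T                      # kernel (Gram) matrix condensing the scores
print(K.shape)                   # (30, 30)
```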

  19. A comparative study for chest radiograph image retrieval using binary texture and deep learning classification.

    PubMed

    Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit

    2015-08-01

    In this work various approaches are investigated for X-ray image retrieval and specifically chest pathology retrieval. Given a query image taken from a data set of 443 images, the objective is to rank images according to similarity. Different features, including binary features, texture features, and deep learning (CNN) features are examined. In addition, two approaches are investigated for the retrieval task. One approach is based on the distance of image descriptors using the above features (hereon termed the "descriptor"-based approach); the second approach ("classification"-based approach) is based on a probability descriptor, generated by a pair-wise classification of each two classes (pathologies) and their decision values using an SVM classifier. Best results are achieved using deep learning features in a classification scheme.

  20. Characterization of coronary plaque regions in intravascular ultrasound images using a hybrid ensemble classifier.

    PubMed

    Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Shin, Eun Seok; Kim, Sung Min

    2018-01-01

    The purpose of this study was to propose a hybrid ensemble classifier to characterize coronary plaque regions in intravascular ultrasound (IVUS) images. Pixels were allocated to one of four tissues (fibrous tissue (FT), fibro-fatty tissue (FFT), necrotic core (NC), and dense calcium (DC)) through processes of border segmentation, feature extraction, feature selection, and classification. Grayscale IVUS images and their corresponding virtual histology images were acquired from 11 patients with known or suspected coronary artery disease using a 20 MHz catheter. A total of 102 hybrid textural features including first order statistics (FOS), gray level co-occurrence matrix (GLCM), extended gray level run-length matrix (GLRLM), Laws, local binary pattern (LBP), intensity, and discrete wavelet features (DWF) were extracted from IVUS images. To select optimal feature sets, a genetic algorithm was implemented. A hybrid ensemble classifier based on histogram and texture information was then used for plaque characterization in this study. The optimal feature set was used as the input to this ensemble classifier. After tissue characterization, parameters including sensitivity, specificity, and accuracy were calculated to validate the proposed approach. A ten-fold cross validation approach was used to determine the statistical significance of the proposed method. Our experimental results showed that the proposed method had reliable performance for tissue characterization in IVUS images. The hybrid ensemble classification method outperformed other existing methods by achieving characterization accuracies of 81% for FFT and 75% for NC. In addition, this study showed that Laws features (SSV and SAV) were key indicators for coronary tissue characterization. The proposed method had high clinical applicability for image-based tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Assessing REDD+ performance of countries with low monitoring capacities: the matrix approach

    NASA Astrophysics Data System (ADS)

    Bucki, M.; Cuypers, D.; Mayaux, P.; Achard, F.; Estreguil, C.; Grassi, G.

    2012-03-01

    Estimating emissions from deforestation and degradation of forests in many developing countries is so uncertain that the effects of changes in forest management could remain within error ranges (i.e. undetectable) for several years. Meanwhile UNFCCC Parties need consistent time series of meaningful performance indicators to set credible benchmarks and allocate REDD+ incentives to the countries, programs and activities that actually reduce emissions, while providing social and environmental benefits. Introducing widespread measurement of carbon in forest land (which would be required to estimate more accurately changes in emissions from degradation and forest management) will take time and considerable resources. To ensure the overall credibility and effectiveness of REDD+, Parties must consider the design of cost-effective systems which can provide reliable and comparable data on anthropogenic forest emissions. Remote sensing can provide consistent time series of land cover maps for most non-Annex-I countries, retrospectively. These maps can be analyzed to identify the forests that are intact (i.e. beyond significant human influence), and whose fragmentation could be a proxy for degradation. This binary stratification of forest biomes (intact/non-intact), a transition matrix and the use of default carbon stock change factors can then be used to provide initial estimates of trends in emission changes. A proof-of-concept is provided for one biome of the Democratic Republic of the Congo over a virtual commitment period (2005-2010). This approach could allow assessment of the performance of the five REDD+ activities (deforestation, degradation, conservation, management and enhancement of forest carbon stocks) in a spatially explicit, verifiable manner. Incentives could then be tailored to prioritize activities depending on the national context and objectives.
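
    The arithmetic behind the matrix approach can be sketched as follows: multiply the area of each stratum-to-stratum transition by a default carbon stock change factor and sum. All numbers below (areas, stocks) are invented placeholders for illustration only, not values from the study.

```python
# Illustrative sketch only: gross emissions from an intact/non-intact/non-forest
# transition matrix and assumed carbon stock change factors.
import numpy as np

# Stratum order: intact forest, non-intact forest, non-forest.
# Area (kha) that moved from stratum i (2005) to stratum j (2010).
transition_area = np.array([
    [900.0,  60.0, 15.0],
    [  0.0, 400.0, 40.0],
    [  0.0,   0.0,  0.0],
])

# Assumed carbon stock (tC/ha) of each stratum; the loss on transition i -> j
# is stock[i] - stock[j], converted to CO2 with the 44/12 mass ratio.
stock = np.array([180.0, 110.0, 10.0])
loss_per_ha = np.maximum(stock[:, None] - stock[None, :], 0.0)
emissions_ktCO2 = (transition_area * loss_per_ha * 44.0 / 12.0).sum()
print(f"estimated gross emissions: {emissions_ktCO2:.0f} ktCO2")
```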

  2. A simple transferable adaptive potential to study phase separation in large-scale xMgO-(1-x)SiO2 binary glasses.

    PubMed

    Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried

    2015-10-21

    A simple transferable adaptive model is developed that allows, for the first time, molecular dynamics simulation of the separation of large phases in the MgO-SiO2 binary system, as experimentally observed and as predicted by the phase diagram, meaning that the separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to an interpretation involving the formation of pure clusters or invoking the modified random network model. Our adaptive model, efficient in reproducing known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich Si-poor nanoparticles in an Mg-poor Si-rich matrix from a 0.1MgO-0.9SiO2 melt.

  3. Equivalent crystal theory of alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John

    1991-01-01

    Equivalent Crystal Theory (ECT) is a new, semi-empirical approach to calculating the energetics of a solid with defects. The theory has successfully reproduced surface energies in metals and semiconductors. The theoretical treatment of binary alloys to date, with both first-principles and semi-empirical models, has not been very successful in predicting the energetics of alloys. This procedure is used to predict the heats of formation, cohesive energy, and lattice parameter of binary alloys of Cu, Ni, Al, Ag, Au, Pd, and Pt as functions of composition. The procedure accurately reproduces the heats of formation versus composition curves for a variety of binary alloys. The results are then compared with other approaches such as the embedded atom method, and a procedure for predicting the lattice parameters of alloys from pure metal properties more accurately than Vegard's law is presented.

  4. Elasticity Dominated Surface Segregation of Small Molecules in Polymer Mixtures

    NASA Astrophysics Data System (ADS)

    Krawczyk, Jarosław; Croce, Salvatore; McLeish, T. C. B.; Chakrabarti, Buddhapriya

    2016-05-01

    We study the phenomenon of migration of the small molecular weight component of a binary polymer mixture to the free surface using mean field and self-consistent field theories. By proposing a free energy functional that incorporates polymer-matrix elasticity explicitly, we compute the migrant volume fraction and show that it decreases significantly as the sample rigidity is increased. A wetting transition, observed for high values of the miscibility parameter can be prevented by increasing the matrix rigidity. Estimated values of the bulk modulus suggest that the effect should be observable experimentally for rubberlike materials. This provides a simple way of controlling surface migration in polymer mixtures and can play an important role in industrial formulations, where surface migration often leads to decreased product functionality.

  5. IA and PA network-based computation of coordinating combat behaviors in the military MAS

    NASA Astrophysics Data System (ADS)

    Xia, Zuxun; Fang, Huijia

    2004-09-01

    In a military multi-agent system, every agent needs to analyze the dependency and temporal relations among its tasks or combat behaviors in order to work out its plans and obtain correct behavior sequences; this guarantees good coordination, avoids unexpected damage, and guards against losing the chance of winning a battle because of incorrect scheduling and conflicts. In this paper, an IA and PA network based computation of coordinating combat behaviors is put forward, with particular emphasis on using a 5x5 matrix to represent and compute the temporal binary relation (between two interval-events, two point-events, or between one interval-event and one point-event); this matrix method makes the coordination computation more convenient than before.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, S.; Schaffer, J. E.; Ren, Y.

    Room temperature deformation of a Ni{sub 46.7}Ti{sub 42.8}Nb{sub 10.5} alloy was studied by in-situ synchrotron X-ray diffraction. Compared to binary NiTi alloy, the Nb dissolved in the matrix significantly increased the onset stress for Stress-Induced Martensite Transformation (SIMT). The secondary phase, effectively a Nb-nanowire dispersion in a NiTi-Nb matrix, increased the elastic stiffness of the bulk material, reduced the strain anisotropy in austenite families by load sharing during SIMT, and increased the stress hysteresis by resisting reverse phase transformation during unloading. The stress hysteresis can be controlled over a wide range by heat treatment through its influences on the residual stress of the Nb-nanowire dispersion and the stability of the austenite.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, S.; Schaffer, J. E.; Ren, Y.

    Room temperature deformation of a Ni46.7Ti42.8Nb10.5 alloy was studied by in-situ synchrotron X-ray diffraction. Compared to binary NiTi alloy, the Nb dissolved in the matrix significantly increased the onset stress for Stress-Induced Martensite Transformation (SIMT). The secondary phase, effectively a Nb-nanowire dispersion in a NiTi-Nb matrix, increased the elastic stiffness of the bulk material, reduced the strain anisotropy in austenite families by load sharing during SIMT, and increased the stress hysteresis by resisting reverse phase transformation during unloading. The stress hysteresis can be controlled over a wide range by changing the heat treatment temperature through its influences on the residual stress-strain state of the Nb-nanowire dispersion.

  8. SECULAR EVOLUTION OF BINARIES NEAR MASSIVE BLACK HOLES: FORMATION OF COMPACT BINARIES, MERGER/COLLISION PRODUCTS AND G2-LIKE OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prodan, Snezana; Antonini, Fabio; Perets, Hagai B., E-mail: sprodan@cita.utoronto.ca, E-mail: antonini@cita.utoronto.ca

    2015-02-01

    Here we discuss the evolution of binaries around massive black holes (MBHs) in nuclear stellar clusters. We focus on their secular evolution due to the perturbation by the MBHs, while simplistically accounting for their collisional evolution. Binaries with highly inclined orbits with respect to their orbits around MBHs are strongly affected by secular processes, which periodically change their eccentricities and inclinations (e.g., Kozai-Lidov cycles). During periapsis approach, dissipative processes such as tidal friction may become highly efficient, and may lead to shrinkage of a binary orbit and even to its merger. Binaries in this environment can therefore significantly change their orbital evolution due to the MBH third-body perturbative effects. Such orbital evolution may impinge on their later stellar evolution. Here we follow the secular dynamics of such binaries and its coupling to tidal evolution, as well as the stellar evolution of such binaries on longer timescales. We find that stellar binaries in the central parts of nuclear stellar clusters (NSCs) are highly likely to evolve into eccentric and/or short-period binaries, and become strongly interacting binaries either on the main sequence (at which point they may even merge), or through their later binary stellar evolution. The central parts of NSCs therefore catalyze the formation and evolution of strongly interacting binaries, and lead to the enhanced formation of blue stragglers, X-ray binaries, gravitational wave sources, and possible supernova progenitors. Induced mergers/collisions may also lead to the formation of G2-like cloud-like objects such as the one recently observed in the Galactic center.

  9. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state-of-the-arts.
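
    The three code-level constraints described above (quantization loss, bit balance, bit independence) can be sketched numerically. The snippet below is only an illustration of those penalty terms on a batch of real-valued codes; the deep network, loss weighting, and training loop of the method are omitted, and the data are random stand-ins.

```python
# Hedged sketch of the three code-level penalties for deep hashing, computed
# for a batch of real-valued codes H (n samples x k bits) that a network
# would produce. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(64, 48))            # real-valued codes from the top layer
B = np.sign(H)                           # learned binary codes in {-1, +1}

quantization_loss = np.mean((B - H) ** 2)          # (1) keep H close to binary
bit_balance = np.mean(np.mean(H, axis=0) ** 2)     # (2) each bit roughly half +1 / half -1
corr = (H.T @ H) / H.shape[0]
bit_independence = np.mean((corr - np.diag(np.diag(corr))) ** 2)  # (3) decorrelate bits

print(quantization_loss, bit_balance, bit_independence)
```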

  10. An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

    NASA Astrophysics Data System (ADS)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2011-12-01

    Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and the second criteria simultaneously is not feasible, and a tradeoff between them is always definite. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.

  11. Aberration-free superresolution imaging via binary speckle pattern encoding and processing

    NASA Astrophysics Data System (ADS)

    Ben-Eliezer, Eyal; Marom, Emanuel

    2007-04-01

    We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while extending the depth of field (DOF) simultaneously is tested experimentally. It is based on an approach, recently proposed, shown to increase the resolution significantly for in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution synchronized time-varying binary speckle but with different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as DOF variations. Simulation as well as experimental results are presented, exhibiting significant resolution improvement factors.

  12. Searching Ultra-compact Pulsar Binaries with Abnormal Timing Behavior

    NASA Astrophysics Data System (ADS)

    Gong, B. P.; Li, Y. P.; Yuan, J. P.; Tian, J.; Zhang, Y. Y.; Li, D.; Jiang, B.; Li, X. D.; Wang, H. G.; Zou, Y. C.; Shao, L. J.

    2018-03-01

    Ultra-compact pulsar binaries are both ideal sources of gravitational radiation for gravitational wave detectors and laboratories for fundamental physics. However, the shortest orbital period of all radio pulsar binaries is currently 1.6 hr. The absence of pulsar binaries with a shorter orbital period is most likely due to technical limitations. This paper points out that a tidal effect occurring in pulsar binaries with a short orbital period can perturb the orbital elements and result in a significant change in orbital modulation, which dramatically reduces the sensitivity of the widely used acceleration search. Here a new search is proposed. The abnormal timing residual exhibited in a single pulse observation is simulated by a tidal effect occurring in an ultra-compact binary. The reproduction of the main features, represented by the sharp peaks displayed in the abnormal timing behavior, suggests that pulsars like PSR B0919+06 could be candidates for ultra-compact binaries with an orbital period of ∼10 minutes and a white dwarf companion. The binary nature of such a candidate is further tested by (1) comparing the predicted long-term binary effect with decades of observed timing noise and (2) observing the optical counterpart of the expected companion star. Test (1) likely supports our model, while more observations are needed in test (2). Some interesting ultra-compact binaries could be found in the near future by applying such a new approach to other binary candidates.

  13. Improving quantum state transfer efficiency and entanglement distribution in binary tree spin network through incomplete collapsing measurements

    NASA Astrophysics Data System (ADS)

    Behzadi, Naghi; Ahansaz, Bahram

    2018-04-01

    We propose a mechanism for quantum state transfer (QST) over a binary tree spin network on the basis of incomplete collapsing measurements. To this aim, we initially perform a weak measurement (WM) on the central qubit of the binary tree network, on which the state of concern has been prepared. After the time evolution of the whole system, a quantum measurement reversal (QMR) is performed on a chosen target qubit. By taking the optimal value for the strength of QMR, it is shown that the QST quality from the sending qubit to any typical target qubit on the binary tree is considerably improved in terms of the WM strength. Also, we show how high-quality entanglement distribution over the binary tree network is achievable using this approach.

  14. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered as a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when global color feature is integrated, our method yields competitive performance with the state-of-the-arts.

  15. Binary image encryption in a joint transform correlator scheme by aid of run-length encoding and QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong

    2018-07-01

    We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and a Quick Response (QR) code, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and then the compressed binary image is further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into one QR code that is finally encrypted in the JTC. The proposed method, for the first time to the best of our knowledge, encodes a binary image into a QR code of identical size, and therefore may open a new way of extending the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, append an additional security level to the JTC. We present digital results that confirm our approach.
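
    The RLE compression step can be illustrated with a minimal, lossless encode/decode sketch on a flattened binary image row. The scrambling, QR-code and JTC stages are omitted; this is an illustration of the general technique, not the paper's implementation.

```python
# Minimal sketch of run-length encoding / decoding a flattened binary image row.
def rle_encode(bits):
    """Encode a sequence of 0/1 values as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original bit sequence."""
    return [value for value, length in runs for _ in range(length)]

image_row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
runs = rle_encode(image_row)
assert rle_decode(runs) == image_row     # lossless round trip
print(runs)                              # [(0, 3), (1, 2), (0, 1), (1, 4)]
```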

  16. Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir

    2010-01-01

    A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…

  17. Object tracking on mobile devices using binary descriptors

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton

    2015-03-01

    With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications, such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK) to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation is done using Android's Native Development Kit (NDK), which gives the performance benefits of using native code as well as access to legacy libraries. The iOS implementation was created using both the native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
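
    The kind of step such binary-descriptor tracking builds on can be sketched with OpenCV: extract ORB descriptors in two frames and match them by Hamming distance. The frame file names are placeholders, and this desktop sketch is not the authors' mobile (NDK/Objective-C) implementation.

```python
# Hedged sketch: ORB binary descriptors matched by Hamming distance with OpenCV.
import cv2

frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Binary descriptors are compared with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches, best distance {matches[0].distance}")
```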

  18. Robust k-mer frequency estimation using gapped k-mers

    PubMed Central

    Ghandi, Mahmoud; Mohammad-Noori, Morteza

    2013-01-01

    Oligomers of fixed length, k, commonly known as k-mers, are often used as fundamental elements in the description of DNA sequence features of diverse biological function, or as intermediate elements in the construction of more complex descriptors of sequence features such as position weight matrices. k-mers are very useful as general sequence features because they constitute a complete and unbiased feature set, and do not require parameterization based on incomplete knowledge of biological mechanisms. However, a fundamental limitation in the use of k-mers as sequence features is that as k is increased, larger spatial correlations in DNA sequence elements can be described, but the frequency of observing any specific k-mer becomes very small, and rapidly approaches a sparse matrix of binary counts. Thus any statistical learning approach using k-mers will be susceptible to noisy estimation of k-mer frequencies once k becomes large. Because all molecular DNA interactions have limited spatial extent, gapped k-mers often carry the relevant biological signal. Here we use gapped k-mer counts to more robustly estimate the ungapped k-mer frequencies, by deriving an equation for the minimum norm estimate of k-mer frequencies given an observed set of gapped k-mer frequencies. We demonstrate that this approach provides a more accurate estimate of the k-mer frequencies in real biological sequences using a sample of CTCF binding sites in the human genome. PMID:23861010
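
    A gapped k-mer can be understood as a word of total length l in which only k informative (non-gap) positions are kept. The sketch below counts such words in a toy DNA sequence; the sequence and the choice of l and k are placeholders for illustration and do not reproduce the paper's estimator.

```python
# Illustrative sketch of counting gapped k-mers (l-mers with k informative positions).
from collections import Counter
from itertools import combinations

def gapped_kmer_counts(seq, l=6, k=4):
    counts = Counter()
    positions = list(combinations(range(l), k))   # which l-mer positions are kept
    for start in range(len(seq) - l + 1):
        window = seq[start:start + l]
        for pos in positions:
            word = "".join(window[p] if p in pos else "-" for p in range(l))
            counts[word] += 1
    return counts

counts = gapped_kmer_counts("ACGTACGTGGCTA")
print(counts.most_common(3))
```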

  19. Robust k-mer frequency estimation using gapped k-mers.

    PubMed

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Beer, Michael A

    2014-08-01

    Oligomers of fixed length, k, commonly known as k-mers, are often used as fundamental elements in the description of DNA sequence features of diverse biological function, or as intermediate elements in the construction of more complex descriptors of sequence features such as position weight matrices. k-mers are very useful as general sequence features because they constitute a complete and unbiased feature set, and do not require parameterization based on incomplete knowledge of biological mechanisms. However, a fundamental limitation in the use of k-mers as sequence features is that as k is increased, larger spatial correlations in DNA sequence elements can be described, but the frequency of observing any specific k-mer becomes very small, and rapidly approaches a sparse matrix of binary counts. Thus any statistical learning approach using k-mers will be susceptible to noisy estimation of k-mer frequencies once k becomes large. Because all molecular DNA interactions have limited spatial extent, gapped k-mers often carry the relevant biological signal. Here we use gapped k-mer counts to more robustly estimate the ungapped k-mer frequencies, by deriving an equation for the minimum norm estimate of k-mer frequencies given an observed set of gapped k-mer frequencies. We demonstrate that this approach provides a more accurate estimate of the k-mer frequencies in real biological sequences using a sample of CTCF binding sites in the human genome.

  20. Enhanced Regulatory Sequence Prediction Using Gapped k-mer Features

    PubMed Central

    Mohammad-Noori, Morteza; Beer, Michael A.

    2014-01-01

    Oligomers of length k, or k-mers, are convenient and widely used features for modeling the properties and functions of DNA and protein sequences. However, k-mers suffer from the inherent limitation that if the parameter k is increased to resolve longer features, the probability of observing any specific k-mer becomes very small, and k-mer counts approach a binary variable, with most k-mers absent and a few present once. Thus, any statistical learning approach using k-mers as features becomes susceptible to noisy training set k-mer frequencies once k becomes large. To address this problem, we introduce alternative feature sets using gapped k-mers, a new classifier, gkm-SVM, and a general method for robust estimation of k-mer frequencies. To make the method applicable to large-scale genome wide applications, we develop an efficient tree data structure for computing the kernel matrix. We show that compared to our original kmer-SVM and alternative approaches, our gkm-SVM predicts functional genomic regulatory elements and tissue specific enhancers with significantly improved accuracy, increasing the precision by up to a factor of two. We then show that gkm-SVM consistently outperforms kmer-SVM on human ENCODE ChIP-seq datasets, and further demonstrate the general utility of our method using a Naïve-Bayes classifier. Although developed for regulatory sequence analysis, these methods can be applied to any sequence classification problem. PMID:25033408

  1. Enhanced regulatory sequence prediction using gapped k-mer features.

    PubMed

    Ghandi, Mahmoud; Lee, Dongwon; Mohammad-Noori, Morteza; Beer, Michael A

    2014-07-01

    Oligomers of length k, or k-mers, are convenient and widely used features for modeling the properties and functions of DNA and protein sequences. However, k-mers suffer from the inherent limitation that if the parameter k is increased to resolve longer features, the probability of observing any specific k-mer becomes very small, and k-mer counts approach a binary variable, with most k-mers absent and a few present once. Thus, any statistical learning approach using k-mers as features becomes susceptible to noisy training set k-mer frequencies once k becomes large. To address this problem, we introduce alternative feature sets using gapped k-mers, a new classifier, gkm-SVM, and a general method for robust estimation of k-mer frequencies. To make the method applicable to large-scale genome wide applications, we develop an efficient tree data structure for computing the kernel matrix. We show that compared to our original kmer-SVM and alternative approaches, our gkm-SVM predicts functional genomic regulatory elements and tissue specific enhancers with significantly improved accuracy, increasing the precision by up to a factor of two. We then show that gkm-SVM consistently outperforms kmer-SVM on human ENCODE ChIP-seq datasets, and further demonstrate the general utility of our method using a Naïve-Bayes classifier. Although developed for regulatory sequence analysis, these methods can be applied to any sequence classification problem.

  2. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.

  3. Efficient removal of arsenic from water using a granular adsorbent: Fe-Mn binary oxide impregnated chitosan bead.

    PubMed

    Qi, Jianying; Zhang, Gaosheng; Li, Haining

    2015-10-01

    A novel sorbent of Fe-Mn binary oxide impregnated chitosan bead (FMCB) was fabricated by impregnating Fe-Mn binary oxide into a chitosan matrix. The FMCB is sphere-like with a diameter of 1.6-1.8 mm and is effective for both As(V) and As(III) sorption. The maximal sorption capacities are 39.1 and 54.2 mg/g, respectively, outperforming most reported granular sorbents. The arsenic was mainly removed by adsorption onto the Fe-Mn oxide component. The coexisting SO4(2-), HCO3(-) and SiO3(2-) have no significant influence on arsenic sorption, whereas HPO4(2-) shows a negative effect. The arsenic-loaded FMCB could be effectively regenerated using NaOH solution and repeatedly used. In column tests, about 1500 and 3200 bed volumes of simulated groundwater containing 233 μg/L As(V) and As(III) were respectively treated before breakthrough. These results demonstrate the superiority of the FMCB in removing As(V) and As(III), indicating that it is a promising candidate for arsenic removal from real drinking water. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Hadamard multimode optical imaging transceiver

    DOEpatents

    Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R

    2012-10-30

    Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each correlation output coefficient by passing it through an Analog-to-Digital Converter (ADC), and perform image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
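
    The core correlation step, projecting a sampled signal onto binary (+1/-1) Walsh-Hadamard basis functions, can be sketched numerically as below. The synthetic signal and natural Hadamard ordering are illustrative assumptions; the optical and analog front end of the patented system is not modeled.

```python
# Hedged sketch of Walsh-Hadamard correlation coefficients for a sampled signal.
import numpy as np
from scipy.linalg import hadamard

n = 64                                   # number of samples / basis functions (power of 2)
H = hadamard(n)                          # +1/-1 Walsh-Hadamard basis (n x n)

t = np.arange(n)
signal = np.sin(2 * np.pi * t / 16) + 0.1 * np.random.default_rng(2).normal(size=n)

coefficients = H @ signal / n            # one correlation output per basis function
reconstruction = H.T @ coefficients      # H is orthogonal up to a factor of n
print(np.allclose(reconstruction, signal))   # True: the coefficients capture the signal
```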

  5. Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.

    PubMed

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.

  6. Doublet craters and the tidal disruption of binary asteroids

    NASA Technical Reports Server (NTRS)

    Melosh, H. J.; Stansberry, J. A.

    1991-01-01

    An evaluation is conducted of the possibility that the tidal disruption of a population of contact binary asteroids can account for terrestrial-impact 'doublet' craters. Detailed orbital integrations indicate that while such asteroids are often disrupted by tidal forces outside the Roche limit, the magnitude of the resulting separations is too small to account for the observed doublet craters. It is hypothesized that an initial population of km-scale earth-crossing objects encompassing 10-20 percent binaries must be responsible for doublet impacts, as may be verified by future observations of earth-approaching asteroids.

  7. Diagrammatic technique for calculating matrix elements of collective operators in superradiance. [eigenstates for N two-level atom systems

    NASA Technical Reports Server (NTRS)

    Lee, C. T.

    1975-01-01

    Adopting the so-called genealogical construction, one can express the eigenstates of collective operators corresponding to a specified mode for an N-atom system in terms of those for an (N-1)-atom system. Using these Dicke states as bases and using the Wigner-Eckart theorem, a matrix element of a collective operator of an arbitrary mode can be written as the product of an m-dependent factor and an m-independent reduced matrix element (RME). A set of recursion formulas for the RME is obtained. A graphical representation of the RME on the branching diagram for binary irreducible representations of permutation groups is then introduced. This gives a simple and systematic way of calculating the RME. This method is especially useful when the cooperation number r is close to N/2, where almost exact asymptotic expressions can be obtained easily. The result shows explicitly the geometry dependence of superradiance and the relative importance of r-conserving and r-nonconserving processes.

  8. Transactional Database Transformation and Its Application in Prioritizing Human Disease Genes

    PubMed Central

    Xiang, Yang; Payne, Philip R.O.; Huang, Kun

    2013-01-01

    Binary (0,1) matrices, commonly known as transactional databases, can represent many application data, including gene-phenotype data where “1” represents a confirmed gene-phenotype relation and “0” represents an unknown relation. It is natural to ask what information is hidden behind these “0”s and “1”s. Unfortunately, recent matrix completion methods, though very effective in many cases, are less likely to infer something interesting from these (0,1)-matrices. To answer this challenge, we propose IndEvi, a very succinct and effective algorithm to perform independent-evidence-based transactional database transformation. Each entry of a (0,1)-matrix is evaluated by “independent evidence” (maximal supporting patterns) extracted from the whole matrix for this entry. The value of an entry, regardless of its value as 0 or 1, has completely no effect for its independent evidence. The experiment on a gene-phenotype database shows that our method is highly promising in ranking candidate genes and predicting unknown disease genes. PMID:21422495

  9. Reciprocated suppression of polymer crystallization toward improved solid polymer electrolytes: Higher ion conductivity and tunable mechanical properties

    DOE PAGES

    Bi, Sheng; Sun, Che-Nan; Zawodzinski, Thomas A.; ...

    2015-08-06

    Solid polymer electrolytes based on lithium bis(trifluoromethanesulfonyl) imide and a polymer matrix were extensively studied in the past due to their excellent potential in a broad range of energy related applications. Poly(vinylidene fluoride) (PVDF) and polyethylene oxide (PEO) are among the most examined polymer candidates as solid polymer electrolyte matrix. In this paper, we study the effect of reciprocated suppression of polymer crystallization in a PVDF/PEO binary matrix on ion transport and mechanical properties of the resultant solid polymer electrolytes. With electron and X-ray diffraction as well as energy filtered transmission electron microscopy, we identify and examine the appropriate blending composition that is responsible for the diminishment of both PVDF and PEO crystallites. Lastly, a three-fold conductivity enhancement is achieved along with a highly tunable elastic modulus ranging from 20 to 200 MPa, which is expected to contribute toward future designs of solid polymer electrolytes with high room-temperature ion conductivities and mechanical flexibility.

  10. Cosmic matrix in the jubilee of relativistic astrophysics

    NASA Astrophysics Data System (ADS)

    Ruffini, R.; Aimuratov, Y.; Belinski, V.; Bianco, C. L.; Enderli, M.; Izzo, L.; Kovacevic, M.; Mathews, G. J.; Moradi, R.; Muccino, M.; Penacchioni, A. V.; Pisani, G. B.; Rueda, J. A.; Vereshchagin, G. V.; Wang, Y.; Xue, S.-S.

    2015-12-01

    Following the classical works on Neutron Stars, Black Holes and Cosmology, I outline some recent results obtained in the IRAP-PhD program of ICRANet on the "Cosmic Matrix": a new astrophysical phenomenon recorded by the X- and Gamma-Ray satellites and by the largest ground-based optical telescopes all over our planet. Within 3 minutes, the occurrence of a "Supernova", the "Induced-Gravitational-Collapse" onto a Neutron Star binary, the formation of a "Black Hole", and the creation of a "Newly Born Neutron Star" have been recorded. This presentation is based on a document describing activities of ICRANet and recent developments of the paradigm of the Cosmic Matrix in the comprehension of Gamma Ray Bursts (GRBs) presented on the occasion of the Fourteenth Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Gravitation, and Relativistic Field Theory. A Portuguese version of this document can be downloaded at: http://www.icranet.org/documents/brochure_icranet_pt.pdf.

  11. Achieving Passive Localization with Traffic Light Schedules in Urban Road Sensor Networks

    PubMed Central

    Niu, Qiang; Yang, Xu; Gao, Shouwan; Chen, Pengpeng; Chan, Shibing

    2016-01-01

    Localization is crucial for the monitoring applications of cities, such as road monitoring, environment surveillance, vehicle tracking, etc. In urban road sensor networks, sensors are often sparsely deployed due to the hardware cost. Under this sparse deployment, sensors cannot communicate with each other via ranging hardware or one-hop connectivity, rendering the existing localization solutions ineffective. To address this issue, this paper proposes a novel Traffic Lights Schedule-based localization algorithm (TLS), which is built on the fact that vehicles move through an intersection according to a known traffic light schedule. We first capture this regularity from binary vehicle detection time stamps and describe it as a matrix, called the detection matrix. At the same time, the known traffic light information is used to construct matrices that form a collection, called the known matrix collection. The detection matrix is then matched against the known matrix collection to identify where sensors are located on urban roads. We evaluate our algorithm by extensive simulation. The results show that the localization accuracy of intersection sensors can reach more than 90%. In addition, we compare it with a state-of-the-art algorithm and prove that it has a wider operational region. PMID:27735871
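
    The matching idea can be sketched as follows: score the binary detection matrix against each candidate matrix built from known traffic light schedules and report the closest one. All matrices below are invented toy examples, not the algorithm's actual construction from time stamps.

```python
# Toy sketch of matching a binary detection matrix against a known matrix collection.
import numpy as np

detection = np.array([[1, 0, 0, 1],
                      [0, 1, 1, 0],
                      [1, 0, 0, 1]])

known_collection = {
    "intersection A": np.array([[1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, 1]]),
    "intersection B": np.array([[0, 1, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]]),
}

def agreement(a, b):
    """Fraction of entries on which two binary matrices agree."""
    return np.mean(a == b)

best = max(known_collection, key=lambda name: agreement(detection, known_collection[name]))
print(best, agreement(detection, known_collection[best]))
```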

  12. Study of drug release and tablet characteristics of silicone adhesive matrix tablets.

    PubMed

    Tolia, Gaurav; Li, S Kevin

    2012-11-01

    Matrix tablets of a model drug, acetaminophen (APAP), were prepared using a highly compressible, low glass transition temperature (T(g)) polymer, silicone pressure sensitive adhesive (PSA), at various binary silicone PSA/APAP mixture ratios. Matrix tablets of a rigid, high T(g), matrix-forming polymer, ethyl cellulose (EC), were the reference for comparison. The drug release study was carried out using USP Apparatus 1 (basket), and the relationship between the release kinetic parameters of APAP and the polymer/APAP ratio was determined to estimate the excipient percolation threshold. The critical points attributed to both the silicone PSA and EC tablet percolation thresholds were found to be between 2.5% and 5% w/w. For silicone PSA tablets, satisfactory mechanical properties were obtained above the polymer percolation threshold; no cracking or chipping of the tablet was observed above this threshold. Rigid EC APAP tablets showed low tensile strength and high friability. These results suggest that silicone PSA could eliminate issues related to drug compressibility in the formulation of directly compressed oral controlled release tablets of a poorly compressible drug powder such as APAP. No routinely used excipients such as binders, granulating agents, glidants, or lubricants were required for making an acceptable tablet matrix of APAP using silicone PSA. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Distinguishing boson stars from black holes and neutron stars from tidal interactions in inspiraling binary systems

    NASA Astrophysics Data System (ADS)

    Sennett, Noah; Hinderer, Tanja; Steinhoff, Jan; Buonanno, Alessandra; Ossokine, Serguei

    2017-07-01

    Binary systems containing boson stars—self-gravitating configurations of a complex scalar field—can potentially mimic black holes or neutron stars as gravitational-wave sources. We investigate the extent to which tidal effects in the gravitational-wave signal can be used to discriminate between these standard sources and boson stars. We consider spherically symmetric boson stars within two classes of scalar self-interactions: an effective-field-theoretically motivated quartic potential and a solitonic potential constructed to produce very compact stars. We compute the tidal deformability parameter characterizing the dominant tidal imprint in the gravitational-wave signals for a large span of the parameter space of each boson star model, covering the entire space in the quartic case, and an extensive portion of interest in the solitonic case. We find that the tidal deformability for boson stars with a quartic self-interaction is bounded below by Λmin≈280 and for those with a solitonic interaction by Λmin≈1.3 . We summarize our results as ready-to-use fits for practical applications. Employing a Fisher matrix analysis, we estimate the precision with which Advanced LIGO and third-generation detectors can measure these tidal parameters using the inspiral portion of the signal. We discuss a novel strategy to improve the distinguishability between black holes/neutrons stars and boson stars by combining tidal deformability measurements of each compact object in a binary system, thereby eliminating the scaling ambiguities in each boson star model. Our analysis shows that current-generation detectors can potentially distinguish boson stars with quartic potentials from black holes, as well as from neutron-star binaries if they have either a large total mass or a large (asymmetric) mass ratio. Discriminating solitonic boson stars from black holes using only tidal effects during the inspiral will be difficult with Advanced LIGO, but third-generation detectors should be able to distinguish between binary black holes and these binary boson stars.

  14. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Applying decomposition reduces the volume of calculations, in particular because of the emerging possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. We analyze the results of computational experiments conducted using the decomposition approach. The experiments use a known data set for the binary classification problem.
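
    For reference, the baseline problem the article builds on, a linear SVM binary classifier trained on a well-known data set, can be sketched with scikit-learn as below. The data set choice is an illustrative assumption, and the decomposition-based parallel machinery itself is not reproduced here.

```python
# Hedged sketch of the baseline task: a linear SVM binary classifier on a
# standard data set. The decomposition/parallel training is not shown.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)          # a well-known binary data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```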

  15. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits the computational efficiency since the similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.

  16. Abstract Datatypes in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan

    1997-01-01

    PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
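
    The two operations discussed in the report, insertion into and search in an ordered binary tree, can be sketched informally as below. This is plain Python for illustration only; the report itself defines them as a parametric PVS abstract datatype and proves their properties in PVS.

```python
# Informal Python sketch of insert and search on an ordered binary tree.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def insert(node, value):
    """Insert a value, preserving the ordering invariant; duplicates are ignored."""
    if node is None:
        return Node(value)
    if value < node.value:
        node.left = insert(node.left, value)
    elif value > node.value:
        node.right = insert(node.right, value)
    return node

def search(node, value):
    """Return True if value occurs in the ordered tree."""
    if node is None:
        return False
    if value == node.value:
        return True
    return search(node.left, value) if value < node.value else search(node.right, value)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)
print(search(root, 6), search(root, 7))   # True False
```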

  17. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Li, Qingbo; Lu, Xiandan

    1998-04-21

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  18. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Chang, Huan-Tsang; Fung, Eliza N.; Li, Qingbo; Lu, Xiandan

    1996-12-10

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  19. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Li, Q.; Lu, X.

    1998-04-21

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification (``base calling``) is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  20. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Chang, H.T.; Fung, E.N.; Li, Q.; Lu, X.

    1996-12-10

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  1. The incidence of stellar mergers and mass gainers among massive stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Mink, S. E.; Sana, H.; Langer, N.

    2014-02-10

    Because the majority of massive stars are born as members of close binary systems, populations of massive main-sequence stars contain stellar mergers and products of binary mass transfer. We simulate populations of massive stars accounting for all major binary evolution effects based on the most recent binary parameter statistics and extensively evaluate the effect of model uncertainties. Assuming constant star formation, we find that 8 (+9/−4)% of a sample of early-type stars are the products of a merger resulting from a close binary system. In total we find that 30 (+10/−15)% of massive main-sequence stars are the products of binary interaction. We show that the commonly adopted approach to minimize the effects of binaries on an observed sample by excluding systems detected as binaries through radial velocity campaigns can be counterproductive. Systems with significant radial velocity variations are mostly pre-interaction systems. Excluding them substantially enhances the relative incidence of mergers and binary products in the non-radial velocity variable sample. This poses a challenge for testing single stellar evolutionary models. It also raises the question of whether certain peculiar classes of stars, such as magnetic O stars, are the result of binary interaction and it emphasizes the need to further study the effect of binarity on the diagnostics that are used to derive the fundamental properties (star-formation history, initial mass function, mass-to-light ratio) of stellar populations nearby and at high redshift.

  2. Linear models to perform treaty verification tasks for enhanced information security

    DOE PAGES

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; ...

    2016-11-12

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
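
    The Hotelling observer described above reduces to a short linear computation; the sketch below (generic illustration with invented data, not the authors' GEANT4 pipeline) shows the weight vector, the scalar test statistic, and the threshold decision.

      # Hypothetical sketch of the Hotelling observer: the weights are the inverse
      # data covariance applied to the mean difference of the two hypotheses;
      # the scalar test statistic is then thresholded to make a binary decision.
      import numpy as np

      rng = np.random.default_rng(0)
      n_bins = 16
      mean_h0 = rng.uniform(5, 10, n_bins)           # mean binned data, hypothesis 0
      mean_h1 = mean_h0 + rng.uniform(0, 2, n_bins)  # mean binned data, hypothesis 1
      cov = np.diag(rng.uniform(1, 3, n_bins))       # toy (diagonal) data covariance

      w = np.linalg.solve(cov, mean_h1 - mean_h0)    # Hotelling template
      g = mean_h1 + rng.normal(0, 1, n_bins)         # one noisy measurement
      t = w @ g                                      # test statistic
      decision = t > w @ ((mean_h0 + mean_h1) / 2)   # threshold halfway between classes
      print(t, decision)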

  3. Linear models to perform treaty verification tasks for enhanced information security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  4. Primary radiation damage of Zr-0.5%Nb binary alloy: atomistic simulation by molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Tikhonchev, M.; Svetukhin, V.; Kapustin, P.

    2017-09-01

    Ab initio calculations predict a high positive binding energy (~1 eV) between niobium atoms and self-interstitial configurations in hcp zirconium. This suggests that an increased niobium fraction can be expected in self-interstitials formed under neutron irradiation in atomic displacement cascades. In this paper, we report the results of molecular dynamics simulation of atomic displacement cascades in Zr-0.5%Nb binary alloy and pure Zr at the temperature of 300 K. Two sets of n-body interatomic potentials have been used for the Zr-Nb system. We consider a cascade energy range of 2-20 keV. Calculations give similar estimates of the average number of Frenkel pairs produced in the alloy and in pure Zr. A high fraction of Nb is observed in the self-interstitial configurations. Nb is mainly detected in single self-interstitial configurations, where its fraction reaches tens of percent, i.e. more than its tenfold concentration in the matrix. The basic mechanism of this phenomenon is the trapping of mobile self-interstitial configurations by niobium. The diffusion of pure zirconium and mixed zirconium-niobium self-interstitial configurations in the zirconium matrix at 300 K has been simulated. We observe a strong dependence of the estimated diffusion coefficients and fractions of Nb in self-interstitials produced in displacement cascades on the potential.

  5. Linear models to perform treaty verification tasks for enhanced information security

    NASA Astrophysics Data System (ADS)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.

    2017-02-01

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  6. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.

    PubMed

    Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong

    2017-05-01

    Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.

  7. Slow plastic strain rate compressive flow in binary CoAl intermetallics

    NASA Technical Reports Server (NTRS)

    Whittenberger, J. D.

    1985-01-01

    Constant-velocity elevated temperature compression tests have been conducted on a series of binary CoAl intermetallics produced by hot extrusion of blended prealloyed powders. The as-extruded materials were polycrystalline, and they retained their nominal 10-micron grain size after being tested between 1100 and 1400 K at strain rates ranging from 2 × 10⁻⁴ to 2 × 10⁻⁷ per second. Significant plastic flow was obtained in all cases; while cracking was observed, much of this could be due to failure at matrix-oxide interfaces along extrusion stringers rather than to solely intergranular fracture. A maximum in flow strength occurs at an aluminum-to-cobalt ratio of 0.975, and the stress exponent appears to be constant for aluminum-to-cobalt ratios of 0.85 or more. It is likely that very aluminum-deficient materials deform by a different mechanism than do other compositions.

  8. Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Jia, C. J.; Wang, Y.; Mendl, C. B.; Moritz, B.; Devereaux, T. P.

    2018-03-01

    We describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the "checkerboard" decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
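
    The core idea, computing an element's position in the ordered basis list directly from its binary occupation pattern instead of searching for it, can be illustrated with the standard combinatorial number system; the sketch below is a generic illustration and not necessarily the exact indexing scheme used in Paradeisos.

      # Hypothetical sketch: rank a fixed-particle-number occupation pattern
      # directly, so its position in the ordered basis list is computed rather
      # than found by binary search. Uses the combinatorial number system.
      from math import comb

      def rank(state: int) -> int:
          """Position of `state` among all bit patterns with the same popcount,
          in colexicographic order (0-based)."""
          r, k, pos = 0, 0, 0
          while state:
              if state & 1:
                  k += 1
                  r += comb(pos, k)
              state >>= 1
              pos += 1
          return r

      # All 4-choose-2 occupation patterns receive distinct ranks 0..5.
      states = [s for s in range(16) if bin(s).count("1") == 2]
      print(sorted(rank(s) for s in states))   # -> [0, 1, 2, 3, 4, 5]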

  9. Collisional redistribution of radiation. II - The effects of degeneracy on the equations of motion for the density matrix. III - The equation of motion for the correlation function and the scattered spectrum

    NASA Technical Reports Server (NTRS)

    Burnett, K.; Cooper, J.

    1980-01-01

    The effects of correlations between an absorber atom and perturbers in the binary-collision approximation are applied to degenerate atomic systems. A generalized absorption profile which specifies the final state of the atom after an absorption event is related to the total intensities of Rayleigh scattering and fluorescence from the atom. It is suggested that additional dynamical information to that obtainable from ordinary absorption experiments is required in order to describe redistributed atomic radiation. The scattering of monochromatic radiation by a degenerate atom is computed in a binary-collision approximation; an equation of motion is derived for the correlation function which is valid outside the quantum-regression regime. Solutions are given for the weak-field conditions in terms of generalized absorption and emission profiles that depend on the indices of the atomic multipoles.

  10. Mapping the Milky Way Galaxy with LISA

    NASA Technical Reports Server (NTRS)

    McKinnon, Jose A.; Littenberg, Tyson

    2012-01-01

    Gravitational wave detectors in the mHz band (such as the Laser Interferometer Space Antenna, or LISA) will observe thousands of compact binaries in the galaxy which can be used to better understand the structure of the Milky Way. To test the effectiveness of LISA in measuring the distribution of the galaxy, we simulated the Close White Dwarf Binary (CWDB) gravitational wave sky using different models for the Milky Way. To do so, we have developed a galaxy density distribution modeling code based on the Markov Chain Monte Carlo method. The code uses different distributions to construct realizations of the galaxy. We then use the Fisher Information Matrix to estimate the variance and covariance of the recovered parameters for each detected CWDB. This is the first step toward characterizing the capabilities of space-based gravitational wave detectors to constrain models for galactic structure, such as the size and orientation of the bar in the center of the Milky Way.
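
    The last step mentioned above, turning a Fisher Information Matrix into parameter uncertainties, amounts to a matrix inversion; the short numpy sketch below uses an invented two-parameter Fisher matrix, not the LISA analysis pipeline.

      # Hypothetical sketch: for a well-conditioned Fisher information matrix F,
      # the covariance of the recovered source parameters is estimated as F^-1;
      # diagonal entries give variances, off-diagonal entries give covariances.
      import numpy as np

      fisher = np.array([[40.0,  6.0],    # toy 2-parameter Fisher matrix
                         [ 6.0, 10.0]])
      cov = np.linalg.inv(fisher)

      sigmas = np.sqrt(np.diag(cov))                  # 1-sigma parameter uncertainties
      corr = cov[0, 1] / (sigmas[0] * sigmas[1])      # correlation coefficient
      print(sigmas, corr)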

  11. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
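
    The basic spatial LBP operator that VLBP and LBP-TOP extend can be written in a few lines; the sketch below is a generic illustration (not the authors' implementation) of the 8-neighbour code for a single pixel.

      # Hypothetical sketch of the basic 8-neighbour LBP code for one pixel:
      # each neighbour is thresholded against the centre and contributes one bit.
      import numpy as np

      def lbp_code(patch: np.ndarray) -> int:
          """patch: 3x3 grey-level block; returns the 8-bit LBP code of its centre."""
          centre = patch[1, 1]
          # neighbours taken clockwise starting from the top-left corner
          order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
          code = 0
          for bit, (r, c) in enumerate(order):
              if patch[r, c] >= centre:
                  code |= 1 << bit
          return code

      patch = np.array([[5, 9, 1],
                        [3, 6, 7],
                        [8, 2, 4]])
      print(lbp_code(patch))   # one of 256 possible local binary patterns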

  12. Black Hole Mergers in Galactic Nuclei Induced by the Eccentric Kozai–Lidov Effect

    NASA Astrophysics Data System (ADS)

    Hoang, Bao-Minh; Naoz, Smadar; Kocsis, Bence; Rasio, Frederic A.; Dosopoulou, Fani

    2018-04-01

    Nuclear star clusters around a central massive black hole (MBH) are expected to be abundant in stellar black hole (BH) remnants and BH–BH binaries. These binaries form a hierarchical triple system with the central MBH, and gravitational perturbations from the MBH can cause high-eccentricity excitation in the BH–BH binary orbit. During this process, the eccentricity may approach unity, and the pericenter distance may become sufficiently small so that gravitational-wave emission drives the BH–BH binary to merge. In this work, we construct a simple proof-of-concept model for this process, and specifically, we study the eccentric Kozai–Lidov mechanism in unequal-mass, soft BH–BH binaries. Our model is based on a set of Monte Carlo simulations for BH–BH binaries in galactic nuclei, taking into account quadrupole- and octupole-level secular perturbations, general relativistic precession, and gravitational-wave emission. For a typical steady-state number of BH–BH binaries, our model predicts a total merger rate of ∼1–3 Gpc⁻³ yr⁻¹, depending on the assumed density profile in the nucleus. Thus, our mechanism could potentially compete with other dynamical formation processes for merging BH–BH binaries, such as the interactions of stellar BHs in globular clusters or in nuclear star clusters without an MBH.

  13. Fiber Contraction Approaches for Improving CMC Proportional Limit

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.; Yun, Hee Mann

    1997-01-01

    The fact that the service life of ceramic matrix composites (CMC) decreases dramatically for stresses above the CMC proportional limit has triggered a variety of research activities to develop microstructural approaches that can significantly improve this limit. As discussed in a previous report, both local and global approaches exist for hindering the propagation of cracks through the CMC matrix, the physical source for the proportional limit. Local approaches include: (1) minimizing fiber diameter and matrix modulus; (2) maximizing fiber volume fraction, fiber modulus, and matrix toughness; and (3) optimizing fiber-matrix interfacial shear strength; all of which should reduce the stress concentration at the tip of cracks pre-existing or created in the matrix during CMC service. Global approaches, as with pre-stressed concrete, center on seeking mechanisms for utilizing the reinforcing fiber to subject the matrix to in-situ compressive stresses which will remain stable during CMC service. Demonstrated CMC examples for the viability of this residual stress approach are based on strain mismatches between the fiber and matrix in their free states, such as thermal expansion mismatch and creep mismatch. However, these particular mismatch approaches are application-limited in that the residual stresses from expansion mismatch are optimum only at low CMC service temperatures and the residual stresses from creep mismatch are typically unidirectional and difficult to implement in complex-shaped CMC.

  14. Unsettling the Gender Binary: Experiences of Gender in Entrepreneurial Leadership and Implications for HRD

    ERIC Educational Resources Information Center

    Patterson, Nicola; Mavin, Sharon; Turner, Jane

    2012-01-01

    Purpose: This feminist standpoint study aims to make an empirical contribution to the entrepreneurial leadership and HRD fields. Women entrepreneur leaders' experiences of gender will be explored through a framework of doing gender well and doing gender differently to unsettle the gender binary. Design/methodology/approach: Against a backcloth of…

  15. New Approach to Remove Metals from Chromated Copper Arsenate (CCA)-Treated Wood

    Treesearch

    Todd F. Shupe; Chung Y. Hse; Hui Pan

    2012-01-01

    Recovery of metals from chromated copper arsenate (CCA)-treated southern pine wood particles was investigated using binary acid solutions consisting of acetic, oxalic, and phosphoric acids in a microwave reactor. Formation of an insoluble copper oxalate complex in the binary solution containing oxalic acid was the major factor for low copper removal. Furthermore, the...

  16. Dynamic and scalable audio classification by collective network of binary classifiers framework: an evolutionary approach.

    PubMed

    Kiranyaz, Serkan; Mäkinen, Toni; Gabbouj, Moncef

    2012-10-01

    In this paper, we propose a novel framework based on a collective network of evolutionary binary classifiers (CNBC) to address the problems of feature and class scalability. The main goal of the proposed framework is to achieve a high classification performance over dynamic audio and video repositories. The proposed framework adopts a "Divide and Conquer" approach in which an individual network of binary classifiers (NBC) is allocated to discriminate each audio class. An evolutionary search is applied to find the best binary classifier in each NBC with respect to a given criterion. Through the incremental evolution sessions, the CNBC framework can dynamically adapt to each new incoming class or feature set without resorting to a full-scale re-training or re-configuration. Therefore, the CNBC framework is particularly designed for dynamically varying databases where no conventional static classifiers can adapt to such changes. In short, it is entirely a novel topology, an unprecedented approach for dynamic, content/data adaptive and scalable audio classification. A large set of audio features can be effectively used in the framework, where the CNBCs make appropriate selections and combinations so as to achieve the highest discrimination among individual audio classes. Experiments demonstrate a high classification accuracy (above 90%) and efficiency of the proposed framework over large and dynamic audio databases. Copyright © 2012 Elsevier Ltd. All rights reserved.
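
    Stripped of the evolutionary search, the "one network of binary classifiers per class" layout can be pictured as a plain one-vs-rest ensemble; the sketch below is a generic illustration (assuming scikit-learn, and not the CNBC implementation) of why adding a new class only means adding one more binary classifier.

      # Hypothetical sketch of the divide-and-conquer layout: one binary
      # classifier per audio class; adding a class adds one more classifier.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      class BinaryClassifierBank:
          def __init__(self):
              self.per_class = {}                        # class label -> binary classifier

          def add_class(self, label, X, y_is_label):
              clf = LogisticRegression(max_iter=1000)    # stand-in for an evolved classifier
              clf.fit(X, y_is_label)                     # 1 = this class, 0 = rest
              self.per_class[label] = clf

          def predict(self, X):
              labels = list(self.per_class)
              scores = np.column_stack(
                  [self.per_class[lab].predict_proba(X)[:, 1] for lab in labels])
              return [labels[i] for i in scores.argmax(axis=1)]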

  17. Hierarchically self-assembled hexagonal honeycomb and kagome superlattices of binary 1D colloids.

    PubMed

    Lim, Sung-Hwan; Lee, Taehoon; Oh, Younghoon; Narayanan, Theyencheri; Sung, Bong June; Choi, Sung-Min

    2017-08-25

    Synthesis of binary nanoparticle superlattices has attracted attention for a broad spectrum of potential applications. However, this has remained challenging for one-dimensional nanoparticle systems. In this study, we investigate the packing behavior of one-dimensional nanoparticles of different diameters into a hexagonally packed cylindrical micellar system and demonstrate that binary one-dimensional nanoparticle superlattices of two different symmetries can be obtained by tuning particle diameter and mixing ratios. The hexagonal arrays of one-dimensional nanoparticles are embedded in the honeycomb lattices (for AB₂ type) or kagome lattices (for AB₃ type) of micellar cylinders. The maximization of free volume entropy is considered as the main driving force for the formation of superlattices, which is well supported by our theoretical free energy calculations. Our approach provides a route for fabricating binary one-dimensional nanoparticle superlattices and may be applicable for inorganic one-dimensional nanoparticle systems. Binary mixtures of 1D particles are rarely observed to cooperatively self-assemble into binary superlattices, as the particle types separate into phases. Here, the authors design a system that avoids phase separation, obtaining binary superlattices with different symmetries by simply tuning the particle diameter and mixture composition.

  18. Modeling of protein binary complexes using structural mass spectrometry data

    PubMed Central

    Kamal, J.K. Amisha; Chance, Mark R.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684

  19. Reducing the Matrix Effect in Organic Cluster SIMS Using Dynamic Reactive Ionization

    NASA Astrophysics Data System (ADS)

    Tian, Hua; Wucher, Andreas; Winograd, Nicholas

    2016-12-01

    Dynamic reactive ionization (DRI) utilizes a reactive molecule, HCl, which is doped into an Ar cluster projectile and activated to produce protons at the bombardment site on the cold sample surface with the presence of water. The methodology has been shown to enhance the ionization of protonated molecular ions and to reduce salt suppression in complex biomatrices. In this study, we further examine the possibility of obtaining improved quantitation with DRI during depth profiling of thin films. Using a trehalose film as a model system, we are able to define optimal DRI conditions for depth profiling. Next, the strategy is applied to a multilayer system consisting of the polymer antioxidants Irganox 1098 and 1010. These binary mixtures have demonstrated large matrix effects, making quantitative SIMS measurement not feasible. Systematic comparisons of depth profiling of this multilayer film between directly using GCIB, and under DRI conditions, show that the latter enhances protonated ions for both components by 4- to 15-fold, resulting in uniform depth profiling in positive ion mode and almost no matrix effect in negative ion mode. The methodology offers a new strategy to tackle the matrix effect and should lead to improved quantitative measurement using SIMS.

  20. Low cost paths to binary optics

    NASA Technical Reports Server (NTRS)

    Nelson, Arthur; Domash, Lawrence

    1993-01-01

    Application of binary optics has been limited to a few major laboratories because of the limited availability of fabrication facilities such as e-beam machines and the lack of standardized design software. Foster-Miller has attempted to identify low cost approaches to medium-resolution binary optics using readily available computer and fabrication tools, primarily for the use of students and experimenters in optical computing. An early version of our system, MacBEEP, made use of an optimized laser film recorder from the commercial typesetting industry with 10 micron resolution. This report is an update on our current efforts to design and build a second generation MacBEEP, which aims at 1 micron resolution and multiple phase levels. Trials included a low cost scanning electron microscope in microlithography mode, and alternative laser inscribers or photomask generators. Our current software approach is based on Mathematica and PostScript compatibility.

  1. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821
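
    A toy illustration (not the LOBICO optimizer itself, with invented gene names and output levels) of what such an interpretable logic model looks like: a small Boolean formula over binary mutation features whose two truth values map to two predicted response levels.

      # Hypothetical toy logic model in the LOBICO spirit: a small Boolean
      # formula over binary mutation features predicts a continuous response
      # by assigning one output level to each truth value of the formula.
      def logic_model(features):
          """features: dict of binary mutation indicators for one cell line."""
          hit = (features["BRAF"] and features["NRAS"]) or features["TP53"]
          return -2.0 if hit else 0.5       # e.g. predicted log(IC50): sensitive vs resistant

      cell_lines = [
          {"BRAF": 1, "NRAS": 1, "TP53": 0},   # formula true  -> predicted sensitive
          {"BRAF": 1, "NRAS": 0, "TP53": 0},   # formula false -> predicted resistant
          {"BRAF": 0, "NRAS": 0, "TP53": 1},   # formula true  -> predicted sensitive
      ]
      print([logic_model(c) for c in cell_lines])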

  2. QTest: Quantitative Testing of Theories of Binary Choice.

    PubMed

    Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.

  3. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.

  4. A note about high blood pressure in childhood

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena; Simão, Carla

    2017-06-01

    In the medical, behavioral, and social sciences it is common to obtain binary outcomes. In the present work, information was collected in which some of the outcomes are binary variables (1 = 'yes' / 0 = 'no'). In [14] a preliminary study about caregivers' perception of pediatric hypertension was introduced. An experimental questionnaire was designed to be answered by the caregivers of routine pediatric consultation attendees at Santa Maria Hospital (HSM). The collected data were statistically analyzed: a descriptive analysis and a predictive model were performed. Significant relations between some socio-demographic variables and the assessed knowledge were obtained. A statistical analysis using part of the questionnaire information can be found in [14]. The present article completes the statistical approach by estimating models for the relevant remaining questions of the questionnaire using Generalized Linear Models (GLM). Exploring the binary outcome issue, we intend to extend this approach using Generalized Linear Mixed Models (GLMM), but that work is still ongoing.
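
    A minimal sketch (illustrative only, with invented data rather than the questionnaire responses, assuming the statsmodels library) of the kind of binomial GLM fit described above:

      # Hypothetical sketch: a binomial GLM (logistic regression) relating a
      # binary outcome to a covariate, the model family used in the study.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      age = rng.uniform(20, 60, 200)                 # invented covariate
      p = 1 / (1 + np.exp(-(-4 + 0.1 * age)))        # true success probability
      y = rng.binomial(1, p)                         # binary outcome (1 = 'yes', 0 = 'no')

      X = sm.add_constant(age)
      fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
      print(fit.params)                              # intercept and slope on the logit scale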

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalid, Farah F.; Deptuch, Grzegorz; Shenai, Alpana

    Monolithic Active Matrix with Binary Counters (MAMBO) is a counting ASIC designed for detecting and measuring low energy X-rays from 6-12 keV. Each pixel contains analogue functionality implemented with a charge preamplifier, CR-RC² shaper and a baseline restorer. It also contains a window comparator which can be trimmed by 4 bit DACs to remove systematic offsets. The hits are registered by a 12 bit ripple counter which is reconfigured as a shift register to serially output the data from the entire ASIC. Each pixel can be tested individually. Two diverse approaches have been used to prevent coupling between the detector and electronics in MAMBO III and MAMBO IV. MAMBO III is a 3D ASIC, the bottom ASIC consists of diodes which are connected to the top ASIC using µ-bump bonds. The detector is decoupled from the electronics by physically separating them on two tiers and using several metal layers as a shield. MAMBO IV is a monolithic structure which uses a nested well approach to isolate the detector from the electronics. The ASICs are being fabricated using the SOI 0.2 µm OKI process, MAMBO III is 3D bonded at T-Micro and MAMBO IV nested well structure was developed in collaboration between OKI and Fermilab.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalid, Farah; Deptuch, Grzegorz; Shenai, Alpana

    Monolithic Active Matrix with Binary Counters (MAMBO) is a counting ASIC designed for detecting and measuring low energy X-rays from 6-12 keV. Each pixel contains analogue functionality implemented with a charge preamplifier, CR-RC² shaper and a baseline restorer. It also contains a window comparator which can be trimmed by 4 bit DACs to remove systematic offsets. The hits are registered by a 12 bit ripple counter which is reconfigured as a shift register to serially output the data from the entire ASIC. Each pixel can be tested individually. Two diverse approaches have been used to prevent coupling between the detector and electronics in MAMBO III and MAMBO IV. MAMBO III is a 3D ASIC, the bottom ASIC consists of diodes which are connected to the top ASIC using µ-bump bonds. The detector is decoupled from the electronics by physically separating them on two tiers and using several metal layers as a shield. MAMBO IV is a monolithic structure which uses a nested well approach to isolate the detector from the electronics. The ASICs are being fabricated using the SOI 0.2 µm OKI process, MAMBO III is 3D bonded at T-Micro and MAMBO IV nested well structure was developed in collaboration between OKI and Fermilab.

  7. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    PubMed Central

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

    Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson-sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks, but we argue that they make inappropriate assumptions regarding data noise, i.e. that it is uniform and Gaussian. Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994

  8. Modifying Matrix Materials to Increase Wetting and Adhesion

    NASA Technical Reports Server (NTRS)

    Zhong, Katie

    2011-01-01

    In an alternative approach to increasing the degrees of wetting and adhesion between the fiber and matrix components of organic-fiber/polymer matrix composite materials, the matrix resins are modified. Heretofore, it has been common practice to modify the fibers rather than the matrices: The fibers are modified by chemical and/or physical surface treatments prior to combining the fibers with matrix resins - an approach that entails considerable expense and usually results in degradation (typically, weakening) of fibers. The alternative approach of modifying the matrix resins does not entail degradation of fibers, and affords opportunities for improving the mechanical properties of the fiber composites. The alternative approach is more cost-effective, not only because it eliminates expensive fiber-surface treatments but also because it does not entail changes in procedures for manufacturing conventional composite-material structures. The alternative approach is best described by citing an example of its application to a composite of ultra-high-molecular- weight polyethylene (UHMWPE) fibers in an epoxy matrix. The epoxy matrix was modified to a chemically reactive, polarized epoxy nano-matrix to increase the degrees of wetting and adhesion between the fibers and the matrix. The modification was effected by incorporating a small proportion (0.3 weight percent) of reactive graphitic nanofibers produced from functionalized nanofibers into the epoxy matrix resin prior to combining the resin with the UHMWPE fibers. The resulting increase in fiber/matrix adhesion manifested itself in several test results, notably including an increase of 25 percent in the maximum fiber pullout force and an increase of 60-65 percent in fiber pullout energy. In addition, it was conjectured that the functionalized nanofibers became involved in the cross linking reaction of the epoxy resin, with resultant enhancement of the mechanical properties and lower viscosity of the matrix.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanatzidis, Mercouri; Riley, Brian; Chun, Jaehun

    This report documents the work done under a NEUP grant to examine the capability of novel chalcogels and some binary metal chalcogenides as a host matrix for the capture of gaseous iodine and the feasibility of converting their iodine-laden materials into a permanent waste form. The presented work was conducted over the last two years. A number of novel chalcogels Zn₂Sn₂S₆, Sb₄Sn₄S₁₂, NiMoS₄, CoMoS₄, antimony sulfide (SbSₓ) chalcogels, silver-functionalized chalcogels and binary metal sulfides (Sb₂S₃) were developed and studied for their iodine absorption efficacies. A new and simple route was devised for the large scale preparation of antimony sulfide chalcogel. The chalcogel was obtained by treating Sb₂S₃ with Na₂S in the presence of water followed by addition of formamide. The obtained gels have a low-density sponge-like network of mesoporous nature having a BET surface area of 125 m²/g. The chalcogels, silver-functionalized chalcogel and the binary metal sulfides were exposed to iodine vapors in a closed container. Silver-functionalized chalcogels and Sb₂S₃ powders showed iodine uptake up to 100 wt%; the highest iodine uptake of 200 wt% was observed for the SbS-III chalcogel. The PXRD patterns of iodine-laden specimens revealed that iodine shows spontaneous chemisorption to the matrix used. The iodine-loaded chalcogels and the binary chalcogenides were sealed under vacuum in fused silica ampoules and heated in a temperature-controlled furnace. The consolidated products were analyzed by PXRD, energy dispersive spectroscopy (EDS), UV-Vis and Raman spectroscopy. The final products were found to be amorphous in most cases, with a high amount (~4-35 wt%) of iodine, and approximately 60-90% of the absorbed iodine could be consolidated into the final waste form. Alginate-reinforced composite scaffolds with SbS/SnS chalcogels and Sb₂S₃ bulk powder were also fabricated, aiming to study their efficacy as host matrices in capturing gaseous molecular iodine in dynamic mode from spent nuclear fuel. The obtained composites look robust in comparison to their respective pristine chalcogels and Sb₂S₃ bulk powder.

  10. Genders and Individual Treatment Progress in (Non-)Binary Trans Individuals.

    PubMed

    Koehler, Andreas; Eyssel, Jana; Nieder, Timo O

    2018-01-01

    Health care for transgender and transsexual (ie, trans) individuals has long been based on a binary understanding of gender (ie, feminine vs masculine). However, the existence of non-binary or genderqueer (NBGQ) genders is increasingly recognized by academic and/or health care professionals. To gain insight into the individual health care experiences and needs of binary and NBGQ individuals to improve their health care outcomes and experience. Data were collected using an online survey study on experiences with trans health care. The non-clinical sample consisted of 415 trans individuals. An individual treatment progress score was calculated to report and compare participants' individual progress toward treatment completion and consider the individual treatment needs and definitions of completed treatment (ie, amount and types of different treatments needed to complete one's medical transition). Main outcome measures were (i) general and trans-related sociodemographic data and (ii) received and planned treatments. Participants reported binary (81.7%) and different NBGQ (18.3%) genders. The 2 groups differed significantly in basic demographic data (eg, mean age; P < .05). NBGQ participants reported significantly fewer received treatments compared with binary participants. For planned treatments, binary participants reported more treatments related to primary sex characteristics only. Binary participants required more treatments for a completed treatment than NBGQ participants (6.0 vs 4.0). There were no differences with regard to individual treatment progress score. Because traditional binary-focused treatment practice could have hindered NBGQ individuals from accessing trans health care or sufficiently articulating their needs, health care professionals are encouraged to provide a holistic and individual treatment approach and acknowledge genders outside the gender binary to address their needs appropriately. Because the study was made inclusive for non-patients and individuals who decided against trans health care, bias from a participant-patient double role was prevented, which is the reason the results are likely to have a higher level of validity than a clinical sample. However, because of the anonymity of an online survey, it remains unclear whether NBGQ individuals live according to their gender identity in their everyday life. The study highlights the broad spectrum of genders in trans-individuals and associated health care needs and provides a novel approach to measure individual treatment progress in trans individuals. Koehler A, Eyssel J, Nieder TO. Genders and Individual Treatment Progress in (Non-)Binary Trans Individuals. J Sex Med 2018;15:102-113. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  11. Formation of close binary black holes merging due to gravitational-wave radiation

    NASA Astrophysics Data System (ADS)

    Tutukov, A. V.; Cherepashchuk, A. M.

    2017-10-01

    The conditions for the formation of close-binary black-hole systems merging over the Hubble time due to gravitational-wave radiation are considered in the framework of current ideas about the evolution of massive close-binary systems. The original systems whose mergers were detected by LIGO consisted of main-sequence stars with masses of 30-100 M⊙. The preservation of the compactness of a binary black hole during the evolution of its components requires either the formation of a common envelope, probably also with a low initial abundance of metals, or the presence of a "kick"—a velocity obtained during a supernova explosion accompanied by the formation of a black hole. In principle, such a kick can explain the relatively low frequency of mergers of the components of close-binary stellar black holes, if the characteristic speed of the kick exceeds the orbital velocities of the system components during the supernova explosion. Another opportunity for the components of close-binary systems to approach each other is related to their possible motion in a dense molecular cloud.

  12. Design of an optical 4-bit binary to BCD converter using electro-optic effect of lithium niobate based Mach-Zehnder interferometers

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2017-07-01

    A binary to binary-coded decimal (BCD) converter is a basic building block for BCD processing. The last few decades have witnessed an exponential rise in applications of binary-coded data processing in the field of optical computing, and thus there is a growing demand for an acceptable hardware platform for the same. With this as the approach, a novel design exploiting the preeminent features of the Mach-Zehnder interferometer (MZI) is presented in this paper. Here, an optical 4-bit binary to binary-coded decimal (BCD) converter utilizing the electro-optic effect of lithium niobate based MZIs is demonstrated. The MZI exhibits the property of switching the optical signal from one port to the other when a certain appropriate voltage is applied to its electrodes. The proposed scheme is implemented using combinations of cascaded electro-optic (EO) switches. A theoretical description along with a mathematical formulation of the device is provided, and the operation is analyzed through the finite-difference beam propagation method (FD-BPM). The fabrication techniques to develop the device are also discussed.
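
    The logical mapping that the optical circuit realizes can be written down directly; the sketch below shows only the 4-bit binary to BCD conversion in software and makes no claim about the MZI implementation.

      # Hypothetical sketch of the target logic only: a 4-bit binary value
      # (0-15) maps to two BCD digits (tens and units), each coded on 4 bits.
      def binary4_to_bcd(value: int) -> str:
          assert 0 <= value <= 15
          tens, units = divmod(value, 10)
          return f"{tens:04b} {units:04b}"

      for v in (9, 10, 15):
          print(f"{v:04b} -> {binary4_to_bcd(v)}")
      # 1001 -> 0000 1001,  1010 -> 0001 0000,  1111 -> 0001 0101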

  13. Concentration dependence of electrical resistivity of binary liquid alloy HgZn: Ab-initio study

    NASA Astrophysics Data System (ADS)

    Sharma, Nalini; Thakur, Anil; Ahluwalia, P. K.

    2013-06-01

    The electrical resistivity of the HgZn liquid alloy has been calculated as a function of concentration using the Troullier and Martins ab-initio pseudopotential. Hard-sphere diameters of Hg and Zn, obtained through the inter-ionic pair potential, have been used to calculate partial structure factors. Considering the liquid alloy to be a ternary mixture, Ziman's formula for calculating the resistivity of binary liquid alloys, modified for complex formation, has been used. These results suggest that the ab-initio approach for calculating electrical resistivity is quite successful in explaining the electronic transport properties of binary liquid alloys.

  14. Theoretical studies of binaries in astrophysics

    NASA Astrophysics Data System (ADS)

    Dischler, Johann Sebastian

    This thesis introduces and summarizes four papers dealing with computer simulations of astrophysical processes involving binaries. The first part gives the rationale and theoretical background to these papers. In papers I and II a statistical approach to studying eclipsing binaries is described. By using population synthesis models for binaries, the probabilities for eclipses are calculated for different luminosity classes of binaries. These are compared with Hipparcos data and they agree well if one uses a standard input distribution for the orbit sizes. If one uses a random pairing model, where both companions are independently picked from an IMF, one finds too few eclipsing binaries by an order of magnitude. In paper III we investigate a possible scenario for the origin of the stars observed close to the centre of our galaxy, called S stars. We propose that a cluster falls radially towards the central black hole. The binaries within the cluster can then, if they have small impact parameters, be broken up by the black hole's tidal field and one of the components of the binary will be captured by the black hole. Paper IV investigates how the onset of mass transfer in eccentric binaries depends on the eccentricity. To do this we have developed a new two-phase SPH scheme where very light particles are at the outer edge of our simulated star. This enables us to get a much better resolution of the very small mass that is transferred in close binaries.

  15. A Scientific Basis for an Alternate Cathode Architecture.

    DTIC Science & Technology

    1988-02-01

    working it below the annealing temperature. However, when the filament operated above the annealing temperature, it recrystallized with...an impregnant ratio of 5 moles of BaCO₃ : 2 moles Al₂O₃. This represented the lowest eutectic point in the binary phase diagram. This cathode was...matrix. In its original composition, cathode impregnants in the ratio of 5BaO:2Al₂O₃ were chosen because this is the lowest melting point eutectic not...

  16. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.

  17. INTERNATIONAL CONFERENCE ON SEMICONDUCTOR INJECTION LASERS SELCO-87: Method for calculation of electrical and optical properties of laser active media

    NASA Astrophysics Data System (ADS)

    Aleksandrov, D. G.; Filipov, F. I.

    1988-11-01

    A method is proposed for calculation of the electron band structure of multicomponent semiconductor solid solutions. Use is made of virtual atomic orbitals formed from real orbitals. The method represents essentially an approximation of a multicomponent solid solution by a binary one. The matrix elements of the Hamiltonian are obtained in the methods of linear combinations of atomic and bound orbitals. Some approximations used in these methods are described.

  18. Capillaries for use in a multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Chang, Huan-Tsang; Fung, Eliza N.

    1997-12-09

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  19. Capillaries for use in a multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Chang, H.T.; Fung, E.N.

    1997-12-09

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  20. Plasma Processing Systems for the Manufacture of Refractory Metals and their Alloys for Military Needs

    DTIC Science & Technology

    1978-10-09

    melting point is around 4000 K. An exceedingly interesting feature of these solidification composites is the formation of fibrous MC type carbide ...the matrix could be refractory metal binary alloys with copper or uranium and the eutectic phase could be carbide of tungsten, molybdenum, tantalum or... LIST OF FIGURES: FIG. 1 Flow Diagram of Cemented Carbide Manufacture

  1. A Framework for Designing Cluster Randomized Trials with Binary Outcomes

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Martinez, Andres

    2011-01-01

    The purpose of this paper is to provide a framework for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an…
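
    One standard way to carry out such a proportion-based calculation (a textbook sketch under common simplifying assumptions, not necessarily the authors' exact framework, assuming scipy) is to size an individually randomized two-proportion comparison and inflate it by the design effect 1 + (m − 1)ρ for clusters of size m and intraclass correlation ρ.

      # Hypothetical sketch: sample size per arm for a CRT with a binary outcome,
      # via the two-proportion formula inflated by the design effect 1 + (m-1)*icc.
      from math import ceil
      from scipy.stats import norm

      def crt_n_per_arm(p1, p2, m, icc, alpha=0.05, power=0.8):
          z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
          n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
          deff = 1 + (m - 1) * icc                 # design effect for cluster size m
          n_total = n_ind * deff                   # individuals per arm
          return ceil(n_total), ceil(n_total / m)  # (individuals, clusters) per arm

      print(crt_n_per_arm(p1=0.30, p2=0.20, m=20, icc=0.02))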

  2. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
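
    The classification described above can be made concrete with a short sketch (illustrative only, not from the article): collapsing each base to purine (R) or pyrimidine (Y) turns the 64 codons into 2³ = 8 binary patterns.

      # Hypothetical sketch: collapse each base to purine (R: A, G) or
      # pyrimidine (Y: C, T), so the 64 codons fall into 2^3 = 8 binary patterns.
      from itertools import product

      RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

      def ry_pattern(codon: str) -> str:
          return "".join(RY[base] for base in codon.upper())

      codons = ["".join(c) for c in product("ACGT", repeat=3)]
      patterns = {ry_pattern(c) for c in codons}
      print(ry_pattern("ATG"), len(patterns))   # 'RYR', 8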

  3. Binary image classification

    NASA Technical Reports Server (NTRS)

    Morris, Carl N.

    1987-01-01

    Motivated by the LANDSAT problem of estimating the probability of crop or geological types based on multi-channel satellite imagery data, Morris and Kostal (1983), Hill, Hinkley, Kostal, and Morris (1984), and Morris, Hinkley, and Johnston (1985) developed an empirical Bayes approach to this problem. Here, researchers return to those developments, making certain improvements and extensions, but restricting attention to the binary case of only two attributes.

  4. Phase behaviour of the symmetric binary mixture from thermodynamic perturbation theory.

    PubMed

    Dorsaz, N; Foffi, G

    2010-03-17

    We study the phase behaviour of symmetric binary mixtures of hard core Yukawa (HCY) particles via thermodynamic perturbation theory (TPT). We show that all the topologies of phase diagram reported for the symmetric binary mixtures are correctly reproduced within the TPT approach. In a second step we use the capability of TPT to be straightforwardly extended to mixtures that are nonsymmetric in size. Starting from mixtures that belong to the different topologies of symmetric binary mixtures we investigate the effect on the phase behaviour when an asymmetry in the diameters of the two components is introduced. Interestingly, when the energy of interaction between unlike particles is weaker than the interaction between like particles, the propensity for the solution to demix is found to increase strongly with size asymmetry.

  5. A comparison of multiple imputation methods for incomplete longitudinal binary data.

    PubMed

    Yamaguchi, Yusuke; Misumi, Toshihiro; Maruo, Kazushi

    2018-01-01

    Longitudinal binary data are commonly encountered in clinical trials. Multiple imputation is an approach for obtaining valid estimates of treatment effects under a missing-at-random assumption. Although there are a variety of multiple imputation methods for longitudinal binary data, few studies have reported on the relative performance of these methods. Moreover, when focusing on the treatment effect throughout a period, an endpoint often used in clinical evaluations of specific disease areas, no definitive comparisons of the methods have been available. We conducted an extensive simulation study to examine the comparative performance of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data, where two endpoints, responder rates at a specified time point and throughout a period, were assessed. The simulation study suggested that results from the naive approaches of single imputation with non-responders and of complete case analysis could be very sensitive to missing data. The multiple imputation methods using a monotone method and a full conditional specification with a logistic regression imputation model were recommended for obtaining unbiased and robust estimates of the treatment effect. The methods are illustrated with data from a mental health study.
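
    Whatever imputation model is used, the m completed-data analyses end with a pooling step based on Rubin's rules. A minimal illustrative sketch in Python (not the SAS procedures referred to above; the numbers are made up):

        import numpy as np

        def rubin_pool(estimates, variances):
            """Pool m point estimates and their within-imputation variances:
            total variance = W + (1 + 1/m) * B (Rubin's rules)."""
            estimates = np.asarray(estimates, dtype=float)
            variances = np.asarray(variances, dtype=float)
            m = len(estimates)
            q_bar = estimates.mean()            # pooled estimate
            w = variances.mean()                # mean within-imputation variance
            b = estimates.var(ddof=1)           # between-imputation variance
            return q_bar, w + (1.0 + 1.0 / m) * b

        # e.g. responder rates estimated from five imputed data sets
        est, total_var = rubin_pool([0.42, 0.45, 0.40, 0.44, 0.43],
                                    [0.0021, 0.0022, 0.0020, 0.0021, 0.0022])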

  6. Control Synthesis of Discrete-Time T-S Fuzzy Systems: Reducing the Conservatism Whilst Alleviating the Computational Burden.

    PubMed

    Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Peng, Chen

    2017-09-01

    The augmented multi-indexed matrix approach is a powerful tool for reducing the conservatism of control synthesis for discrete-time Takagi-Sugeno fuzzy systems. However, its computational burden is sometimes too heavy as a tradeoff. Reducing the conservatism whilst alleviating the computational burden is therefore an attractive but very challenging problem, and this paper works toward an efficient way of achieving a satisfactory answer. Different from the augmented multi-indexed matrix approach in the literature, we design a more efficient slack variable approach under a general framework of homogeneous matrix polynomials. Thanks to the introduction of a new extended representation for homogeneous matrix polynomials, related matrices with the same coefficient are collected into one single set, so the redundant terms of the augmented multi-indexed matrix approach can be removed, i.e., the computational burden can be alleviated. More importantly, because more useful information is involved in the control design, the conservatism of the proposed approach is also lower than that of the augmented multi-indexed matrix approach. Finally, numerical experiments are given to show the effectiveness of the proposed approach.

  7. Constrained binary classification using ensemble learning: an application to cost-efficient targeted PrEP strategies.

    PubMed

    Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya

    2018-01-30

    Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus (HIV) prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
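
    A minimal sketch of the constrained thresholding idea (only the threshold choice, not the Super Learner ensemble; names and the example constraint are illustrative): given risk scores from any classifier, pick the cut-off that minimizes the rate of positive predictions subject to a minimum sensitivity.

        import numpy as np

        def min_positives_threshold(scores, labels, min_sensitivity=0.9):
            """Smallest positive-prediction rate whose sensitivity >= min_sensitivity.
            Both the objective and the constraint are monotone in the threshold."""
            scores = np.asarray(scores, dtype=float)
            labels = np.asarray(labels, dtype=int)
            best = None
            for t in np.unique(scores):
                pred = scores >= t
                sens = pred[labels == 1].mean()   # sensitivity at threshold t
                rate = pred.mean()                # rate of positive predictions
                if sens >= min_sensitivity and (best is None or rate < best[1]):
                    best = (t, rate, sens)
            return best   # (threshold, positive rate, sensitivity), or None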

  8. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
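
    A minimal numpy sketch of the alternating minimization at the core of ITQ (illustrative, not the authors' released code); V denotes the zero-centered, PCA-projected data:

        import numpy as np

        def itq(V, n_iter=50, seed=0):
            """Find a rotation R minimizing ||B - V R||_F with B in {-1, +1}."""
            rng = np.random.default_rng(seed)
            c = V.shape[1]
            R, _ = np.linalg.qr(rng.standard_normal((c, c)))   # random orthogonal start
            for _ in range(n_iter):
                B = np.sign(V @ R)                 # fix R, update the binary codes
                B[B == 0] = 1
                U, _, Wt = np.linalg.svd(V.T @ B)  # fix B, orthogonal Procrustes step
                R = U @ Wt
            return np.sign(V @ R), R               # final codes and rotation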

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Soumya D.; Nayak, Rajesh K.

    The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10^-4 to 10^-3 Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude--the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources in the presence of white Gaussian noise.

  10. Investigating the binary nature of active asteroid 288P/300163

    NASA Astrophysics Data System (ADS)

    Agarwal, Jessica

    2016-10-01

    We propose to study the suspected binary nature of active asteroid 288P/300163. We aim to confirm or disprove the existence of a binary nucleus, and - if confirmed - to measure the mutual orbital period and orbit orientation of the components, and their sizes. We request 5 orbits of WFC3 imaging, spaced at intervals of 8-12 days. 288P belongs to the recently discovered group of active asteroids, and is particularly remarkable as HST images obtained during its last close approach to Earth in 2011 are consistent with a barely resolved binary system. If confirmed, 288P would be the first known active binary asteroid. For the first time, we would see two important consequences of rotational break-up in a single object: binary formation and dust ejection, highlighting the importance of the YORP effect in re-shaping the asteroid belt. Confirming 288P as a binary would be a key step towards understanding the evolutionary processes underlying asteroid activity. In order to resolve the two components we need 288P at a geocentric distance comparable to or less than that of December 2011 (1.85 AU). This condition will be fulfilled, for the first time since 2011, between mid-July and mid-November of 2016. The next opportunity to carry out such observations will be in 2021.

  11. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR generates a one-bit binary code for each dimension and simultaneously ranks the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
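
    A heavily simplified sketch of the one-bit-per-dimension idea (the actual MCR cost function is not reproduced; negative variance is used below only as a stand-in ranking criterion):

        import numpy as np

        def short_codes(X, n_bits):
            """Binarize each dimension at its median, rank bits by a stand-in
            cost, and keep the n_bits lowest-cost dimensions."""
            X = np.asarray(X, dtype=float)
            bits = (X > np.median(X, axis=0)).astype(np.uint8)   # one bit per dimension
            cost = -X.var(axis=0)                                # hypothetical cost
            keep = np.argsort(cost)[:n_bits]                     # minimum-cost bits
            return bits[:, keep], keep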

  12. Two-Point Resistance of a Non-Regular Cylindrical Network with a Zero Resistor Axis and Two Arbitrary Boundaries

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Zhong

    2017-03-01

    We study a problem of two-point resistance in a non-regular m × n cylindrical network with a zero resistor axis and two arbitrary boundaries by means of the Recursion-Transform method. This is a new problem that has not been solved before; the Green's function technique and the Laplacian matrix approach are invalid in this case. A disordered network with arbitrary boundaries is a basic model in many physical or real-world systems; however, the exact calculation of the resistance of a binary resistor network is important but difficult in the case of arbitrary boundaries, since the boundary acts like a wall or trap that affects the behavior of the finite network. In this paper we obtain a general resistance formula for a non-regular m × n cylindrical network, which is composed of a single summation. Further, the current distribution is given explicitly as a byproduct of the method. As applications, several interesting results are derived as special cases of the general formula. Supported by the Natural Science Foundation of Jiangsu Province under Grant No. BK20161278

  13. Sensing Home: A Cost-Effective Design for Smart Home via Heterogeneous Wireless Networks

    PubMed Central

    Fan, Xiaohu; Huang, Hao; Qi, Shipeng; Luo, Xincheng; Zeng, Jing; Xie, Qubo; Xie, Changsheng

    2015-01-01

    The aging population has inspired a market for advanced real-time home health care devices, and more and more wearable devices and mobile applications have emerged in this field. However, properly collecting behavior information, accurately recognizing human activities, and deploying the whole system in a real living environment is a challenging task. In this paper, we propose a feasible wireless-based solution comprising a data collection scheme, an activity recognition model, feedback control, and mobile integration via heterogeneous networks. We compared candidate algorithms and found one suitable to run on cost-efficient embedded devices. Specifically, we use the Super Set Transformation method to map the raw data into a sparse binary matrix. Furthermore, purpose-designed low-power front-end devices gather the inhabitants' living data via ZigBee to reduce the burden of wiring work. Finally, we evaluated our approach and show it can achieve a theoretical time-slice accuracy of 98%. The mapping solution we propose is compatible with more wearable devices and mobile apps. PMID:26633424

  14. Partial Least Squares with Structured Output for Modelling the Metabolomics Data Obtained from Complex Experimental Designs: A Study into the Y-Block Coding.

    PubMed

    Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston

    2016-10-28

    Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.

  15. Sensing Home: A Cost-Effective Design for Smart Home via Heterogeneous Wireless Networks.

    PubMed

    Fan, Xiaohu; Huang, Hao; Qi, Shipeng; Luo, Xincheng; Zeng, Jing; Xie, Qubo; Xie, Changsheng

    2015-12-03

    The aging population has inspired a market for advanced real-time home health care devices, and more and more wearable devices and mobile applications have emerged in this field. However, properly collecting behavior information, accurately recognizing human activities, and deploying the whole system in a real living environment is a challenging task. In this paper, we propose a feasible wireless-based solution comprising a data collection scheme, an activity recognition model, feedback control, and mobile integration via heterogeneous networks. We compared candidate algorithms and found one suitable to run on cost-efficient embedded devices. Specifically, we use the Super Set Transformation method to map the raw data into a sparse binary matrix. Furthermore, purpose-designed low-power front-end devices gather the inhabitants' living data via ZigBee to reduce the burden of wiring work. Finally, we evaluated our approach and show it can achieve a theoretical time-slice accuracy of 98%. The mapping solution we propose is compatible with more wearable devices and mobile apps.

  16. The analysis of image feature robustness using cometcloud

    PubMed Central

    Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin

    2012-01-01

    The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges including out-of-focus blur, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than the local binary pattern and texton features. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759

  17. VizieR Online Data Catalog: ASAS, NSVS, and LINEAR detached eclipsing binaries (Lee, 2015)

    NASA Astrophysics Data System (ADS)

    Lee, C.-H.

    2016-04-01

    We follow the approach of Devor et al. (2008AJ....135..850D, Cat. J/AJ/135/850) to analyse the LC from ASAS (Pojmanski et al., Cat. II/264), NSVS (Wozniak et al., 2004AJ....127.2436W), and LINEAR (Palaversa et al., Cat. J/AJ/146/101) and extract the physical properties of the eclipsing binaries. (3 data files).

  18. Evaporative lithographic patterning of binary colloidal films.

    PubMed

    Harris, Daniel J; Conrad, Jacinta C; Lewis, Jennifer A

    2009-12-28

    Evaporative lithography offers a promising new route for patterning a broad array of soft materials. In this approach, a mask is placed above a drying film to create regions of free and hindered evaporation, which drive fluid convection and entrained particles to regions of highest evaporative flux. We show that binary colloidal films exhibit remarkable pattern formation when subjected to a periodic evaporative landscape during drying.

  19. Full Ionisation In Binary-Binary Encounters With Small Positive Energies

    NASA Astrophysics Data System (ADS)

    Sweatman, W. L.

    2006-08-01

    Interactions of binary stars with single stars and with other binary stars play a key role in the dynamics of a dense stellar system. Energy can be transferred between the internal dynamics of a binary and the larger scale dynamics of the interacting objects. Binaries can be destroyed and created by the interaction. In a binary-binary encounter, full ionisation occurs when both of the binary stars are destroyed in the interaction to create four single stars. This is only possible when the total energy of the system is positive. For very small energies the probability of this occurring is very low and it tends towards zero as the total energy tends towards zero. Here the case is considered for which all the stars have equal masses. An asymptotic power law is predicted relating the probability of full ionisation to the total energy when this latter quantity is small. The exponent, which is approximately 2.31, is compared with the results from numerical scattering experiments. The theoretical approach taken is similar to one used previously in the three-body problem. It makes use of the fact that the most dramatic changes in scale and energies of a few-body system occur when its components pass near to a central configuration. The positions and number of these configurations are not known for the general four-body problem; however, with equal masses there are known to be exactly five different cases. Separate consideration and comparison of the properties of orbits close to each of these five central configurations enables the prediction of the form of the cross-section for full ionisation for the case of small positive total energy, i.e. the relation between total energy and the probability of total ionisation described above.

  20. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    PubMed

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods degrades because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

  1. Development and Validation of a Job Exposure Matrix for Physical Risk Factors in Low Back Pain

    PubMed Central

    Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2012-01-01

    Objectives The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). Materials and Methods We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. Results The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. Conclusions The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology. PMID:23152793

  2. A Novel Partial Sequence Alignment Tool for Finding Large Deletions

    PubMed Central

    Aruk, Taner; Ustek, Duran; Kursun, Olcay

    2012-01-01

    Finding large deletions in genome sequences has become increasingly useful in bioinformatics, such as in clinical research and diagnosis. Although there are a number of publicly available next-generation sequencing mapping and sequence alignment programs, these software packages do not correctly align fragments containing deletions larger than one kb. We present a fast alignment software package, BinaryPartialAlign, that can be used by wet-lab scientists to find long structural variations in their experiments. For BinaryPartialAlign, we make use of the Smith-Waterman (SW) algorithm with a binary-search-based approach for alignment with large gaps, which we call partial alignment. The BinaryPartialAlign implementation is compared with other straightforward applications of SW. Simulation results on mtDNA fragments demonstrate the effectiveness (runtime and accuracy) of the proposed method. PMID:22566777
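
    For context, a minimal Smith-Waterman local-alignment scorer of the kind BinaryPartialAlign builds on (the binary-search partial-alignment extension itself is not reproduced; scoring parameters are illustrative):

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
            """Return the best local alignment score between strings a and b."""
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0,
                                  H[i - 1][j - 1] + s,   # match / mismatch
                                  H[i - 1][j] + gap,     # gap in b
                                  H[i][j - 1] + gap)     # gap in a
                    best = max(best, H[i][j])
            return best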

  3. Hybrid Black-Hole Binary Initial Data

    NASA Technical Reports Server (NTRS)

    Mundim, Bruno C.; Kelly, Bernard J.; Nakano, Hiroyuki; Zlochower, Yosef; Campanelli, Manuela

    2010-01-01

    "Traditional black-hole binary puncture initial data is conformally flat. This unphysical assumption is coupled with a lack of radiation signature from the binary's past life. As a result, waveforms extracted from evolutions of this data display an abrupt jump. In Kelly et al. [Class. Quantum Grav. 27:114005 (2010)], a new binary black-hole initial data with radiation contents derived in the post-Newtonian (PN) calculations was adapted to puncture evolutions in numerical relativity. This data satisfies the constraint equations to the 2.5PN order, and contains a transverse-traceless "wavy" metric contribution, violating the standard assumption of conformal flatness. Although the evolution contained less spurious radiation, there were undesired features; the unphysical horizon mass loss and the large initial orbital eccentricity. Introducing a hybrid approach to the initial data evaluation, we significantly reduce these undesired features."

  4. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
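
    A small numpy illustration of the separability idea behind this construction: when the correlation factorizes over the coordinate directions, the decomposition of the full matrix can be assembled from the much cheaper 1-D decompositions through a Kronecker product (grid sizes and the exponential correlation model below are illustrative, not from the paper):

        import numpy as np

        def corr_1d(n, length_scale):
            """1-D exponential correlation matrix on a unit-spaced grid."""
            idx = np.arange(n)
            return np.exp(-np.abs(idx[:, None] - idx[None, :]) / length_scale)

        Cx, Cy, Cz = corr_1d(6, 2.0), corr_1d(5, 3.0), corr_1d(4, 1.5)
        C = np.kron(np.kron(Cx, Cy), Cz)   # separable full correlation (120 x 120)

        # Eigenvalues of C are products of the 1-D eigenvalues, so the full
        # decomposition never has to touch the 120 x 120 matrix directly.
        lx, ly, lz = (np.linalg.eigvalsh(M) for M in (Cx, Cy, Cz))
        full = np.sort(np.linalg.eigvalsh(C))
        assembled = np.sort(np.multiply.outer(np.multiply.outer(lx, ly), lz).ravel())
        assert np.allclose(full, assembled)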

  5. Manifold regularized matrix completion for multi-label learning with ADMM.

    PubMed

    Liu, Bin; Li, Yingming; Xu, Zenglin

    2018-05-01

    Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval, and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption of unlabeled data, i.e., that neighboring instances should share a similar set of labels. Thus they may underexploit the intrinsic structure of the data. In addition, solving the matrix completion problem directly can be computationally inefficient. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
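
    A minimal sketch of the generic low-rank completion core (iterative singular value soft-thresholding on the observed entries); the manifold regularization and the ADMM solver described above are not included, and the names are illustrative:

        import numpy as np

        def soft_impute(M, mask, lam=1.0, n_iter=100):
            """Fill the entries of M where mask is False, keeping observed entries
            fixed, by repeatedly shrinking singular values (nuclear-norm bias)."""
            X = np.where(mask, M, 0.0)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                s = np.maximum(s - lam, 0.0)        # soft-threshold singular values
                Z = (U * s) @ Vt                    # current low-rank estimate
                X = np.where(mask, M, Z)            # re-impose observed entries
            return X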

  6. Design quadrilateral apertures in binary computer-generated holograms of large space bandwidth product.

    PubMed

    Wang, Jing; Sheng, Yunlong

    2016-09-20

    A new approach for designing the binary computer-generated hologram (CGH) of a very large number of pixels is proposed. Diffraction of the CGH apertures is computed by the analytical Abbe transform and by considering the aperture edges as the basic diffracting elements. The computation cost is independent of the CGH size. The arbitrary-shaped polygonal apertures in the CGH consist of quadrilateral apertures, which are designed by assigning the binary phases using the parallel genetic algorithm with a local search, followed by optimizing the locations of the co-vertices with a direct search. The design results in high performance with low image reconstruction error.

  7. An approach to the language discrimination in different scripts using adjacent local binary pattern

    NASA Astrophysics Data System (ADS)

    Brodić, D.; Amelio, A.; Milivojević, Z. N.

    2017-09-01

    The paper proposes a method for discriminating the language of documents. First, each letter is encoded with a certain script type according to its status in the baseline area. The resulting cipher text is subjected to a feature extraction process, in which the local binary pattern as well as its expanded version, the adjacent local binary pattern, are extracted. Because of differences in language characteristics, this analysis shows significant diversity, and this diversity is the key element in differentiating the languages. The proposed method is tested on example documents. The experiments give encouraging results.

  8. Binary similarity measures for fingerprint analysis of qualitative metabolomic profiles.

    PubMed

    Rácz, Anita; Andrić, Filip; Bajusz, Dávid; Héberger, Károly

    2018-01-01

    Contemporary metabolomic fingerprinting is based on multiple spectrometric and chromatographic signals, used either alone or combined with structural and chemical information of metabolic markers at the qualitative and semiquantitative level. However, signal shifting, convolution, and matrix effects may compromise metabolomic patterns. The recent increase in the use of qualitative metabolomic data, described by the presence (1) or absence (0) of particular metabolites, demonstrates great potential in the field of metabolomic profiling and fingerprint analysis. The aim of this study is a comprehensive evaluation of binary similarity measures for the elucidation of patterns among samples of different botanical origin and various metabolomic profiles. Nine qualitative metabolomic data sets covering a wide range of natural products and metabolomic profiles were used to assess 44 binary similarity measures for the fingerprinting of plant extracts and natural products. The measures were analyzed by the novel sum of ranking differences (SRD) method, searching for the most promising candidates. The Baroni-Urbani-Buser (BUB) and Hawkins-Dotson (HD) similarity coefficients were selected as the best measures by SRD and analysis of variance (ANOVA), while Dice (Di1), Yule, Russell-Rao, and Consonni-Todeschini 3 ranked the worst. ANOVA revealed that concordantly and intermediately symmetric similarity coefficients are better candidates for metabolomic fingerprinting than the asymmetric and correlation-based ones. Fingerprint analysis based on the BUB and HD coefficients and qualitative metabolomic data performed as well as the quantitative metabolomic profile analysis. Fingerprint analysis based on qualitative metabolomic profiles and binary similarity measures proved to be a reliable way of finding the same or similar patterns in metabolomic data as those extracted from quantitative data.
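
    For reference, two of the coefficients named above, written out for presence/absence vectors with a = shared presences, d = shared absences, and b, c = the two kinds of mismatches (a minimal sketch):

        import numpy as np

        def _counts(x, y):
            x, y = np.asarray(x, bool), np.asarray(y, bool)
            a = np.sum(x & y)      # present in both
            b = np.sum(x & ~y)     # present only in x
            c = np.sum(~x & y)     # present only in y
            d = np.sum(~x & ~y)    # absent in both
            return a, b, c, d

        def dice(x, y):
            """Dice coefficient: 2a / (2a + b + c)."""
            a, b, c, _ = _counts(x, y)
            return 2 * a / (2 * a + b + c)

        def baroni_urbani_buser(x, y):
            """Baroni-Urbani-Buser: (sqrt(a*d) + a) / (sqrt(a*d) + a + b + c)."""
            a, b, c, d = _counts(x, y)
            return (np.sqrt(a * d) + a) / (np.sqrt(a * d) + a + b + c)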

  9. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Optical modular arithmetic

    NASA Astrophysics Data System (ADS)

    Pavlichin, Dmitri S.; Mabuchi, Hideo

    2014-06-01

    Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products, and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g. statistical classification and some machine learning algorithms.

  11. A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1993-01-01

    A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case, the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N x N matrix the eigenvalues can be determined in O(log-squared N) time with N-squared processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N-squared) time on a single processor, O(N) time with N processors, and O(log N) time with N-squared processors.
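
    The primitive behind both bisection and this polysection variant is counting how many eigenvalues lie below a trial value from the signs of a Sturm-like recurrence; a minimal serial sketch (the parallel polynomial-tree construction described above is not reproduced):

        def count_eigs_below(diag, offdiag, x, eps=1e-300):
            """Number of eigenvalues below x for the symmetric tridiagonal matrix
            with main diagonal `diag` and off-diagonal `offdiag` (length n-1)."""
            count, d = 0, 1.0
            for i in range(len(diag)):
                b2 = offdiag[i - 1] ** 2 if i > 0 else 0.0
                d = diag[i] - x - b2 / d       # Sturm sequence / LDL^T pivot
                if d == 0.0:
                    d = eps                    # avoid division by zero
                if d < 0.0:
                    count += 1
            return count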

  12. Pooled-matrix protein interaction screens using Barcode Fusion Genetics.

    PubMed

    Yachie, Nozomu; Petsalaki, Evangelia; Mellor, Joseph C; Weile, Jochen; Jacob, Yves; Verby, Marta; Ozturk, Sedide B; Li, Siyang; Cote, Atina G; Mosca, Roberto; Knapp, Jennifer J; Ko, Minjeong; Yu, Analyn; Gebbia, Marinella; Sahni, Nidhi; Yi, Song; Tyagi, Tanya; Sheykhkarimli, Dayag; Roth, Jonathan F; Wong, Cassandra; Musa, Louai; Snider, Jamie; Liu, Yi-Chun; Yu, Haiyuan; Braun, Pascal; Stagljar, Igor; Hao, Tong; Calderwood, Michael A; Pelletier, Laurence; Aloy, Patrick; Hill, David E; Vidal, Marc; Roth, Frederick P

    2016-04-22

    High-throughput binary protein interaction mapping is continuing to extend our understanding of cellular function and disease mechanisms. However, we remain one or two orders of magnitude away from a complete interaction map for humans and other major model organisms. Completion will require screening at substantially larger scales with many complementary assays, requiring further efficiency gains in proteome-scale interaction mapping. Here, we report Barcode Fusion Genetics-Yeast Two-Hybrid (BFG-Y2H), by which a full matrix of protein pairs can be screened in a single multiplexed strain pool. BFG-Y2H uses Cre recombination to fuse DNA barcodes from distinct plasmids, generating chimeric protein-pair barcodes that can be quantified via next-generation sequencing. We applied BFG-Y2H to four different matrices ranging in scale from ~25 K to 2.5 M protein pairs. The results show that BFG-Y2H increases the efficiency of protein matrix screening, with quality that is on par with state-of-the-art Y2H methods. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.

  13. Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita; Jordan, Harry F.

    1989-01-01

    It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.

  14. Expansion and melting of Xe nanocrystals in Si

    NASA Astrophysics Data System (ADS)

    Faraci, Giuseppe; Pennisi, Agata R.; Zontone, Federico; Li, Boquan; Petrov, Ivan

    2006-12-01

    Xe agglomerates confined in a Si matrix by ion implantation were synthesized with different sizes depending on the implantation process and/or the thermal treatment. At low temperature Xe nanocrystals are formed, whose expansion and melting were studied in the range 15-300 K. Previous high resolution x-ray diffraction spectra were corroborated with complementary techniques such as two-dimensional imaging plate patterns and transmission electron microscopy. We detected fcc Xe nanocrystals whose properties were size dependent. The experiments showed that in annealed samples epitaxial condensation of small Xe clusters on the cavities of the Si matrix gave, in fact, expanded and oriented Xe, suggesting a possible preferential growth of Xe(311) planes oriented orthogonally to the Si[02-2] direction. On the contrary, small Xe clusters in an amorphous Si matrix have an fcc lattice contracted as a consequence of surface tension. Furthermore, a size-dependent solid-to-liquid phase transition was found. Expansion of the fcc Xe lattice was accurately determined as a function of the temperature. Overpressurized nanocrystals and/or binary size distributions were disproved.

  15. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we find that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.

  16. High energy radiation precursors to the collapse of black holes binaries based on resonating plasma modes

    NASA Astrophysics Data System (ADS)

    Coppi, B.

    2018-05-01

    The presence of well organized plasma structures around binary systems of collapsed objects [1,2] (black holes and neutron stars) is proposed, in which processes can develop [3] that lead to high energy electromagnetic radiation emission immediately before the binary collapse. The formulated theoretical model supporting this argument shows that resonating plasma collective modes can be excited in the relevant magnetized plasma structure. Accordingly, as the binary approaches collapse, losing angular momentum through the emission of gravitational waves [2], the resonance conditions with vertically standing plasma density and magnetic field oscillations are met. Then, secondary plasma modes propagating along the magnetic field are envisioned to be sustained, with mode-particle interactions producing the particle populations responsible for the observable electromagnetic radiation emission. Weak evidence for a precursor to the binary collapse reported in Ref. [2] has been offered by the Agile X-γ-ray observatory [4], while the August 17 (2017) event, identified first by the LIGO-Virgo detection of gravitational waves and featuring the inferred collapse of a neutron star binary, improves the evidence for such a precursor. A new set of experimental observations is needed to reassess the presented theory.

  17. Satisfiability modulo theory and binary puzzle

    NASA Astrophysics Data System (ADS)

    Utomo, Putranto

    2017-06-01

    The binary puzzle is a sudoku-like puzzle with values in each cell taken from the set {0, 1}. We look at the mathematical theory behind it. A solved binary puzzle is an n × n binary array, where n is even, that satisfies the following conditions: (1) no three consecutive ones and no three consecutive zeros in any row or column; (2) every row and column is balanced, that is, the number of ones and zeros must be equal in each row and in each column; (3) every two rows and every two columns must be distinct. The binary puzzle has been proven to be an NP-complete problem [5]. Research concerning the satisfiability of formulas with respect to some background theory is called satisfiability modulo theories (SMT). An SMT solver is an extension of a satisfiability (SAT) solver. The notion of SMT can be used for solving various problems in mathematics and industry, such as formula verification and operations research [1, 7]. In this paper we apply SMT to solve binary puzzles. In addition, we experiment with solving puzzles of different sizes and with different numbers of blanks. We also compare with two other approaches, namely a SAT solver and exhaustive search.
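
    A minimal Python checker for the three conditions of a solved binary puzzle (the SMT encoding itself is not shown here):

        def is_solved_binary_puzzle(grid):
            """grid: n x n list of 0/1 rows, n even. Check the three rules."""
            n = len(grid)
            rows = [tuple(r) for r in grid]
            cols = [tuple(c) for c in zip(*grid)]
            for line in rows + cols:
                # (1) no three consecutive equal values
                if any(line[i] == line[i + 1] == line[i + 2] for i in range(n - 2)):
                    return False
                # (2) balanced: as many ones as zeros
                if sum(line) != n // 2:
                    return False
            # (3) all rows distinct and all columns distinct
            return len(set(rows)) == n and len(set(cols)) == n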

  18. A novel approach for SEMG signal classification with adaptive local binary patterns.

    PubMed

    Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan

    2016-07-01

    Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), which is an image processing method, and on the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors. Similarly, in 1D-LBP, each data point in the sequence is compared with its neighbors. 1D-LBP extracts features based on local changes in the signal and therefore has a high potential to be employed for medical purposes: each action or abnormality recorded in SEMG signals has its own pattern, and via the 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed relative to the position of the data point in the sequence, and both LBP and 1D-LBP are very sensitive to noise, so their capacity for detecting hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via the down-sampling and smoothing coefficients, which greatly increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than those obtained with popular feature extraction approaches and the results reported in the literature. These accuracy results show that the proposed method can be employed to investigate SEMG signals. In summary, this work attempts to develop an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals.
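
    A minimal sketch of the fixed-neighborhood 1D-LBP that aLBP generalizes (window size and names are illustrative; the adaptive down-sampling and smoothing of aLBP are not included):

        def one_d_lbp(signal, radius=4):
            """signal: list of numbers. Each sample is compared with its `radius`
            neighbors on each side; a neighbor >= the center contributes a 1-bit,
            otherwise a 0-bit, and the bits form one pattern code per position."""
            codes = []
            for i in range(radius, len(signal) - radius):
                center = signal[i]
                neighbors = signal[i - radius:i] + signal[i + 1:i + radius + 1]
                code = 0
                for bit, value in enumerate(neighbors):
                    if value >= center:
                        code |= 1 << bit
                codes.append(code)
            return codes   # a histogram of these codes is the feature vector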

  19. Treelets Binary Feature Retrieval for Fast Keypoint Recognition.

    PubMed

    Zhu, Jianke; Wu, Chenxia; Chen, Chun; Cai, Deng

    2015-10-01

    Fast keypoint recognition is essential to many vision tasks. In contrast to the classification-based approaches, we directly formulate the keypoint recognition as an image patch retrieval problem, which enjoys the merit of finding the matched keypoint and its pose simultaneously. To effectively extract the binary features from each patch surrounding the keypoint, we make use of treelets transform that can group the highly correlated data together and reduce the noise through the local analysis. Treelets is a multiresolution analysis tool, which provides an orthogonal basis to reflect the geometry of the noise-free data. To facilitate the real-world applications, we have proposed two novel approaches. One is the convolutional treelets that capture the image patch information locally and globally while reducing the computational cost. The other is the higher-order treelets that reflect the relationship between the rows and columns within image patch. An efficient sub-signature-based locality sensitive hashing scheme is employed for fast approximate nearest neighbor search in patch retrieval. Experimental evaluations on both synthetic data and the real-world Oxford dataset have shown that our proposed treelets binary feature retrieval methods outperform the state-of-the-art feature descriptors and classification-based approaches.

  20. Electric Field Induced Interfacial Instabilities

    NASA Technical Reports Server (NTRS)

    Kusner, Robert E.; Min, Kyung Yang; Wu, Xiao-Lun; Onuki, Akira

    1996-01-01

    The study of the interface in a charge-free, nonpolar, critical and near-critical binary fluid in the presence of an externally applied electric field is presented. At sufficiently large fields, the interface between the two phases of the binary fluid should become unstable and exhibit an undulation with a predefined wavelength on the order of the capillary length. As the critical point is approached, this wavelength is reduced, potentially approaching length-scales such as the correlation length or critical nucleation radius. At this point the critical properties of the system may be affected. In zero gravity, the interface is unstable at all long wavelengths in the presence of a field applied across it. It is conjectured that this will cause the binary fluid to break up into domains small enough to be outside the instability condition. The resulting pattern formation, and the effects on the critical properties as the domains approach the correlation length are of acute interest. With direct observation, laser light scattering, and interferometry, the phenomena can be probed to gain further understanding of interfacial instabilities and the pattern formation which results, and dimensional crossover in critical systems as the critical fluctuations in a particular direction are suppressed by external forces.

  1. Binary-Phase Fourier Gratings for Nonuniform Array Generation

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Crow, Robert W.; Ashley, Paul R.

    2003-01-01

    We describe a design method for a binary-phase Fourier grating that generates an array of spots with nonuniform, user-defined intensities symmetric about the zeroth order. Like the Dammann fanout grating approach, the binary-phase Fourier grating uses only two phase levels in its grating surface profile to generate the final spot array. Unlike the Dammann fanout grating approach, this method allows for the generation of nonuniform, user-defined intensities within the final fanout pattern. Restrictions governing the specification and realization of the array's individual spot intensities are discussed. Design methods used to realize the grating employ both simulated annealing and nonlinear optimization approaches to locate optimal solutions to the grating design problem. The end-use application driving this development operates in the near- to mid-infrared spectrum - allowing for higher resolution in grating specification and fabrication with respect to wavelength than may be available in visible spectrum applications. Fabrication of a grating generating a user-defined nine spot pattern is accomplished in GaAs for the near-infrared. Characterization of the grating is provided through the measurement of individual spot intensities, array uniformity, and overall efficiency. Final measurements are compared to calculated values with a discussion of the results.

  2. Conceptualizing and Confronting Inequity: Approaches within and New Directions for the "NNEST Movement"

    ERIC Educational Resources Information Center

    Rudolph, Nathanael; Selvi, Ali Fuad; Yazan, Bedrettin

    2015-01-01

    This article examines inequity as conceptualized and approached within and through the non-native English speakers in TESOL (NNEST) "movement." The authors unpack critical approaches to the NNEST experience, conceptualized via binaries (NS/NNS; NEST/NNEST). The authors then explore postmodern and poststructural approaches to identity and…

  3. Structure and component dynamics in binary mixtures of poly(2-(dimethylamino)ethyl methacrylate) with water and tetrahydrofuran: A diffraction, calorimetric, and dielectric spectroscopy study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goracci, G., E-mail: sckgorag@ehu.es; Arbe, A.; Alegría, A.

    2016-04-21

    We have combined X-ray diffraction, neutron diffraction with polarization analysis, small angle neutron scattering, differential scanning calorimetry, and broad band dielectric spectroscopy to investigate the structure and dynamics of binary mixtures of poly(2-(dimethylamino)ethyl methacrylate) with either water or tetrahydrofuran (THF) at different concentrations. Aqueous mixtures are characterized by a highly heterogeneous structure where water clusters coexist with an underlying nano-segregation of main chains and side groups of the polymeric matrix. THF molecules are homogeneously distributed among the polymeric nano-domains for concentrations of one THF molecule/monomer or lower. A more heterogeneous situation is found for higher THF amounts, but without evidence for solvent clusters. In THF-mixtures, we observe a remarkable reduction of the glass-transition temperature which is enhanced with increasing amount of solvent but seems to reach saturation at high THF concentrations. Adding THF markedly reduces the activation energy of the polymer β-relaxation. The presence of THF molecules seemingly hinders a slow component of this process which is active in the dry state. The aqueous mixtures present a strikingly broad glass-transition feature, revealing a highly heterogeneous behavior in agreement with the structural study. Regarding the solvent dynamics, deep in the glassy state all data can be described by an Arrhenius temperature dependence with a rather similar activation energy. However, the values of the characteristic times are about three orders of magnitude smaller for THF than for water. Water dynamics display a crossover toward increasingly higher apparent activation energies in the region of the onset of the glass transition, supporting its interpretation as a consequence of the freezing of the structural relaxation of the surrounding matrix. The absence of such a crossover (at least in the wide dynamic window here accessed) in THF is attributed to the lack of cooperativity effects in the relaxation of these molecules within the polymeric matrix.

  4. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
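
    The abstract does not give the model details; as a generic illustration of why regularizing a large sample covariance helps in the "large p, small n" regime, the sketch below shrinks the sample covariance toward its diagonal. This is a simple shrinkage estimator with an assumed shrinkage weight, not the authors' hierarchical Bayesian model.

    import numpy as np

    def shrunk_covariance(X, alpha=0.2):
        """Shrink the p x p sample covariance of X (n samples x p variables)
        toward its diagonal; alpha in [0, 1] is an assumed shrinkage weight."""
        S = np.cov(X, rowvar=False)
        return (1.0 - alpha) * S + alpha * np.diag(np.diag(S))

    # toy use: many variables, few samples
    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 200))
    S_shrunk = shrunk_covariance(X, alpha=0.3)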

  5. Consistency of Post-Newtonian Waveforms with Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; vanMeter, James R.; McWilliams, Sean T.; Centrella, Joan; Kelly, Bernard J.

    2007-01-01

    General relativity predicts the gravitational radiation signatures of mergers of compact binaries, such as coalescing binary black hole systems. Derivations of waveform predictions for such systems are required for optimal scientific analysis of observational gravitational wave data, and have so far been achieved primarily with the aid of the post-Newtonian (PN) approximation. The quality of this treatment is unclear, however, for the important late inspiral portion. We derive late-inspiral waveforms via a complementary approach, direct numerical simulation of Einstein's equations, which has recently matured sufficiently for such applications. We compare waveform phasing from simulations covering the last approximately 14 cycles of gravitational radiation from an equal-mass binary system of nonspinning black holes with corresponding 3PN and 3.5PN waveforms. We find phasing agreement consistent with internal error estimates based on either approach, at the level of one radian over approximately 10 cycles. The result suggests that PN waveforms for this system are effective roughly until the system reaches its last stable orbit just prior to the final merger.

  6. Consistency of Post-Newtonian Waveforms with Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; vanMeter, James R.; McWilliams, Sean T.; Centrella, Joan; Kelly, Bernard J.

    2006-01-01

    General relativity predicts the gravitational radiation signatures of mergers of compact binaries, such as coalescing binary black hole systems. Derivations of waveform predictions for such systems are required for optimal scientific analysis of observational gravitational wave data, and have so far been achieved primarily with the aid of the post-Newtonian (PN) approximation. The quality of this treatment is unclear, however, for the important late inspiral portion. We derive late-inspiral waveforms via a complementary approach, direct numerical simulation of Einstein's equations, which has recently matured sufficiently for such applications. We compare waveform phasing from simulations covering the last approximately 14 cycles of gravitational radiation from an equal-mass binary system of nonspinning black holes with the corresponding 3PN and 3.5PN orbital phasing. We find agreement consistent with internal error estimates based on either approach at the level of one radian over approximately 10 cycles. The result suggests that PN waveforms for this system are effective roughly until the system reaches its last stable orbit just prior to the final merger.

  7. QTest: Quantitative Testing of Theories of Binary Choice

    PubMed Central

    Regenwetter, Michel; Davis-Stober, Clintin P.; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of “Random Cumulative Prospect Theory.” A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences. PMID:24999495

  8. A Novel Approach for Evaluating Carbamate Mixtures for Dose Additivity

    EPA Science Inventory

    Two mathematical approaches were used to test the hypothesis ofdose-addition for a binary and a seven-chemical mixture ofN-methyl carbamates, toxicologically similar chemicals that inhibit cholinesterase (ChE). In the more novel approach, mixture data were not included in the ana...

  9. Relationships Between Abrasive Wear, Hardness, and Surface Grinding Characteristics of Titanium-Based Metal Matrix Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blau, Peter Julian; Jolly, Brian C

    2009-01-01

    The objective of this work was to support the development of grinding models for titanium metal-matrix composites (MMCs) by investigating possible relationships between their indentation hardness, low-stress belt abrasion, high-stress belt abrasion, and the surface grinding characteristics. Three Ti-based particulate composites were tested and compared with the popular titanium alloy Ti-6Al-4V. The three composites were a Ti-6Al-4V-based MMC with 5% TiB2 particles, a Ti-6Al-4V MMC with 10% TiC particles, and a Ti-6Al-4V/Ti-7.5%W binary alloy matrix that contained 7.5% TiC particles. Two types of belt abrasion tests were used: (a) a modified ASTM G164 low-stress loop abrasion test, and (b) a higher-stress test developed to quantify the grindability of ceramics. Results were correlated with G-ratios (ratio of stock removed to abrasives consumed) obtained from an instrumented surface grinder. Brinell hardness correlated better with abrasion characteristics than microindentation or scratch hardness. Wear volumes from low-stress and high-stress abrasive belt tests were related by a second-degree polynomial. Grindability numbers correlated with hard particle content but were also matrix-dependent.

  10. Discretized torsional dynamics and the folding of an RNA chain.

    PubMed

    Fernández, A; Salthú, R; Cendra, H

    1999-08-01

    The aim of this work is to implement a discrete coarse codification of local torsional states of the RNA chain backbone in order to explore the long-time limit dynamics and ultimately obtain a coarse solution to the RNA folding problem. A discrete representation of the soft-mode dynamics is turned into an algorithm for a rough structure prediction. The algorithm itself is inherently parallel, as it evaluates concurrent folding possibilities by pattern recognition, but it may be implemented in a personal computer as a chain of perturbation-translation-renormalization cycles performed on a binary matrix of local topological constraints. This requires suitable representational tools and a periodic quenching of the dynamics for system renormalization. A binary coding of local topological constraints associated with each structural motif is introduced, with each local topological constraint corresponding to a local torsional state. This treatment enables us to adopt a computation time step far larger than hydrodynamic drag time scales. Accordingly, the solvent is no longer treated as a hydrodynamic drag medium. Instead we incorporate its capacity for forming local conformation-dependent dielectric domains. Each translation of the matrix of local topological constraints (LTM's) depends on the conformation-dependent local dielectric created by a confined solvent. Folding pathways are resolved as transitions between patterns of locally encoded structural signals which change within the 1 ns-100 ms time scale range. These coarse folding pathways are generated by a search at regular intervals for structural patterns in the LTM. Each pattern is recorded as a base-pairing pattern (BPP) matrix, a consensus-evaluation operation subject to a renormalization feedback loop. Since several mutually conflicting consensus evaluations might occur at a given time, the need arises for a probabilistic approach appropriate for an ensemble of RNA molecules. Thus, a statistical dynamics of consensus formation is determined by the time evolution of the base pairing probability matrix. These dynamics are generated for a functional RNA molecule, a representative of the so-called group I ribozymes, in order to test the model. The resulting ensemble of conformations is sharply peaked and the most probable structure features the predominance of all phylogenetically conserved intrachain helices tantamount to ribozyme function. Furthermore, the magnesium-aided cooperativity that leads to the shaping of the catalytic core is elucidated. Once the predictive folding algorithm has been implemented, the validity of the so-called "adiabatic approximation" is tested. This approximation requires that conformational microstates be lumped up into BPP's which are treated as quasiequilibrium states, while folding pathways are coarsely represented as sequences of BPP transitions. To test the validity of this adiabatic ansatz, a computation of the coarse Shannon information entropy sigma associated to the specific partition of conformation space into BPP's is performed taking into account the LTM evolution and contrasted with the adiabatic computation. The results reveal a subordination of torsional microstate dynamics to BPP transitions within time scales relevant to folding. This adiabatic entrainment in the long-time limit is thus identified as responsible for the expediency of the folding process.
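
    As a small numerical aside, the coarse Shannon information entropy referred to above as sigma is simply the entropy of the occupation probabilities of the coarse states (the BPP patterns). A minimal sketch follows; the probabilities are placeholders, not output of the folding algorithm.

    import numpy as np

    def coarse_entropy(p):
        """Shannon entropy sigma = -sum_i p_i ln p_i over the probabilities
        p_i of the coarse states (e.g., BPP patterns); zero entries are ignored."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))

    print(coarse_entropy([0.7, 0.2, 0.08, 0.02]))   # sharply peaked ensemble -> low sigma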

  11. Finding an appropriate equation to measure similarity between binary vectors: case studies on Indonesian and Japanese herbal medicines.

    PubMed

    Wijaya, Sony Hartono; Afendi, Farit Mochamad; Batubara, Irmanida; Darusman, Latifah K; Altaf-Ul-Amin, Md; Kanaya, Shigehiko

    2016-12-07

    The binary similarity and dissimilarity measures have critical roles in the processing of data consisting of binary vectors in various fields including bioinformatics and chemometrics. These metrics express the similarity and dissimilarity values between two binary vectors in terms of the positive matches, absence mismatches or negative matches. To our knowledge, there is no published work presenting a systematic way of finding an appropriate equation to measure binary similarity that performs well for certain data type or application. A proper method to select a suitable binary similarity or dissimilarity measure is needed to obtain better classification results. In this study, we proposed a novel approach to select binary similarity and dissimilarity measures. We collected 79 binary similarity and dissimilarity equations by extensive literature search and implemented those equations as an R package called bmeasures. We applied these metrics to quantify the similarity and dissimilarity between herbal medicine formulas belonging to the Indonesian Jamu and Japanese Kampo separately. We assessed the capability of binary equations to classify herbal medicine pairs into match and mismatch efficacies based on their similarity or dissimilarity coefficients using the Receiver Operating Characteristic (ROC) curve analysis. According to the area under the ROC curve results, we found Indonesian Jamu and Japanese Kampo datasets obtained different ranking of binary similarity and dissimilarity measures. Out of all the equations, the Forbes-2 similarity and the Variant of Correlation similarity measures are recommended for studying the relationship between Jamu formulas and Kampo formulas, respectively. The selection of binary similarity and dissimilarity measures for multivariate analysis is data dependent. The proposed method can be used to find the most suitable binary similarity and dissimilarity equation wisely for a particular data. Our finding suggests that all four types of matching quantities in the Operational Taxonomic Unit (OTU) table are important to calculate the similarity and dissimilarity coefficients between herbal medicine formulas. Also, the binary similarity and dissimilarity measures that include the negative match quantity d achieve better capability to separate herbal medicine pairs compared to equations that exclude d.
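
    For reference, the four OTU quantities mentioned above are the counts of positive matches (a), mismatches (b and c), and negative matches (d) between two binary vectors; every one of the 79 coefficients is a function of these. Below is a minimal sketch computing the OTU table and two common coefficients, Jaccard (which excludes d) and simple matching (which includes d); these two are illustrative standards, not the measures recommended in the study, and the formula ingredient vectors are hypothetical.

    import numpy as np

    def otu_counts(x, y):
        """Return (a, b, c, d) for two equal-length 0/1 vectors."""
        x, y = np.asarray(x, bool), np.asarray(y, bool)
        a = np.sum(x & y)        # positive matches
        b = np.sum(x & ~y)       # present only in x
        c = np.sum(~x & y)       # present only in y
        d = np.sum(~x & ~y)      # negative matches
        return a, b, c, d

    def jaccard(x, y):
        a, b, c, _ = otu_counts(x, y)
        return a / (a + b + c) if (a + b + c) else 0.0

    def simple_matching(x, y):
        a, b, c, d = otu_counts(x, y)
        return (a + d) / (a + b + c + d)

    # two hypothetical formula ingredient vectors
    f1 = [1, 0, 1, 1, 0, 0, 1]
    f2 = [1, 1, 1, 0, 0, 0, 1]
    print(jaccard(f1, f2), simple_matching(f1, f2))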

  12. Randomizing world trade. II. A weighted network analysis

    NASA Astrophysics Data System (ADS)

    Squartini, Tiziano; Fagiolo, Giorgio; Garlaschelli, Diego

    2011-10-01

    Based on the misleading expectation that weighted network properties always offer a more complete description than purely topological ones, current economic models of the International Trade Network (ITN) generally aim at explaining local weighted properties, not local binary ones. Here we complement our analysis of the binary projections of the ITN by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed and undirected, aggregated and disaggregated) cannot be traced back to local country-specific properties, which are therefore of limited informativeness. Our two papers show that traditional macroeconomic approaches systematically fail to capture the key properties of the ITN. In the binary case, they do not focus on the degree sequence and hence cannot characterize or replicate higher-order properties. In the weighted case, they generally focus on the strength sequence, but the knowledge of the latter is not enough in order to understand or reproduce indirect effects.

  13. A new technique for calculations of binary stellar evolution, with application to magnetic braking

    NASA Technical Reports Server (NTRS)

    Rappaport, S.; Joss, P. C.; Verbunt, F.

    1983-01-01

    The development of appropriate computer programs has made it possible to conduct studies of stellar evolution which are more detailed and accurate than the investigations previously feasible. However, the use of such programs can also entail some serious drawbacks which are related to the time and expense required for the work. One approach for overcoming these drawbacks involves the employment of simplified stellar evolution codes which incorporate the essential physics of the problem of interest without attempting either great generality or maximal accuracy. Rappaport et al. (1982) have developed a simplified code to study the evolution of close binary stellar systems composed of a collapsed object and a low-mass secondary. The present investigation is concerned with a more general, but still simplified, technique for calculating the evolution of close binary systems with collapsed objects and mass-losing secondaries.

  14. Different binarization processes validated against manual counts of fluorescent bacterial cells.

    PubMed

    Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W

    2016-09-01

    State of the art software methods (such as fixed value approaches or statistical approaches) to create a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological significance approach we are able to automatically count about the same number of cells as an individual researcher would do by manual/visual counting. Using the fixed value or statistical approach to obtain a binary image leads to about 20% less cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process the threshold and background subtraction values were incremented until the number of particles smaller than a typical bacterial cell is less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification and staining procedure as well as the exposure time. The biological significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.
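
    A minimal sketch of the iteration described above, assuming an 8-bit greyscale image as a NumPy array and a hypothetical minimum bacterial area (in pixels): the threshold is raised until fewer sub-bacterial particles remain than bacterium-sized objects. Background subtraction, the optics-specific calibration, and the exact stopping rule of the published procedure are omitted.

    import numpy as np
    from scipy import ndimage

    def biologically_significant_threshold(img, min_cell_area=30, step=1):
        """Increase the threshold until fewer 'debris' particles (area below
        min_cell_area) remain than cell-sized particles; return (threshold, cells)."""
        t = float(img.min())
        while t < img.max():
            labels, n = ndimage.label(img > t)
            if n:
                areas = np.bincount(labels.ravel())[1:]     # per-particle pixel areas
                small = np.sum(areas < min_cell_area)
                cells = np.sum(areas >= min_cell_area)
                if small < cells:
                    return t, int(cells)
            t += step                                       # step matches 8-bit grey levels
        return t, 0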

  15. Feedforward, high density, programmable read only neural network based memory system

    NASA Technical Reports Server (NTRS)

    Daud, Taher; Moopenn, Alex; Lamb, James; Thakoor, Anil; Khanna, Satish

    1988-01-01

    Neural network-inspired, nonvolatile, programmable associative memory using thin-film technology is demonstrated. The details of the architecture, which uses programmable resistive connection matrices in synaptic arrays and current summing and thresholding amplifiers as neurons, are described. Several synapse configurations for a high-density array of a binary connection matrix are also described. Test circuits are evaluated for operational feasibility and to demonstrate the speed of the read operation. The results are discussed to highlight the potential for a read data rate exceeding 10 megabits/sec.

  16. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

    Relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
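
    The quoted bound can be reproduced numerically. Assuming a single-qubit input ensemble with C̄(ρ) = 1 bit (an assumption, not stated in the abstract) and taking H(Q) = 1 − h2(Q) for the binary symmetric channel, the sketch below solves H(Q_c) = C̄(ρ)/2 by bisection and returns Q_c ≈ 0.11.

    import math

    def h2(q):                        # binary (Shannon) entropy in bits
        return 0.0 if q in (0.0, 1.0) else -q*math.log2(q) - (1-q)*math.log2(1-q)

    def bsc_capacity(q):              # H(Q): capacity of a binary symmetric channel
        return 1.0 - h2(q)

    def critical_qber(c_bar=1.0):
        """Solve H(Q_c) = c_bar/2 for Q_c in (0, 1/2) by bisection.
        c_bar = 1 bit corresponds to an assumed noiseless single-qubit channel."""
        lo, hi = 1e-9, 0.5
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if bsc_capacity(mid) > c_bar / 2:   # capacity decreases with Q on (0, 1/2)
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(critical_qber())            # ~0.110, i.e. the ~11% bound quoted above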

  17. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
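
    A minimal sketch of a 'closed form' multivariate TLS solution of the kind mentioned above, based on the singular-value decomposition of the augmented matrix [X Y]; this is the textbook SVD construction under the simplest error model (no weighting, no fixed columns), not the authors' algorithms.

    import numpy as np

    def mtls(X, Y):
        """Estimate Xi in Y - E_Y = (X - E_X) Xi by total least squares.
        X: n x m coefficient/observation matrix, Y: n x d observation matrix."""
        m, d = X.shape[1], Y.shape[1]
        _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
        V = Vt.T
        V12 = V[:m, m:]              # top-right block of the right singular vectors
        V22 = V[m:, m:]              # bottom-right block
        return -V12 @ np.linalg.inv(V22)

    # toy example: a linear relation recovered from data perturbed on both sides
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 3))
    Xi_true = rng.standard_normal((3, 2))
    Y = X @ Xi_true
    Xi_hat = mtls(X + 0.01*rng.standard_normal(X.shape),
                  Y + 0.01*rng.standard_normal(Y.shape))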

  18. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

    We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of RMP is obtained by applying the trace minimization technique and the singular value decomposition with matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performances of proposed STAP approaches with respect to both the ideal and practical scenarios, involving Doppler-ambiguous clutter ridges, spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain a near-ideal STAP performance; whereas the spatio-temporal sparsity based approach needs a considerably small number of secondary data.
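
    As a rough illustration of the first approach (without the full RMP machinery), the sketch below splits a Hermitian sample covariance into a low-rank part, obtained by shrinking its eigenvalues, plus a diagonal remainder. The threshold is a free parameter here; the paper's trace-minimization formulation and constraints are not reproduced.

    import numpy as np

    def lowrank_plus_diag(R, tau):
        """Split a Hermitian covariance R into (low-rank PSD L, diagonal D)
        by soft-thresholding the eigenvalues of R at tau."""
        w, U = np.linalg.eigh(R)                 # ascending eigenvalues
        w_shrunk = np.maximum(w - tau, 0.0)      # shrinkage operator on the spectrum
        L = (U * w_shrunk) @ U.conj().T          # low-rank positive semidefinite part
        D = np.diag(np.real(np.diag(R - L)))     # diagonal (noise-like) remainder
        return L, D

    # toy example: rank-3 "clutter" plus white noise
    rng = np.random.default_rng(2)
    A = rng.standard_normal((64, 3)) + 1j*rng.standard_normal((64, 3))
    R = A @ A.conj().T + 0.1*np.eye(64)
    L, D = lowrank_plus_diag(R, tau=1.0)
    print(np.linalg.matrix_rank(L))              # ~3 for this toy covariance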

  19. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
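
    For readers unfamiliar with standard addition: known amounts of analyte are spiked into aliquots of the biological matrix, response is fit against added concentration, and the endogenous level is obtained by extrapolation to zero response. A minimal sketch with made-up numbers (not data from the study):

    import numpy as np

    def standard_addition(added, response):
        """Fit response = slope*added + intercept and return the estimated
        endogenous concentration, intercept/slope (x-axis extrapolation)."""
        slope, intercept = np.polyfit(np.asarray(added, float),
                                      np.asarray(response, float), 1)
        return intercept / slope

    # hypothetical spiked-plasma series (arbitrary concentration units)
    added    = [0.0, 5.0, 10.0, 20.0]
    response = [2.1, 4.0, 6.1, 10.0]            # roughly linear, nonzero at zero spike
    print(standard_addition(added, response))   # endogenous level, ~5 here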

  20. Adiabatic pipelining: a key to ternary computing with quantum dots.

    PubMed

    Pečar, P; Ramšak, A; Zimic, N; Mraz, M; Lebar Bajec, I

    2008-12-10

    The quantum-dot cellular automaton (QCA), a processing platform based on interacting quantum dots, was introduced by Lent in the mid-1990s. What followed was an exhilarating period with the development of the line, the functionally complete set of logic functions, as well as more complex processing structures, however all in the realm of binary logic. Regardless of these achievements, it has to be acknowledged that the use of binary logic is in computing systems mainly the end result of the technological limitations, which the designers had to cope with in the early days of their design. The first advancement of QCAs to multi-valued (ternary) processing was performed by Lebar Bajec et al, with the argument that processing platforms of the future should not disregard the clear advantages of multi-valued logic. Some of the elementary ternary QCAs, necessary for the construction of more complex processing entities, however, lead to a remarkable increase in size when compared to their binary counterparts. This somewhat negates the advantages gained by entering the ternary computing domain. As it turned out, even the binary QCA had its initial hiccups, which have been solved by the introduction of adiabatic switching and the application of adiabatic pipeline approaches. We present here a study that introduces adiabatic switching into the ternary QCA and employs the adiabatic pipeline approach to successfully solve the issues of elementary ternary QCAs. What is more, the ternary QCAs presented here are sizewise comparable to binary QCAs. This in our view might serve towards their faster adoption.

  1. Retargeted Least Squares Regression Algorithm.

    PubMed

    Zhang, Xu-Yao; Wang, Lingfeng; Xiang, Shiming; Liu, Cheng-Lin

    2015-09-01

    This brief presents a framework of retargeted least squares regression (ReLSR) for multicategory classification. The core idea is to learn the regression targets directly from the data rather than using the traditional zero-one matrix as regression targets. The learned target matrix can guarantee a large margin constraint for the requirement of correct classification for each data point. Compared with traditional least squares regression (LSR) and a recently proposed discriminative LSR model, ReLSR is much more accurate in measuring the classification error of the regression model. Furthermore, ReLSR is a single and compact model, hence there is no need to train two-class (binary) machines that are independent of each other. The convex optimization problem of ReLSR is solved elegantly and efficiently with an alternating procedure including regression and retargeting as substeps. The experimental evaluation over a range of databases demonstrates the validity of our method.
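
    The alternating structure can be sketched in a few lines. The version below is a deliberately simplified toy, with ridge regression as the regression substep and a crude per-sample margin repair as the retargeting substep; it is meant only to convey the regression/retargeting loop and is not the paper's optimization of the ReLSR objective.

    import numpy as np

    def relsr_toy(X, y, n_classes, margin=1.0, lam=1e-2, iters=10):
        """Alternate between ridge regression to targets T and retargeting T so
        that each sample's true-class target exceeds all others by `margin`."""
        n, d = X.shape
        T = np.eye(n_classes)[y].astype(float)          # start from zero-one targets
        Xb = np.hstack([X, np.ones((n, 1))])            # absorb the bias term
        for _ in range(iters):
            W = np.linalg.solve(Xb.T @ Xb + lam*np.eye(d+1), Xb.T @ T)   # regression
            P = Xb @ W
            T = P.copy()                                # retargeting (simplified rule)
            for i in range(n):
                others = np.delete(P[i], y[i]).max()
                T[i, y[i]] = max(P[i, y[i]], others + margin)
        return W

    # toy usage
    rng = np.random.default_rng(3)
    X = rng.standard_normal((60, 5))
    y = rng.integers(0, 3, 60)
    W = relsr_toy(X, y, n_classes=3)
    pred = np.argmax(np.hstack([X, np.ones((60, 1))]) @ W, axis=1)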

  2. Development of high performance electroless Ni-P-HNT composite coatings

    NASA Astrophysics Data System (ADS)

    Ranganatha, S.; Venkatesha, T. V.; Vathsala, K.

    2012-12-01

    Halloysite nanotubes (HNTs) of dimension 50 nm × 1-3 μm (diameter × length) are utilized to fabricate the alloy composite by an electroless/autocatalytic deposition technique. Electroless Ni-P-HNT binary alloy composite coatings were successfully prepared on low carbon steel. The nanotubes were incorporated into the nickel matrix, and the corresponding composites were examined for their electrochemical, mechanical, and tribological performance and compared with plain Ni-P. The coatings were characterized using scanning electron microscopy (SEM) and energy dispersive X-ray analysis (EDX) to analyze surface morphology and composition, respectively. A small amount of incorporated HNTs gave the Ni-P deposits an appreciable enhancement in corrosion resistance, hardness, and friction resistance. This marked improvement in properties reflects the effect of adding HNTs to the Ni-P matrix, leading to the development of high-performance Ni-P-HNT composite coatings.

  3. Superradiant effects on pulse propagation in resonant media. [atomic excitations/coherent radiation - operators (mathematics)/matrices (mathematics)

    NASA Technical Reports Server (NTRS)

    Lee, C.

    1975-01-01

    Adopting the so-called genealogical construction, the eigenstates of collective operators corresponding to a specified mode for an N-atom system can be expressed in terms of those for an (N-1)-atom system. The matrix element of a collective operator of an arbitrary mode is presented, which can be written as the product of an m-dependent factor and an m-independent reduced matrix element (RME). A set of recursion formulas for the RME was obtained. A graphical representation of the RME on the branching diagram for binary irreducible representations of permutation groups was then introduced. This gave a simple and systematic way of calculating the RME. The results show explicitly the geometry dependence of superradiance and the relative importance of r-conserving and r-nonconserving processes, and clear up the chief difficulty encountered in the problem of N two-level atoms, spread over large regions, interacting with a multimode radiation field.

  4. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image based on local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. The ignoring of the spatial relationships in the LBP will cause a poor performance in the process of capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius for each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations are performed on four medical datasets which show that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
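
    For orientation, the sketch below computes the standard 8-neighbor LBP code at a fixed radius of one pixel; the paper's contribution (a per-pixel adaptive radius and the spatial adjacent histogram) is not implemented here and would replace the fixed offsets.

    import numpy as np

    def lbp_fixed_radius(img):
        """Standard 8-bit local binary pattern at radius 1 for a 2-D greyscale
        array; border pixels are skipped for simplicity."""
        img = np.asarray(img, float)
        h, w = img.shape
        codes = np.zeros((h, w), dtype=np.uint8)
        offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
        for y in range(1, h-1):
            for x in range(1, w-1):
                center = img[y, x]
                code = 0
                for bit, (dy, dx) in enumerate(offsets):
                    if img[y+dy, x+dx] >= center:        # binary comparison per neighbor
                        code |= 1 << bit
                codes[y, x] = code
        return codes

    # 256-bin histogram of codes, the usual LBP image descriptor
    hist = np.bincount(lbp_fixed_radius(np.random.rand(32, 32)).ravel(), minlength=256)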

  5. Harnessing Active Fins to Segregate Nanoparticles from Binary Mixtures

    NASA Astrophysics Data System (ADS)

    Liu, Ya; Kuksenok, Olga; Bhattacharya, Amitabh; Ma, Yongting; He, Ximin; Aizenberg, Joanna; Balazs, Anna

    2014-03-01

    One of the challenges in creating high-performance polymeric nanocomposites for optoelectronic applications, such as bilayer solar cells, is establishing effective and facile routes for controlling the properties of interface and segregation of binary particles with hole conductor particles and electron conductor particles. We model nanocomposites that encompass binary particles and binary blends in a microchannel. An array of oscillating microfins is immersed in the fluid and tethered to the floor of the microchannel; the fluid containing mixture of nanoparticles is driven along the channel by an imposed pressure gradient. During the oscillations, the fins with the specific chemical wetting reach the upper fluid when they are upright and are entirely within the lower stream when they are tilted. We introduce specific interaction between the fins and particulates in the solution. Fins can selectively ``catch'' target nanoparticles within the upper fluid stream and then release them into the lower stream. We focus on different modes of fins motion to optimize selective segregation of particles within binary mixture. Our approach provides an effective means of tailoring the properties and ultimate performance of the composites.

  6. Bottom-Up and Top-Down Solid-State NMR Approaches for Bacterial Biofilm Matrix Composition

    PubMed Central

    Cegelski, Lynette

    2015-01-01

    The genomics and proteomics revolutions have been enormously successful in providing crucial “parts lists” for biological systems. Yet, formidable challenges exist in generating complete descriptions of how the parts function and assemble into macromolecular complexes and whole-cell assemblies. Bacterial biofilms are complex multicellular bacterial communities protected by a slime-like extracellular matrix that confers protection to environmental stress and enhances resistance to antibiotics and host defenses. As a non-crystalline, insoluble, heterogeneous assembly, the biofilm extracellular matrix poses a challenge to compositional analysis by conventional methods. In this Perspective, bottom-up and top-down solid-state NMR approaches are described for defining chemical composition in complex macrosystems. The “sum-of-the-parts” bottom-up approach was introduced to examine the amyloid-integrated biofilms formed by E. coli and permitted the first determination of the composition of the intact extracellular matrix from a bacterial biofilm. An alternative top-down approach was developed to define composition in V. cholerae biofilms and relied on an extensive panel of NMR measurements to tease out specific carbon pools from a single sample of the intact extracellular matrix. These two approaches are widely applicable to other heterogeneous assemblies. For bacterial biofilms, quantitative parameters of matrix composition are needed to understand how biofilms are assembled, to improve the development of biofilm inhibitors, and to dissect inhibitor modes of action. Solid-state NMR approaches will also be invaluable in obtaining parameters of matrix architecture. PMID:25797008

  7. Bottom-up and top-down solid-state NMR approaches for bacterial biofilm matrix composition.

    PubMed

    Cegelski, Lynette

    2015-04-01

    The genomics and proteomics revolutions have been enormously successful in providing crucial "parts lists" for biological systems. Yet, formidable challenges exist in generating complete descriptions of how the parts function and assemble into macromolecular complexes and whole-cell assemblies. Bacterial biofilms are complex multicellular bacterial communities protected by a slime-like extracellular matrix that confers protection to environmental stress and enhances resistance to antibiotics and host defenses. As a non-crystalline, insoluble, heterogeneous assembly, the biofilm extracellular matrix poses a challenge to compositional analysis by conventional methods. In this perspective, bottom-up and top-down solid-state NMR approaches are described for defining chemical composition in complex macrosystems. The "sum-of-the-parts" bottom-up approach was introduced to examine the amyloid-integrated biofilms formed by Escherichia coli and permitted the first determination of the composition of the intact extracellular matrix from a bacterial biofilm. An alternative top-down approach was developed to define composition in Vibrio cholerae biofilms and relied on an extensive panel of NMR measurements to tease out specific carbon pools from a single sample of the intact extracellular matrix. These two approaches are widely applicable to other heterogeneous assemblies. For bacterial biofilms, quantitative parameters of matrix composition are needed to understand how biofilms are assembled, to improve the development of biofilm inhibitors, and to dissect inhibitor modes of action. Solid-state NMR approaches will also be invaluable in obtaining parameters of matrix architecture. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Bottom-up and top-down solid-state NMR approaches for bacterial biofilm matrix composition

    NASA Astrophysics Data System (ADS)

    Cegelski, Lynette

    2015-04-01

    The genomics and proteomics revolutions have been enormously successful in providing crucial "parts lists" for biological systems. Yet, formidable challenges exist in generating complete descriptions of how the parts function and assemble into macromolecular complexes and whole-cell assemblies. Bacterial biofilms are complex multicellular bacterial communities protected by a slime-like extracellular matrix that confers protection to environmental stress and enhances resistance to antibiotics and host defenses. As a non-crystalline, insoluble, heterogeneous assembly, the biofilm extracellular matrix poses a challenge to compositional analysis by conventional methods. In this perspective, bottom-up and top-down solid-state NMR approaches are described for defining chemical composition in complex macrosystems. The "sum-of-the-parts" bottom-up approach was introduced to examine the amyloid-integrated biofilms formed by Escherichia coli and permitted the first determination of the composition of the intact extracellular matrix from a bacterial biofilm. An alternative top-down approach was developed to define composition in Vibrio cholerae biofilms and relied on an extensive panel of NMR measurements to tease out specific carbon pools from a single sample of the intact extracellular matrix. These two approaches are widely applicable to other heterogeneous assemblies. For bacterial biofilms, quantitative parameters of matrix composition are needed to understand how biofilms are assembled, to improve the development of biofilm inhibitors, and to dissect inhibitor modes of action. Solid-state NMR approaches will also be invaluable in obtaining parameters of matrix architecture.

  9. A new scheme for strain typing of methicillin-resistant Staphylococcus aureus on the basis of matrix-assisted laser desorption ionization time-of-flight mass spectrometry by using machine learning approach.

    PubMed

    Wang, Hsin-Yao; Lee, Tzong-Yi; Tseng, Yi-Ju; Liu, Tsui-Ping; Huang, Kai-Yao; Chang, Yung-Ta; Chen, Chun-Hsien; Lu, Jang-Jih

    2018-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA), one of the most important clinical pathogens, causes increasing morbidity and mortality worldwide. Rapid and accurate strain typing of bacteria would facilitate epidemiological investigation and infection control in near real time. Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry is a rapid and cost-effective tool for presumptive strain typing. To develop a robust method for strain typing based on MALDI-TOF spectra, machine learning (ML) is a promising approach for the construction of predictive models. In this study, a strategy of building templates of specific types was used to facilitate generating predictive models of methicillin-resistant Staphylococcus aureus (MRSA) strain typing through various ML methods. The strain types of the isolates were determined through multilocus sequence typing (MLST). The area under the receiver operating characteristic curve (AUC) and the predictive accuracy of the models were compared. ST5, ST59, and ST239 were the major MLST types, and ST45 was the minor type. For binary classification, the AUC values of various ML methods ranged from 0.76 to 0.99 for ST5, ST59, and ST239 types. In multiclass classification, the predictive accuracy of all generated models was more than 0.83. This study has demonstrated that ML methods can serve as a cost-effective and promising tool that provides preliminary strain typing information about major MRSA lineages on the basis of MALDI-TOF spectra.
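
    A minimal sketch of the binary-classification step, assuming a hypothetical feature matrix of aligned MALDI-TOF peak intensities and 0/1 labels for one MLST type (e.g., ST239 versus the rest); the template construction and the specific ML models compared in the study are not reproduced.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # hypothetical data: rows = isolates, columns = aligned peak intensities
    rng = np.random.default_rng(4)
    X = rng.random((200, 50))
    y = rng.integers(0, 2, 200)           # 1 = target MLST type, 0 = other types

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"AUC = {auc:.2f}")             # ~0.5 for this random toy data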

  10. Semistochastic approach to many electron systems

    NASA Astrophysics Data System (ADS)

    Grossjean, M. K.; Grossjean, M. F.; Schulten, K.; Tavan, P.

    1992-08-01

    A Pariser-Parr-Pople (PPP) Hamiltonian of an 8π electron system of the molecule octatetraene, represented in a configuration-interaction basis (CI basis), is analyzed with respect to the statistical properties of its matrix elements. Based on this analysis we develop an effective Hamiltonian, which represents virtual excitations by a Gaussian orthogonal ensemble (GOE). We also examine numerical approaches which replace the original Hamiltonian by a semistochastically generated CI matrix. In that CI matrix, the matrix elements of high energy excitations are chosen randomly according to distributions reflecting the statistics of the original CI matrix.
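
    For concreteness, a Gaussian orthogonal ensemble matrix of the kind used to represent the virtual excitations can be drawn by symmetrizing an i.i.d. Gaussian matrix. The sketch below is the standard construction, with an arbitrary normalization rather than one matched to the PPP/CI matrix statistics.

    import numpy as np

    def goe(n, scale=1.0, seed=None):
        """Draw an n x n GOE matrix: real symmetric, independent Gaussian entries,
        with diagonal variance twice the off-diagonal variance."""
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((n, n))
        return scale * (A + A.T) / np.sqrt(2.0)

    H = goe(200, seed=0)
    eigs = np.linalg.eigvalsh(H)          # semicircle-shaped spectrum for large n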

  11. Investigation of drug-excipient compatibility using rheological and thermal tools

    NASA Astrophysics Data System (ADS)

    Trivedi, Maitri R.

    HYPOTHESIS: We plan to investigate a different approach to evaluating drug-excipient physical compatibility using rheological and thermal tools, as opposed to the chemical techniques commonly used in the pharmaceutical industry. This approach offers practical solutions to problems routinely associated with APIs and with the commonly used hydrate forms of excipients. ABSTRACT: Drug-excipient compatibility studies are an important aspect of pre-formulation and formulation development in pharmaceutical research and development. Various approaches have been used in the pharmaceutical industry, including thermal analysis and quantitative assessment of drug-excipient mixtures after keeping the samples under a stress environment appropriate to the type of formulation. To provide a better understanding of the compatibility of excipients with APIs of different properties, various rheological and thermal studies were conducted on binary mixtures of excipients which exist as different hydrates. Dibasic calcium phosphate (DCP, anhydrous and dihydrate forms) and lactose (Lac, anhydrous and monohydrate) were selected with cohesive APIs (acetaminophen and aspirin). Binary mixtures of DCP and Lac were prepared by addition of 0% w/w to 50% w/w of the API into each powder blend. Rheological and thermal aspects were assessed using different approaches, such as a powder rheometer, a rotational shear cell, and traditional rheometry approaches like angle of repose (AOR), Hausner ratio (HR), and Carr's index (CI). Thermal analysis was conducted using modulated differential scanning calorimetry (MDSC) and thermal effusivity. The data suggested that the powder rheometer gave a more distinctive picture of the flowability behavior of binary mixtures with increasing proportion of API than the traditional approaches. Thermal approaches revealed a potential interaction of the water of crystallization of DCP-D with the API (APAP), while such interactions were absent in DCP-A; in the case of Lac-M and Lac-A, interactions with water of crystallization were not present. Binary mixtures prepared with DCP-D were more flowable, while blends with DCP-A were better in physical stability, compressibility, and permeability. Similarly, binary mixtures prepared with Lac-M were more flowable and more stable in physical compatibility than those with Lac-A; Lac-A blends were better in compressibility and permeability. The second part of this research addressed powder behavior from a wet granulation point of view. Wet granulation forms agglomerates from powders to produce granules with better flowability, content uniformity, and compressibility of the granular mass. End-point determination studies involving changes in powder energies, compressibility, and permeability, along with thermal analyses, were conducted. The effects of water of crystallization on end-point determination were studied, and on this basis the overall effects on drug-excipient compatibility of using different hydrate forms of excipients were evaluated.

  12. Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel, E-mail: marcel.novaes@gmail.com

    2015-10-15

    We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.

  13. Merging Black Hole Binaries in Galactic Nuclei: Implications for Advanced-LIGO Detections

    NASA Astrophysics Data System (ADS)

    Antonini, Fabio; Rasio, Frederic A.

    2016-11-01

    Motivated by the recent detection of gravitational waves from the black hole binary merger GW150914, we study the dynamical evolution of (stellar-mass) black holes in galactic nuclei, where massive star clusters reside. With masses of ~10^7 M⊙ and sizes of only a few parsecs, nuclear star clusters (NSCs) are the densest stellar systems observed in the local universe and represent a robust environment where black hole binaries can dynamically form, harden, and merge. We show that due to their large escape speeds, NSCs can retain a large fraction of their merger remnants. Successive mergers can then lead to significant growth and produce black hole mergers of several tens of solar masses similar to GW150914 and up to a few hundreds of solar masses, without the need to invoke extremely low metallicity environments. We use a semi-analytical approach to describe the dynamics of black holes in massive star clusters. Our models give a black hole binary merger rate of ≈1.5 Gpc^-3 yr^-1 from NSCs, implying up to a few tens of possible detections per year with Advanced LIGO. Moreover, we find a local merger rate of ~1 Gpc^-3 yr^-1 for high mass black hole binaries similar to GW150914; a merger rate comparable to or higher than that of similar binaries assembled dynamically in globular clusters (GCs). Finally, we show that if all black holes receive high natal kicks, ≳50 km s^-1, then NSCs will dominate the local merger rate of binary black holes compared to either GCs or isolated binary evolution.

  14. On the Lack of Circumbinary Planets Orbiting Isolated Binary Stars

    NASA Astrophysics Data System (ADS)

    Fleming, David P.; Barnes, Rory; Graham, David E.; Luger, Rodrigo; Quinn, Thomas R.

    2018-05-01

    We outline a mechanism that explains the observed lack of circumbinary planets (CBPs) via coupled stellar–tidal evolution of isolated binary stars. Tidal forces between low-mass, short-period binary stars on the pre-main sequence slow the stellar rotations transferring rotational angular momentum to the orbit as the stars approach the tidally locked state. This transfer increases the binary orbital period, expanding the region of dynamical instability around the binary, and destabilizing CBPs that tend to preferentially orbit just beyond the initial dynamical stability limit. After the stars tidally lock, we find that angular momentum loss due to magnetic braking can significantly shrink the binary orbit, and hence the region of dynamical stability, over time, impacting where surviving CBPs are observed relative to the boundary. We perform simulations over a wide range of parameter space and find that the expansion of the instability region occurs for most plausible initial conditions and that, in some cases, the stability semimajor axis doubles from its initial value. We examine the dynamical and observable consequences of a CBP falling within the dynamical instability limit by running N-body simulations of circumbinary planetary systems and find that, typically, at least one planet is ejected from the system. We apply our theory to the shortest-period Kepler binary that possesses a CBP, Kepler-47, and find that its existence is consistent with our model. Under conservative assumptions, we find that coupled stellar–tidal evolution of pre-main sequence binary stars removes at least one close-in CBP in 87% of multi-planet circumbinary systems.

  15. DANCING IN THE DARK: NEW BROWN DWARF BINARIES FROM KERNEL PHASE INTERFEROMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Benjamin; Tuthill, Peter; Martinache, Frantz, E-mail: bjsp@physics.usyd.edu.au, E-mail: p.tuthill@physics.usyd.edu.au, E-mail: frantz@naoj.org

    2013-04-20

    This paper revisits a sample of ultracool dwarfs in the solar neighborhood previously observed with the Hubble Space Telescope's NICMOS NIC1 instrument. We have applied a novel high angular resolution data analysis technique based on the extraction and fitting of kernel phases to archival data. This was found to deliver a dramatic improvement over earlier analysis methods, permitting a search for companions down to projected separations of ~1 AU on NIC1 snapshot images. We reveal five new close binary candidates and present revised astrometry on previously known binaries, all of which were recovered with the technique. The new candidate binaries have sufficiently close separation to determine dynamical masses in a short-term observing campaign. We also present four marginal detections of objects which may be very close binaries or high-contrast companions. Including only confident detections within 19 pc, we report a binary fraction of at least ε_b = 17.2 (+5.7/-3.7)%. The results reported here provide new insights into the population of nearby ultracool binaries, while also offering an incisive case study of the benefits conferred by the kernel phase approach in the recovery of companions within a few resolution elements of the point-spread function core.

  16. Teaching the extracellular matrix and introducing online databases within a multidisciplinary course with i-cell-MATRIX: A student-centered approach.

    PubMed

    Sousa, João Carlos; Costa, Manuel João; Palha, Joana Almeida

    2010-03-01

    The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. The understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure. Internet-available resources can bridge the divide between the molecular details and the ECM's biological properties and associated processes. This article presents an approach to teaching the ECM developed for first year medical undergraduates who, working in teams: (i) explore a specific molecular component of the matrix, (ii) identify a disease in which the component is implicated, (iii) investigate how the component's structure/function contributes to the ECM's supramolecular organization in physiological and in pathological conditions, and (iv) share their findings with colleagues. The approach, designated i-cell-MATRIX, is focused on the contribution of individual components to the overall organization and biological functions of the ECM. i-cell-MATRIX is student centered and uses 5 hours of class time. Summary of results and take home message: A "1-minute paper" has been used to gather student feedback on the impact of i-cell-MATRIX. Qualitative analysis of student feedback gathered in three consecutive years revealed that students appreciate the approach's reliance on self-directed learning, the interactivity embedded, and the demand for deeper insights on the ECM. Learning how to use internet biomedical resources is another positive outcome. Ninety percent of students recommend the activity for subsequent years. i-cell-MATRIX is adaptable by other medical schools looking for an approach that achieves higher student engagement with the ECM. Copyright © 2010 International Union of Biochemistry and Molecular Biology, Inc.

  17. Studies in optical parallel processing. [All optical and electro-optic approaches

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

    Threshold and A/D devices for converting a gray scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as approaches to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  18. Precision Measurement of Black Hole Binary Dynamics: Analyzing the LISA Data Stream

    NASA Technical Reports Server (NTRS)

    McWilliams, Sean T.; Thorpe, James Ira; Baker, John G.; Arnaud, Keith A.; Kelly, Bernard J.

    2008-01-01

    One of the richest potential sources of insight into fundamental physics that LISA will be capable of observing is the inspiral of supermassive black hole binaries (BHBs). However, the data analysis challenge presented by the LISA data stream is quite unlike the situation for present day gravitational wave detectors. In order to make the precision measurements necessary to achieve LISA's science goals, the BHB signal must be distinguished from a data stream that not only contains instrumental noise, but potentially thousands of other signals as well, so that the "background" we wish to separate out to focus on the BHB signal is likely to be highly nonstationary and nongaussian, as well as being of scientific interest in its own right. In addition, whereas the theoretical templates that we calculate in order to ultimately estimate the parameters can afford to be somewhat inaccurate and still be effective for present day and near future detectors, this is not the case for LISA, and extremely high fidelity of the theoretical templates for high signal-to-noise signals will be required to prevent theoretical errors from dominating the parameter estimates. We will describe efforts in the community of LISA data analysts to address the challenges regarding the specific issue of BHB signals. These efforts include using a Markov Chain Monte Carlo approach with the freedom to model the BHB and the other signals present in the data stream simultaneously, rather than trying to remove other signals and risk biasing the remaining data. The Mock LISA Data Challenge is an effort by a community of LISA scientists who generate rounds of simulated LISA noise with increasingly difficult signal content, and invite the LISA data analysis community to exercise their methods, or develop new methods, in an attempt to extract the parameters for the signals embedded in the mock data. In addition to practical approaches such as this to assess the level of parameter accuracy, one can apply the Fisher matrix formalism to assess both the statistical errors from noise and the theoretical errors.
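
    As a toy illustration of the Fisher-matrix formalism mentioned at the end, the sketch below estimates statistical parameter errors for a simple, hypothetical signal model (a damped sinusoid standing in for a template) using finite-difference derivatives under white noise; real LISA analyses use the full instrument response, colored noise spectra, and many more parameters.

    import numpy as np

    def waveform(t, params):
        """Hypothetical stand-in template: amplitude, frequency, decay rate."""
        A, f, g = params
        return A * np.exp(-g * t) * np.sin(2 * np.pi * f * t)

    def fisher_matrix(t, params, sigma, eps=1e-6):
        """F_ij = sum_k dh/dtheta_i dh/dtheta_j / sigma^2 (white noise)."""
        p = np.asarray(params, float)
        derivs = []
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps * max(abs(p[i]), 1.0)
            derivs.append((waveform(t, p + dp) - waveform(t, p - dp)) / (2 * dp[i]))
        D = np.array(derivs)
        return D @ D.T / sigma**2

    t = np.linspace(0.0, 10.0, 4000)
    F = fisher_matrix(t, [1.0, 2.0, 0.1], sigma=0.05)
    errors = np.sqrt(np.diag(np.linalg.inv(F)))    # 1-sigma statistical errors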

  19. On the Lack of Circumbinary Planets Orbiting Isolated Binary Stars

    NASA Astrophysics Data System (ADS)

    Fleming, David; Barnes, Rory; Graham, David E.; Luger, Rodrigo; Quinn, Thomas R.

    2018-04-01

    To date, no binary star system with an orbital period less than 7.5 days has been observed to host a circumbinary planet (CBP), a puzzling observation given the thousands of binary stars with orbital periods < 10 days discovered by the Kepler mission (Kirk et al., 2016) and the observational biases that favor their detection (Munoz & Lai, 2015). We outline a mechanism that explains the observed lack of CBPs via coupled stellar-tidal evolution of isolated binary stars. Tidal forces between low-mass, short-period binary stars on the pre-main sequence slow the stellar rotations, transferring rotational angular momentum to the orbit as the stars approach the tidally locked state. This transfer increases the binary orbital period, expanding the region of dynamical instability around the binary, and destabilizing CBPs that tend to preferentially orbit just beyond the initial dynamical stability limit. After the stars tidally lock, we find that angular momentum loss due to magnetic braking can significantly shrink the binary orbit, and hence the region of dynamical stability, over time impacting where surviving CBPs are observed relative to the boundary. We perform simulations over a wide range of parameter space and find that the expansion of the instability region occurs for most plausible initial conditions and that in some cases, the stability semi-major axis doubles from its initial value. We examine the dynamical and observable consequences of a CBP falling within the dynamical instability limit by running N-body simulations of circumbinary planetary systems and find that typically, at least one planet is ejected from the system. We apply our theory to the shortest period Kepler binary that possesses a CBP, Kepler-47, and find that its existence is consistent with our model. Under conservative assumptions, we find that coupled stellar-tidal evolution of pre-main sequence binary stars removes at least one close-in CBP in 87% of multi-planet circumbinary systems.

  20. Improving the de-agglomeration and dissolution of a poorly water soluble drug by decreasing the agglomerate strength of the cohesive powder.

    PubMed

    Allahham, Ayman; Stewart, Peter J; Das, Shyamal C

    2013-11-30

    Influence of ternary, poorly water-soluble components on the agglomerate strength of cohesive indomethacin mixtures during dissolution was studied to explore the relationship between agglomerate strength and extent of de-agglomeration and dissolution of indomethacin (Ind). Dissolution profiles of Ind from 20% Ind-lactose binary mixtures, and ternary mixtures containing additional dibasic calcium phosphate (1% or 10%; DCP), calcium sulphate (10%) and talc (10%) were determined. Agglomerate strength distributions were estimated by Monte Carlo simulation of particle size, work of cohesion and packing fraction distributions. The agglomerate strength of Ind decreased from 1.19 MPa for the binary Ind mixture to 0.84 MPa for 1DCP:20Ind mixture and to 0.42 MPa for 1DCP:2Ind mixture. Both extent of de-agglomeration, demonstrated by the concentration of the dispersed indomethacin distribution, and extent of dispersion, demonstrated by the particle size of the dispersed indomethacin, were in descending order of 1DCP:2Ind>1DCP:20Ind>binary Ind. The addition of calcium sulphate dihydrate and talc also reduced the agglomerate strength and improved de-agglomeration and dispersion of indomethacin. While not definitively causal, the improved de-agglomeration and dispersion of a poorly water soluble drug by poorly water soluble components was related to the agglomerate strength of the cohesive matrix during dissolution. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry combined with multidimensional scaling, binary hierarchical cluster tree and selected diagnostic masses improves species identification of Neolithic keratin sequences from furs of the Tyrolean Iceman Oetzi.

    PubMed

    Hollemeyer, Klaus; Altmeyer, Wolfgang; Heinzle, Elmar; Pitra, Christian

    2012-08-30

    The identification of fur origins from the 5300-year-old Tyrolean Iceman's accoutrement is not yet complete, although definite identification is essential for the socio-cultural context of his epoch. Neither have all potential samples been identified so far, nor has a consensus been reached on the species identified using the classical methods. Archaeological hair often lacks analyzable hair scale patterns in microscopic analyses, and polymerase chain reaction (PCR)-based techniques are often inapplicable due to the lack of amplifiable ancient DNA. To overcome these drawbacks, a matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) method was used, based exclusively on hair keratins. Thirteen fur specimens from his accoutrement were analyzed after tryptic digest of native hair. Peptide mass fingerprints (pmfs) from ancient samples and from reference species mostly occurring in the Alpine surroundings during his lifetime were compared to each other using multidimensional scaling and binary hierarchical cluster tree analysis. Both statistical methods closely reflect spectral similarities among pmfs as close zoological relationships. While multidimensional scaling was useful to discriminate specimens at the zoological order level, the binary hierarchical cluster tree reached the family or subfamily level. Additionally, the presence and/or absence of order-, family- and/or species-specific diagnostic masses in their pmfs allowed the identification of mammals mostly down to the single species level. Red deer was found in his shoe vamp, goat in the leggings, cattle in his shoe sole and at his quiver's closing flap, as well as sheep and chamois in his coat. Canid species, like grey wolf, domestic dog or European red fox, were discovered in his leggings for the first time, but could not be differentiated to species level. This widens the spectrum of processed fur-bearing species to at least one member of the Canidae family. His fur cap was allocated to a carnivore species, but differentiation between brown bear and a canid species could not be made with certainty. Copyright © 2012 John Wiley & Sons, Ltd.
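
    As a generic illustration of the binary fingerprint idea described above, the sketch below (Python) encodes a few peptide mass fingerprints as presence/absence vectors and builds a hierarchical cluster tree from their Jaccard distances. The species, masses and values are invented for illustration and are not the study's data or pipeline.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import pdist

    # Hypothetical peptide mass fingerprints (m/z values) for reference species and one
    # unknown sample; real spectra contain many more peaks.
    pmfs = {
        "sheep":    {1105.6, 1235.7, 1477.8, 2163.1},
        "goat":     {1105.6, 1235.7, 1493.8, 2163.1},
        "red_deer": {1105.6, 1180.6, 1477.8, 2199.0},
        "sample":   {1105.6, 1235.7, 1493.8, 2163.1},
    }

    # Binary presence/absence matrix over the union of observed masses.
    masses = sorted(set().union(*pmfs.values()))
    names = list(pmfs)
    X = np.array([[1 if m in pmfs[n] else 0 for m in masses] for n in names])

    # Binary hierarchical cluster tree on Jaccard distances between fingerprints.
    Z = linkage(pdist(X, metric="jaccard"), method="average")
    print(dendrogram(Z, labels=names, no_plot=True)["ivl"])  # leaf order places "sample" next to "goat"
    ```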

  2. Mediation Analysis with Multiple Mediators

    PubMed Central

    VanderWeele, T.J.; Vansteelandt, S.

    2014-01-01

    Recent advances in the causal inference literature on mediation have extended traditional approaches to direct and indirect effects to settings that allow for interactions and non-linearities. In this paper, these approaches from causal inference are further extended to settings in which multiple mediators may be of interest. Two analytic approaches, one based on regression and one based on weighting are proposed to estimate the effect mediated through multiple mediators and the effects through other pathways. The approaches proposed here accommodate exposure-mediator interactions and, to a certain extent, mediator-mediator interactions as well. The methods handle binary or continuous mediators and binary, continuous or count outcomes. When the mediators affect one another, the strategy of trying to assess direct and indirect effects one mediator at a time will in general fail; the approach given in this paper can still be used. A characterization is moreover given as to when the sum of the mediated effects for multiple mediators considered separately will be equal to the mediated effect of all of the mediators considered jointly. The approach proposed in this paper is robust to unmeasured common causes of two or more mediators. PMID:25580377

  3. Mediation Analysis with Multiple Mediators.

    PubMed

    VanderWeele, T J; Vansteelandt, S

    2014-01-01

    Recent advances in the causal inference literature on mediation have extended traditional approaches to direct and indirect effects to settings that allow for interactions and non-linearities. In this paper, these approaches from causal inference are further extended to settings in which multiple mediators may be of interest. Two analytic approaches, one based on regression and one based on weighting are proposed to estimate the effect mediated through multiple mediators and the effects through other pathways. The approaches proposed here accommodate exposure-mediator interactions and, to a certain extent, mediator-mediator interactions as well. The methods handle binary or continuous mediators and binary, continuous or count outcomes. When the mediators affect one another, the strategy of trying to assess direct and indirect effects one mediator at a time will in general fail; the approach given in this paper can still be used. A characterization is moreover given as to when the sum of the mediated effects for multiple mediators considered separately will be equal to the mediated effect of all of the mediators considered jointly. The approach proposed in this paper is robust to unmeasured common causes of two or more mediators.

  4. Binary black hole mergers from globular clusters: Masses, merger rates, and the impact of stellar evolution

    NASA Astrophysics Data System (ADS)

    Rodriguez, Carl L.; Chatterjee, Sourav; Rasio, Frederic A.

    2016-04-01

    The recent discovery of GW150914, the binary black hole merger detected by Advanced LIGO, has the potential to revolutionize observational astrophysics. But to fully utilize this new window into the Universe, we must compare these new observations to detailed models of binary black hole formation throughout cosmic time. Expanding upon our previous work [C. L. Rodriguez, M. Morscher, B. Pattabiraman, S. Chatterjee, C.-J. Haster, and F. A. Rasio, Phys. Rev. Lett. 115, 051101 (2015).], we study merging binary black holes formed in globular clusters using our Monte Carlo approach to stellar dynamics. We have created a new set of 52 cluster models with different masses, metallicities, and radii to fully characterize the binary black hole merger rate. These models include all the relevant dynamical processes (such as two-body relaxation, strong encounters, and three-body binary formation) and agree well with detailed direct N -body simulations. In addition, we have enhanced our stellar evolution algorithms with updated metallicity-dependent stellar wind and supernova prescriptions, allowing us to compare our results directly to the most recent population synthesis predictions for merger rates from isolated binary evolution. We explore the relationship between a cluster's global properties and the population of binary black holes that it produces. In particular, we derive a numerically calibrated relationship between the merger times of ejected black hole binaries and a cluster's mass and radius. With our improved treatment of stellar evolution, we find that globular clusters can produce a significant population of massive black hole binaries that merge in the local Universe. We explore the masses and mass ratios of these binaries as a function of redshift, and find a merger rate of ˜5 Gpc-3yr-1 in the local Universe, with 80% of sources having total masses from 32 M⊙ to 64 M⊙. Under standard assumptions, approximately one out of every seven binary black hole mergers in the local Universe will have originated in a globular cluster, but we also explore the sensitivity of this result to different assumptions for binary stellar evolution. If black holes were born with significant natal kicks, comparable to those of neutron stars, then the merger rate of binary black holes from globular clusters would be comparable to that from the field, with approximately 1 /2 of mergers originating in clusters. Finally we point out that population synthesis results for the field may also be modified by dynamical interactions of binaries taking place in dense star clusters which, unlike globular clusters, dissolved before the present day.

  5. Matrix approach to land carbon cycle modeling: A case study with the Community Land Model.

    PubMed

    Huang, Yuanyuan; Lu, Xingjie; Shi, Zheng; Lawrence, David; Koven, Charles D; Xia, Jianyang; Du, Zhenggang; Kluzek, Erik; Luo, Yiqi

    2018-03-01

    The terrestrial carbon (C) cycle has been commonly represented by a series of C balance equations to track C influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C cycle processes well but makes it difficult to track model behaviors. It is also computationally expensive, limiting the ability to conduct comprehensive parametric sensitivity analyses. To overcome these challenges, we have developed a matrix approach, which reorganizes the C balance equations in the original ESM into one matrix equation without changing any modeled C cycle processes and mechanisms. We applied the matrix approach to the Community Land Model (CLM4.5) with vertically-resolved biogeochemistry. The matrix equation exactly reproduces litter and soil organic carbon (SOC) dynamics of the standard CLM4.5 across different spatial-temporal scales. The matrix approach enables effective diagnosis of system properties such as C residence time and attribution of global change impacts to relevant processes. We illustrated, for example, that the impacts of CO2 fertilization on litter and SOC dynamics can be easily decomposed into the relative contributions from C input, allocation of external C into different C pools, nitrogen regulation, altered soil environmental conditions, and vertical mixing along the soil profile. In addition, the matrix tool can accelerate model spin-up, permit thorough parametric sensitivity tests, enable pool-based data assimilation, and facilitate tracking and benchmarking of model behaviors. Overall, the matrix approach can make a broad range of future modeling activities more efficient and effective. © 2017 John Wiley & Sons Ltd.
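
    To make the reorganization concrete, a minimal sketch of the general matrix form used by such approaches, dX/dt = B u + A ξ K X, is given below; the three-pool structure and all parameter values are hypothetical and purely illustrative, not those of CLM4.5.

    ```python
    import numpy as np

    # Hypothetical 3-pool system (litter, fast SOC, slow SOC); values are illustrative only.
    u = 2.0                                   # C input (e.g., g C m-2 d-1)
    B = np.array([1.0, 0.0, 0.0])             # allocation of input to pools
    K = np.diag([0.05, 0.01, 0.001])          # baseline turnover rates (d-1)
    A = np.array([[-1.0, 0.0, 0.0],           # transfer coefficients between pools
                  [0.4, -1.0, 0.0],
                  [0.05, 0.1, -1.0]])
    xi = 0.8                                  # scalar for soil temperature/moisture effects

    X = np.zeros(3)                           # initial pool sizes
    dt = 1.0
    for _ in range(int(50 * 365)):            # integrate 50 years with daily steps
        dX = B * u + A @ (xi * K) @ X         # one matrix equation replaces per-pool balances
        X = X + dt * dX

    # The matrix form also gives the steady state directly: X_ss = -(A xi K)^-1 B u
    X_ss = -np.linalg.solve(A @ (xi * K), B * u)
    print("pools after 50 yr:", X.round(2), " analytic steady state:", X_ss.round(2))
    ```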

  6. Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.

    PubMed

    Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles

    2009-01-01

    Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.
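
    A minimal sketch of the basic closure named above (subgrid stress from the matrix exponential of the resolved velocity gradient times its transpose) follows; the velocity gradient values, the time scale tau_d and the coefficient C are assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Resolved velocity gradient tensor A_ij = d u_i / d x_j at one grid point (illustrative values).
    A = np.array([[ 0.10,  0.30, -0.05],
                  [-0.20,  0.05,  0.15],
                  [ 0.10, -0.10, -0.15]])
    A = A - np.trace(A) / 3.0 * np.eye(3)      # enforce incompressibility (trace-free)

    tau_d = 0.5                                # decorrelation time scale (assumed model parameter)
    C = 1.0                                    # closure coefficient (assumed)

    # Matrix-exponential closure: stress ~ exp(tau_d A) exp(tau_d A)^T, deviatoric part retained.
    E = expm(tau_d * A)
    tau_sgs = C * E @ E.T
    tau_sgs_dev = tau_sgs - np.trace(tau_sgs) / 3.0 * np.eye(3)

    # Short-time expansion recovers an eddy-viscosity-like term plus quadratic corrections:
    #   exp(tA) exp(tA)^T ≈ I + t (A + A^T) + O(t^2)
    S = 0.5 * (A + A.T)
    print(tau_sgs_dev)
    print(np.allclose(E @ E.T, np.eye(3) + 2 * tau_d * S, atol=0.1))
    ```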

  7. Dynamical model of binary asteroid systems through patched three-body problems

    NASA Astrophysics Data System (ADS)

    Ferrari, Fabio; Lavagna, Michèle; Howell, Kathleen C.

    2016-08-01

    The paper presents a strategy for trajectory design in the proximity of a binary asteroid pair. A novel patched approach has been used to design trajectories in the binary system, which is modeled by means of two different three-body systems. The model introduces some degrees of freedom with respect to a classical two-body approach and it is intended to model to higher accuracy the peculiar dynamical properties of such irregular and low gravity field bodies, while keeping the advantages of having a full analytical formulation and low computational cost required. The neighborhood of the asteroid couple is split into two regions of influence where two different three-body problems describe the dynamics of the spacecraft. These regions have been identified by introducing the concept of surface of equivalence (SOE), a three-dimensional surface that serves as boundary between the regions of influence of each dynamical model. A case of study is presented, in terms of potential scenario that may benefit of such an approach in solving its mission analysis. Cost-effective solutions to land a vehicle on the surface of a low gravity body are selected by generating Poincaré maps on the SOE, seeking intersections between stable and unstable manifolds of the two patched three-body systems.

  8. Binary Gene Expression Patterning of the Molt Cycle: The Case of Chitin Metabolism

    PubMed Central

    Abehsera, Shai; Glazer, Lilah; Tynyakov, Jenny; Plaschkes, Inbar; Chalifa-Caspi, Vered; Khalaila, Isam; Aflalo, Eliahu D.; Sagi, Amir

    2015-01-01

    In crustaceans, like all arthropods, growth is accompanied by a molting cycle. This cycle comprises major physiological events in which mineralized chitinous structures are built and degraded. These events are in turn governed by genes whose patterns of expression are presumably linked to the molting cycle. To study these genes we performed next generation sequencing and constructed a molt-related transcriptomic library from two exoskeletal-forming tissues of the crayfish Cherax quadricarinatus, namely the gastrolith and the mandible cuticle-forming epithelium. To simplify the study of such a complex process as molting, a novel approach, binary patterning of gene expression, was employed. This approach revealed that key genes involved in the synthesis and breakdown of chitin exhibit a molt-related pattern in the gastrolith-forming epithelium. On the other hand, the same genes in the mandible cuticle-forming epithelium showed a molt-independent pattern of expression. Genes related to the metabolism of glucosamine-6-phosphate, a chitin precursor synthesized from simple sugars, showed a molt-related pattern of expression in both tissues. The binary patterning approach unfolds typical patterns of gene expression during the molt cycle of a crustacean. The use of such a simplifying integrative tool for assessing gene patterning seems appropriate for the study of complex biological processes. PMID:25919476
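
    A minimal sketch of the binary patterning idea: threshold each gene's expression per molt stage into an on/off bit and group genes by the resulting bit string. The gene names, stages and expression values below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical expression matrix: rows = genes, columns = molt stages (values illustrative).
    stages = ["intermolt", "early premolt", "late premolt", "postmolt"]
    genes = ["chitin_synthase", "chitinase_A", "GFAT", "cuticle_protein_X"]
    expr = np.array([[  5.0, 120.0, 180.0,  10.0],
                     [  8.0,  15.0, 200.0, 150.0],
                     [ 60.0,  90.0, 110.0,  70.0],
                     [  4.0,   6.0,   5.0,   9.0]])

    # Binary patterning: call a gene "on" (1) in a stage if expression exceeds a threshold,
    # then use the resulting bit string as the gene's molt-cycle pattern.
    threshold = 50.0
    patterns = {}
    for name, row in zip(genes, expr):
        bits = "".join("1" if v > threshold else "0" for v in row)
        patterns.setdefault(bits, []).append(name)

    for bits, members in sorted(patterns.items()):
        print(bits, "->", members)   # e.g. '0110' = expressed only in the premolt stages
    ```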

  9. Effect of clay content and mineralogy on frictional sliding behavior of simulated gouges: binary and ternary mixtures of quartz, illite, and montmorillonite

    USGS Publications Warehouse

    Tembe, Sheryl; Lockner, David A.; Wong, Teng-Fong

    2010-01-01

    We investigated the frictional sliding behavior of simulated quartz-clay gouges under stress conditions relevant to seismogenic depths. Conventional triaxial compression tests were conducted at 40 MPa effective normal stress on saturated saw cut samples containing binary and ternary mixtures of quartz, montmorillonite, and illite. In all cases, frictional strengths of mixtures fall between the end-members of pure quartz (strongest) and clay (weakest). The overall trend was a decrease in strength with increasing clay content. In the illite/quartz mixture the trend was nearly linear, while in the montmorillonite mixtures a sigmoidal trend with three strength regimes was noted. Microstructural observations were performed on the deformed samples to characterize the geometric attributes of shear localization within the gouge layers. Two micromechanical models were used to analyze the critical clay fractions for the two-regime transitions on the basis of clay porosity and packing of the quartz grains. The transition from regime 1 (high strength) to 2 (intermediate strength) is associated with the shift from a stress-supporting framework of quartz grains to a clay matrix embedded with disperse quartz grains, manifested by the development of P-foliation and reduction in Riedel shear angle. The transition from regime 2 (intermediate strength) to 3 (low strength) is attributed to the development of shear localization in the clay matrix, occurring only when the neighboring layers of quartz grains are separated by a critical clay thickness. Our mixture data relating strength degradation to clay content agree well with strengths of natural shear zone materials obtained from scientific deep drilling projects.

  10. Local structure of amorphous Ag5In5Sb60Te30 and In3SbTe2 phase change materials revealed by X-ray photoelectron and Raman spectroscopic studies

    NASA Astrophysics Data System (ADS)

    Sahu, Smriti; Manivannan, Anbarasu; Shaik, Habibuddin; Mohan Rao, G.

    2017-07-01

    Reversible switching between a highly resistive (binary "0") amorphous phase and a low resistive (binary "1") crystalline phase of chalcogenide-based phase change materials is credited with enabling the development of next-generation high-speed, non-volatile data storage applications. The doped Sb-Te based materials have shown enhanced electrical/optical properties, compared to the Ge-Sb-Te family, for high-speed memory devices. We report here the local atomic structure of as-deposited amorphous Ag5In5Sb60Te30 (AIST) and In3SbTe2 (IST) phase change materials using X-ray photoelectron and Raman spectroscopic studies. Although AIST and IST materials show identical crystallization behavior, they differ distinctly in their crystallization temperatures. Our experimental results demonstrate that the local environment of In remains identical in the amorphous phase of both the AIST and IST material, irrespective of its atomic fraction. In bonds with Sb (~44%) and Te (~56%), thereby forming the primary matrix in IST with very few Sb-Te bonds. Sb2Te constructs the base matrix for AIST (~63%) along with a few Sb-Sb bonds. Furthermore, examination of the role of the small-scale dopants Ag and In in AIST reveals that bonds between the dopants themselves are rare, while showing selective substitution in the vicinity of Sb and Te. This results in an increased electronegativity difference, and consequently the bond strength is recognized as the factor rendering stability in amorphous AIST.

  11. The progenitors of supernovae Type Ia

    NASA Astrophysics Data System (ADS)

    Toonen, Silvia

    2014-09-01

    Despite the significance of Type Ia supernovae (SNeIa) in many fields in astrophysics, SNeIa lack a theoretical explanation. SNeIa are generally thought to be thermonuclear explosions of carbon/oxygen (CO) white dwarfs (WDs). The canonical scenarios involve white dwarfs reaching the Chandrasekhar mass, either by accretion from a non-degenerate companion (single-degenerate channel, SD) or by a merger of two CO WDs (double-degenerate channel, DD). The study of SNeIa progenitors is a very active field of research for binary population synthesis (BPS) studies. The strength of the BPS approach is to study the effect of uncertainties in binary evolution on the macroscopic properties of a binary population, in order to constrain binary evolutionary processes. I will discuss the expected SNeIa rate from the BPS approach and the uncertainties in their progenitor evolution, and compare with current observations. I will also discuss the results of the POPCORN project in which four BPS codes were compared to better understand the differences in the predicted SNeIa rate of the SD channel. The goal of this project is to investigate whether differences in the simulated populations are due to numerical effects or whether they can be explained by differences in the input physics. I will show which assumptions in BPS codes affect the results most and hence should be studied in more detail.

  12. Combining binary decision tree and geostatistical methods to estimate snow distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, Benjamin; Elder, Kelly

    2000-01-01

    We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
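
    A small sketch of the combined approach on synthetic data follows: a regression tree captures the large-scale, predictor-driven variation, and the residuals are spatially interpolated (Gaussian process regression is used here as a stand-in for kriging) before being added back. All data and parameter values are invented for illustration.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Synthetic survey: predictors are (elevation, slope, radiation), coordinates are (x, y) in m.
    n = 300
    coords = rng.uniform(0, 1000, size=(n, 2))
    predictors = np.column_stack([
        3000 + coords[:, 1] * 0.5 + rng.normal(0, 20, n),   # elevation
        rng.uniform(0, 40, n),                               # slope
        rng.uniform(100, 300, n),                            # net solar radiation
    ])
    depth = (0.002 * predictors[:, 0] - 0.03 * predictors[:, 1]
             - 0.005 * predictors[:, 2] + 0.3 * np.sin(coords[:, 0] / 150)
             + rng.normal(0, 0.2, n))                        # "measured" snow depth (m)

    # Step 1: binary decision tree models the large-scale, physically driven variation.
    tree = DecisionTreeRegressor(max_depth=4).fit(predictors, depth)
    resid = depth - tree.predict(predictors)

    # Step 2: spatially interpolate the residuals (GP regression as a stand-in for kriging).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0), alpha=0.05).fit(coords, resid)

    # Step 3: combined estimate = tree prediction + interpolated residual.
    combined = tree.predict(predictors) + gp.predict(coords)
    print("tree R2: %.2f  combined R2: %.2f"
          % (tree.score(predictors, depth),
             1 - np.sum((depth - combined) ** 2) / np.sum((depth - depth.mean()) ** 2)))
    ```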

  13. Statistics of time delay and scattering correlation functions in chaotic systems. I. Random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel

    2015-06-15

    We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = -iħ S†dS/dE, where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.

  14. Multi-Scale Distributed Representation for Deep Learning and its Application to b-Jet Tagging

    NASA Astrophysics Data System (ADS)

    Lee, Jason Sang Hun; Park, Inkyu; Park, Sangnam

    2018-06-01

    Recently machine learning algorithms based on deep layered artificial neural networks (DNNs) have been applied to a wide variety of high energy physics problems such as jet tagging or event classification. We explore a simple but effective preprocessing step which transforms each real-valued observational quantity or input feature into a binary number with a fixed number of digits. Each binary digit represents the quantity or magnitude at a different scale. We have shown that this approach improves the performance of DNNs significantly for some specific tasks without any further complication in feature engineering. We apply this multi-scale distributed binary representation to deep learning on b-jet tagging using daughter particles' momenta and vertex information.
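
    A minimal sketch of the preprocessing step described above: each real-valued feature is quantized and expanded into a fixed number of binary digits, one input per scale. The bit width, value ranges and example values are assumptions for illustration.

    ```python
    import numpy as np

    def to_binary_digits(x, n_bits=8, lo=0.0, hi=1.0):
        """Map a real-valued feature to n_bits binary digits, each digit reflecting the
        value at a different scale (most significant bit = coarsest scale)."""
        x = np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0 - 1e-12)
        q = (x * (1 << n_bits)).astype(np.int64)              # integer quantization
        bits = ((q[..., None] >> np.arange(n_bits - 1, -1, -1)) & 1).astype(np.float32)
        return bits

    # Example: expand two normalized jet features into multi-scale binary inputs for a DNN.
    features = np.array([[0.37, 0.82],
                         [0.36, 0.10]])
    expanded = to_binary_digits(features, n_bits=4).reshape(len(features), -1)
    print(expanded)   # each original feature becomes 4 binary inputs; nearby values share high bits
    ```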

  15. Analysis of the Conformally Flat Approximation for Binary Neutron Star Initial Conditions

    DOE PAGES

    Suh, In-Saeng; Mathews, Grant J.; Haywood, J. Reese; ...

    2017-01-09

    The spatially conformally flat approximation (CFA) is a viable method to deduce initial conditions for the subsequent evolution of binary neutron stars employing the full Einstein equations. In this paper, we analyze the viability of the CFA for the general relativistic hydrodynamic initial conditions of binary neutron stars. We illustrate the stability of the conformally flat condition on the hydrodynamics by numerically evolving ~100 quasicircular orbits. We illustrate the use of this approximation for orbiting neutron stars in the quasicircular orbit approximation to demonstrate the equation of state dependence of these initial conditions and how they might affect the emergent gravitational wave frequency as the stars approach the innermost stable circular orbit.

  16. Study on effect of L-arginine on solubility and dissolution of Zaltoprofen: Preparation and characterization of binary and ternary cyclodextrin inclusion complexes

    NASA Astrophysics Data System (ADS)

    Sherje, Atul P.; Patel, Forum; Murahari, Manikanta; Suvarna, Vasanti; Patel, Kavitkumar

    2018-02-01

    The present study demonstrated the binary and ternary complexes of Zaltoprofen (ZPF) with β-CD and HP-β-CD. The products were characterized using solubility, in vitro dissolution, and DSC studies. The mode of interaction of guest and host was revealed through 1H NMR and FT-IR studies. A significant increase was noticed in the stability constant (Kc) and complexation efficiency (CE) of β-CD and HP-β-CD due to addition of L-Arg in ternary complexes. The ternary complexes showed greater increase in solubility and dissolution of ZPF than binary complexes. Thus, ternary system of ZPF could be an innovative approach for its solubility and dissolution enhancement.

  17. Communication: Virial coefficients and demixing in highly asymmetric binary additive hard-sphere mixtures.

    PubMed

    López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés

    2013-04-28

    The problem of demixing in a binary fluid mixture of highly asymmetric additive hard spheres is revisited. A comparison is presented between the results derived previously using truncated virial expansions for three finite size ratios with those that one obtains with the same approach in the extreme case in which one of the components consists of point particles. Since this latter system is known not to exhibit fluid-fluid segregation, the similarity observed for the behavior of the critical constants arising in the truncated series in all instances, while not being conclusive, may cast serious doubts as to the actual existence of a demixing fluid-fluid transition in disparate-sized binary additive hard-sphere mixtures.

  18. Light curve variations of the eclipsing binary V367 Cygni

    NASA Astrophysics Data System (ADS)

    Akan, M. C.

    1987-07-01

    The long-period eclipsing binary star V367 Cygni was observed photoelectrically in two colours, B and V, in 1984, 1985, and 1986. These new light curves of the system are discussed and their light variability is compared with the earlier curves presented by Heiser (1962). Using some of the previously published photoelectric light curves together with the present ones, several times of primary minimum have been derived to calculate the light elements. Any attempt to obtain a photometric solution of the binary is complicated by the peculiar nature of the light curve caused by the presence of circumstellar matter in the system. Despite this difficulty, several approaches to solving the light curves are being pursued and are briefly discussed.

  19. Global optimization of small bimetallic Pd-Co binary nanoalloy clusters: a genetic algorithm approach at the DFT level.

    PubMed

    Aslan, Mikail; Davis, Jack B A; Johnston, Roy L

    2016-03-07

    The global optimisation of small bimetallic Pd-Co binary nanoalloys is systematically investigated using the Birmingham Cluster Genetic Algorithm (BCGA). The effects of size and composition on the structures, stability, and magnetic and electronic properties, including the binding energies, second finite difference energies and mixing energies of Pd-Co binary nanoalloys, are discussed. A detailed analysis of Pd-Co structural motifs and segregation effects is also presented. The maximal mixing energy corresponds to Pd atom compositions for which the number of mixed Pd-Co bonds is maximised. Global minimum clusters are distinguished from transition states by vibrational frequency analysis. HOMO-LUMO gap, electric dipole moment and vibrational frequency analyses are made to enable correlation with future experiments.

  20. Bioethanol production optimization: a thermodynamic analysis.

    PubMed

    Alvarez, Víctor H; Rivera, Elmer Ccopa; Costa, Aline C; Filho, Rubens Maciel; Wolf Maciel, Maria Regina; Aznar, Martín

    2008-03-01

    In this work, the phase equilibrium of binary mixtures for bioethanol production by continuous extractive process was studied. The process is composed of four interlinked units: fermentor, centrifuge, cell treatment unit, and flash vessel (ethanol-congener separation unit). A proposal for modeling the vapor-liquid equilibrium in binary mixtures found in the flash vessel has been considered. This approach uses the Predictive Soave-Redlich-Kwong equation of state, with original and modified molecular parameters. The congeners considered were acetic acid, acetaldehyde, furfural, methanol, and 1-pentanol. The results show that the introduction of new molecular parameters r and q in the UNIFAC model gives more accurate predictions for the concentration of the congener in the gas phase for binary and ternary systems.

  1. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    Summary The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252

  2. A Biosequence-based Approach to Software Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oehmen, Christopher S.; Peterson, Elena S.; Phillips, Aaron R.

    For many applications, it is desirable to have some process for recognizing when software binaries are closely related without relying on them to be identical or have identical segments. Some examples include monitoring utilization of high performance computing centers or service clouds, detecting freeware in licensed code, and enforcing application whitelists. But doing so in a dynamic environment is a nontrivial task because most approaches to software similarity require extensive and time-consuming analysis of a binary, or they fail to recognize executables that are similar but nonidentical. Presented herein is a novel biosequence-based method for quantifying similarity of executable binaries. Using this method, it is shown in an example application on large-scale multi-author codes that 1) the biosequence-based method has a statistical performance in recognizing and distinguishing between a collection of real-world high performance computing applications better than 90% of ideal; and 2) an example of using family tree analysis to tune identification for a code subfamily can achieve better than 99% of ideal performance.
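
    A toy sketch of the underlying idea, not the authors' pipeline: map the bytes of an executable onto a 20-letter amino-acid-like alphabet and score sequence similarity. A real system would encode instructions rather than raw bytes and use BLAST-style local alignment; difflib is used here only as a crude stand-in, and the file paths are placeholders.

    ```python
    import difflib

    AA = "ACDEFGHIKLMNPQRSTVWY"   # 20-letter amino-acid alphabet

    def to_biosequence(path, max_bytes=4096):
        """Map the bytes of an executable to an amino-acid-like string (a crude stand-in
        for the instruction-level encoding a real pipeline would use)."""
        with open(path, "rb") as fh:
            data = fh.read(max_bytes)
        return "".join(AA[b % len(AA)] for b in data)

    def similarity(path_a, path_b):
        """Similarity score in [0, 1]; a real system would use alignment on the encoded
        sequences instead of difflib's ratio."""
        return difflib.SequenceMatcher(None, to_biosequence(path_a),
                                       to_biosequence(path_b)).ratio()

    # Example usage (paths are placeholders):
    # print(similarity("/usr/bin/gzip", "/usr/bin/gunzip"))
    ```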

  3. Numerical Modeling of HgCdTe Solidification: Effects of Phase Diagram, Double-Diffusion Convection and Microgravity Level

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1997-01-01

    Melt convection, species diffusion, and segregation at the solidification interface are the primary factors responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore, the influence of microgravity level, double diffusion and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of a binary alloy is based on the general equations of heat and mass transfer in a two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numerical problem. A rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal and solutal convection and a double-diffusion source of mass transport.

  4. Physical Properties of the LMC Eclipsing Binary Stars

    NASA Astrophysics Data System (ADS)

    Prsa, Andrej; Devinney, E. J.; Guinan, E. F.; Engle, S. G.; DeGeorge, M.

    2009-01-01

    To date, three independent studies have devised an automatic procedure to analyse and extract the principal parameters of 2581 detached eclipsing binary stars from the OGLE photometric survey of the Large Magellanic Cloud (LMC): Devor (2005), Tamuz et al. (2006), and Prsa et al. (2008). For time efficiency, Devor used a simple model of two spherical, limb-darkened stars without tidal or reflection physics. Tamuz et al.'s approach employs a more realistic EBOP model, which is still limited in handling proximity physics. Our study used a back-propagating neural network that was trained on the light curves computed by a modern Wilson-Devinney code. The three approaches are compared and correlations in the results are sought that indicate the degree of reliability of the obtained parameters. A database of solutions consistent across all three studies is presented. We assess the suitability of each method for other morphology types (i.e. semi-detached and overcontact binaries) and we overview the practical limitations of these methods for the upcoming survey data. This research is supported by NSF/RUI Grant No. AST-05-07542, which we gratefully acknowledge.

  5. Analytical development of disturbed matrix eigenvalue problem applied to mixed convection stability analysis in Darcy media

    NASA Astrophysics Data System (ADS)

    Hamed, Haikel Ben; Bennacer, Rachid

    2008-08-01

    This work evaluates, algebraically and numerically, the influence of a disturbance on the spectral values of a diagonalizable matrix. Two approaches are possible. The first uses the theorem on perturbations of a matrix depending on a parameter, due to Lidskii and based primarily on the Jordan structure of the undisturbed matrix. The second approach factorizes the matrix system and then numerically computes the roots of the characteristic polynomial of the disturbed matrix. This problem can serve as a standard model in the equations of continuum mechanics. In this work, we chose the second approach and, to illustrate the application, consider the Rayleigh-Bénard problem in Darcy media disturbed by a filtration through-flow. The matrix form of the problem is calculated starting from a linear stability analysis by a finite element method. We show that it is possible to break the general phenomenon up into elementary ones described respectively by a disturbed matrix and a disturbance. Good agreement between the two methods was observed. To cite this article: H.B. Hamed, R. Bennacer, C. R. Mecanique 336 (2008).
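
    A minimal numerical sketch of the second approach in its generic form, tracking how the eigenvalues (roots of the perturbed characteristic polynomial) move as the disturbance amplitude grows; the matrix and disturbance below are arbitrary illustrative choices, not the Rayleigh-Bénard operator.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    A = np.diag([1.0, 2.0, 5.0])          # "undisturbed" diagonalizable matrix
    E = rng.normal(0, 1, (3, 3))          # disturbance direction
    E /= np.linalg.norm(E)

    # Numerically this amounts to finding the roots of det(A + eps*E - lam*I) = 0,
    # i.e. the eigenvalues of the disturbed matrix A + eps*E, for increasing amplitude eps.
    for eps in (0.0, 0.01, 0.1, 0.5):
        lam = np.sort(np.linalg.eigvals(A + eps * E).real)
        print(f"eps = {eps:4.2f}  eigenvalues ≈ {np.round(lam, 4)}")
    ```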

  6. Understanding AlN Obtaining Through Computational Thermodynamics Combined with Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Florea, R. M.

    2017-06-01

    The basic material concept, technology and selected results of studies on an aluminum matrix composite with dispersed aluminum nitride reinforcement are presented. The studied composites were manufactured by an "in situ" technique. Aluminum nitride (AlN) has recently attracted considerable interest as a material for hybrid integrated circuit substrates because of its high thermal conductivity, good dielectric properties, high flexural strength, a thermal expansion coefficient that matches that of Si, and its non-toxic nature. AlMg alloys are the most suitable matrix for obtaining AlN. Al2O3-AlMg, AlN-Al2O3, and AlN-AlMg binary diagrams were thermodynamically modelled. The obtained Gibbs free energies of components, solution parameters and stoichiometric phases were used to build a thermodynamic database of the AlN-Al2O3-AlMg system. The formation of AlN with liquid-phase AlMg as the matrix has been studied and compared with the thermodynamic results. The secondary phase microstructure has a significant effect on the final thermal conductivity of the obtained AlN. Thermodynamic modelling of the AlN-Al2O3-AlMg system provided an important basis for understanding the formation behavior and interpreting the experimental results.

  7. A Binary Approach to Define and Classify Final Ecosystem Goods and Services

    EPA Science Inventory

    The ecosystem services literature decries the lack of consistency and standards in the application of ecosystem services as well as the inability of current approaches to explicitly link ecosystem services to human well-being. Recently, SEEA and CICES have conceptually identifie...

  8. Cellular and dendritic growth in a binary melt - A marginal stability approach

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1986-01-01

    A simple model for the constrained growth of an array of cells or dendrites in a binary alloy in the presence of an imposed positive temperature gradient in the liquid is proposed, with the dendritic or cell tip radius calculated using the marginal stability criterion of Langer and Muller-Krumbhaar (1977). This approach, an approach adopting the ad hoc assumption of minimum undercooling at the cell or dendrite tip, and an approach based on the stability criterion of Trivedi (1980) all predict tip radii to within 30 percent of each other, and yield a simple relationship between the tip radius and the growth conditions. Good agreement is found between predictions and data obtained in a succinonitrile-acetone system, and under the present experimental conditions, the dendritic tip stability parameter value is found to be twice that obtained previously, possibly due to a transition in morphology from a cellular structure with just a few side branches, to a more fully developed dendritic structure.

  9. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  10. Robust watermarking scheme for binary images using a slice-based large-cluster algorithm with a Hamming Code

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Yuan; Liu, Chen-Chung

    2006-01-01

    The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.

  11. A learning framework for age rank estimation based on face images with scattering transform.

    PubMed

    Chang, Kuang-Yu; Chen, Chu-Song

    2015-03-01

    This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of individual binary classifier so that the misranking cost can be bounded by the total misclassification costs. An efficient descriptor, scattering transform, which scatters the Gabor coefficients and pooled with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.
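
    A simplified sketch of rank-by-aggregation on synthetic data: one binary classifier per threshold answers "is the rank greater than k?", and the predicted rank is the number of positive answers. Cost sensitivities and the scattering-transform features are omitted; the data are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for face features: the first two columns drive an ordinal rank 0..9.
    X = rng.normal(size=(2000, 16))
    age_rank = np.clip((X[:, 0] * 2 + X[:, 1] + rng.normal(0, 0.5, 2000) + 5).astype(int), 0, 9)

    # Ordinal ranking by aggregating binary classifiers: classifier k answers "rank > k?".
    classifiers = []
    for k in range(9):                      # thresholds 0..8
        clf = LogisticRegression(max_iter=1000).fit(X, (age_rank > k).astype(int))
        classifiers.append(clf)

    def predict_rank(X_new):
        votes = np.column_stack([clf.predict(X_new) for clf in classifiers])
        return votes.sum(axis=1)            # predicted rank = number of positive answers

    pred = predict_rank(X)
    print("mean absolute rank error:", np.abs(pred - age_rank).mean())
    ```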

  12. Quick probabilistic binary image matching: changing the rules of the game

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
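
    A small sketch of the quick-rejection idea in normalized coordinates (so images of different sizes can be compared): sample a few pixels and declare dissimilarity once a mismatch budget is exceeded. The sample size and budget are illustrative choices, not the model's calibrated values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def quick_dissimilar(img_a, img_b, n_samples=16, max_mismatch=2):
        """Probabilistic early rejection: compare a few randomly chosen pixels and declare
        the binary images dissimilar once mismatches exceed a small budget. Normalized
        coordinates make the test independent of image size."""
        ha, wa = img_a.shape
        hb, wb = img_b.shape
        mismatches = 0
        for _ in range(n_samples):
            u, v = rng.random(), rng.random()            # normalized coordinates in [0, 1)
            pa = img_a[int(u * ha), int(v * wa)]
            pb = img_b[int(u * hb), int(v * wb)]
            if pa != pb:
                mismatches += 1
                if mismatches > max_mismatch:
                    return True                           # dissimilar after only a few pixels
        return False                                      # likely similar; verify further if needed

    a = (rng.random((64, 64)) > 0.5).astype(np.uint8)
    b = 1 - a                                             # complement: maximally dissimilar
    print(quick_dissimilar(a, b), quick_dissimilar(a, a.copy()))
    ```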

  13. Neighborhood binary speckle pattern for deformation measurements insensitive to local illumination variation by digital image correlation.

    PubMed

    Zhao, Jian; Yang, Ping; Zhao, Yue

    2017-06-01

    Speckle pattern-based characteristics of digital image correlation (DIC) restrict its application in engineering fields and nonlaboratory environments, since serious decorrelation effect occurs due to localized sudden illumination variation. A simple and efficient speckle pattern adjusting and optimizing approach presented in this paper is aimed at providing a novel speckle pattern robust enough to resist local illumination variation. The new speckle pattern, called neighborhood binary speckle pattern, derived from original speckle pattern, is obtained by means of thresholding the pixels of a neighborhood at its central pixel value and considering the result as a binary number. The efficiency of the proposed speckle pattern is evaluated in six experimental scenarios. Experiment results indicate that the DIC measurements based on neighborhood binary speckle pattern are able to provide reliable and accurate results, even though local brightness and contrast of the deformed images have been seriously changed. It is expected that the new speckle pattern will have more potential value in engineering applications.
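
    A minimal sketch of the neighborhood binary transform described above: each interior pixel is replaced by the 8-bit number obtained by thresholding its neighbours at the central pixel value, which is unchanged by monotonic local brightness/contrast shifts. The test image and values are illustrative only.

    ```python
    import numpy as np

    def neighborhood_binary_pattern(img):
        """Replace each interior pixel by the binary number formed by thresholding its
        8 neighbours at the central pixel value; monotonic brightness/contrast changes
        leave the pattern unchanged."""
        img = img.astype(np.float64)
        center = img[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        out = np.zeros_like(center, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):   # fixed clockwise order defines bit positions
            neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            out += (neigh >= center).astype(np.int32) << bit
        return out

    rng = np.random.default_rng(0)
    speckle = rng.random((100, 100))
    brightened = 0.6 * speckle + 0.2          # simulated monotonic illumination change
    same = neighborhood_binary_pattern(speckle) == neighborhood_binary_pattern(brightened)
    print("fraction of pixels with identical pattern:", same.mean())
    ```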

  14. 2SLS versus 2SRI: Appropriate methods for rare outcomes and/or rare exposures.

    PubMed

    Basu, Anirban; Coe, Norma B; Chapman, Cole G

    2018-06-01

    This study used Monte Carlo simulations to examine the ability of the two-stage least squares (2SLS) estimator and two-stage residual inclusion (2SRI) estimators with varying forms of residuals to estimate the local average and population average treatment effect parameters in models with binary outcome, endogenous binary treatment, and single binary instrument. The rarity of the outcome and the treatment was varied across simulation scenarios. Results showed that 2SLS generated consistent estimates of the local average treatment effects (LATE) and biased estimates of the average treatment effects (ATE) across all scenarios. 2SRI approaches, in general, produced biased estimates of both LATE and ATE under all scenarios. 2SRI using generalized residuals minimized the bias in ATE estimates. Use of 2SLS and 2SRI is illustrated in an empirical application estimating the effects of long-term care insurance on a variety of binary health care utilization outcomes among the near-elderly using the Health and Retirement Study. Copyright © 2018 John Wiley & Sons, Ltd.
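
    A minimal simulation of the 2SLS estimator in the binary-outcome, binary-treatment, binary-instrument setting follows (plain two-stage least squares via OLS); the data-generating values are illustrative and do not reproduce the paper's Monte Carlo design.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Simulated data: U is an unobserved confounder, Z a binary instrument, D a binary
    # treatment, Y a binary outcome (all parameter values are illustrative).
    U = rng.normal(size=n)
    Z = rng.binomial(1, 0.5, n)
    D = (0.8 * Z + 0.8 * U + rng.normal(size=n) > 0.8).astype(float)
    Y = (0.5 * D + 0.8 * U + rng.normal(size=n) > 0.9).astype(float)

    def ols(y, X):
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Two-stage least squares: stage 1 predicts D from Z, stage 2 regresses Y on the prediction.
    stage1 = ols(D, Z)
    D_hat = stage1[0] + stage1[1] * Z
    stage2 = ols(Y, D_hat)

    naive = ols(Y, D)    # confounded comparison, for contrast
    print("naive OLS effect: %.3f   2SLS (LATE) effect: %.3f" % (naive[1], stage2[1]))
    ```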

  15. Observing Mergers of Nonspinning Black Hole Binaries with LISA

    NASA Technical Reports Server (NTRS)

    McWilliams, S.; Baker, John G.; Boggs, William D.; Centrella, Joan; Kelly, Bernard J.; Thorpe, J. Ira; vanMeter, James R.

    2008-01-01

    Recent advances in the field of numerical relativity now make it possible to calculate the final, most powerful merger phase of binary black hole coalescence. We present the application of nonspinning numerical relativity waveforms to the search for and precision measurement of black hole binary coalescences using LISA. In particular, we focus on the advances made in moving beyond the equal mass, nonspinning case into other regions of parameter space, focusing on the case of nonspinning holes with ever-increasing mass ratios. We analyze the available unequal mass merger waveforms from numerical relativity, and compare them to two models, both of which use an effective one body treatment of the inspiral, but which use fundamentally different approaches to the treatment of the merger-ringdown. We confirm the expected mass ratio scaling of the merger, and investigate the changes in waveform behavior and their observational impact with changing mass ratio. Finally, we investigate the potential contribution from the merger portion of the waveform to measurement uncertainties of the binary's parameters for the unequal mass case.

  16. High-dimensional inference with the generalized Hopfield model: principal component analysis and corrections.

    PubMed

    Cocco, S; Monasson, R; Sessak, V

    2011-05-01

    We consider the problem of inferring the interactions between a set of N binary variables from the knowledge of their frequencies and pairwise correlations. The inference framework is based on the Hopfield model, a special case of the Ising model where the interaction matrix is defined through a set of patterns in the variable space, and is of rank much smaller than N. We show that maximum likelihood inference is deeply related to principal component analysis when the amplitude of the pattern components ξ is negligible compared to √N. Using techniques from statistical mechanics, we calculate the corrections to the patterns to the first order in ξ/√N. We stress the need to generalize the Hopfield model and include both attractive and repulsive patterns in order to correctly infer networks with sparse and strong interactions. We present a simple geometrical criterion to decide how many attractive and repulsive patterns should be considered as a function of the sampling noise. We moreover discuss how many sampled configurations are required for a good inference, as a function of the system size N and of the amplitude ξ. The inference approach is illustrated on synthetic and biological data.

  17. Dynamically Alterable Arrays of Polymorphic Data Types

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    An application library package was developed that represents Deep Space Network (DSN) message packets as dynamically alterable arrays composed of arbitrary polymorphic data types. The software addresses a limitation of the present state of the practice, in which an array must be composed directly of a single monomorphic data type. This is a severe limitation when one is dealing with science data, in that the types of objects involved are typically not known in advance and are therefore dynamic in nature. The unique feature of this approach is that it enables one to define at run-time the dynamic shape of the matrix with the ability to store polymorphic data types in each of its indices. Existing languages such as C and C++ have the restriction that the shape of the array must be known in advance and each of its elements be a monomorphic data type that is strictly defined at compile-time. This program can be executed on a variety of platforms. It can be distributed in either source code or binary code form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware.

  18. Photon and vector meson exchanges in the production of light meson pairs and elementary atoms

    NASA Astrophysics Data System (ADS)

    Gevorkyan, S. R.; Kuraev, E. A.; Volkov, M. K.

    2013-01-01

    The production of pseudoscalar and scalar meson pairs ππ, ηη, η'η', σσ as well as bound states in high energy γγ collisions is considered. The exchange of a vector particle in the binary process γ + γ → ha + hb, with hadronic states ha, hb in the fragmentation regions of the initial particles, leads to nondecreasing cross sections with increasing energy, a characteristic advantage of peripheral kinematics. Unlike the photon exchange, the vector meson exchange requires reggeization, which leads to a fall with energy growth. Nevertheless, owing to the peripheral kinematics, beyond very forward production angles the vector meson exchanges dominate over all possible exchanges. The proposed approach allows one to express the matrix elements of the considered processes through impact factors, which can be calculated in perturbation models such as chiral perturbation theory (ChPT) or the Nambu-Jona-Lasinio (NJL) model. In particular cases the impact factors can be determined from relevant γγ sub-processes or the vector meson radiative decay width. The production of pionium atoms in the collisions of high energy electrons and pions with protons is considered and the relevant cross sections have been estimated.

  19. Identification of structural protein-protein interactions of herpes simplex virus type 1.

    PubMed

    Lee, Jin H; Vittone, Valerio; Diefenbach, Eve; Cunningham, Anthony L; Diefenbach, Russell J

    2008-09-01

    In this study we have defined protein-protein interactions between the structural proteins of herpes simplex virus type 1 (HSV-1) using a LexA yeast two-hybrid system. The majority of the capsid, tegument and envelope proteins of HSV-1 were screened in a matrix approach. A total of 40 binary interactions were detected including 9 out of 10 previously identified tegument-tegument interactions (Vittone, V., Diefenbach, E., Triffett, D., Douglas, M.W., Cunningham, A.L., and Diefenbach, R.J., 2005. Determination of interactions between tegument proteins of herpes simplex virus type 1. J. Virol. 79, 9566-9571). A total of 12 interactions involving the capsid protein pUL35 (VP26) and 11 interactions involving the tegument protein pUL46 (VP11/12) were identified. The most significant novel interactions detected in this study, which are likely to play a role in viral assembly, include pUL35-pUL37 (capsid-tegument), pUL46-pUL37 (tegument-tegument) and pUL49 (VP22)-pUS9 (tegument-envelope). This information will provide further insights into the pathways of HSV-1 assembly and the identified interactions are potential targets for new antiviral drugs.

  20. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-12-01

    In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  1. Tidal evolution of close binary stars. I - Revisiting the theory of the equilibrium tide

    NASA Technical Reports Server (NTRS)

    Zahn, J.-P.

    1989-01-01

    The theory of the equilibrium tide in stars that possess a convective envelope is reexamined critically, taking recent developments into account and treating thermal convection in the most consistent way within the mixing-length approach. The weak points are identified and discussed, in particular, the reduction of the turbulent viscosity when the tidal period becomes shorter than the convective turnover time. An improved version is derived for the secular equations governing the dynamical evolution of close binaries of such type.

  2. Delay differential equations via the matrix Lambert W function and bifurcation analysis: application to machine tool chatter.

    PubMed

    Yi, Sun; Nelson, Patrick W; Ulsoy, A Galip

    2007-04-01

    In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
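
    A minimal scalar illustration of the Lambert W idea, assuming the standard result for the scalar DDE x'(t) = a x(t) + b x(t - tau): the characteristic roots are s_k = a + W_k(b tau e^(-a tau))/tau, and the principal branch k = 0 gives the rightmost root. The coefficients below are arbitrary, and the paper's matrix extension is not reproduced.

    ```python
    import numpy as np
    from scipy.special import lambertw

    # Scalar DDE  x'(t) = a*x(t) + b*x(t - tau).
    # Characteristic equation  s = a + b*exp(-s*tau)  is solved branch by branch:
    #   s_k = a + W_k(b*tau*exp(-a*tau)) / tau
    a, b, tau = -1.0, 0.5, 1.0                       # arbitrary illustrative coefficients

    z = b * tau * np.exp(-a * tau)
    roots = [a + lambertw(z, k) / tau for k in range(-5, 6)]

    # The principal branch (k = 0) yields the rightmost characteristic root, so its
    # real part alone decides the stability of the scalar DDE.
    s0 = a + lambertw(z, 0) / tau
    print("principal-branch root:", s0)
    print("max real part over sampled branches:", max(r.real for r in roots))
    print("stable:", s0.real < 0)
    ```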

  3. Dry Arthroscopy With a Retraction System for Matrix-Aided Cartilage Repair of Patellar Lesions

    PubMed Central

    Sadlik, Boguslaw; Wiewiorski, Martin

    2014-01-01

    Several commercially available cartilage repair techniques use a natural or synthetic matrix to aid cartilage regeneration (e.g., autologous matrix–induced chondrogenesis or matrix-induced cartilage implantation). However, the use of matrix-aided techniques during conventional knee joint arthroscopy under continuous irrigation is challenging. Insertion and fixation of the matrix can be complicated by the presence of fluid and the confined patellofemoral joint space with limited access to the lesion. To overcome these issues, we developed a novel arthroscopic approach for matrix-aided cartilage repair of patellar lesions. This technical note describes the use of dry arthroscopy assisted by a minimally invasive retraction system. An autologous matrix–induced chondrogenesis procedure is used to illustrate this novel approach. PMID:24749035

  4. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of the frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and realization theory is subsequently used to recover a minimum-order state-space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
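
    A much-reduced scalar sketch of the first approach (linear least-squares fitting of numerator and denominator polynomials to an estimated transfer function): the matrix-fraction, Markov-parameter and realization steps of the paper are not reproduced here, and the frequency-response data below are synthetic.

    ```python
    import numpy as np

    # Scalar Levy-style least-squares fit of a rational transfer function
    # N(s)/D(s) (D monic) to frequency-response samples H(j*omega).
    rng = np.random.default_rng(1)
    omega = np.linspace(0.1, 10.0, 200)
    s = 1j * omega

    # Synthetic "measured" response of a known second-order system plus noise.
    H_true = (2.0 * s + 1.0) / (s**2 + 0.4 * s + 4.0)
    H = H_true + 0.01 * (rng.standard_normal(s.size) + 1j * rng.standard_normal(s.size))

    m, n = 1, 2                      # numerator / denominator polynomial degrees
    # Unknowns: numerator coeffs b_0..b_m and denominator coeffs d_0..d_{n-1} (d_n = 1),
    # from  N(s) - H(s) * sum_{k<n} d_k s^k = H(s) * s^n  at each frequency point.
    A = np.hstack([np.column_stack([s**k for k in range(m + 1)]),
                   np.column_stack([-H * s**k for k in range(n)])])
    rhs = H * s**n

    # Solve the complex least-squares problem by stacking real and imaginary parts.
    A_ri = np.vstack([A.real, A.imag])
    rhs_ri = np.concatenate([rhs.real, rhs.imag])
    coef, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
    b_coef, d_coef = coef[: m + 1], coef[m + 1:]
    print("numerator:", b_coef, "denominator (low to high, monic):", np.append(d_coef, 1.0))
    ```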

  5. A Problem-Centered Approach to Canonical Matrix Forms

    ERIC Educational Resources Information Center

    Sylvestre, Jeremy

    2014-01-01

    This article outlines a problem-centered approach to the topic of canonical matrix forms in a second linear algebra course. In this approach, abstract theory, including such topics as eigenvalues, generalized eigenspaces, invariant subspaces, independent subspaces, nilpotency, and cyclic spaces, is developed in response to the patterns discovered…

  6. Efficient Data Mining for Local Binary Pattern in Texture Image Analysis

    PubMed Central

    Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.

    2015-01-01

    Local binary pattern (LBP) is a simple grayscale descriptor that characterizes the local distribution of gray levels in an image. Multi-resolution LBP and/or combinations of the LBPs have been shown to be effective in texture image analysis. However, it is unclear what resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth of the feature space. This limits the accuracy and time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively smaller number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains the low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach to different experimental designs and texture images. PMID:25767332
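
    For reference, a basic single-resolution LBP (radius 1, 8 neighbours) can be computed as in the following numpy sketch; the multi-resolution combinations and the data mining step described above are not included, and the example image is random.

    ```python
    import numpy as np

    # Minimal 8-neighbour local binary pattern (radius 1) on a grayscale image.
    # Each pixel is compared with its 8 neighbours; the resulting bits form a code 0..255.
    def lbp_8(image: np.ndarray) -> np.ndarray:
        img = image.astype(float)
        center = img[1:-1, 1:-1]
        # Neighbour offsets in clockwise order starting at the top-left pixel.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(center, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = img[1 + dy : img.shape[0] - 1 + dy, 1 + dx : img.shape[1] - 1 + dx]
            codes |= (neighbour >= center).astype(np.uint8) << bit
        return codes

    # Texture features are typically the normalised histogram of the LBP codes.
    example = np.random.default_rng(0).integers(0, 256, size=(64, 64))
    hist, _ = np.histogram(lbp_8(example), bins=256, range=(0, 256), density=True)
    ```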

  7. Selective Permeating Properties of Butanol and Water through Polystyrene- b-polydimethylsiloxane- b-polystyrene Pervaporation Membranes

    NASA Astrophysics Data System (ADS)

    Shin, Chaeyoung; Baer, Zachary; Chen, X. Chelsea; Ozcam, A. Evren; Clark, Douglas; Balsara, Nitash

    2015-03-01

    Polystyrene- b-polydimethylsiloxane- b-polystyrene (SDS) membranes have been studied in butanol-water binary pervaporation experiments and pervaporation experiments integrated with viable fermentation broths. Polydimethylsiloxane has been widely known to be a suitable material for separating organic chemicals from aqueous solutions, and it thus provides a continuous matrix phase in SDS membranes for permeation of small molecules. The polystyrene block provides mechanical stability to maintain the membrane structure in the pervaporation membranes. We take advantage of these features to fabricate a thin and butanol-selective SDS membrane for in situ product removal in fermentation.

  8. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not obviously separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on the Chinese handwritten document database HIT-MW demonstrate the effectiveness of the proposed method.
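
    A minimal sketch of the clustering step, assuming scikit-learn's SpectralClustering as a stand-in for the paper's pipeline and a known number of text lines k; the toy image below is illustrative only.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Sketch: group foreground pixels of a binarised document image into text lines
    # with spectral clustering (the number of lines k is assumed known here).
    def segment_lines(binary_img: np.ndarray, k: int) -> np.ndarray:
        ys, xs = np.nonzero(binary_img)                 # coordinates of ink pixels
        pts = np.column_stack([ys, xs]).astype(float)
        model = SpectralClustering(n_clusters=k, affinity="nearest_neighbors",
                                   n_neighbors=10, assign_labels="kmeans",
                                   random_state=0)
        return model.fit_predict(pts)                   # one line label per ink pixel

    # Toy example: two horizontal "text lines" of random ink pixels.
    img = np.zeros((60, 200), dtype=np.uint8)
    rng = np.random.default_rng(0)
    img[10:18, :][rng.random((8, 200)) < 0.3] = 1
    img[40:48, :][rng.random((8, 200)) < 0.3] = 1
    labels = segment_lines(img, k=2)
    ```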

  9. Diffusion of multi-isotopic chemical species in molten silicates

    NASA Astrophysics Data System (ADS)

    Watkins, James M.; Liang, Yan; Richter, Frank; Ryerson, Frederick J.; DePaolo, Donald J.

    2014-08-01

    Diffusion experiments in a simplified Na2O-CaO-SiO2 liquid system are used to develop a general formulation for the fractionation of Ca isotopes during liquid-phase diffusion. Although chemical diffusion is a well-studied process, the mathematical description of the effects of diffusion on the separate isotopes of a chemical element is surprisingly underdeveloped and uncertain. Kinetic theory predicts a mass dependence on isotopic mobility, but it is unknown how this translates into a mass dependence on effective binary diffusion coefficients, or more generally, the chemical diffusion coefficients that are housed in a multicomponent diffusion matrix. Our experiments are designed to measure Ca mobility, effective binary diffusion coefficients, the multicomponent diffusion matrix, and the effects of chemical diffusion on Ca isotopes in a liquid of single composition. We carried out two chemical diffusion experiments and one self-diffusion experiment, all at 1250 °C and 0.7 GPa and using a bulk composition for which other information is available from the literature. The self-diffusion experiment is used to determine the mobility of Ca in the absence of diffusive fluxes of other liquid components. The chemical diffusion experiments are designed to determine the effect on Ca isotope fractionation of changing the counter-diffusing component from fast-diffusing Na2O to slow-diffusing SiO2. When Na2O is the main counter-diffusing species, CaO diffusion is fast and larger Ca isotopic effects are generated. When SiO2 is the main counter-diffusing species, CaO diffusion is slow and smaller Ca isotopic effects are observed. In both experiments, the liquid is initially isotopically homogeneous, and during the experiment Ca isotopes become fractionated by diffusion. The results are used as a test of a new general expression for the diffusion of isotopes in a multicomponent liquid system that accounts for both self diffusion and the effects of counter-diffusing species. Our results show that (1) diffusive isotopic fractionations depend on the direction of diffusion in composition space, (2) diffusive isotopic fractionations scale with effective binary diffusion coefficient, as previously noted by Watkins et al. (2011), (3) self-diffusion is not decoupled from chemical diffusion, (4) self diffusion can be faster than or slower than chemical diffusion and (5) off-diagonal terms in the chemical diffusion matrix have isotopic mass-dependence. The results imply that relatively large isotopic fractionations can be generated by multicomponent diffusion even in the absence of large concentration gradients of the diffusing element. The new formulations for isotope diffusion can be tested with further experimentation and provide an improved framework for interpreting mass-dependent isotopic variations in natural liquids.

  10. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or providing "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency for a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches of exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials rather than a single computation, owing to the randomness in the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
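
    The hybrid-encoding idea (crossover in the binary code, mutation in the decimal code) can be sketched as follows; the toy one-parameter objective, the selection scheme and all settings are illustrative and do not reproduce HEGA itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    BITS, LO, HI = 16, -5.0, 5.0                       # bits per gene and parameter bounds

    def decode(bits):                                  # binary genes -> decimal parameters
        ints = bits.dot(1 << np.arange(BITS)[::-1])
        return LO + (HI - LO) * ints / (2**BITS - 1)

    def encode(x):                                     # decimal parameters -> binary genes
        ints = np.round((x - LO) / (HI - LO) * (2**BITS - 1)).astype(int)
        return ((ints[:, None] >> np.arange(BITS)[::-1]) & 1).astype(np.uint8)

    def fitness(x):                                    # toy objective: minimise x**2
        return -x**2

    pop = rng.integers(0, 2, size=(40, BITS), dtype=np.uint8)
    for generation in range(100):
        order = np.argsort(fitness(decode(pop)))[::-1]
        parents = pop[order[:20]]                      # truncation selection

        # Multi-point crossover performed in the *binary* code.
        cut1, cut2 = sorted(rng.choice(np.arange(1, BITS), size=2, replace=False))
        children = parents.copy()
        children[0::2, cut1:cut2], children[1::2, cut1:cut2] = \
            parents[1::2, cut1:cut2], parents[0::2, cut1:cut2]

        # Mutation performed in the *decimal* code (small Gaussian perturbation).
        x_child = decode(children) + rng.normal(0.0, 0.1, size=children.shape[0])
        children = encode(np.clip(x_child, LO, HI))

        pop = np.vstack([parents, children])

    x_final = decode(pop)
    print("best solution:", x_final[np.argmax(fitness(x_final))])
    ```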

  11. The Regional-Matrix Approach to the Training of Highly Qualified Personnel for the Sustainable Development of the Mining Region

    NASA Astrophysics Data System (ADS)

    Zhernov, Evgeny; Nehoda, Evgenia

    2017-11-01

    The state, regional and industry approaches to the problem of personnel training for building an innovative knowledge economy at all levels that ensures sustainable development of the region are analyzed in the article using the cases of the Kemerovo region and the coal industry. A new regional-matrix approach to the training of highly qualified personnel is proposed, which makes it possible to link the training systems with the regional economic matrix "natural resources - cognitive resources" developed by the author. A special feature of the new approach is the consideration of objective conditions and contradictions of regional systems of personnel training, which have formed as part of economic systems of regions differentiated in the matrix. The methodology of the research is based on the statement about the interconnectivity of general and local knowledge, from which the understanding of the need for a combination of regional, industry and state approaches to personnel training is derived. A new form of representing such a combination is the proposed approach, which is based on matrix analysis. The results of the research can be implemented in the practice of modernization of professional education of workers in the coal industry of the natural resources extractive region.

  12. Reconceptualising Outdoor Adventure Education: Activity in Search of an Appropriate Theory

    ERIC Educational Resources Information Center

    Brown, Mike

    2009-01-01

    Experiential approaches to learning underpin teaching and learning strategies in outdoor adventure education (OAE). Recent critiques of experiential learning have problematised the individualistic and overly cognitive focus of this approach which creates binaries between experience-reflection and the learner-situation. This paper summarises these…

  13. PSR J2032+4127/MT91 213 on approach to periastron: X-ray & optical monitoring

    NASA Astrophysics Data System (ADS)

    Coe, M. J.; Steele, I. A.; Ho, W. C. G.; Stappers, B.; Lyne, A. G.; Halpern, J. P.; Ray, P. S.; Johnson, T. L.; Ng, C.-Y.; Kerr, M.

    2017-11-01

    Swift XRT monitoring of the 50 year binary system PSR J2032+4127/MT91 213 shows a dramatic decrease in the X-ray flux as the system is in the final stages of approach to periastron (13 November 2017).

  14. Scale-Dependent Fracture-Matrix Interactions And Their Impact on Radionuclide Transport - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell

    Matrix diffusion and adsorption within a rock matrix are widely regarded as important mechanisms for retarding the transport of radionuclides and other solutes in fractured rock (e.g., Neretnieks, 1980; Tang et al., 1981; Maloszewski and Zuber, 1985; Novakowski and Lapcevic, 1994; Jardine et al., 1999; Zhou and Xie, 2003; Reimus et al., 2003a,b). When remediation options are being evaluated for old sources of contamination, where a large fraction of contaminants reside within the rock matrix, slow diffusion out of the matrix greatly increases the difficulty and timeframe of remediation. Estimating the rates of solute exchange between fractures and the adjacent rock matrix is a critical factor in quantifying immobilization and/or remobilization of DOE-relevant contaminants within the subsurface. In principle, the most rigorous approach to modeling solute transport with fracture-matrix interaction would be based on local-scale coupled advection-diffusion/dispersion equations for the rock matrix and in discrete fractures that comprise the fracture network (Discrete Fracture Network and Matrix approach, hereinafter referred to as DFNM approach), fully resolving aperture variability in fractures and matrix property heterogeneity. However, such approaches are computationally demanding, and thus, many predictive models rely upon simplified models. These models typically idealize fracture rock masses as a single fracture or system of parallel fractures interacting with slabs of porous matrix or as a mobile-immobile or multi-rate mass transfer system. These idealizations provide tractable approaches for interpreting tracer tests and predicting contaminant mobility, but rely upon a fitted effective matrix diffusivity or mass-transfer coefficients. However, because these fitted parameters are based upon simplified conceptual models, their effectiveness at predicting long-term transport processes remains uncertain. Evidence of scale dependence of effective matrix diffusion coefficients obtained from tracer tests highlights this point and suggests that the underlying mechanisms and relationship between rock and fracture properties are not fully understood in large complex fracture networks. In this project, we developed a high-resolution DFN model of solute transport in fracture networks to explore and quantify the mechanisms that control transport in complex fracture networks and how these may give rise to observed scale-dependent matrix diffusion coefficients. Results demonstrate that small scale heterogeneity in the flow field caused by local aperture variability within individual fractures can lead to long-tailed breakthrough curves indicative of matrix diffusion, even in the absence of interactions with the fracture matrix. Furthermore, the temporal and spatial scale dependence of these processes highlights the inability of short-term tracer tests to estimate transport parameters that will control long-term fate and transport of contaminants in fractured aquifers.

  15. Unveiling hidden properties of young star clusters: differential reddening, star-formation spread, and binary fraction

    NASA Astrophysics Data System (ADS)

    Bonatto, C.; Lima, E. F.; Bica, E.

    2012-04-01

    Context. Usually, important parameters of young, low-mass star clusters are very difficult to obtain by means of photometry, especially when differential reddening and/or binaries occur in large amounts. Aims: We present a semi-analytical approach (ASAmin) that, when applied to the Hess diagram of a young star cluster, is able to retrieve the values of mass, age, star-formation spread, distance modulus, foreground and differential reddening, and binary fraction. Methods: The global optimisation method known as adaptive simulated annealing (ASA) is used to minimise the residuals between the observed and simulated Hess diagrams of a star cluster. The simulations are realistic and take the most relevant parameters of young clusters into account. Important features of the simulations are a normal (Gaussian) differential reddening distribution, a time-decreasing star-formation rate, the unresolved binaries, and the smearing effect produced by photometric uncertainties on Hess diagrams. Free parameters are cluster mass, age, distance modulus, star-formation spread, foreground and differential reddening, and binary fraction. Results: Tests with model clusters built with parameters spanning a broad range of values show that ASAmin retrieves the input values with a high precision for cluster mass, distance modulus, and foreground reddening, but they are somewhat lower for the remaining parameters. Given the statistical nature of the simulations, several runs should be performed to obtain significant convergence patterns. Specifically, we find that the retrieved (absolute minimum) parameters converge to mean values with a low dispersion as the Hess residuals decrease. When applied to actual young clusters, the retrieved parameters follow convergence patterns similar to the models. We show how the stochasticity associated with the early phases may affect the results, especially in low-mass clusters. This effect can be minimised by averaging out several twin clusters in the simulated Hess diagrams. Conclusions: Even for low-mass star clusters, ASAmin is sensitive to the values of cluster mass, age, distance modulus, star-formation spread, foreground and differential reddening, and to a lesser degree, binary fraction. Compared with simpler approaches, including binaries, a decaying star-formation rate, and a normally distributed differential reddening appears to yield more constrained parameters, especially the mass, age, and distance from the Sun. A robust determination of cluster parameters may have a positive impact on many fields. For instance, age, mass, and binary fraction are important for establishing the dynamical state of a cluster or for deriving a more precise star-formation rate in the Galaxy.

  16. Binary centrifugal microfluidics enabling novel, digital addressable functions for valving and routing.

    PubMed

    Wang, Guanghui; Tan, Jie; Tang, Minghui; Zhang, Changbin; Zhang, Dongying; Ji, Wenbin; Chen, Junhao; Ho, Ho-Pui; Zhang, Xuping

    2018-03-16

    Centrifugal microfluidics or lab-on-a-disc (LOAD) is a promising branch of lab-on-a-chip or microfluidics. Besides effective fluid transportation and inherently available density-based sample separation in centrifugal microfluidics, uniform actuation of flow on the disc makes the platform compact and scalable. However, the natural radially outward centrifugal force in a LOAD system limits its capacity to perform complex fluid manipulation steps. In order to increase the fluid manipulation freedom and integration capacity of the LOAD system, we propose a binary centrifugal microfluidics platform. With the help of the Euler force, our platform allows free switching between left and right states based on a rather simple mechanical structure. The periodic switching of state provides a "clock" signal for a sequence of droplet binary logic operations. With the binary state platform and the "clock" signal, we can accurately handle each droplet separately in each time step with a maximum main frequency of about 10 s-1 (switchings per second). Apart from droplet manipulations such as droplet generation and metering, we also demonstrate a series of droplet logic operations, such as binary valving, droplet routing and digital addressable droplet storage. Furthermore, complex bioassays such as the Bradford assay and DNA purification assay are demonstrated on the binary platform, which is not possible for a traditional LOAD system. Our binary platform largely improves the capability for logic operation on the LOAD platform, and it is a simple and promising approach for microfluidic lab-on-a-disc large-scale integration.

  17. Dancing in the Dark: New Brown Dwarf Binaries from Kernel Phase Interferometry

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Martinache, Frantz; Tuthill, Peter

    2013-04-01

    This paper revisits a sample of ultracool dwarfs in the solar neighborhood previously observed with the Hubble Space Telescope's NICMOS NIC1 instrument. We have applied a novel high angular resolution data analysis technique based on the extraction and fitting of kernel phases to archival data. This was found to deliver a dramatic improvement over earlier analysis methods, permitting a search for companions down to projected separations of ~1 AU on NIC1 snapshot images. We reveal five new close binary candidates and present revised astrometry on previously known binaries, all of which were recovered with the technique. The new candidate binaries have sufficiently close separation to determine dynamical masses in a short-term observing campaign. We also present four marginal detections of objects which may be very close binaries or high-contrast companions. Including only confident detections within 19 pc, we report a binary fraction of at least ε_b = 17.2 (+5.7/−3.7)%. The results reported here provide new insights into the population of nearby ultracool binaries, while also offering an incisive case study of the benefits conferred by the kernel phase approach in the recovery of companions within a few resolution elements of the point-spread function core. Based on observations performed with the NASA/ESA Hubble Space Telescope. The Hubble observations are associated with proposal ID 10143 and 10879 and were obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  18. The matrix approach to mental health care: Experiences in Florianopolis, Brazil.

    PubMed

    Soares, Susana; de Oliveira, Walter Ferreira

    2016-03-01

    This article reports on the experience of a matrix approach to mental health in primary health care. Professionals who work in the Family Health Support Nuclei, Núcleos de Apoio à Saúde da Família, pointed to challenges of this approach, especially regarding the difficulties of introducing pedagogic actions in the health field and problems related to work relationships. As the matrix approach and its practice are new aspects of the Brazilian Unified Health System, the academic knowledge must walk hand in hand with everyday professional practice to help improve the quality of the services offered in this context. © The Author(s) 2016.

  19. Continuum modeling of large lattice structures: Status and projections

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Mikulas, Martin M., Jr.

    1988-01-01

    The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.

  20. Mammogram classification scheme using 2D-discrete wavelet and local binary pattern for detection of breast cancer

    NASA Astrophysics Data System (ADS)

    Adi Putra, Januar

    2018-04-01

    In this paper, we propose a new mammogram classification scheme to classify breast tissues as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all the detail coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is done by selecting the relevant features that affect the classification: to reduce the dimensionality of the data and remove irrelevant features, the F-test and T-test are applied to the feature extraction results to select the relevant features. The best features are used in a neural network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. It is observed that the proposed scheme performs better with respect to accuracy, specificity and sensitivity. Based on the experiments, the proposed scheme can produce a high accuracy of 92.71%, while the lowest accuracy obtained is 77.08%.
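
    The univariate feature-selection step mentioned above can be sketched with scikit-learn's F-test; the random feature matrix below merely stands in for the LBP features of the DWT detail coefficients, and the network size is arbitrary.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 256))          # stand-in for LBP features of DWT coefficients
    y = rng.integers(0, 2, size=200)    # normal (0) vs abnormal (1) tissue labels

    # Keep the k features whose class-wise means differ most (univariate F-test).
    selector = SelectKBest(score_func=f_classif, k=32)
    X_sel = selector.fit_transform(X, y)

    # A small neural network classifier trained on the selected features.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_sel, y)
    ```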

  1. Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composites Behavior

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Chamis, Christos C.; Mital, Subodh K.

    1996-01-01

    This report describes a methodology which predicts the behavior of ceramic matrix composites and has been incorporated in the computational tool CEMCAN (CEramic Matrix Composite ANalyzer). The approach combines micromechanics with a unique fiber substructuring concept. In this new concept, the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach is modified by substructuring it into several slices and developing the micromechanics-based equations at the slice level. The methodology also takes into account nonlinear ceramic matrix composite (CMC) behavior due to temperature and the fracture initiation and progression. Important features of the approach and its effectiveness are described by using selected examples. Comparisons of predictions and limited experimental data are also provided.

  2. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M^2 by applying iterative techniques, is discussed. Numerical results are presented to discuss the validity and potential of the proposed approaches.

  3. Effects of Neutron-Star Dynamic Tides on Gravitational Waveforms within the Effective-One-Body Approach

    NASA Astrophysics Data System (ADS)

    Hinderer, Tanja; Taracchini, Andrea; Foucart, Francois; Buonanno, Alessandra; Steinhoff, Jan; Duez, Matthew; Kidder, Lawrence E.; Pfeiffer, Harald P.; Scheel, Mark A.; Szilagyi, Bela; Hotokezaka, Kenta; Kyutoku, Koutarou; Shibata, Masaru; Carpenter, Cory W.

    2016-05-01

    Extracting the unique information on ultradense nuclear matter from the gravitational waves emitted by merging neutron-star binaries requires robust theoretical models of the signal. We develop a novel effective-one-body waveform model that includes, for the first time, dynamic (instead of only adiabatic) tides of the neutron star as well as the merger signal for neutron-star-black-hole binaries. We demonstrate the importance of the dynamic tides by comparing our model against new numerical-relativity simulations of nonspinning neutron-star-black-hole binaries spanning more than 24 gravitational-wave cycles, and to other existing numerical simulations for double neutron-star systems. Furthermore, we derive an effective description that makes explicit the dependence of matter effects on two key parameters: tidal deformability and fundamental oscillation frequency.

  4. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straight-forward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
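
    A simplified sketch of the multi-index hashing idea: each b-bit code is split into m disjoint substrings, one hash table is built per substring, and by the pigeonhole principle any code within Hamming radius r < m of a query matches it exactly in at least one substring, so exact substring lookups suffice as a candidate filter before full-distance verification. The string-based representation below is for clarity only.

    ```python
    from collections import defaultdict

    # Build one hash table per substring of the binary codes.
    def build_index(codes, m):
        b = len(codes[0])
        chunk = b // m
        tables = [defaultdict(list) for _ in range(m)]
        for idx, code in enumerate(codes):
            for t in range(m):
                tables[t][code[t * chunk:(t + 1) * chunk]].append(idx)
        return tables, chunk

    # Retrieve candidates that share at least one substring with the query,
    # then verify them with the full Hamming distance.
    def query(q, codes, tables, chunk, r):
        candidates = set()
        for t, table in enumerate(tables):
            candidates.update(table.get(q[t * chunk:(t + 1) * chunk], []))
        dist = lambda a, b: sum(x != y for x, y in zip(a, b))
        return sorted(i for i in candidates if dist(codes[i], q) <= r)

    # Toy usage with 8-bit codes as strings, m = 4 substrings, radius r = 3 (< m).
    codes = ["01101100", "01101111", "11110000", "00000000"]
    tables, chunk = build_index(codes, m=4)
    print(query("01101101", codes, tables, chunk, r=3))
    ```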

  5. Effects of Neutron-Star Dynamic Tides on Gravitational Waveforms within the Effective-One-Body Approach.

    PubMed

    Hinderer, Tanja; Taracchini, Andrea; Foucart, Francois; Buonanno, Alessandra; Steinhoff, Jan; Duez, Matthew; Kidder, Lawrence E; Pfeiffer, Harald P; Scheel, Mark A; Szilagyi, Bela; Hotokezaka, Kenta; Kyutoku, Koutarou; Shibata, Masaru; Carpenter, Cory W

    2016-05-06

    Extracting the unique information on ultradense nuclear matter from the gravitational waves emitted by merging neutron-star binaries requires robust theoretical models of the signal. We develop a novel effective-one-body waveform model that includes, for the first time, dynamic (instead of only adiabatic) tides of the neutron star as well as the merger signal for neutron-star-black-hole binaries. We demonstrate the importance of the dynamic tides by comparing our model against new numerical-relativity simulations of nonspinning neutron-star-black-hole binaries spanning more than 24 gravitational-wave cycles, and to other existing numerical simulations for double neutron-star systems. Furthermore, we derive an effective description that makes explicit the dependence of matter effects on two key parameters: tidal deformability and fundamental oscillation frequency.

  6. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.
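
    For concreteness, correlated binary responses of the kind such procedures are asked to fit can be simulated from a random-intercept logistic GLMM as below; the parameter values are arbitrary, and the fitting step itself (where the packages differ) is intentionally left to the reader's package of choice.

    ```python
    import numpy as np

    # Simulate binary responses from a random-intercept logistic GLMM:
    #   logit P(y_ij = 1) = beta0 + beta1 * x_ij + u_i,   u_i ~ N(0, sigma_u^2)
    rng = np.random.default_rng(0)
    n_subjects, n_obs = 200, 5
    beta0, beta1, sigma_u = -0.5, 1.0, 1.2           # arbitrary illustrative values

    subject = np.repeat(np.arange(n_subjects), n_obs)
    x = rng.standard_normal(n_subjects * n_obs)
    u = rng.normal(0.0, sigma_u, size=n_subjects)    # subject-level random intercepts

    eta = beta0 + beta1 * x + u[subject]
    p = 1.0 / (1.0 + np.exp(-eta))
    y = rng.binomial(1, p)                           # responses correlated within subjects
    ```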

  7. Measuring Intervention Effectiveness: The Benefits of an Item Response Theory Approach

    ERIC Educational Resources Information Center

    McEldoon, Katherine; Cho, Sun-Joo; Rittle-Johnson, Bethany

    2012-01-01

    Assessing the effectiveness of educational interventions relies on quantifying differences between interventions groups over time in a between-within design. Binary outcome variables (e.g., correct responses versus incorrect responses) are often assessed. Widespread approaches use percent correct on assessments, and repeated measures analysis of…

  8. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
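
    A minimal sketch of the primal problem in its simplest form (risk minimization under a budget constraint, solved with a Lagrange multiplier for a sample correlation matrix with identical unit variances); the replica and random-matrix analysis of the paper is not reproduced here.

    ```python
    import numpy as np

    # Minimal-risk weights under a budget constraint, via a Lagrange multiplier:
    #   minimise  w' C w / 2   subject to  sum(w) = N
    #   =>  w* = N * C^{-1} 1 / (1' C^{-1} 1)
    rng = np.random.default_rng(0)
    N, T = 50, 500                                   # number of assets and return samples

    # Sample correlation matrix of returns (identical unit variances, as in the setting above).
    R = rng.standard_normal((T, N))
    C = np.corrcoef(R, rowvar=False)

    ones = np.ones(N)
    Cinv_1 = np.linalg.solve(C, ones)
    w = N * Cinv_1 / (ones @ Cinv_1)

    risk = w @ C @ w / 2
    print("budget:", w.sum(), "minimal risk:", risk)
    ```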

  9. Retrospective Binary-Trait Association Test Elucidates Genetic Architecture of Crohn Disease

    PubMed Central

    Jiang, Duo; Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    In genetic association testing, failure to properly control for population structure can lead to severely inflated type 1 error and power loss. Meanwhile, adjustment for relevant covariates is often desirable and sometimes necessary to protect against spurious association and to improve power. Many recent methods to account for population structure and covariates are based on linear mixed models (LMMs), which are primarily designed for quantitative traits. For binary traits, however, LMM is a misspecified model and can lead to deteriorated performance. We propose CARAT, a binary-trait association testing approach based on a mixed-effects quasi-likelihood framework, which exploits the dichotomous nature of the trait and achieves computational efficiency through estimating equations. We show in simulation studies that CARAT consistently outperforms existing methods and maintains high power in a wide range of population structure settings and trait models. Furthermore, CARAT is based on a retrospective approach, which is robust to misspecification of the phenotype model. We apply our approach to a genome-wide analysis of Crohn disease, in which we replicate association with 17 previously identified regions. Moreover, our analysis on 5p13.1, an extensively reported region of association, shows evidence for the presence of multiple independent association signals in the region. This example shows how CARAT can leverage known disease risk factors to shed light on the genetic architecture of complex traits. PMID:26833331

  10. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.

    PubMed

    van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie

    2017-09-01

    This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Oxidation behaviour of zirconium alloys and their precipitates - A mechanistic study

    NASA Astrophysics Data System (ADS)

    Proff, C.; Abolhassani, S.; Lemaignan, C.

    2013-01-01

    The precipitate oxidation behaviour of binary zirconium alloys containing 1 wt.% Fe, Ni, Cr or 0.6 wt.% Nb was characterised by TEM on FIB-prepared transverse sections of the oxide and reported in previous studies [1,2]. In the present study the following alloys are analysed to add to the available information: Zr1%Cu, Zr0.5%Cu0.5%Mo and pure Zr. In all cases, the observed precipitate oxidation behaviour in the oxide close to the metal-oxide interface could be described either as delayed oxidation with respect to the matrix or as simultaneous oxidation with the surrounding zirconium matrix. An attempt was made to explain these observations using different parameters such as precipitate size and structure, composition and thermodynamic properties. It was concluded that thermodynamics, with the new approach presented, explains their behaviour most precisely when the precipitate stoichiometry and the free energy of oxidation of the constituent elements are considered. The surface topography of the oxidised materials, as well as the microstructure of the oxide presenting microcracks, has been examined. A systematic presence of microcracks above the precipitates exhibiting delayed oxidation has been found; the height of these cracks, calculated using the Pilling-Bedworth ratios of the different phases present, can explain their origin. The protrusions at the surface in the case of materials containing large precipitates can be unambiguously correlated to the presence of the latter, and their height can be correlated to the Pilling-Bedworth ratios of the phases present as well as the diffusion of the alloying elements to the surface and their subsequent oxidation. This latter behaviour was much more pronounced in the case of Fe and Cu, with Fe systematically showing diffusion to the outer surface.

  12. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix (the low- and high-frequency matrices, respectively); (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values and the decoded AC-coefficients are combined into one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
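
    The first stages of the pipeline (a two-level DWT followed by a DCT on the low-frequency matrix) can be sketched with pywt and scipy as below; the Minimize-Matrix-Size, arithmetic-coding and FMS decompression stages are not reproduced, and the input image is a random stand-in.

    ```python
    import numpy as np
    import pywt
    from scipy.fft import dctn

    # Step 1: a two-level 2D DWT splits the image into a low-frequency approximation
    # (the "DC" data) and detail sub-bands (the high-frequency "AC" data).
    rng = np.random.default_rng(0)
    image = rng.random((256, 256))                   # stand-in for a high-resolution image

    cA2, details_level2, details_level1 = pywt.wavedec2(image, wavelet="db2", level=2)

    # Step 2: a DCT applied to the low-frequency matrix concentrates its energy in a
    # few coefficients, which would then be split into a nonzero-array and a zero-array
    # before entropy coding.
    dc_matrix = dctn(cA2, norm="ortho")
    significant = np.abs(dc_matrix) > 1e-3 * np.abs(dc_matrix).max()
    print("approximation shape:", cA2.shape, "significant DCT coefficients:", significant.sum())
    ```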

  13. Beyond logistic regression: structural equations modelling for binary variables and its application to investigating unobserved confounders.

    PubMed

    Kupek, Emil

    2006-03-15

    Structural equation modelling (SEM) has been increasingly used in medical statistics for solving a system of related regression equations. However, a great obstacle for its wider use has been its difficulty in handling categorical variables within the framework of generalised linear models. A large data set with a known structure among two related outcomes and three independent variables was generated to investigate the use of Yule's transformation of odds ratio (OR) into Q-metric by (OR-1)/(OR+1) to approximate Pearson's correlation coefficients between binary variables whose covariance structure can be further analysed by SEM. Percent of correctly classified events and non-events was compared with the classification obtained by logistic regression. The performance of SEM based on Q-metric was also checked on a small (N = 100) random sample of the data generated and on a real data set. SEM successfully recovered the generated model structure. SEM of real data suggested a significant influence of a latent confounding variable which would have not been detectable by standard logistic regression. SEM classification performance was broadly similar to that of the logistic regression. The analysis of binary data can be greatly enhanced by Yule's transformation of odds ratios into estimated correlation matrix that can be further analysed by SEM. The interpretation of results is aided by expressing them as odds ratios which are the most frequently used measure of effect in medical statistics.
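
    Yule's transformation itself is straightforward to compute from a 2x2 contingency table, as in the following sketch (no correction for zero cells is included); the resulting Q-matrix would then be passed to an SEM package for covariance-structure analysis.

    ```python
    import numpy as np

    # Yule's transformation: odds ratio of a 2x2 table -> Q = (OR - 1) / (OR + 1),
    # used to approximate the correlation between two binary variables.
    def yule_q(x, y):
        a = np.sum((x == 1) & (y == 1))
        b = np.sum((x == 1) & (y == 0))
        c = np.sum((x == 0) & (y == 1))
        d = np.sum((x == 0) & (y == 0))
        odds_ratio = (a * d) / (b * c)               # no zero-cell correction applied
        return (odds_ratio - 1) / (odds_ratio + 1)

    # Build an estimated correlation matrix for several binary variables.
    rng = np.random.default_rng(0)
    data = (rng.random((1000, 4)) < 0.5).astype(int)
    Q = np.array([[1.0 if i == j else yule_q(data[:, i], data[:, j])
                   for j in range(data.shape[1])] for i in range(data.shape[1])])
    ```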

  14. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  15. A poroplastic model of structural reorganisation in porous media of biomechanical interest

    NASA Astrophysics Data System (ADS)

    Grillo, Alfio; Prohl, Raphael; Wittum, Gabriel

    2016-03-01

    We present a poroplastic model of structural reorganisation in a binary mixture comprising a solid and a fluid phase. The solid phase is the macroscopic representation of a deformable porous medium, which exemplifies the matrix of a biological system (consisting e.g. of cells, extracellular matrix, collagen fibres). The fluid occupies the interstices of the porous medium and is allowed to move throughout it. The system reorganises its internal structure in response to mechanical stimuli. Such structural reorganisation, referred to as remodelling, is described in terms of "plastic" distortions, whose evolution is assumed to obey a phenomenological flow rule driven by stress. We study the influence of remodelling on the mechanical and hydraulic behaviour of the system, showing how the plastic distortions modulate the flow pattern of the fluid, and the distributions of pressure and stress inside it. To accomplish this task, we solve a highly nonlinear set of model equations by elaborating a previously developed numerical procedure, which is implemented in a non-commercial finite element solver.

  16. Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating

    PubMed Central

    Wang, Bingkun; Huang, Yongfeng; Li, Xing

    2016-01-01

    E-commerce is developing rapidly. Learning from and taking good advantage of the myriad reviews from online customers has become crucial to success, which calls for increasing accuracy in the sentiment classification of these reviews. Therefore, finer-grained review rating prediction is preferred over rough binary sentiment classification. There are mainly two types of methods in current review rating prediction. One includes methods based on review text content, which focus almost exclusively on textual content and seldom relate to the reviewers and items remarked upon in other relevant reviews. The other contains methods based on collaborative filtering, which extract information from previous records in the reviewer-item rating matrix but ignore review textual content. Here we propose a framework for review rating prediction that effectively combines the two. We further propose three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework performs better than previous methods. PMID:26880879
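
    A toy sketch of the combination idea, assuming a TF-IDF ridge regressor for the text-content component and a global-mean-plus-offsets baseline for the collaborative-filtering component, blended with a fixed weight; this is not the paper's specific methods, and the five reviews below are invented.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge

    # Toy data: (user, item, review text, rating 1-5).  Real datasets replace this.
    reviews = [
        (0, 0, "great movie loved the acting", 5),
        (0, 1, "boring plot and weak ending", 2),
        (1, 0, "enjoyable and well directed", 4),
        (1, 2, "terrible would not recommend", 1),
        (2, 1, "average at best", 3),
    ]
    users, items, texts, ratings = map(np.array, zip(*reviews))
    ratings = ratings.astype(float)

    # Text-content component: TF-IDF features fed to a ridge regressor.
    X_text = TfidfVectorizer().fit_transform(texts)
    text_model = Ridge(alpha=1.0).fit(X_text, ratings)
    pred_text = text_model.predict(X_text)

    # Collaborative-filtering component: global mean plus user and item offsets.
    mu = ratings.mean()
    user_off = np.array([ratings[users == u].mean() - mu for u in users])
    item_off = np.array([ratings[items == i].mean() - mu for i in items])
    pred_cf = mu + user_off + item_off

    # Combine the two predictions with a simple fixed weight.
    alpha = 0.5
    pred = alpha * pred_text + (1 - alpha) * pred_cf
    ```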

  17. Cosmic matrix in the jubilee of relativistic astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffini, R., E-mail: ruffini@icra.it; ICRANet, Piazza della Repubblica 10, I–65122 Pescara; Université de Nice Sophie Antipolis, Nice, CEDEX 2, Grand Château Parc Valrose

    2015-12-17

    Following the classical works on Neutron Stars, Black Holes and Cosmology, I outline some recent results obtained in the IRAP-PhD program of ICRANet on the “Cosmic Matrix”: a new astrophysical phenomenon recorded by the X- and Gamma-Ray satellites and by the largest ground based optical telescopes all over our planet. Within 3 minutes, the occurrence of a “Supernova”, the “Induced-Gravitational-Collapse” on a Neutron Star binary, the formation of a “Black Hole”, and the creation of a “Newly Born Neutron Star” were recorded. This presentation is based on a document describing the activities of ICRANet and recent developments of the paradigm of the Cosmic Matrix in the comprehension of Gamma Ray Bursts (GRBs), presented on the occasion of the Fourteenth Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Gravitation, and Relativistic Field Theory. A Portuguese version of this document can be downloaded at: http://www.icranet.org/documents/brochure_icranet_pt.pdf.

  18. THz Beam Shaper Realizing Fan-Out Patterns

    NASA Astrophysics Data System (ADS)

    Liebert, K.; Rachon, M.; Siemion, A.; Suszek, J.; But, D.; Knap, W.; Sypek, M.

    2017-08-01

    Fan-out elements create an array of beams radiating at particular angles along the propagation axis. Therefore, they are able to form a matrix of equidistant spots in the far-field diffraction region. In this work, we report on the first fan-out structures designed for the THz range of radiation. Two types of light-dividing fan-out structures are demonstrated: (i) the 3×1 matrix fan-out structure based on the optimized binary phase grating and (ii) the 3×3 fan-out structure designed on the basis of the well-known Dammann grating. The structures were generated numerically and manufactured using the 3D printing technique with polyamide PA12. To obtain equal powers and symmetry of the diffracted beams, a computer-aided optimization algorithm was used. Diffractive optical elements designed for 140 and 282 GHz were evaluated experimentally at both frequencies using illumination with a wavefront coming from a point-like source. The described fan-out elements formed uniform-intensity, equidistant energy distributions in agreement with the numerical simulations.

  19. Rapid solidification of high-conductivity copper alloys. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bloom, Theodore Atlas

    1989-01-01

    The main objective was to develop improved copper alloys of high strength and high thermal and electric conductivity. Chill block melt spinning was used to produce binary alloys of Cu-Cr and Cu-Zr, and ternary alloys of Cu-Cr-Ag. By quenching from the liquid state, up to 5 atomic percent of Cr and Zr were retained in metastable extended solid solution during the rapid solidification process. Eutectic solidification was avoided and the full strengthening benefits of the large volume fraction of precipitates were realized by subsequent aging treatment. The very low solid solubility of Cr and Zr in Cu result in a high conductivity Cu matrix strengthened by second phase precipitates. Tensile properties on as-cast and aged ribbons were measured at room and elevated temperatures. Precipitate coarsening of Cr in Cu was studied by changes in electrical resistance during aging. X-ray diffraction was used to measure the lattice parameter and the degree of supersaturation of the matrix. The microstructures were characterized by optical and electron microscopy.

  20. Non-moving Hadamard matrix diffusers for speckle reduction in laser pico-projectors

    NASA Astrophysics Data System (ADS)

    Thomas, Weston; Middlebrook, Christopher

    2014-12-01

    Personal electronic devices such as cell phones and tablets continue to decrease in size while the number of features and add-ons keeps increasing. One feature of great interest is an integrated projector system. Laser pico-projectors have been considered, but the technology has not been developed enough to warrant integration. With new advancements in diode technology and MEMS devices, laser-based projection is currently being advanced for pico-projectors. A primary problem encountered when using a pico-projector is coherent interference known as speckle. Laser speckle can lead to eye irritation and headaches after prolonged viewing. Diffractive optical elements known as diffusers have been examined as a means to lower speckle contrast. This paper presents a binary diffuser known as a Hadamard matrix diffuser. Using two static in-line Hadamard diffusers eliminates the need for rotation or vibration of the diffuser for temporal averaging. Two Hadamard diffusers were fabricated, and the measured contrast values showed good agreement with theory and simulation.
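
    As a minimal sketch of the basic ingredient only, the block below generates a Hadamard matrix with SciPy and maps its ±1 entries to a binary 0/π phase mask; the order of 32 is an assumed illustration, not the fabricated diffuser design.

        import numpy as np
        from scipy.linalg import hadamard

        order = 32                                 # assumed order; must be a power of 2
        H = hadamard(order)                        # entries are +1 / -1
        phase_mask = np.where(H > 0, 0.0, np.pi)   # +1 -> 0 phase, -1 -> pi phase
        transmission = np.exp(1j * phase_mask)     # complex transmission of the binary diffuser cell
        print(transmission.shape, np.unique(phase_mask))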

  1. Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating.

    PubMed

    Wang, Bingkun; Huang, Yongfeng; Li, Xing

    2016-01-01

    E-commerce is developing rapidly, and learning to take good advantage of the myriad reviews left by online customers has become crucial to success, which calls for increasingly accurate sentiment classification of these reviews. Fine-grained review rating prediction is therefore preferred over rough binary sentiment classification. Current review rating prediction methods fall into two main types. Methods based on review text content focus almost exclusively on the text and seldom exploit the reviewers and items mentioned in other relevant reviews. Methods based on collaborative filtering extract information from previous records in the reviewer-item rating matrix but ignore the review text. Here we propose a framework for review rating prediction that effectively combines the two, and we further propose three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework has better performance than those previous methods.
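
    A minimal sketch of the general idea, with hypothetical data and a hypothetical blending weight rather than the paper's three specific methods: combine a text-based prediction from the review content with a collaborative-filtering prediction from the reviewer-item rating matrix.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge

        reviews = ["great movie, loved it", "boring and far too long", "decent plot, weak acting"]
        ratings = np.array([5.0, 2.0, 3.0])              # hypothetical observed ratings

        # Text-content side: TF-IDF features plus ridge regression.
        X = TfidfVectorizer().fit_transform(reviews)
        text_pred = Ridge(alpha=1.0).fit(X, ratings).predict(X)

        # Collaborative-filtering side: stand-in per-(reviewer, item) estimates,
        # e.g. obtained from a factorization of the rating matrix.
        cf_pred = np.array([4.5, 2.5, 3.5])

        alpha = 0.6                                      # assumed blending weight
        print(alpha * cf_pred + (1 - alpha) * text_pred)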

  2. Atom Probe Tomography Analysis of the Distribution of Rhenium in Nickel Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mottura, A.; Warnken, N; Miller, Michael K

    2010-01-01

    Atom probe tomography (APT) is used to characterise the distributions of rhenium in a binary Ni-Re alloy and the nickel-based single-crystal CMSX-4 superalloy. A purpose-built algorithm is developed to quantify the size distribution of solute clusters, and applied to the APT datasets to critique the hypothesis that rhenium is prone to the formation of clusters in these systems. No evidence is found to indicate that rhenium forms solute clusters above the level expected from random fluctuations. In CMSX-4, enrichment of Re is detected in the matrix phase close to the matrix/precipitate (γ/γ′) phase boundaries. Phase field modelling indicates that this is due to the migration of the γ/γ′ interface during cooling from the temperature of operation. Thus, neither clustering of rhenium nor interface enrichments can be the cause of the enhancement in high temperature mechanical properties conferred by rhenium alloying.

  3. Protocol: a rapid and economical procedure for purification of plasmid or plant DNA with diverse applications in plant biology

    PubMed Central

    2010-01-01

    Research in plant molecular biology involves DNA purification on a daily basis. Although different commercial kits enable convenient extraction of high-quality DNA from E. coli cells, PCR and agarose gel samples as well as plant tissues, each kit is designed for a particular type of DNA extraction work, and the cost of purchasing these kits over a long run can be considerable. Furthermore, a simple method for the isolation of binary plasmid from Agrobacterium tumefaciens cells with satisfactory yield is lacking. Here we describe an easy protocol using homemade silicon dioxide matrix and seven simple solutions for DNA extraction from E. coli and A. tumefaciens cells, PCR and restriction digests, agarose gel slices, and plant tissues. Compared with the commercial kits, this protocol allows rapid DNA purification from diverse sources with comparable yield and purity at negligible cost. Following this protocol, we have demonstrated: (1) DNA fragments as small as a MYC-epitope tag coding sequence can be successfully recovered from an agarose gel slice; (2) Miniprep DNA from E. coli can be eluted with as little as 5 μl water, leading to high DNA concentrations (>1 μg/μl) for efficient biolistic bombardment of Arabidopsis seedlings, polyethylene glycol (PEG)-mediated Arabidopsis protoplast transfection and maize protoplast electroporation; (3) Binary plasmid DNA prepared from A. tumefaciens is suitable for verification by restriction analysis without the need for large scale propagation; (4) High-quality genomic DNA is readily isolated from several plant species including Arabidopsis, tobacco and maize. Thus, the silicon dioxide matrix-based DNA purification protocol offers an easy, efficient and economical way to extract DNA for various purposes in plant research. PMID:20180960

  4. The fourfold way of the genetic code.

    PubMed

    Jiménez-Montaño, Miguel Angel

    2009-11-01

    We describe a compact representation of the genetic code that factorizes the table into quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column-vector of bases and the corresponding row-vector V^T = (C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the KxK group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision-tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
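
    As an illustrative sketch only (not the paper's own code), the snippet below builds the 4x4 table of codon doublets as the outer product of the base vector (C G U A) with itself and relabels it in purine/pyrimidine (R/Y) classes.

        bases = ["C", "G", "U", "A"]
        doublets = [[row + col for col in bases] for row in bases]   # outer product of base vectors
        for row in doublets:
            print(" ".join(row))

        ry = {"A": "R", "G": "R", "C": "Y", "U": "Y"}                # purine/pyrimidine classes
        print([[ry[d[0]] + ry[d[1]] for d in row] for row in doublets])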

  5. Knee cartilage segmentation using active shape models and local binary patterns

    NASA Astrophysics Data System (ADS)

    González, Germán.; Escalante-Ramírez, Boris

    2014-05-01

    Segmentation of knee cartilage is useful for timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants to describe the texture surrounding the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and is validated through the leave-one-out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and the ASM proposed by Cootes. The ASM-LBP approaches are tested with different ratios to decide which of them describes the cartilage texture better. The results show that ASM-medianLBP performs better than ASM-LBP and ASM. Furthermore, we add a routine which improves robustness against two principal problems: oversegmentation and initialization.
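
    A minimal sketch of the LBP texture description used alongside the ASM, with a hypothetical image and assumed neighbourhood parameters rather than the study's actual MR data:

        import numpy as np
        from skimage.feature import local_binary_pattern

        image = np.random.rand(128, 128)          # stand-in for an MR image slice
        P, R = 8, 1                               # assumed neighbourhood: 8 samples, radius 1
        lbp = local_binary_pattern(image, P, R, method="uniform")

        # Texture is typically summarized as a normalized LBP histogram per region.
        hist, _ = np.histogram(lbp, bins=np.arange(0, P + 3), density=True)
        print(hist)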

  6. Urinary bladder cancer T-staging from T2-weighted MR images using an optimal biomarker approach

    NASA Astrophysics Data System (ADS)

    Wang, Chuang; Udupa, Jayaram K.; Tong, Yubing; Chen, Jerry; Venigalla, Sriram; Odhner, Dewey; Guzzo, Thomas J.; Christodouleas, John; Torigian, Drew A.

    2018-02-01

    Magnetic resonance imaging (MRI) is often used in clinical practice to stage patients with bladder cancer to help plan treatment. However, qualitative assessment of MR images is prone to inaccuracies, adversely affecting patient outcomes. In this paper, T2-weighted MR image-based quantitative features were extracted from the bladder wall in 65 patients with bladder cancer to classify them into two primary tumor (T) stage groups: group 1 - T stage < T2, with primary tumor locally confined to the bladder, and group 2 - T stage ≥ T2, with primary tumor locally extending beyond the bladder. The bladder was divided into 8 sectors in the axial plane, where each sector has a corresponding reference standard T stage based on expert radiology qualitative MR image review and histopathologic results. The performance of the classification for correct assignment of T stage grouping was then evaluated at both the patient level and the sector level. Each bladder sector was divided into 3 shells (inner, middle, and outer), and 15,834 features, including intensity features and texture features from local binary pattern and gray-level co-occurrence matrix analyses, were extracted from the 3 shells of each sector. An optimal feature set was selected from all features using an optimal biomarker approach. Nine optimal biomarker features were derived based on texture properties from the middle shell, with an area under the ROC curve (AUC) of 0.813 at the sector level and 0.806 at the patient level.
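
    A minimal sketch of gray-level co-occurrence matrix texture features of the kind extracted from each shell, using hypothetical data and assumed distances and angles rather than the study's 15,834-feature pipeline:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        shell = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for one shell ROI
        glcm = graycomatrix(shell, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        features = {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)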

  7. Novel ID-based anti-collision approach for RFID

    NASA Astrophysics Data System (ADS)

    Zhang, De-Gan; Li, Wen-Bin

    2016-09-01

    A novel correlation-ID-based (CID) anti-collision approach for RFID under the banner of the Internet of Things (IoT) is presented in this paper. The key insights are as follows: building on deterministic algorithms based on the binary search tree, we propose a method that increases the association between tags so that tags can actively send their own IDs under certain trigger conditions, and we present a multi-tree search method for querying. When the number of tags is small, replacing the actual ID with a temporary ID greatly reduces the number of times the reader reads and writes a tag's ID. Active tags send data to the reader by modulating binary pulses. When this method is applied to the probabilistic ALOHA algorithms, the reader can determine the locations of empty slots from the positions of the binary pulses, avoiding the loss of efficiency caused by reading empty slots. Theory and experiment show that this method can greatly improve the recognition efficiency of the system when applied to either search-tree or ALOHA anti-collision algorithms.
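
    For context, the sketch below simulates the baseline binary query-tree identification that such deterministic schemes build on; it is not the paper's CID protocol, and the tag IDs are hypothetical.

        def query_tree_identify(tag_ids):
            """Return tag IDs (bit strings) in the order they are singled out."""
            identified, stack = [], [""]          # start from the empty prefix
            while stack:
                prefix = stack.pop()
                matching = [t for t in tag_ids if t.startswith(prefix)]
                if len(matching) == 1:            # exactly one reply: tag identified
                    identified.append(matching[0])
                elif len(matching) > 1:           # collision: split into '0' and '1' branches
                    stack.extend([prefix + "1", prefix + "0"])
                # zero replies: empty slot, nothing to do
            return identified

        print(query_tree_identify(["0010", "0101", "0110", "1101"]))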

  8. Adjustable repetition-rate multiplication of optical pulses using fractional temporal Talbot effect with preceded binary intensity modulation

    NASA Astrophysics Data System (ADS)

    Xie, Qijie; Zheng, Bofang; Shu, Chester

    2017-05-01

    We demonstrate a simple approach for adjustable multiplication of optical pulses in a fiber using the temporal Talbot effect. Binary electrical patterns are used to control the multiplication factor in our approach. The input 10 GHz picosecond pulses are pedestal-free and are shaped directly from a CW laser. The pulses are then intensity modulated by different sets of binary patterns prior to entering a fiber of fixed dispersion. Tunable repetition-rate multiplication by factors of 2, 4, and 8 has been achieved, and pulse trains of up to 80 GHz have been experimentally generated. We also evaluate numerically the influence of the extinction ratio of the intensity modulator on the performance of the multiplied pulse train. In addition, the impact of the modulator bias on the uniformity of the output pulses has been analyzed through simulation and experiment, and good agreement is reached. Last, we perform numerical simulations of the RF spectral characteristics of the output pulses. The insensitivity of the signal-to-subharmonic noise ratio (SSNR) to the laser linewidth shows that our multiplication scheme is highly tolerant to incoherence of the input optical pulses.

  9. Matrix approaches to assess terrestrial nitrogen scheme in CLM4.5

    NASA Astrophysics Data System (ADS)

    Du, Z.

    2017-12-01

    Terrestrial carbon (C) and nitrogen (N) cycles have been commonly represented by a series of balance equations to track their influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C and N cycle processes well but makes it difficult to track model behaviors. To overcome these challenges, we developed a matrix approach, which reorganizes the series of terrestrial C and N balance equations in the CLM4.5 into two matrix equations based on the original representation of C and N cycle processes and mechanisms. The matrix approach would consequently help improve the comparability of models and data, evaluate impacts of additional model components, facilitate benchmark analyses, model intercomparisons, and data-model fusion, and improve model predictive power.
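
    A minimal sketch of what such a matrix equation looks like for a toy three-pool carbon system (illustrative numbers, not the actual CLM4.5 matrices): the per-pool balance equations collapse into a single equation dX/dt = B*u + A*K*X.

        import numpy as np

        u = 10.0                                   # external C input, e.g. gC m-2 yr-1 (assumed)
        B = np.array([1.0, 0.0, 0.0])              # allocation of the input to the three pools
        K = np.diag([0.5, 0.1, 0.01])              # turnover rates of each pool, yr-1
        A = np.array([[-1.0, 0.0, 0.0],            # transfer matrix: columns are donor pools,
                      [0.4, -1.0, 0.0],            #   rows are receiver pools
                      [0.1, 0.3, -1.0]])

        X = np.zeros(3)
        dt = 0.1
        for _ in range(5000):                      # simple forward-Euler spin-up
            X = X + dt * (B * u + A @ K @ X)

        print("spun-up pools:        ", X)
        print("analytic steady state:", -np.linalg.solve(A @ K, B * u))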

  10. Constraining the equation of state of neutron stars from binary mergers.

    PubMed

    Takami, Kentaro; Rezzolla, Luciano; Baiotti, Luca

    2014-08-29

    Determining the equation of state of matter at nuclear density and hence the structure of neutron stars has been a riddle for decades. We show how the imminent detection of gravitational waves from merging neutron star binaries can be used to solve this riddle. Using a large number of accurate numerical-relativity simulations of binaries with nuclear equations of state, we find that the postmerger emission is characterized by two distinct and robust spectral features. While the high-frequency peak has already been associated with the oscillations of the hypermassive neutron star produced by the merger and depends on the equation of state, a new correlation emerges between the low-frequency peak, related to the merger process, and the total compactness of the stars in the binary. More importantly, such a correlation is essentially universal, thus providing a powerful tool to set tight constraints on the equation of state. If the mass of the binary is known from the inspiral signal, the combined use of the two frequency peaks sets four simultaneous constraints to be satisfied. Ideally, even a single detection would be sufficient to select one equation of state over the others. We test our approach with simulated data and verify it works well for all the equations of state considered.

  11. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution; the procedure is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
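
    A schematic sketch of a DPLM-style update on an illustrative quadratic loss (not the paper's supervised or unsupervised hashing objectives): linearize the smooth loss at the current codes and take the analytical discrete solution, which here reduces to a sign operation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, r = 200, 16, 8
        X = rng.standard_normal((n, d))            # data to be encoded (hypothetical)
        W = rng.standard_normal((r, d))            # fixed projection, assumed given
        B = np.sign(rng.standard_normal((n, r)))   # initial binary codes in {-1, +1}

        eta = 1e-2                                 # assumed step size
        for _ in range(100):
            grad = (B @ W - X) @ W.T               # gradient of 0.5*||B W - X||_F^2 w.r.t. B
            B_new = np.sign(B - eta * grad)        # discrete proximal step: project onto {-1, +1}
            B_new[B_new == 0] = 1                  # resolve exact zeros deterministically
            if np.array_equal(B_new, B):
                break
            B = B_new

        print("final loss:", 0.5 * np.linalg.norm(B @ W - X) ** 2)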

  12. Binary culture of microalgae as an integrated approach for enhanced biomass and metabolites productivity, wastewater treatment, and bioflocculation.

    PubMed

    Rashid, Naim; Park, Won-Kun; Selvaratnam, Thinesh

    2018-03-01

    Ecological studies of microalgae have revealed their potential to co-exist in the natural environment, providing evidence of symbiotic relationships between microalgae and other microorganisms. This symbiotic potential carries distinct advantages, providing a venue for scale-up applications. The deployment of large-scale microalgae applications is limited by technical challenges such as slow growth rate, low metabolite yield, and a high risk of biomass contamination by unwanted bacteria. These challenges can, however, be overcome by exploiting the symbiotic potential of microalgae. In a symbiotic system, photosynthetic microalgae co-exist with bacteria, fungi, and heterotrophic microalgae. In this consortium, they can exchange nutrients and metabolites, transfer genes, and interact with each other through complex metabolic mechanisms. Microalgae in such a system, termed a binary culture, are reported to exhibit high growth rate, enhanced bio-flocculation, and high biochemical productivity without experiencing contamination. Binary culture also offers interesting applications in other biotechnological processes including bioremediation, wastewater treatment, and production of high-value metabolites. The focus of this study is to improve understanding of microalgae binary culture. In this review, the mechanism of binary culture, its potential, and its limitations are briefly discussed. A number of questions arise from this study that need to be answered by future research to assess the real potential of binary culture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are highly classroom oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions in most cases. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
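
    A minimal sketch of the determinant idea described above (not the authors' exact implementation): choose an arbitrary nonzero pivot, eliminate its column, and recurse on a matrix whose order is reduced by one.

        import numpy as np

        def det_flexible_pivot(A, choose_pivot=None):
            A = np.array(A, dtype=float)
            n = A.shape[0]
            if n == 1:
                return A[0, 0]
            # Default pivot rule: any convenient nonzero entry (here: largest magnitude).
            if choose_pivot is None:
                i, j = np.unravel_index(np.argmax(np.abs(A)), A.shape)
            else:
                i, j = choose_pivot(A)
            if A[i, j] == 0:
                return 0.0                         # the whole matrix is zero -> singular
            # Eliminate column j in every other row using the pivot row.
            for k in range(n):
                if k != i:
                    A[k, :] -= (A[k, j] / A[i, j]) * A[i, :]
            # Laplace expansion along column j, which now has a single nonzero entry.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            return (-1) ** (i + j) * A[i, j] * det_flexible_pivot(minor, choose_pivot)

        M = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 0.0]]
        print(det_flexible_pivot(M), np.linalg.det(M))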

  14. Plunge waveforms from inspiralling binary black holes.

    PubMed

    Baker, J; Brügmann, B; Campanelli, M; Lousto, C O; Takahashi, R

    2001-09-17

    We study the coalescence of nonspinning binary black holes from near the innermost stable circular orbit down to the final single rotating black hole. We use a technique that combines the full numerical approach to solving the Einstein equations, applied in the truly nonlinear regime, with linearized perturbation theory around the final distorted single black hole at later times. We compute the plunge waveforms, which present a non-negligible signal lasting for t ≈ 100M and showing early nonlinear ringing, and we obtain estimates for the total gravitational energy and angular momentum radiated.

  15. Binary logic based purely on Fresnel diffraction

    NASA Astrophysics Data System (ADS)

    Hamam, H.; de Bougrenet de La Tocnaye, J. L.

    1995-09-01

    Binary logic operations on two-dimensional data arrays are achieved by use of the self-imaging properties of Fresnel diffraction. The fields diffracted by periodic objects can be considered as the superimposition of weighted and shifted replicas of original objects. We show that a particular spatial organization of the input data can result in logical operations being performed on these data in the considered diffraction planes. Among various advantages, this approach is shown to allow the implementation of dual-track, nondissipative logical operators. Image algebra is presented as an experimental illustration of this principle.

  16. How can we make stable linear monoatomic chains? Gold-cesium binary subnanowires as an example of a charge-transfer-driven approach to alloying.

    PubMed

    Choi, Young Cheol; Lee, Han Myoung; Kim, Woo Youn; Kwon, S K; Nautiyal, Tashi; Cheng, Da-Yong; Vishwanathan, K; Kim, Kwang S

    2007-02-16

    On the basis of first-principles calculations of clusters and of one-dimensional, infinitely long subnanowires of the binary systems, we find that alkali-noble metal alloy wires show better linearity and stability than either pure alkali metal or noble metal wires. The enhanced alternating charge buildup on atoms by charge transfer helps the atoms line up straight. The cesium-doped gold wires, which show significant charge transfer from cesium to gold, can be stabilized as linear or circular monoatomic chains.

  17. Constraints on the Dynamical Environments of Supermassive Black-Hole Binaries Using Pulsar-Timing Arrays.

    PubMed

    Taylor, Stephen R; Simon, Joseph; Sampson, Laura

    2017-05-05

    We introduce a technique for gravitational-wave analysis, where Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering, and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
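
    A minimal sketch of the core idea with toy stand-in data (not the actual pulsar-timing analysis): emulate a stochastic-background strain spectrum over astrophysical environment parameters with Gaussian process regression trained on a handful of population-synthesis runs, so that a Bayesian sampler can query the emulator instead of a full simulation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)
        # Training inputs: hypothetical environment parameters (e.g. stellar density,
        # orbital eccentricity) for each simulated binary population.
        theta_train = rng.uniform(0.0, 1.0, size=(30, 2))
        # Training targets: stand-in log strain amplitude at one frequency bin per run.
        log_h_train = -15.0 + 0.5 * theta_train[:, 0] - 0.8 * theta_train[:, 1] ** 2

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
        gp.fit(theta_train, log_h_train)

        # Cheap prediction (with uncertainty) at an arbitrary parameter point.
        mean, std = gp.predict(np.array([[0.4, 0.7]]), return_std=True)
        print(mean, std)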

  18. III-V semiconductor solid solution single crystal growth

    NASA Technical Reports Server (NTRS)

    Gertner, E. R.

    1982-01-01

    The feasibility and desirability of space growth of bulk IR semiconductor crystals for use as substrates for epitaxial IR detector material were researched. A III-V ternary compound (GaInSb) and a II-VI binary compound were considered. Vapor epitaxy and quaternary epitaxy techniques were found to be sufficient to permit the use of ground based binary III-V crystals for all major device applications. Float zoning of CdTe was found to be a potentially successful approach to obtaining high quality substrate material, but further experiments were required.

  19. The formation mechanism of binary semiconductor nanomaterials: shared by single-source and dual-source precursor approaches.

    PubMed

    Yu, Kui; Liu, Xiangyang; Zeng, Qun; Yang, Mingli; Ouyang, Jianying; Wang, Xinqin; Tao, Ye

    2013-10-11

    One thing in common: the formation of binary colloidal semiconductor nanocrystals from single-source (M(EEPPh2)n) and dual-source precursors (metal carboxylates M(OOCR)n and phosphine chalcogenides such as E=PHPh2) is found to proceed through a common mechanism. For CdSe as a model system, 31P NMR spectroscopy and DFT calculations support a reaction mechanism which includes numerous metathesis equilibria and Se exchange reactions. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Binary YORP Effect and Evolution of Binary Asteroids

    NASA Astrophysics Data System (ADS)

    Steinberg, Elad; Sari, Re'em

    2011-02-01

    The rotation states of kilometer-sized near-Earth asteroids are known to be affected by the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. In a related effect, binary YORP (BYORP), the orbital properties of a binary asteroid evolve under a radiation effect mostly acting on a tidally locked secondary. The BYORP effect can alter the orbital elements over ~10^4-10^5 years for a Dp = 2 km primary with a Ds = 0.4 km secondary at 1 AU. It can either separate the binary components or cause them to collide. In this paper, we devise a simple approach to calculate the YORP effect on asteroids and the BYORP effect on binaries, including J2 effects due to primary oblateness and the Sun. We apply this to asteroids with known shapes as well as a set of randomly generated bodies with various degrees of smoothness. We find a strong correlation between the strengths of an asteroid's YORP and BYORP effects. Therefore, statistical knowledge of one could be used to estimate the effect of the other. We show that the action of BYORP preferentially shrinks rather than expands the binary orbit and that YORP preferentially slows down asteroids. This conclusion holds for the two extremes of thermal conductivities studied in this work and the assumption that the asteroid reaches a stable point, but may break down for moderate thermal conductivity. The YORP and BYORP effects are shown to be smaller than could be naively expected due to near cancellation of the effects at small scales. Taking this near cancellation into account, a simple order-of-magnitude estimate of the YORP and BYORP effects as a function of the sizes and smoothness of the bodies is calculated. Finally, we provide a simple proof showing that there is no secular effect due to absorption of radiation in BYORP.
